Category Archives: Development


Nooks and Crannies – Programming Language Curiosities – C#

Nooks and Crannies explores some interesting techniques and methods that are specific to a given language. While these techniques may not be useful in everyday development, they can come in handy in niche applications and scenarios. A word of warning, though: the fact that these techniques could be considered obscure should warrant a second thought before they are included in production code.

Unions

Unions are a popular technique I used often when developing system communication modules in C; they allow us to reference the same section of memory as different types. Commonly this is used to handle conversions between little-endian and big-endian systems or to manipulate byte-level data (say, for IP address modeling).
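C# can express a C-style union through explicit struct layout. Below is a minimal sketch of the technique (the IPv4Address type and its field names are my own illustration, not from the original post): every field is pinned to a byte offset within the struct, so the uint and the four bytes overlap the same memory.

using System;
using System.Runtime.InteropServices;

// A C-style union: all fields are mapped onto the same four bytes,
// so the uint view and the byte views alias the same memory.
[StructLayout(LayoutKind.Explicit)]
struct IPv4Address
{
    [FieldOffset(0)] public uint Value;  // whole address as one 32-bit integer
    [FieldOffset(0)] public byte Octet0; // first byte in memory
    [FieldOffset(1)] public byte Octet1;
    [FieldOffset(2)] public byte Octet2;
    [FieldOffset(3)] public byte Octet3;
}

class Program
{
    static void Main()
    {
        // 127.0.0.1 written as a little-endian uint (bytes 7F 00 00 01 in memory).
        var ip = new IPv4Address { Value = 0x0100007F };
        Console.WriteLine($"{ip.Octet0}.{ip.Octet1}.{ip.Octet2}.{ip.Octet3}");
        // Prints "127.0.0.1" on a little-endian machine.
    }
}

Writing to Value and then reading the octets gives byte-level access without any shifting or masking; note that the octet order depends on the machine's endianness, which is exactly why unions are handy for endian conversions.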


10 Software Development Myths


Software is created in a realm of intangibility. This necessary but unfamiliar situation has given rise to common myths: some go against intuition, and some stem from older technology or simple misunderstanding. Let's debunk the top 10 software myths!

1. Adding developers to a late project will get the project back on schedule.

This is a very common and intuitive response to a schedule slip in a project. Surprisingly, after additional manpower is added, the project usually slips even later. Brooks' Law is often cited to refute this myth, stating ‘adding manpower to a late software project makes it later’ (Fred Brooks – The Mythical Man-Month). While an oversimplification, it is a good rule of thumb. The counter-intuitive result is due to the increase in necessary communication within the team: the number of pairwise communication channels grows quadratically as people are added, since a team of n members has n(n − 1)/2 of them.
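To make that growth concrete, here is a quick back-of-the-envelope sketch (my own illustration, not from the original post):

using System;

class CommunicationPaths
{
    // Pairwise communication channels in a team of n people: n(n - 1) / 2.
    static int Channels(int n) => n * (n - 1) / 2;

    static void Main()
    {
        foreach (var n in new[] { 5, 10, 20 })
            Console.WriteLine($"{n} people -> {Channels(n)} channels");
        // Prints:
        //  5 people -> 10 channels
        // 10 people -> 45 channels
        // 20 people -> 190 channels
    }
}

Doubling the team from 5 to 10 more than quadruples the number of channels, which is where the extra coordination overhead comes from.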



Hyperlapse – First person videos finally become watchable!

We present a method for converting first-person videos, for example, captured with a helmet camera during activities such as rock climbing or bicycling, into hyperlapse videos: time-lapse videos with a smoothly moving camera.

At high speed-up rates, simple frame sub-sampling coupled with existing video stabilization methods does not work, because the erratic camera shake present in first-person videos is amplified by the speed-up.

Source: research.microsoft.com

Ryan Seifert's insight:

We have all seen the helmet videos from skydivers (if you haven't, Jeb Corliss has one of the best); more recently we have seen the emergence of helmet cams for bicyclists, surfers, and even pets! I have even spotted helmet cameras on my jogs around my relatively mundane neighborhood. Normally these videos are watched at an increased speed (who wants to watch a 45-minute ride for the 30 seconds of action?), but the speed change is painful to view. Hyperlapse is a newly created method to stabilize and smooth out these videos.

Johannes Kopf, Michael Cohen, and Richard Szeliski developed the new method to generate the smoother video. The process (see the technical video below) is substantially more complicated than the familiar stabilizer functionality commonly used. The new system consists of three stages: scene reconstruction, path planning, and image-based rendering. Scene reconstruction builds a 3D model of the scene, leveraging multiple frames from the video to do so. This gives the system the ability to actually change the viewpoint in the resulting rendering, replacing abrupt viewpoint changes with smoother ones, and is one of the key properties that allow the system to generate the silky-smooth resulting videos.

Path planning is split into two stages: the first optimizes for smooth transitions, length, and approximation (the path should be near the input frames); the second optimizes for rendering quality. The resulting path can differ slightly from the path actually taken by the camera person (or pet!), but will still be approximately the same. The final step of the process is actually rendering the video. Because each new shot can differ slightly from the original video, the system merges multiple frames together, selecting the areas of each frame that give the best quality in the resulting video.
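To give a rough feel for the path-planning trade-off, here is a toy one-dimensional sketch of my own (not the paper's actual solver, which optimizes full camera poses and rendering quality): a data term pulls the new path toward the input camera positions while a smoothness term penalizes jitter.

using System;

class PathSmoothingSketch
{
    // Toy 1D path planner: find x minimizing
    //   E(x) = sum_i (x[i] - p[i])^2  +  lambda * sum_i (x[i+1] - x[i])^2
    // The first term keeps the path near the input positions p;
    // the second term rewards smoothness. Solved with Jacobi iterations.
    static double[] Smooth(double[] p, double lambda, int iterations)
    {
        var x = (double[])p.Clone();
        for (int it = 0; it < iterations; it++)
        {
            var next = (double[])x.Clone();
            for (int i = 1; i < x.Length - 1; i++)
                next[i] = (p[i] + lambda * (x[i - 1] + x[i + 1])) / (1 + 2 * lambda);
            x = next;
        }
        return x;
    }

    static void Main()
    {
        // A shaky input path: a straight line plus alternating jitter.
        var p = new double[20];
        for (int i = 0; i < p.Length; i++)
            p[i] = i + (i % 2 == 0 ? 0.5 : -0.5);

        var smoothed = Smooth(p, lambda: 10.0, iterations: 500);
        Console.WriteLine(string.Join(" ", Array.ConvertAll(smoothed, v => v.ToString("F2"))));
        // The jitter is largely gone; interior points settle close to the line y = i.
    }
}

A larger lambda trades fidelity to the original path for smoothness, which mirrors the "path should be near the input frames" constraint described above.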

The result is quite amazing, but there are still some artifacts you can notice when watching the videos. Stepping through frame by frame, you will see that objects can suddenly appear, and the boundary areas where the images are merged are easily identifiable. These artifacts are hard to notice at full speed, though.

The new technique is very resource intensive. The research paper mentions that it took roughly 305 hours to process a 10-minute video! Most of the computational time is consumed by source selection, which computes at roughly one minute per frame; at a typical 30 fps, a 10-minute video is about 18,000 frames, so that stage alone accounts for around 300 hours. I suspect that cloud computing (such as Amazon Web Services and Azure) will be heavily utilized, perhaps allowing even a mobile phone app to take part in the video editing process. It will be interesting to see how this technique will be used!


Cross-Platform Mobile Development: PhoneGap vs Xamarin

3 different operating systems, 3 separate languages and development environments, and counting. To cater to all of these operating systems natively, mobile app developers need someone who is an expert in each of these programming languages and also an expert in the nuances of how each mobile operating system works: task lifecycles, multi-threading, memory limitations, garbage collection, etc.

Thus, to have one app developed in its native language and environment on each platform will take three times as long, tripling the cost of development.

Ryan Seifert's insight:
An interesting article on which platform to use for different projects. We have been scouring the current state of cross-platform development before launching into a new mobile application, but we have run into many questions about which system to use.

This quick read does a fantastic job listing the strengths and weaknesses of these two popular platforms. I walked away from the article with a better understanding of the platforms and even a general overview of the architecture involved in the development.

The closing argument for utilizing Xamarin for larger projects was very sound, and a point not often encountered in comparison articles. I am looking to build a couple of small apps to gain some first-hand experience with both platforms; but given the size of the projects and the possibility for growth, the choice for now seems clear.


Why do dynamic languages make it difficult to maintain large codebases?

Dynamic languages have costs associated with them that static languages don’t. Let me begin by saying that it is hard to maintain a large codebase, period. Big code is hard to write no matter what tools you have at your disposal. Your question does not imply that maintaining a large codebase in a statically-typed language is “easy;” rather the question presupposes merely that it is an even harder problem to maintain a large codebase in a dynamic language than in a static language. That said, there are reasons why the effort expended in maintaining a large codebase in a dynamic language is somewhat larger than the effort expended for statically typed languages.
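C# itself offers a miniature version of this trade-off through its dynamic keyword, which defers member binding to runtime. In the minimal sketch below (my own illustration, not from the article), the same typo that the compiler rejects under static typing only surfaces at runtime under dynamic:

using System;
using Microsoft.CSharp.RuntimeBinder;

class DynamicVsStatic
{
    static void Main()
    {
        string name = "hyperlapse";

        // Statically typed: this typo is a compile-time error, caught by the
        // compiler (and by IDE tooling) before the program ever runs.
        // Console.WriteLine(name.Lenght);   // does not compile

        // Dynamically typed: the same typo compiles cleanly and only fails
        // when this exact line executes.
        dynamic dynName = name;
        try
        {
            Console.WriteLine(dynName.Lenght);
        }
        catch (RuntimeBinderException e)
        {
            Console.WriteLine($"Runtime failure: {e.Message}");
        }
    }
}

Scale that up to a large codebase and the difference compounds: with static checking, an entire class of errors is found on every build; with dynamic binding, each of those errors hides until a test (or a user) happens to execute the offending line.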


Ryan Seifert's insight:
This article touches on a topic I have been wrestling with. I keep returning to a rule of thumb: if it is a prototype, under 300 lines of code, or a single-use (or very limited-use) program, a dynamic language is easily the best choice. There is an inflection point past which static languages make maintaining a codebase easier, but that point is much harder to define.

I find it interesting how some large projects written in JavaScript have chosen to attack the growing-codebase issue. We have seen the inclusion of type checking (TypeScript), modules (jQuery et al.), and much stronger test suites. Many of these techniques are common in static languages and have been used to great success in maintaining codebases.

Unfortunately, the article does not touch on whether and when a module rewrite would be useful. At what point does the overhead of leveraging a dynamic language incur a cost greater than a migration to a language that is easier to maintain or for which a better tool set exists? The emergence of language-agnostic testing frameworks has reduced this barrier substantially (assuming the tests provide significant code coverage). I am interested in seeing whether language migrations start to occur at a higher rate (perhaps as language-to-language conversion becomes more common). Quick, painless, and verifiable language migrations would be a great boon for any project.