Category Archives: News

Hyperlapse – First-person videos finally become watchable!

We present a method for converting first-person videos, for example, captured with a helmet camera during activities such as rock climbing or bicycling, into hyperlapse videos: time-lapse videos with a smoothly moving camera.

At high speed-up rates, simple frame sub-sampling coupled with existing video stabilization methods does not work, because the erratic camera shake present in first-person videos is amplified by the speed-up.
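To make the abstract's point concrete, the "simple frame sub-sampling" baseline can be sketched as follows. This is only a minimal illustration (integer indices stand in for video frames), not the paper's method:

```python
def naive_hyperlapse(frames, speedup):
    """Naive time-lapse: keep every `speedup`-th frame.

    Whatever camera shake the kept frames contain is preserved, so at
    high speed-ups the output looks far shakier than the input.
    """
    return frames[::speedup]

# A 600-frame clip sped up 10x keeps 60 frames.
frames = list(range(600))
sampled = naive_hyperlapse(frames, 10)
print(len(sampled))  # 60
```

Running an off-the-shelf stabilizer after this sub-sampling is exactly the baseline the abstract says breaks down: the camera motion between the surviving frames is already too erratic to smooth away.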


Ryan Seifert’s insight:

We have all seen the helmet videos from skydivers (if you haven’t, Jeb Corliss has one of the best); more recently, helmet cams have emerged for bicyclists, surfers, and even pets! I have even spotted helmet cameras on my jogs around my relatively mundane neighborhood. Normally these videos are watched at an increased speed (who wants to watch a 45-minute ride for the 30 seconds of action?), but the speed change is painful to view. Hyperlapse is a newly created method to stabilize and smooth out these videos.

Johannes Kopf, Michael Cohen, and Richard Szeliski developed the new method to generate the smoother video. The process (see the technical video below) is substantially more complicated than the familiar stabilizer functionality commonly used. The new system consists of three stages: scene reconstruction, path planning, and image-based rendering. Scene reconstruction allows the system to build a 3D model of the scene, leveraging multiple frames from the video to do so. This gives the system the ability to actually change the viewpoint in the resulting rendering, replacing abrupt viewpoint changes with smoother ones, and it is one of the key properties that allows the system to generate such silky-smooth videos. Path planning is split into two stages: the first optimizes for smooth transitions, length, and approximation (the path should stay near the input frames); the second optimizes for rendering quality. The resulting path can differ slightly from the path actually taken by the camera person (or pet!) but will still be approximately the same. The final step of the process is actually rendering the video. Because each new shot can differ slightly from the original video, the system merges multiple frames together, selecting the areas in each frame that yield the best quality in the resulting video.
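As a loose illustration of the first path-planning stage, here is a 1-D sketch that trades off smooth transitions against staying near the input camera positions. This is only a toy under my own simplifications (the actual system optimizes full 6-DOF camera poses and adds a second, rendering-quality stage); the weight and iteration count are arbitrary:

```python
def smooth_path(points, weight=5.0, iters=500):
    """Find a path p minimizing
        sum_i (p[i] - points[i])**2           # stay near the input
      + weight * sum_i (p[i+1] - p[i])**2     # smooth transitions
    via simple coordinate-wise relaxation, with endpoints pinned.
    """
    p = list(points)
    for _ in range(iters):
        for i in range(1, len(p) - 1):
            # Zeroing the derivative of the local objective at p[i]:
            p[i] = (points[i] + weight * (p[i - 1] + p[i + 1])) / (1 + 2 * weight)
    return p

shaky = [0.0, 2.0, 0.0, 2.0, 0.0, 2.0, 0.0]
print(smooth_path(shaky))  # far less jitter than the alternating 0/2 input
```

Raising `weight` pulls the path further from the input positions in exchange for smoothness, which is the same trade-off the approximation term in the paper's first stage controls.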

The result is quite amazing, but there are still some artifacts you can notice when watching the videos. Stepping through the video frame by frame, you will see objects suddenly appear, and the boundary areas where images are merged are easily identifiable. These artifacts are hard to notice at full speed, though.

The new technique is very resource intensive. The research paper mentions that it took roughly 305 hours to process a 10-minute video! Most of the computational time is consumed during source selection, which computes at roughly one minute per frame. I suspect that cloud computing (such as Amazon Web Services and Azure) will be heavily utilized to allow even a mobile phone app to be used in the video editing process. It will be interesting to see how this video editing will be used!
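The quoted figure checks out with some back-of-the-envelope arithmetic, assuming 30 frames per second of input video (the frame rate is my assumption, not stated in the post):

```python
# Rough check of the ~305-hour processing time quoted above.
minutes_of_video = 10
fps = 30                                    # assumed input frame rate
total_frames = minutes_of_video * 60 * fps  # 18,000 frames
minutes_per_frame = 1                       # quoted source-selection cost
hours = total_frames * minutes_per_frame / 60
print(total_frames, hours)  # 18000 frames -> 300.0 hours, near the ~305 reported
```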

Accurate Real-time Indoor Localization


Yesterday was the long-anticipated kick-off at San Francisco City Hall. Our CEO Hannes gave a brief presentation about what has been achieved in cooperation with San Francisco International Airport and The LightHouse during the 16-week Entrepreneurship-in-Residence Program of Mayor Ed Lee.

In early 2014, we were selected to help the San Francisco Airport (SFO) create a means for assisting blind and visually challenged travelers as they move from curb-to-gate in Terminal 2.


Ryan Seifert’s insight:

What a great idea! Audio alerts would go a long way towards providing more independence to visually-impaired travelers.

I have been waiting for this sort of update since I first heard of beacon technology last year. The open-source standard ‘AltBeacon’ should begin picking up steam soon, and we will find our shopping experience changed drastically (imagine getting a unique deal or timed special). I am happy to see it being applied to more altruistic areas as well.

The beacons are about the size of a bottle cap, cost 20 dollars apiece, and will reportedly run about four years without needing a battery change. The devices were painted and installed above eye level to reduce the visual impact of the roughly 300 beacons installed in the terminal.
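As for how an app turns these beacons into position estimates, a common starting point is the log-distance path-loss model relating received signal strength (RSSI) to distance. The sketch below uses illustrative calibration values, not the actual parameters of the SFO deployment:

```python
def estimate_distance(rssi, tx_power=-59, path_loss_exp=2.0):
    """Rough beacon distance (meters) from received signal strength,
    using the standard log-distance path-loss model:

        rssi = tx_power - 10 * n * log10(d)

    tx_power is the calibrated RSSI at 1 m; n is the path-loss
    exponent (~2 in free space, higher indoors). Both defaults are
    illustrative, not any particular beacon's calibration.
    """
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))

print(estimate_distance(-59))  # 1.0 m (at the 1 m calibration power)
print(estimate_distance(-79))  # 10.0 m
```

In practice RSSI is quite noisy indoors, so real systems smooth readings over time and fuse several beacons (trilateration or fingerprinting) rather than trusting a single distance estimate.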

There is a similar system in operation overseas at Heathrow Airport in London. The coverage does not seem to be quite as complete, as it limits the information it provides to simple notifications. One interesting difference is that the beacons at Heathrow also report local data back to an integrated system. This allows the system to send alerts to airport personnel, such as a drop in temperature or a beacon beginning to move for any reason.

Beacons are just beginning to be adopted in general areas. Shopping centers, museums, movie theaters (think about a partnership with RunPee!), and sports stadiums are all slowly starting to roll out beacon technology. It will be interesting when my phone can navigate me to the best price for a coffee or a specific sandwich while I am waiting for a plane or for my wife to finish shopping!

Judge orders Microsoft to turn over data held overseas

In a case closely watched in the United States and overseas, a federal judge in New York held Thursday that Microsoft must comply with a U.S. search warrant to turn over a customer’s e-mails held in a server overseas.
Judge Loretta Preska — in a surprise ruling from the bench — upheld a magistrate judge’s opinion in December ordering the Redmond, Wash., company to allow federal authorities to obtain copies of the data, which is stored in Ireland.


Ryan Seifert’s insight:
This is really a painful ruling for cloud providers operating in the States. While it was still possible to comply with such requests before, there were substantially more steps involved (Mutual Legal Assistance procedures). This ruling sidesteps those steps to grant quicker access.

The major concern with the ruling is that it could easily push international companies to select cloud hosts that are not US-based (for instance, a medical software service choosing to host with an OpenStack cloud provider rather than AWS or Azure).


On a good note, Microsoft is appealing again, and the judge actually suspended her order until the appeal is decided. It is not surprising to see Microsoft fight this; their cloud services have been growing quickly over the last couple of years. I would expect Amazon and Google to take the same position, as both also provide cloud-based services.

How I ended up conducting the most successful technical interviews with a single question


I conducted my first tech interview in 2008. At that time, the company already had a working process that I followed: interviews were one hour. The candidates would have 30 minutes to answer a 15-question quiz; then we would spend 15 minutes talking about their answers, plus an additional 15 minutes answering questions about the job. I quickly realized how terrible that questionnaire was.




Ryan Seifert’s insight:

An interesting article for me, since I believe I started interviewing at roughly the same time. We took similar paths (though I did not attempt to automate the process), migrating from a set of technical questions to a much more open-ended process.


I recall one particularly rough interview where, after I started into my 10 technical questions, the candidate quickly grew agitated. An agitated interviewee was a wholly new experience for me; I felt the blood drain from my face as he yelled, “I WON’T TAKE A TEST!” over the phone. Trying to regain my composure and calm him down, I let him know there was no score or grade; the questions were really discussion points to see how he coded and designed. The statement did little to soothe him, as I only heard a louder “I WON’T TAKE A TEST!” in response. I decided at this point that he was not a good cultural fit and started to bring the interview to a close. My salutation of “I appreciate your time and am sorry if I offended you. If you have any other questions for me, please ask.” was met with a quick click and a dial tone. Needless to say, this sparked a review of my interview process.


While the author settles on a single question, I have left mine at two:

  • What has been your favorite project to build and what challenges did you face during the development?
  • In your opinion, what has been the most interesting bug you have had to locate and squash?


The follow-up discussions on these questions easily lead into technical questions about the language they used and the design patterns they implemented. More importantly, they let you see which areas of software the candidate is most passionate about and interested in. Finding great talent and putting them into an environment where they can flourish will ensure excellent results. Switching to this open-ended format took me from around a 20% success rate (ouch) to more than 80%, with the remaining failures due to reasons other than technical prowess.


Looking back at the interview experience I touched on earlier: how differently could it have gone if I had asked my two questions and delved deeper into a subject dear to them?

Cross-Platform Mobile Development: PhoneGap vs Xamarin

Three different operating systems, three separate languages and development environments, and counting. To cater to all of these operating systems natively, a mobile app team needs an expert in each of these programming languages who is also an expert in the nuances of how each mobile operating system works: task lifecycles, multi-threading, memory limitations, garbage collection, etc.

Thus, to have one app developed in its native language and environment on each platform will take three times as long, tripling the cost of development.

Ryan Seifert’s insight:
An interesting article on which platform to use for different projects. We have been scouring the current state of cross-platform development before launching into a new mobile application, but we have run into many questions about which system to use.

This quick read does a fantastic job listing the strengths and weaknesses of these two popular platforms. I walked away from the article with a better understanding of the platforms and even a general overview of the architecture involved in the development.

The closing argument for utilizing Xamarin for larger projects was very sound and a point not often encountered in comparison articles. I am looking to build a couple of small apps to gain some first-hand experience with both; but given the size of the projects and the possibility for growth, the choice for now seems clear.