Probably the best way to view this video is with the annoying voiceover turned down.

Basically, what is going on here is that researchers at the University of Washington have come up with a way to use high-resolution still photos to provide detail to low-res video.  The results are definitely eye-catching.

[Video: “Enhancing and Experiencing Spacetime Resolution with Videos and Stills” on Vimeo]

I’ve noticed the artifacts and “slices” in some morphing footage in the past, and wondered what was up with that.

What this means is that a news reporter can, with Fred Flintstone-esque technology, capture some fantastically detailed footage … say, by duct-taping a 21-megapixel still camera to an HD or even SD video camera (the closer the two lenses are to each other, the less you’ll have to mess about with parallax) and shooting a series of stills to go along with the video footage.

Imagine how good slo-mo footage at a sporting event could be with this.  Or how amazing footage of, say, Obama’s inauguration would have been if you’d been able to zoom in from the Washington Monument all the way to a close-up of his hand on the Bible.

They say:

Our algorithm targets the emerging consumer-level hybrid cameras that can simultaneously capture video and high-resolution stills. Our technique produces a high spacetime resolution video using the high-resolution stills for rendering and the low-resolution video to guide the reconstruction and the rendering process.
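To make that a bit more concrete, here’s a toy sketch of my rough mental model, not the actual UW algorithm: estimate motion at video resolution, then use that motion to warp the high-resolution still into place for each low-res frame. It’s Python with OpenCV, and the function and variable names are my own invention.

    # Toy detail-transfer sketch (my rough mental model, not the UW method).
    # Idea: estimate dense motion at video resolution, then warp the
    # high-resolution still along that motion so its detail lands on each
    # low-res frame. Requires numpy and opencv-python; names are hypothetical.
    import cv2
    import numpy as np

    def enhance_frame(still_hi, frame_lo):
        """Warp a high-res grayscale still to line up with a low-res frame."""
        h, w = frame_lo.shape
        H, W = still_hi.shape
        scale = W / float(w)

        # Downsample the still to video resolution so it can be compared
        # directly with the frame.
        still_lo = cv2.resize(still_hi, (w, h), interpolation=cv2.INTER_AREA)

        # Dense optical flow from the frame to the still (backward warping:
        # each output pixel looks up where it came from in the still).
        flow = cv2.calcOpticalFlowFarneback(frame_lo, still_lo, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)

        # Upsample the flow field to still resolution and rescale the vectors.
        flow_hi = cv2.resize(flow, (W, H), interpolation=cv2.INTER_LINEAR) * scale

        # Sample the high-res still along the flow; the low-res video has
        # "guided" where the still's detail ends up.
        ys, xs = np.mgrid[0:H, 0:W].astype(np.float32)
        return cv2.remap(still_hi, xs + flow_hi[..., 0], ys + flow_hi[..., 1],
                         interpolation=cv2.INTER_LINEAR)

In a real pipeline you’d run something like this per frame, blend detail from several nearby stills, and deal with occlusions and temporal consistency, which is presumably where the clever parts of their reconstruction live.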

I say: using one camera is not an optimal solution. Most of the “hybrid” cameras that can capture both hi-res stills and video suffer from one big flaw: cheap glass.  The lenses on these cameras are not really up to the task.  Having a second still camera on hand to do the work would mean you’d get the benefit of a decent lens and its attendant sharpness.

It’ll be fun to see what happens in a few years, when this is commercially available as a plug-in for Final Cut or Premiere…
