Thanks for reading Part I of this article published at LiDAR News. Below I examine some of the plenoptic technology providers and offer some predictions about 3D imaging in 2014 and beyond. If you have been directed here from LiDAR News, feel free to skip ahead to the section titled Technology Providers below. Happy Holidays!
Light Field Cameras for 3D Capture and Reconstruction
Plenoptic cameras, or light field cameras, use an array of individual lenses (a microlens array) to capture the 4D light field of a scene. This lens arrangement means that multiple light rays can be associated with each sensor pixel, and synthetic cameras (created via software) can then process that information.
Phew, that’s a mouthful, right? It’s actually easier to visualize –
Image from Raytrix GmbH Presentation delivered at NVIDIA GTC 2012
This light field information can be used to help solve various computer vision challenges – for example, it allows images to be refocused after they are taken, low light performance to be substantially improved while maintaining an acceptable signal-to-noise ratio, or even a 3D depth map of a scene to be created. Of course, the plenoptic approach is not restricted to single images; plenoptic “video” cameras (with a corresponding increase in data captured) have been developed as well.
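To make the refocusing idea concrete, here is a minimal sketch of the classic shift-and-add synthetic refocusing described in Dr. Ng’s work, written in Python/NumPy. The (U, V, S, T) array layout and the refocus parameter alpha are my own illustrative assumptions, not any vendor’s API:

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add synthetic refocusing of a 4D light field.

    light_field: array of shape (U, V, S, T), one grayscale
    sub-aperture image of size (S, T) per (u, v) lens position.
    alpha: refocus parameter; 1.0 keeps the captured focal plane,
    values above/below move the synthetic focal plane in depth.
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its
            # angular offset from the optical centre, then average.
            du = (u - U / 2.0) * (1.0 - 1.0 / alpha)
            dv = (v - V / 2.0) * (1.0 - 1.0 / alpha)
            out += np.roll(light_field[u, v],
                           (int(round(du)), int(round(dv))), axis=(0, 1))
    return out / (U * V)
```

Averaging the shifted views simulates a large synthetic aperture: points on the chosen focal plane line up across views and stay sharp, while everything else blurs – which is exactly what makes post-capture refocusing possible.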
The underlying algorithms and concepts behind a plenoptic camera have been around for quite some time. A great technical backgrounder on this technology can be found in Dr. Ren Ng’s 2005 Stanford publication titled Light Field Photography with a Hand-Held Plenoptic Camera. He reviews the (then) current state of the art before proposing his solution targeted at synthetic image formation. Dr. Ng ultimately went on to commercialize his research by founding Lytro, which I discuss later. Another useful backgrounder is the technical presentation prepared by Raytrix (profiled below) and delivered at the NVIDIA GPU Technology Conference 2012.
In late 2010 at the NVIDIA GPU Conference, Adobe demonstrated a plenoptic camera system (hardware and software) they had been working on – while dated, it is a useful video to watch as it explains both the hardware and software technologies involved with light field imaging as well as the computing horsepower required. Finally, another interesting source of information and recent news on developments in the light field technology space can be found at the Light Field Forum.
Light field cameras have only become truly practical because of advances in lens and sensor manufacturing techniques, coupled with the massive computational horsepower unlocked by GPU-compute-based solutions. To me, light field cameras represent a very interesting step in the evolution of digital imaging, which until now has really been focused on improving what had been a typical analog workflow.
Light Field Cameras and 3D Reconstructions
Much of the recent marketing around the potential of plenoptic synthetic cameras focuses on the ability of a consumer to interact and share images in an entirely different fashion (i.e. changing the focal point of a captured scene). While that is certainly interesting in its own right, I am personally much more excited about the potential of extracting depth map information from light field cameras, and then using that depth map to create 3D surface reconstructions.
Pelican Imaging (profiled below) recently published a paper at SIGGRAPH Asia 2013 detailing exactly that – the creation of a depth map, which was then surfaced into a 3D reconstruction, using their own plenoptic hardware and software solution called the PiCam. This paper is published in full at the Pelican Imaging site; see especially pages 10-12.
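As a hedged illustration of that “surfacing” step, the sketch below back-projects a depth map into a 3D point cloud using standard pinhole camera intrinsics. This is the generic first step of any depth-map-to-surface pipeline, not Pelican’s actual PiCam implementation, and the parameter names are mine:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into a 3D point cloud.

    depth: (H, W) array of depths along the optical axis.
    fx, fy, cx, cy: pinhole intrinsics of the (synthetic) camera.
    Returns an (H*W, 3) array of XYZ points in the camera frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack((x, y, depth)).reshape(-1, 3)
```

From such a point cloud, a continuous surface can then be produced with standard meshing techniques (for example, Poisson surface reconstruction as implemented in the open source Point Cloud Library mentioned later in this article).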
There is a lot of ongoing research in this space; some of it uses traditional stereo imaging methods acting upon the data generated from the plenoptic lens array, while other work uses entirely different technical approaches for depth map extraction. A very interesting recent paper, presented at ICCV 2013 in early December 2013, titled Depth from Combining Defocus and Correspondence Using Light Field Cameras and authored by researchers from the University of California, Berkeley and Adobe, proposes a novel method for extracting depth data from light field cameras by combining two methods of depth estimation. The authors have made available their sample code and representative examples, and note in the Introduction:
The images in this paper were captured from a single passive shot of the $400 consumer Lytro camera in different scenarios, such as high ISO, outdoors and indoors. Most other methods for depth acquisition are not as versatile or too expensive and difficult for ordinary users; even the Kinect is an active sensor that does not work outdoors. Thus, we believe our paper takes a step towards democratizing creation of depth maps and 3D content for a range of real-world scenes.
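To give a flavour of the paper’s two-cue idea, here is a drastically simplified sketch: shear the light field for each candidate depth, score local sharpness (the defocus cue) and agreement across views (the correspondence cue), and keep the best-scoring depth per pixel. The actual method adds confidence measures and global regularization, which are omitted here, and all names are illustrative:

```python
import numpy as np

def shear_views(lf, alpha):
    """Shift each (S, T) sub-aperture view of a (U, V, S, T) light
    field in proportion to its angular offset – i.e. refocus onto
    the depth plane selected by alpha – returning the full stack."""
    U, V, S, T = lf.shape
    out = np.empty_like(lf)
    for u in range(U):
        for v in range(V):
            du = (u - U / 2.0) * (1.0 - 1.0 / alpha)
            dv = (v - V / 2.0) * (1.0 - 1.0 / alpha)
            out[u, v] = np.roll(lf[u, v],
                                (int(round(du)), int(round(dv))),
                                axis=(0, 1))
    return out

def depth_indices(lf, alphas):
    """Per-pixel index of the best candidate depth, combining cues."""
    scores = []
    for a in alphas:
        sheared = shear_views(lf, a)
        refocused = sheared.mean(axis=(0, 1))
        gy, gx = np.gradient(refocused)
        defocus_cue = np.abs(gx) + np.abs(gy)   # sharpness peaks in focus
        corresp_cue = sheared.var(axis=(0, 1))  # views agree at true depth
        scores.append(defocus_cue - corresp_cue)
    return np.argmax(np.stack(scores), axis=0)  # index into alphas
```

The appeal of combining the cues is that they tend to fail in different places, so the fused estimate is more reliable than either cue alone.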
Technology Providers
Let’s take a look at a non-exhaustive list of light field technology manufacturers – this is in no way complete, nor does it even attempt to cover all of the handset manufacturers and others who are incorporating plenoptic technologies, nor those who are developing “proxy” solutions to replicate some of the functionality which true plenoptic solutions offer (e.g. Nokia’s ReFocus app software). Apple recently entered the fray of plenoptic technologies when it was reported in late November that it had been granted a range of patents (originally filed in 2011) covering a “hybrid” light field camera setup which can be switched between traditional and plenoptic imaging.
Lytro
Lytro (@Lytro) was founded in 2010 by Dr. Ren Ng, building on research he started at Stanford in 2004. Lytro has raised a total of $90M: an original $50M round in mid-2011 from Andreessen Horowitz (@a16z, @cdixon), NEA (@NEAVC) and Greylock (@GreylockVC), and a new $40M round adding North Bridge Venture Partners (@North_Bridge). In early 2012 Lytro began shipping its consumer-focused light field camera system; later that year Dr. Ng stepped down as CEO (he remains the Chairman), with the current CEO, Jason Rosenthal, joining in March 2013.
Inside the Lytro Camera from Lytro
I would suspect that Lytro is pivoting from a pure focus on a consumer camera toward the development of an imaging platform and infrastructure stack (including cloud services for interaction) that it, along with third-party developers, can leverage. This may also have been the strategy all along – in many cases, to market a platform you first have to demonstrate to the market how the platform can be expressed in an application. Jason Rosenthal seems to acknowledge as much in an interview published in the San Francisco Chronicle’s SF Gate blog in August 2013 (prior to their most recent round being publicly announced), where he is quoted as saying that the long-term Lytro vision is to become “the new software and hardware stack for everything with a lens and sensor. That’s still cameras, video cameras, medical and industrial imaging, smartphones, the entire imaging ecosystem.” Jonathan Heiliger, a general partner at North Bridge Venture Partners, supports that vision in his quote backing their participation in the latest $40M round: “[t]he fun you experience when using a Lytro camera comes from the ability to engage with your photos in ways you never could before. But powering that interactivity is some great software and hardware technology that can be used for everything with a lens and a sensor.”
I am of course intrigued by the suggestion from Rosenthal that Lytro could be developing solutions useful for medical and industrial imaging. If you are Pelican Imaging, you are of course focusing on the comments relating to “smartphones.”
Pelican Imaging
Image from Pelican Imaging
Pelican Imaging (@pelicanimaging) was founded in 2008 and its current investors include Qualcomm (@Qualcomm), Nokia Growth Partners, Globespan Capital Partners (@Globespancap), Granite Ventures (@GraniteVentures), InterWest Partners (@InterwestVC) and IQT. Pelican Imaging has raised more than $37M since inception and recently extended its Series C round by adding an investment (undisclosed amount) from Panasonic in August 2013. Interesting to me is of course the large number of handset manufacturers who have participated in earlier funding rounds, as well as early investment support from In-Q-Tel (IQT), an investment arm aligned with the United States Central Intelligence Agency.
Pelican Imaging has been pretty quiet from a marketing perspective until recently, but no doubt with their recent additional investment from Panasonic and other hardware manufacturers they are making a push to become the embedded plenoptic sensor platform.
Raytrix
Raytrix is a German developer of plenoptic cameras, and has been building them since 2009. It has, up until now, primarily focused on using this technology for a host of industrial imaging solutions, and offers a range of complete plenoptic camera systems. A detailed presentation explaining their solutions can be found on their site, and a very interesting video demonstration of the possibilities of a plenoptic video approach for creating 3D videos is hosted at the NVIDIA GPU Technology Conference website. Raytrix has posted a nice example of how they created a depth map and 3D reconstruction using their camera here. Raytrix plenoptic video cameras can also be used for particle image velocimetry (PIV), a method of measuring velocity fields in fluids by tracking how particles move across time; Raytrix has a video demonstrating these capabilities here.
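For readers curious what PIV looks like computationally, below is a minimal 2D sketch of the textbook FFT cross-correlation step for a single interrogation window. Raytrix’s plenoptic PIV extends the idea to 3D volumes; the code here is a generic illustration rather than their implementation:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the particle displacement between two interrogation
    windows cut from the same region of frames t and t + dt, by
    locating the peak of their FFT-based cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Correlation theorem: corr(a, b) = IFFT(conj(FFT(a)) * FFT(b))
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the wrapped peak coordinates back to signed pixel shifts.
    dy = peak[0] if peak[0] <= corr.shape[0] // 2 else peak[0] - corr.shape[0]
    dx = peak[1] if peak[1] <= corr.shape[1] // 2 else peak[1] - corr.shape[1]
    return dx, dy  # divide by the frame interval to get velocity
```

Repeating this over a grid of interrogation windows yields the velocity field of the flow.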
The Future
For 2014, I believe we will see the following macro-level trends develop in the 3D capture space (these were originally published here).
- Expansion of light field cameras – Continued acceleration in 3D model and scene reconstruction, both small and large scale, using depth sense and time-of-flight cameras, but with an expansion into light field cameras (i.e. those from Lytro, Pelican Imaging, Raytrix, as proposed by Apple, etc.).
- Deprecation of 3D capture hardware in favor of solutions – We will see many companies which had been focusing mostly on data capture pivot more towards a vertical application stack, deprecating the 3D capture hardware (as it becomes more and more ubiquitous – i.e. plenoptic cameras combined with smartphones offering RTK-GPS accuracy).
- More contraction in the field due to M&A – Continued contraction of players in the capture/modify/make ecosystem, with established players in the commercial 3D printing and scanning market moving into the consumer space (e.g. Stratasys acquiring MakerBot, getting both a 3D scanner and a huge consumer ecosystem with Thingiverse), and with both ends of the market collapsing inward to offer more complete solutions from capture to print (e.g. 3D printing companies buying 3D scanner hardware and software companies, and vice versa).
- Growing open source software alternatives – Redoubled effort on community-sourced 3D reconstruction libraries and application software (e.g. the Point Cloud Library and MeshLab), with perhaps even an attempt made to commercialize these offerings (along the lines of the Red Hat model).
- 3D Sensors everywhere – Starting in 2014, but really accelerating in the years that follow, 3D sensors everywhere (in phones, augmented reality glasses, and our cars) will constantly capture, record and report depth data – the beginnings of a crowd-sourced 3D world model.
Over time, I believe that light field cameras will grow to have a significant place in the consumer acquisition of 3D scene information via mobile devices. They have the benefit of a relatively small form factor, are a passive imaging system, and can be used in a workflow which consumers already know and understand. They are of course not a panacea, and currently suffer limitations similar to those of photogrammetry and stereo reconstruction when targets are not used (e.g. difficulty in accurately computing depth data in scenes without much texture, accuracy that depends on the depth of the scene from the camera, etc.), but novel approaches to extracting more information from the 4D light field hold promise for capturing more accurate 3D depth data from light field cameras.
For consumers, and consumer applications driven from mobile, I predict that light field technologies will take a significant share of sensor technologies, where accuracy is a secondary consideration (at best) and the ease of use, form factor and the “eye candy” quality of the results are most compelling. Active imaging systems, like those which Apple acquired from PrimeSense, certainly have a strong place in the consumer acquisition of 3D data, but in mobile their usefulness may be limited by the very nature of the sensing technology (e.g. relatively large power draw and form factor, sensor confusion in the presence of multiple other active devices, etc.).