LazeeEye – 3D Capture Device Phone Add-On

There has been a continuing, strong push on the consumer/prosumer 3D reality capture side of the capture/modify/make ecosystem – whether the captured content feeds an object-based or a scene-based scanning workflow.  New processing algorithms, along with orders-of-magnitude improvements in processing power, are unlocking new capabilities.

DIY scanning solutions have been around for a while – ranging from pure photogrammetric approaches, to home-built structured light/laser scanning setups (e.g. see the recommendations which DAVID 3D Solutions GbR makes on the selection of scanning hardware), to leveraging commercial depth sense cameras in interesting new ways (e.g. using PrimeSense, SoftKinetic or other devices to create a 3D depth map, or utilizing light field cameras for 3D reconstruction).  Occipital raised over $1M in their Kickstarter campaign to develop the Structure Sensor (which is powered by PrimeSense technology), a hardware attachment for Apple devices, and 3D Systems is white labeling that solution.   Google has been working on Project Tango with its project partners (and apparently Apple – because the Project Tango prototype included PrimeSense technology, and Apple now owns PrimeSense)!

Early in 2014 I looked at the various market trends impacting the capture/modify/make ecosystem — the explosion of low cost, easy to use 3D reality capture devices (and the associated software solution stacks and hardware processing platforms) was key among them –

2014 Market Trends

For a graphical look at how some of the lower cost sensors have evolved over time, see:

3D Sensor Progression

Along comes an interesting Kickstarter project from Heuristic Labs for the LazeeEye, which so far has raised roughly $67K (on a goal of $250K).  The LazeeEye is a laser emitter that attaches to a phone and flashes a pattern of light onto the object or scene to be captured; stereo vision processing software on the phone then creates/infers a depth map from that.  According to Heuristic Labs, the creators of the LazeeEye:

LazeeEye? Seriously? The name “LazeeEye” is a portmanteau of “laser” and “eye,” indicating that your phone’s camera (a single “eye”) is being augmented with a second, “laser eye” – thus bestowing depth perception via stereo vision, i.e., letting your smartphone camera see in 3D just like you can!

The examples provided in the funding video are pretty rough, and because it is a “single shot” solution, only those surfaces visible from the camera viewpoint are captured.  To capture a full scene, multiple shots would need to be taken, registered and then stitched together (a rough sketch of that registration step follows the excerpt below).   This is not a problem unique to this solution – it is a known limitation of “single shot” approaches.  More from the LazeeEye Kickstarter project pages:

How does LazeeEye work? The enabling technology behind LazeeEye is active stereo vision, where (by analogy with human stereo vision) one “eye” is your existing smartphone camera and passively receives incoming light, while the other “eye” actively projects light outwards onto the scene, where it bounces back to the passive eye. The projected light is patterned in a way that is known and pre-calibrated in the smartphone; after snapping a photo, the stereo vision software on the phone can cross-reference this image with its pre-calibrated reference image. After finding feature matches between the current and reference image, the algorithm essentially triangulates to compute an estimate of the depth. It performs this operation for each pixel, ultimately yielding a high-resolution depth image that matches pixel-for-pixel with the standard 2D color image (equivalently, this can be considered a colored 3D point cloud). Note that LazeeEye also performs certain temporal modulation “magic” (the details of which we’re carefully guarding as a competitive advantage) that boosts the observed signal-to-noise ratio, allowing the projected pattern to appear much brighter against the background.

Note that a more in-depth treatment of active stereo vision can be found in the literature: e.g., http://www.willowgarage.com/sites/default/files/ptext.pdf and https://cvhci.anthropomatik.kit.edu/~manel/publications/mva2013RGBD.pdf
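To make the triangulation step concrete, here is a deliberately naive sketch of the core idea – matching each pixel’s neighborhood in the observed pattern image against a pre-calibrated reference image along the scanline, then converting the resulting disparity to depth via Z = f·b/d.  The calibration constants, window sizes and function names are my own illustrative assumptions, not Heuristic Labs’ implementation (which, per the quote above, also layers proprietary temporal modulation on top):

```python
# Toy illustration of active-stereo depth recovery: match each pixel of the
# observed pattern image against a pre-calibrated reference image along the
# (assumed horizontal) epipolar line, then triangulate depth from disparity.
# All constants below are assumptions for illustration only.
import numpy as np

FOCAL_PX = 600.0     # camera focal length in pixels (assumed calibration value)
BASELINE_M = 0.05    # camera-to-laser-emitter baseline in meters (assumed)
PATCH = 5            # half-width of the matching window
SEARCH = 64          # maximum disparity searched, in pixels

def depth_from_pattern(observed, reference):
    """Per-pixel depth via brute-force window matching (SAD) along scanlines."""
    obs = observed.astype(np.float32)   # avoid uint8 wrap-around in the SAD
    ref = reference.astype(np.float32)
    h, w = obs.shape
    depth = np.zeros((h, w), dtype=np.float32)
    for y in range(PATCH, h - PATCH):
        for x in range(PATCH + SEARCH, w - PATCH):
            win = obs[y-PATCH:y+PATCH+1, x-PATCH:x+PATCH+1]
            best_d, best_cost = 0, np.inf
            for d in range(SEARCH):     # slide along the scanline
                cand = ref[y-PATCH:y+PATCH+1, x-d-PATCH:x-d+PATCH+1]
                cost = np.abs(win - cand).sum()   # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            if best_d > 0:
                depth[y, x] = FOCAL_PX * BASELINE_M / best_d   # Z = f*b/d
    return depth
```

A production implementation would use epipolar rectification, subpixel matching and an optimized correlation search rather than this brute-force triple loop, but the geometry is the same.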
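And as mentioned above, turning several of these single-shot depth maps into a full scene means registering and stitching the partial point clouds.  Here is a minimal sketch of pairwise rigid registration (classic ICP, in NumPy/SciPy) – again my own illustrative code, not part of the LazeeEye stack:

```python
# Minimal sketch of pairwise rigid registration (ICP) for stitching two
# partial point clouds from separate single-shot captures.
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B (Kabsch)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(source, target, iterations=50, tol=1e-6):
    """Iteratively align `source` (Nx3 point cloud) to `target` (Mx3)."""
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(src)           # closest-point correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t                   # apply the incremental transform
        err = dist.mean()
        if abs(prev_err - err) < tol:         # stop when alignment stabilizes
            break
        prev_err = err
    return src
```

Real stitching pipelines add a coarse initial alignment, outlier rejection and global multi-view optimization on top of this pairwise step.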

[Side note: I found it interesting that Heuristic Labs is using Sketchfab to host its 3D models – yet another 3D content developer/provider leveraging this great technical solution for 3D content sharing.]

Depending on the funding level you select during the campaign, you get different hardware – varying laser colors (which impact scan quality), whether the laser comes pre-aligned, SDK access, etc.  They readily acknowledge that 3D capture technology will become more ubiquitous in the coming years with the next generations of smartphones (whether powered by active technology like the PrimeSense solutions or by passive solutions such as light field cameras).  Their answer: why wait?  (And even if you wanted to wait, their solution is more cost effective.)

Why wait indeed.  This is an interesting application of existing techniques in a cheap, approachable package for the DIY consumer; I will be curious to see how this campaign finishes up.

[Second side note: I guess my idea of hacking the newest generation of video cameras with built-in DLP projectors (like those Sony makes) to create a structured light video solution is worth pursuing.  The concept?  Use the onboard projector to emit patterns of structured light, capture the result with the onboard CCD, and process it on a laptop, in the cloud, on your camera, etc.  Voilà – a cheap 3D capture device that you take with you on your next vacation.  Heck, if you are going to do that, why not just mount a DLP pico projector directly to your phone and do the same thing. . .  ;-)]
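For what it’s worth, the projector-plus-camera idea in that side note maps directly onto classic Gray-code structured light: project log2(width) stripe patterns, photograph each one, and decode a projector column index per camera pixel, which can then be triangulated into depth just like the disparity example above.  A rough sketch of the pattern/decode half (projector and camera I/O stubbed out; resolutions and names are my assumptions):

```python
# Rough sketch of the Gray-code structured-light idea from the side note:
# project one binary stripe pattern per bit of the Gray-coded projector
# column, photograph each, then decode a column index per camera pixel.
import numpy as np

PROJ_W = 1024        # projector width in pixels (assumed)
NUM_BITS = 10        # log2(PROJ_W) stripe patterns

def gray_code_patterns(width=PROJ_W, height=768):
    """One stripe image per bit of the Gray-coded column index (MSB first)."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                       # binary -> Gray code
    patterns = []
    for bit in reversed(range(NUM_BITS)):
        row = ((gray >> bit) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(row, (height, 1)))  # vertical stripes
    return patterns

def decode(captured, threshold=128):
    """Recover the projector column seen by each camera pixel.

    `captured` is the list of camera images, in the same order as the
    projected patterns; in practice you would also project the inverse of
    each pattern for a robust per-pixel threshold."""
    h, w = captured[0].shape
    gray = np.zeros((h, w), dtype=np.int64)
    for img in captured:
        gray = (gray << 1) | (img > threshold)      # rebuild the Gray code
    col = gray.copy()                               # Gray -> plain binary
    shift = 1
    while shift < NUM_BITS:
        col ^= col >> shift
        shift <<= 1
    return col
```

With the camera/projector pair calibrated, each recovered (camera pixel, projector column) correspondence triangulates to a 3D point – the same math the LazeeEye relies on, just with a projector instead of a laser emitter.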