[Update: Apple has confirmed the acquisition of PrimeSense for roughly $350M; when this was originally published, the acquisition was still only rumored.]
It has been reported that Apple (@Apple) has acquired PrimeSense (@GoPrimeSense) for $345M.
I have been long on PrimeSense’s depth sensing cameras for a while – I started following them in the months leading up to the original launch of the Microsoft Kinect in the “Project Natal” days (late 2009). Photogrammetry was always interesting to me as an approach to create 3D models – but the reconstructions tended to fail frequently (and without warning) and always required post-processing.
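Unlike photogrammetry, PrimeSense-class sensors take an active approach: they project a known infrared dot pattern and triangulate depth from the observed shift (disparity) of that pattern. The exact PrimeSense pipeline is proprietary, but the underlying stereo-triangulation relationship can be sketched in a few lines (the focal length, baseline, and disparity numbers below are illustrative assumptions, not actual calibration values):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic triangulation: depth = f * B / d.

    focal_px: focal length in pixels
    baseline_m: projector-to-camera baseline in meters
    disparity_px: observed pattern shift in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 580 px focal length, 7.5 cm baseline,
# 20 px observed disparity -> depth of roughly 2.175 m
print(depth_from_disparity(580.0, 0.075, 20.0))
```

Note the inverse relationship: depth resolution degrades with distance, which is why these sensors are specified for short ranges and why accuracy claims (like DotProduct's below) are always quoted at a specific distance.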
My interest in PrimeSense technology was primarily twofold: (1) to find a way to leverage the installed base of Microsoft Kinect devices as 3D capture devices (as well as the Xbox Live payment infrastructure) and (2) to build an inexpensive stand-alone 3D scanner based on PrimeSense technology. I was only more interested after Microsoft published their real-time scene reconstruction research known as KinectFusion. Hacks like Harvard's "Drill of Depth" (a Kinect made mobile by attaching it to a battery-powered drill, screen and software, circa early 2011) only further piqued my interest in the possibilities.
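The core of KinectFusion is integrating each incoming depth frame into a truncated signed distance function (TSDF) volume via a running weighted average, which averages away per-frame sensor noise. A heavily simplified single-voxel sketch of that update rule (real KinectFusion operates over a full 3D voxel grid on the GPU; the sample values here are illustrative):

```python
def tsdf_update(tsdf, weight, sample, max_weight=64.0):
    """KinectFusion-style weighted running average for one voxel.

    tsdf: current truncated signed distance stored in the voxel
    weight: accumulated observation weight
    sample: truncated signed distance measured from the new depth frame
    """
    new_tsdf = (tsdf * weight + sample) / (weight + 1.0)
    new_weight = min(weight + 1.0, max_weight)  # cap so the model can adapt
    return new_tsdf, new_weight

# Fusing noisy observations of a surface lying at distance 0.0:
d, w = 0.0, 0.0
for sample in [0.02, -0.01, 0.01, -0.02]:
    d, w = tsdf_update(d, w, sample)
# d converges toward 0.0 as the noise averages out; w counts observations
print(d, w)
```

The surface is then extracted where the fused TSDF crosses zero, which is what makes real-time, drift-resistant scene reconstruction possible from a noisy $150 sensor.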
The writing was on the wall for PrimeSense after Microsoft decided to abandon PrimeSense technology and develop their own depth sensing devices for use with the new Xbox One. PrimeSense had to transition from a lucrative relationship with one large customer (~30M+ units) to a developer of hardware and firmware solutions seeking broader markets. The OpenNI initiative (an open source project, primarily sponsored by PrimeSense, to develop a middleware SDK for 3D sensors) was an attempt to broaden the potential pool of third-party developers who would ultimately build solutions around PrimeSense technologies.
There are many PrimeSense powered 3D scanners in the market today – it will be interesting to see whether this pool expands or contracts after the planned Apple acquisition (e.g. will the direction be inward, focusing PrimeSense technology on delivery with Apple-only devices, or will they continue to court third-party developers across all types of hardware and software solutions?). The new PrimeSense Capri form factor already allows for entirely new deployment paradigms for this technology; within one more generation the sensors will have shrunk enough to be comfortably embedded directly in phones and tablets (though with a trade-off in data quality if the sensor shrinks too much).
Here is a quick run-down of a non-exhaustive list of PrimeSense powered 3D scanner hardware technology and vendors (note, this isn't a profile of the universe of software companies that offer solutions around 3D model and scene reconstruction – as there are many):
Standard Microsoft Kinect – the initial movement toward using the PrimeSense technology as a 3D scene reconstruction device came from hacks to the original Microsoft Kinect. The Kinect was hacked to run independently of the Xbox, and ultimately Microsoft decided to embrace these hacks and develop a standalone Kinect SDK.
Microsoft Kinect for PC – Microsoft began selling a Kinect which would interface directly with Windows devices; it also enabled a "near" mode for the depth camera.
Asus XTION (Pro) – This is an Asus OEM of the PrimeSense technology which provides essentially the same functional specifications as delivered in the Microsoft Kinect (they use the same PrimeSense chipset and reference design).
Matterport – Matterport (@Matterport) has raised $10M since the middle of 2012 to develop a camera system, software and cloud infrastructure for scanning interior spaces. The camera system itself is built around PrimeSense technologies (along with 2D cameras to capture higher quality images to be referenced to the 3D reconstruction created from the PrimeSense cameras). Most interesting to me is that Matterport counts Red Swan and Felicis Ventures as investors, both of which are also invested in Floored (see below). A few days ago Forbes profiled the use of the Matterport system; the article is worth a read.
Floored – Floored (@Floored3D), formerly known as Lofty, concentrates primarily on developing software to help visualize interior spaces and is concentrating first on the commercial real estate industry. Floored has now raised a little over $1M, including from common investors with Matterport. For more on the relationship between Matterport and Floored, see this TechCrunch article. Floored's CEO is Dave Eisenberg, and he gave a great presentation at the TechCrunch NYC Startup Battlefield in late April 2013 explaining Floored's value proposition. Floored is definitely filled with brilliant minds, and obviously a whole lot of computer vision folks who understand how difficult it is to attempt to automatically generate 3D models of interior spaces from scan data (of any quality). To get a sense of what they are currently thinking about, check out the Floored blog.
Lynx A – This was an offering from a start-up in Austin, Texas known as Lynx Labs (@LynxLabsATX), which launched an early 2013 KickStarter campaign for an "all in one" point and shoot 3D camera. This device combined a sensor, a computing device and software to allow for the real time capturing and rendering of 3D scenes. The first round of devices shipped in the middle of September 2013. I do not know for sure, but my assumption is that this device is PrimeSense powered.
DotProduct DPI-7 – DotProduct (@DotProduct3D) offers the DPI-7 scanner. As with the Lynx A camera, this is a PrimeSense powered device, combined with a Google Nexus tablet and their scene reconstruction software called Phi.3D. DotProduct claims 2-4mm accuracy at 1m, achieved through a combination of individual sensor calibration, their software, and rejecting sensors which do not achieve spec. DotProduct announced in late October 2013, at the Intel Capital Global Summit, that Intel Capital (@IntelCapital) had made a seed investment into DotProduct, spearheaded by Intel's Perceptual Computing Group.
Occipital Structure Sensor – Occipital (@occipital) is an extremely interesting company based in Boulder and San Francisco, filled with amazing computer vision expertise. After cutting their teeth on some computer vision applications for generating panoramas on Apple devices, they have bridged into a complete hardware and software stack for 3D data capture and model creation. Occipital counts the Foundry Group (@foundrygroup) as one of its investors (having invested roughly $7M into Occipital in late 2011). Occipital completed a very successful KickStarter campaign for its Structure Sensor raising nearly $1.3M.
The Structure Sensor is a PrimeSense powered device which is officially supported on later generation Apple iPad devices. What is compelling is Occipital's approach to create an entire developer ecosystem around this device – no doubt building on the Skanect (@Skanect) technology they acquired from ManCTL in June of 2013. Skanect was one of the best third party applications available which had implemented and made available the Microsoft Fusion technology (allowing for real time 3D scene reconstruction from depth cameras). If it is true, and Apple in fact does buy PrimeSense, then that is potentially problematic for Occipital's current development direction if Apple has aspirations for embedding this technology in mobile devices (as opposed to Apple TV). If Apple did want to embed the technology in its iDevices, Occipital would then seem to become an immediately interesting acquisition target (in one swoop you get the hardware and, most importantly, the computer vision software expertise). Given the depth of talent at Occipital, I'm sure things are going to work out just fine.
Sense™ 3D Scanner by 3D Systems – This is the newest 3D scanner entrant in this space (announced a few weeks ago) delivered by 3D Systems (@3dsystemscorp), which acquired my former company, Geomagic. The Sense uses the new PrimeSense Carmine sensor – a further evolution of the PrimeSense depth camera technology, allowing for greater depth accuracy across more pixels in the field (and ultimately reconstruction quality). PrimeSense has a case study on the Sense.
What Are Competitive/Replacement Technologies for PrimeSense Depth Sensors?
In my opinion, the closest competitor in the market today to PrimeSense technologies is a company called SoftKinetic (@softkinetic), with their line of DepthSense cameras, sensors, middleware and software.
On paper, the functional specifications of these devices stack up well against the PrimeSense reference designs. Unlike PrimeSense, SoftKinetic sells complete cameras, as well as modules and associated software and middleware. SoftKinetic uses a time-of-flight (ToF) approach to capture depth data (different from PrimeSense's structured-light approach). SoftKinetic has provided middleware to Sony for the PS4, giving third-party developers a layer for creating gesture tracking applications using the PlayStation(R)Camera for PS4. SoftKinetic announced a similar middleware deal with Intel, to accelerate perceptual computing, in the early summer of 2013 too.
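Where structured light triangulates a projected pattern, a continuous-wave ToF camera like SoftKinetic's modulates its light source and recovers depth from the phase shift of the returned signal: d = c·Δφ / (4π·f_mod). A minimal sketch of that relationship (the 30 MHz modulation frequency below is an illustrative assumption, not a SoftKinetic spec):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad, mod_freq_hz):
    """Continuous-wave time-of-flight: depth = c * phase / (4 * pi * f_mod).

    The measurement is only unambiguous up to c / (2 * f_mod),
    beyond which the phase wraps around.
    """
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# Illustrative: 30 MHz modulation, pi/2 phase shift -> roughly 1.249 m
print(tof_depth(math.pi / 2, 30e6))
```

One practical consequence of the formula: raising the modulation frequency improves depth precision but shortens the unambiguous range, so ToF vendors tune f_mod to the target use case (close-range gesture tracking vs. room-scale sensing).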
There are other companies in the industrial imaging space (which presently develop machine vision cameras or other time-of-flight scanners) that could provide consumer devices if they chose to (e.g. PMD Technologies in Germany).
I believe the true replacement technology for 3D data acquisition and reconstruction, at least in the consumer space, will come from light field cameras as a class providing range data (i.e. z-depth), and not necessarily from active imaging solutions. See my thoughts on this below.
Predictions for 2014 and Beyond
Early in 2013, when I was asked by my friends at Develop3D to predict what 2013 would bring, I said:
In 2013 we will move through the tipping point of the create/modify/make ecosystem.
Low cost 3D content acquisition, combined with simple, powerful tools will create the 3D content pipeline required for more mainstream 3D printing adoption.
Sensors, like the Microsoft Kinect, the LeapMotion device, and [Geomagic, now 3D Systems’] Sensable haptic devices, will unlock new interaction paradigms with reality, once digitized.
Despite the innovation, intellectual property concerns will abound, as we are at the dawn of the next ‘Napster’ era, this one for 3D content.
I believe much of that prediction has come/is coming true.
For 2014 I believe we will see the following macro-level trends in the 3D capture space:
- Expansion of light field cameras – Continued acceleration in 3D model and scene reconstruction (both small and large scale, using depth sense and time-of-flight cameras), with an expansion into light field cameras (e.g. Lytro (@Lytro) and Pelican Imaging (@pelicanimaging)).
- Deprecation of 3D capture hardware in favor of solutions – We will see many companies which had been focusing mostly on data capture pivot more toward a vertical applications stack, deprecating the 3D capture hardware (as it becomes more and more ubiquitous).
- More contraction in the field due to M&A – Continued contraction of players in the capture/modify/make ecosystem, with established players in the commercial 3D printing and scanning market moving into the consumer space (e.g. Stratasys acquiring MakerBot, getting both a 3D scanner and a huge consumer ecosystem with Thingiverse) and with both ends of the market collapsing inward to offer more complete solutions from capture to print (e.g. 3D printing companies buying 3D scanner hardware and software companies, and vice versa).
- Growing open source alternatives – Redoubled effort on community sourced 3D reconstruction libraries and application software (e.g. the Point Cloud Library (PCL) and MeshLab), with perhaps even an attempt made to commercialize these offerings (like the Red Hat model).
- 3D Sensors everywhere – Starting in 2014, but really accelerating in the years that follow, 3D sensors everywhere (phones, augmented reality glasses, in our cars) which will constantly capture, record and report depth data – the beginnings of a crowd sourced 3D world model.
The Use of Light Field Cameras for 3D Data Acquisition and Reconstruction Will Explode
While the use of light field cameras to create 3D reconstructions is just in its infancy – much like the PrimeSense technology, which was designed for an interaction paradigm, not for capturing depth data – I can see (no pun intended) this one coming. Light field cameras have the strong benefit of being a passive approach to 3D data acquisition (like photogrammetry). For what is possible in depth map creation from these types of camera systems, check out this marketing video from Pelican Imaging (note the 3D Systems Cube 3D printer) and a more technical one here.
Image from Pelican Imaging.
I will have a separate post looking in more depth at light field cameras as a class, including Lytro's recent new $40M round of funding and the addition of North Bridge. I believe that, after refinement, they will ultimately become a strong solution for 3D content capture on consumer mobile devices because of their size, power needs, passive approach, etc. In the interim, if you have an interest in this space you should read the Pelican Imaging presentation recently given at SIGGRAPH Asia on the PiCam and reproduced in full at the Pelican Imaging site. Fast forward to pages 10-12 of this technical presentation for an example of using the Pelican Imaging camera to produce a depth map which is then surfaced.
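Whatever the sensor – structured light, time of flight, or a light field array – the "surfacing" step starts the same way: each depth map pixel is back-projected through the pinhole camera model into a 3D point, and the resulting point cloud is then meshed. A minimal sketch of that unprojection (the intrinsics below are illustrative values typical of a VGA depth camera, not any specific device's calibration):

```python
def unproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of one depth pixel into a 3D camera-space point.

    (u, v): pixel coordinates; depth: metric depth at that pixel
    (fx, fy): focal lengths in pixels; (cx, cy): principal point
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Illustrative intrinsics: 525 px focal length, 640x480 image,
# principal point at the image center
point = unproject(u=320, v=240, depth=1.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(point)  # the principal-point pixel maps straight ahead: (0.0, 0.0, 1.0)
```

Run over every valid pixel, this yields the organized point cloud that reconstruction software like Phi.3D or Skanect fuses across frames.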
What could ultimately be game changing is if we find updated and refined depth sense technology embedded and delivered directly with the next series of smartphones and augmented reality devices (e.g. Google Glass). In that world, everyone has a 3D depth sensor, everyone is capturing data, and the potential is limitless for applications which can harvest and act upon that data once captured.
Let the era of crowd sourced world 3D data capture begin!
(. . . but wait, who owns that 3D world database once it is created? . . .)
This article was originally published on DEVELOP3D on November 18th, 2013; it has been modified since that original posting.