PrimeSense 3D reconstruction tech in next Apple devices to power AR/VR?

Can it be? Is the multiple year wait finally over?

Apple acquired PrimeSense and their range of depth sensing camera technologies in late 2013 for roughly $350M. PrimeSense had provided the technology behind the original Microsoft Kinect. The PrimeSense tech (whether exploited in the Kinect or as a standalone device like the Structure Sensor from Occipital) kicked off a wave of cheap 3D capture devices and software leveraging the ground-breaking KinectFusion paper from Microsoft Research. I blogged about the Apple acquisition and its potential impact on 3D capture in late 2013 in Apple Buys Tech Behind Microsoft Kinect (PrimeSense) – 3D Scanning Impact.
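For readers who never dug into the KinectFusion paper: its core idea is to fuse each noisy depth frame into a truncated signed distance function (TSDF) volume using a weighted running average per voxel. Here is a toy, heavily simplified sketch of that per-voxel update; the function name, parameters, and truncation value are my own illustrative choices, not anything from the paper's actual implementation:

```python
import numpy as np

def update_tsdf(tsdf, weights, depth, voxel_z, trunc=0.05):
    """Fuse one depth frame into a TSDF volume (simplified KinectFusion-style update).

    tsdf, weights : (N,) arrays of current signed-distance values and fusion weights
    depth         : (N,) measured depth along the camera ray through each voxel
    voxel_z       : (N,) each voxel's distance from the camera along that ray
    trunc         : truncation distance in meters (illustrative value)
    """
    # Signed distance from voxel to the observed surface, clamped to +/- trunc
    sdf = np.clip(depth - voxel_z, -trunc, trunc)
    # Only update voxels in front of, or just behind, the observed surface
    valid = (depth - voxel_z) > -trunc
    # Weighted running average of per-frame measurements (new frame has weight 1)
    new_weights = weights + valid
    fused = np.where(valid,
                     (tsdf * weights + sdf) / np.maximum(new_weights, 1),
                     tsdf)
    return fused, new_weights
```

Averaging many cheap, noisy frames this way is what let a $150 sensor produce surprisingly smooth reconstructions, and it is the lineage behind most of the consumer 3D capture apps that followed.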

In 2017 a version of the PrimeSense tech stack (since rebranded as the “TrueDepth” camera) was incorporated into the iPhoneX as the core technology behind 3D facial recognition. I had hoped that this limited use (roughly 30,000 points, in a narrow field of view) would quickly expand into a “general purpose” 3D scanner for objects and scenes, as I wrote in Apple’s iPhoneX – Bringing PrimeSense Technology to the Masses. Well, I was wrong. Despite tremendous progress by Apple with ARKit (get ready for ARKit 3!), there has been little public progress on the use of active imaging systems within the Apple hardware ecosystem for 3D capture (as a predicate for AR/VR, among other things). Active imaging systems have historically been power hungry on the capture side of the equation, as opposed to passive reconstruction solutions (e.g. photogrammetry, light field cameras, etc.), which require more compute to deliver the reconstruction.
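For those curious how an active structured-light system like TrueDepth recovers depth at all: the projector casts a pattern of IR dots, and each dot's observed shift (disparity) relative to a reference pattern is converted to depth by plain triangulation. A minimal sketch of that relation, with all numbers purely illustrative and nothing here specific to Apple's implementation:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulate depth from the observed shift of a projected dot.

    focal_px     : camera focal length in pixels
    baseline_m   : projector-to-camera baseline in meters
    disparity_px : dot displacement in pixels versus the reference pattern

    Uses the standard triangulation relation: depth = f * b / d.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the camera")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: a 600 px focal length, 5 cm baseline,
# and a 3 px dot shift put the surface at 10 m.
z = depth_from_disparity(600.0, 0.05, 3.0)  # -> 10.0
```

The baseline term is also why these systems burn power: the projector has to flood the scene with IR bright enough to resolve each dot, whereas passive photogrammetry shifts that cost to compute at reconstruction time.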

I caught a Bloomberg report which claims that the new iPad Pro release in 1H 2020 will feature a “new module with two camera sensors, up from one on the current model, and a small hole for the 3-D system, letting people create three-dimensional reconstructions of rooms, objects and people.” Top-of-the-line iPhones will get the 3D sensor later in 2020, along with 5G modems. This is presumed to be the foundational layer necessary for a combined VR and AR headset that Apple will release in 2021/2022. According to Bloomberg, “Chief Executive Officer Tim Cook has talked up AR for some time, and the technology is the core of Apple’s next big hardware push beyond the iPhone, iPad and Apple Watch. The new 3-D sensor system will be the centerpiece of this. It has been in development inside Apple for several years, and is a more advanced version of the Face ID sensor on the front of Apple’s latest mobile devices, said the people.”

Despite the social (not to mention intellectual property) implications of a crowdsourced 3D world reality model (captured, analyzed and monetized at different degrees of precision depending on the application) — I personally cannot wait for the mainstreaming of easy-to-use 3D capture, reconstruction and analysis technologies. I wrote about this in 2018 in Introducing: The Crowdsourced 3D World Reality Model (Let’s Make Sure We Are Ready for It!).

I’ve been waiting since at least 2010 for the mainstream consumer capture and reconstruction opportunities of low precision reality data. What’s a few more years to wait? 😉
As for the use cases of high precision 3D reality data for rail, road, curb and telco — heck, we are already delivering on that for our customers at Allvision.