Apple’s iPhone X – Bringing PrimeSense 3D Scanning Technology to the Masses

Way back in 2013 (it feels way back, given how fast the market continues to move on reality capture hardware and software, AR/VR applications, etc.) I blogged about Apple’s acquisition of PrimeSense, and what that meant for the potential future of low cost 3D capture devices.  At the time of the acquisition, PrimeSense technology was being incorporated into a host of low cost (and admittedly relatively low accuracy) 3D capture devices, almost all leveraging the Microsoft Research KinectFusion algorithms developed against the original Microsoft Kinect (which was itself based on PrimeSense tech).

I, and many others, have wondered when the PrimeSense technology would see the light of day.  After many rumored uses (gesture control for Apple TV, among others), the PrimeSense technology pipeline has emerged as the core of the 3D face recognition system that replaces the fingerprint reader on the iPhone X.  Apple has branded the PrimeSense module as the “TrueDepth” camera.

It would surprise me if work weren’t already underway to use the PrimeSense technology in the iPhone X as a general-purpose 3D object scanner, ultimately enabled through Apple’s ARKit.  Others, like those at Apple Insider, have come to the same conclusion. As one example, the TrueDepth camera could be used to capture higher quality objects to be placed within the scenes that ARKit can otherwise detect and map (surfaces, etc.). As another, the TrueDepth camera, combined with data from the onboard sensor package, known SLAM implementations, and cloud processing, could turn the iPhone X into a mapping and large scene capture device, while also helping the device localize itself in environments it currently struggles with (e.g. a relatively featureless space). The challenge with all active sensing technologies (the Apple TrueDepth camera, the Intel RealSense camera, or the host of commercial data acquisition devices on the market) is that they are relatively power hungry, and therefore a poor fit for a small form factor, mobile sensing device (that, oh yeah, also needs to be a phone with long battery life).
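The first step in any KinectFusion-style pipeline of the kind described above is conceptually simple: each pixel of a depth image is back-projected through the camera intrinsics into a 3D point, and those points become the raw material for meshing or SLAM. A minimal sketch in Python follows; the function name and the intrinsic values are illustrative placeholders, not any Apple or PrimeSense API.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an N x 3 point cloud
    using a pinhole camera model. fx/fy are focal lengths in pixels,
    (cx, cy) is the principal point; a real device reports its own
    calibrated intrinsics."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs across columns, v down rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop invalid pixels, which depth sensors typically report as zero.
    return points[points[:, 2] > 0]

# Toy example: a flat 4x4 depth map one metre from the camera.
depth = np.ones((4, 4))
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

Fusing many such per-frame clouds into one consistent model, as KinectFusion does, is where the pose tracking and heavy compute come in.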

Are we at the point where new mobile sensor packages (whether consumer or professional), coupled with new algorithms, fast(er) data transmission, and cloud based GPU compute, will create the platform for crowd sourced 3D capture of the world (a Mapillary for the 3D world, say)? The potential applications built against such a dataset are virtually limitless (and truly exciting!).
