PrimeSense 3D reconstruction tech in next Apple devices to power AR/VR?

Can it be? Is the multi-year wait finally over?

Apple acquired PrimeSense and their range of depth-sensing camera technologies in late 2013 for roughly $350M. PrimeSense had provided the technology behind the original Microsoft Kinect. The PrimeSense tech (whether exploited in the Kinect or as a separate standalone device like the Structure Sensor from Occipital) kicked off a wave of cheap 3D capture devices and software leveraging the ground-breaking KinectFusion paper from Microsoft Research. I blogged about the Apple acquisition and its potential impact on 3D capture in late 2013 in Apple Buys Tech Behind Microsoft Kinect (PrimeSense) – 3D Scanning Impact.

In 2017 a version of the PrimeSense tech stack (since rebranded as the “TrueDepth” camera) was incorporated into the iPhone X as the core technology behind 3D facial recognition. I had hoped that this limited use (roughly 30,000 points, in a narrow field of view) would quickly expand into a “general purpose” 3D scanner for objects and scenes, as I wrote in Apple’s iPhone X – Bringing PrimeSense Technology to the Masses. Well, I was wrong. Despite tremendous progress by Apple with ARKit (get ready for ARKit 3!), there seemed to be little public progress on the use of active imaging systems within the Apple hardware ecosystem for 3D capture (as a predicate for AR/VR, among other things). Active imaging systems have historically been power hungry on the capture side of the equation, as opposed to passive reconstruction solutions (e.g. photogrammetry, light field cameras, etc.) which require more compute to deliver the reconstruction.

I caught a Bloomberg report which claims that the new iPad Pro release in 1H 2020 will feature a “new module with two camera sensors, up from one on the current model, and a small hole for the 3-D system, letting people create three-dimensional reconstructions of rooms, objects and people.” Top-of-the-line iPhones will get the 3D sensor later in 2020, along with 5G modems. This is presumed to be the foundational layer necessary for a combined VR and AR headset that Apple will release in 2021/2022. According to Bloomberg, “Chief Executive Officer Tim Cook has talked up AR for some time, and the technology is the core of Apple’s next big hardware push beyond the iPhone, iPad and Apple Watch. The new 3-D sensor system will be the centerpiece of this. It has been in development inside Apple for several years, and is a more advanced version of the Face ID sensor on the front of Apple’s latest mobile devices, said the people.”

Despite the social (not to mention intellectual property) implications of a crowdsourced 3D world reality model (captured, analyzed and monetized at different degrees of precision depending on the application), I personally cannot wait for the mainstreaming of easy-to-use 3D capture, reconstruction and analysis technologies. I wrote about this in 2018 in Introducing: The Crowdsourced 3D World Reality Model (Let’s Make Sure We Are Ready for It!).

I’ve been waiting since at least 2010 for the mainstream consumer capture and reconstruction opportunities of low-precision reality data. What’s a few more years to wait? 😉
As for the use cases of high-precision 3D reality data for rail, road, curb and telco – heck, we are already delivering on that to our customers at Allvision.

Introducing: The crowdsourced 3D world reality model (let’s make sure we are ready for it!)

For those of you who are semi-regular readers of this blog, you know that I have been talking for several years about the exciting convergence of low cost reality capture technologies (active or passive), compute solutions (GPU or cloud), and new processing algorithms (KinFu to VSLAM). I am excited about how this convergence has transformed the way reality is captured, interacted with (AR), and even reproduced (remaining digital, turned into something physical, or a hybrid of both). I ultimately believe we are on the path towards the creation and continuous update of a 3D “world model” populated with data coming from various types of consumer, professional and industrial sensors. This excitement is only mildly tempered by the compelling legal, policy and perhaps even national security implications that have yet to be addressed or resolved.

My first exposure to reality capture hardware and reconstruction tools was in the late 90s when I was at Bentley Systems and we struck up a partnership with Cyra Technologies (prior to their acquisition by Leica Geosystems). I ultimately negotiated a distribution agreement for Cyra’s CloudWorx toolset to be distributed within MicroStation, which we announced in late 2002. I remember that Greg Bentley (the CEO of Bentley Systems) strongly believed that reality capture was going to be transformative for the AEC ecosystem. As can be seen from their continuing investments in this space, he must still believe this, and it is paying dividends for Bentley customers (active imaging systems, photogrammetric reconstructions, and everything in between)!

Fast forward to circa 2007, when Microsoft announced the first incarnation of Photosynth to the world at TED 2007 (approx. 2:30 min mark). Photosynth stitched together multiple 2D photos and related them spatially (by back computing the camera positions of the individual shots and then organizing them in 3D space). Blaise Aguera y Arcas (then at Microsoft, now leading Machine Intelligence at Google) showed a point cloud of Notre-Dame Cathedral (approx. 3:40 min mark) generated computationally from photos downloaded from Flickr. One of the “by-products” of Photosynth was the ability to create 3D point clouds of real-world objects. Of course photogrammetric reconstruction techniques (2D photo to 3D) have been known for a long time – but this was an illustration of a cloud-based service, working at scale, enabling computational 3D reconstructions using photos provided by many. This was 11 years ago. It was stupefying to me. I immediately started looking at all of the hacks to extract point clouds from the Photosynth service. In 2014, an expanded version of Photosynth with 3D capabilities was launched, but it never achieved critical mass. Even though Photosynth was ultimately shut down in early 2017, it was bleeding edge, and it was amazing.
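The core trick Photosynth demonstrated – back computing camera poses from overlapping photos and triangulating a sparse point cloud – is now reproducible with off-the-shelf tools. Purely as an illustration of the underlying idea (and emphatically not how Photosynth itself was implemented), here is a minimal two-view structure-from-motion sketch in Python using OpenCV; the intrinsics, image file names and thresholds are all assumptions.

```python
# Minimal two-view structure-from-motion sketch (illustrative only).
# Assumes OpenCV >= 4.4 (for SIFT) and an approximate pinhole intrinsic matrix K.
import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])                     # assumed camera intrinsics

img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match local features between the two photos.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]   # Lowe ratio test
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# "Back compute" the relative camera pose from the essential matrix.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate the matched points into a sparse 3D point cloud (up to scale).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T
print(cloud.shape)
```

Photosynth, of course, did this across thousands of photos with global optimization; the two-view case above just shows the “recover the cameras, then triangulate” core.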

It was likewise exciting (to a geek like me) when I was at Geomagic and the first hacks of the Microsoft Kinect (powered by the PrimeSense stack) began appearing in late 2010, and particularly when Microsoft Research published their KinectFusion paper (publishing algorithms for dense, real-time scene reconstructions using depth sensors). While there is no doubt that much of this work stood on the shoulders of giants (years of structure-from-motion and SLAM research), the thought that room-sized spaces could be reconstructed in real time using a handheld depth sensor was groundbreaking. This was happening alongside the parallel rise of cheap desktop (and mobile) supercomputer-like GPU compute solutions. I knew the reality capture ecosystem had changed forever.
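For the curious, the fusion half of KinectFusion reduces to a surprisingly small idea: keep a voxel grid of truncated signed distances (a TSDF) and fold each tracked depth frame into it with a running weighted average. The sketch below is a deliberately naive NumPy illustration of that single step – no camera tracking, no GPU, no mesh extraction – and the volume size, intrinsics and poses are placeholder assumptions, not anything from the paper’s implementation.

```python
# Toy TSDF fusion step, in the spirit of KinectFusion (didactic sketch only).
import numpy as np

VOX = 128                    # voxels per side
SIZE = 2.0                   # cube edge length in meters
TRUNC = 0.05                 # truncation distance in meters
tsdf = np.ones((VOX, VOX, VOX), dtype=np.float32)
weight = np.zeros_like(tsdf)

# World coordinates of every voxel center (volume centered at the origin).
grid = (np.stack(np.meshgrid(*[np.arange(VOX)] * 3, indexing="ij"), -1)
        + 0.5) / VOX * SIZE - SIZE / 2.0

def integrate(depth, K, cam_from_world):
    """Fuse one depth frame (H x W, meters) given intrinsics K and a 4x4 world-to-camera pose."""
    h, w = depth.shape
    pts = grid.reshape(-1, 3)
    cam = (cam_from_world[:3, :3] @ pts.T + cam_from_world[:3, 3:4]).T
    z = cam[:, 2]
    front = z > 1e-6                                  # voxels in front of the camera
    u = np.round(K[0, 0] * cam[front, 0] / z[front] + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * cam[front, 1] / z[front] + K[1, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)  # voxels that project into the image
    idx = np.flatnonzero(front)[inside]
    sdf = depth[v[inside], u[inside]] - z[idx]        # signed distance along the viewing ray
    keep = sdf > -TRUNC                               # ignore voxels far behind the surface
    idx, d = idx[keep], np.clip(sdf[keep] / TRUNC, -1.0, 1.0)
    f, wgt = tsdf.reshape(-1), weight.reshape(-1)
    f[idx] = (f[idx] * wgt[idx] + d) / (wgt[idx] + 1.0)   # running weighted average
    wgt[idx] += 1.0

# Usage: for each tracked frame, call integrate(depth, K, pose); a surface can
# then be extracted from `tsdf` with marching cubes.
```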

There has been tons of progress on the mobile handset side as well — leveraging primarily “passive” sensor fusion (accelerometer + computer vision techniques). Both Apple and Google (with ARKit and ARCore, respectively, both now released) have exposed development platforms to accelerate the creation of reality interaction solutions. I have previously written about how the release of the iPhone X exposed an active scanning solution to millions of users in a mobile handset. Time will tell how that tech is leveraged.

I have long been interested in the crowdsourced potential that various sensor platforms (mobile handsets, “traditional” DSLRs, UAVs, autonomous vehicles) will unlock. It was exciting to see the work done by Mapillary in using a crowdsourced model to capture the world in photos (leveraging Mapbox and OpenStreetMap data). Mapbox themselves recently announced their own impressive AR toolkit and platform called Mapbox AR — which provides developers with access to live location data from 300 million monthly users, combined with geotagged information from 125 million locations, 3D digital terrain models, and satellite imagery of various resolutions.

I was therefore intrigued to read about 6D.ai (not much there on the website yet), which is emerging from Oxford’s Active Vision Lab. 6D.ai is building a reality-mesh platform for mobile devices leveraging ARCore and ARKit. Their solution will provide the necessary spatial context for AR applications — it will create and store 3D reconstructions generated as a background process, which will then be uploaded and merged with other contributions to fill out a crowdsourced reconstruction of spaces. My guess a few years ago was that this type of platform for near-scale reconstructions would have been built on depth data generated from passive capture solutions (e.g. light field cameras) rather than on 2D imagery, but it absolutely makes sense that for certain workflows this is the path forward – in particular when leveraging the reconstruction frameworks exposed in each of the respective handset AR toolkits.

It will be incredibly exciting in time to see the continuing progress 6D.ai and others will make in capturing reality data as a necessary predicate for AR applications of all sorts. We are consuming all types of reality data to create rich information products at Allvision, a new company that I have co-founded along with Elmer Bol, Ryan Frenz and Aaron Morris. This team knows a little bit about reality data — more on that to come in the coming weeks and months.

The era of the crowdsourced 3D world model is truly upon us – let’s make sure we are ready for it!

Apple’s iPhone X – Bringing PrimeSense 3D Scanning Technology to the Masses

Way back in 2013 (it feels way back given how fast the market continues to move on reality capture hardware and software, AR/VR applications, etc.) I blogged about Apple’s acquisition of PrimeSense, and what it meant for the potential future of low cost 3D capture devices. At the time of the acquisition, PrimeSense technology was being incorporated into a host of low cost (and admittedly relatively low accuracy) 3D capture devices, almost all leveraging the Microsoft Research KinectFusion algorithms developed against the original Microsoft Kinect (which was itself based on PrimeSense tech).

I, and many others, have wondered when the PrimeSense technology would see the light of day. After many rumored uses (e.g. driving gesture control for Apple TV, to name one), the PrimeSense tech pipeline has emerged as the core technology behind the 3D face recognition that replaces the fingerprint reader on the iPhone X. Apple has branded the PrimeSense module as the “TrueDepth” camera.

It would surprise me if there wasn’t work already underway to use the PrimeSense technology in the iPhone X as a general-purpose 3D scanner of objects – ultimately enabled by/through the Apple ARKit. Others, like those at Apple Insider, have come to the same conclusion. As one example, the TrueDepth camera could be used to capture higher quality objects to be placed within the scenes that ARKit can otherwise detect and map to (surfaces, etc.). In another, the TrueDepth camera, combined with data from the onboard sensor package, known SLAM implementations, and cloud processing, could turn the iPhone X into a mapping and large-scene capture device, as well as help the device localize itself in environments that are currently difficult for it to work in (e.g. a relatively featureless space). The challenge with all active sensing technologies (the Apple “TrueDepth” camera, the Intel RealSense camera, or the host of commercial data acquisition devices that are available) is that they are all relatively power hungry, and therefore inefficient in a small form factor, mobile sensing device (that, oh yeah, needs to be a phone and have long battery life).

Are we at the point where new mobile sensor packages (whether consumer or professional), coupled with new algorithms, fast(er) data transmission and cloud-based GPU compute solutions, will create the platform to enable crowdsourced 3D world data capture (e.g. a Mapillary for the 3D world)? The potential applications working against such a dataset are virtually limitless (and truly exciting!).

Facebook Acquires Source3.io

Recognizing the importance of protecting and monetizing intellectual property in the world of user generated content – Facebook has acquired Source3.

First publicly reported by Recode, and now picked up by numerous other publications, including TechCrunch, Business Insider, Fortune, Variety and others — Facebook has acquired the Source3 technology and many of the Source3 team members will join Facebook and work out of Facebook’s NYC offices.

To all of our investors – a profound “Thank You”.

I am so incredibly proud to have been part of the early Source3 journey along with Patrick Sullivan, Scott Sellwood, Ben Cockerham, Tom Simon and Michael March.   I am also so incredibly thankful to each of them for their vision, energy and diligence in completing the first phase of this journey — and looking forward to seeing what they do next at Facebook and elsewhere!

Autodesk REAL 2016 Startup Competition

I had the opportunity to attend the Autodesk REAL 2016 event which is currently taking place at Fort Mason over March 8th and March 9th.   This event focuses on “reality computing” – the ecosystem of reality capture, modeling tools, computational solutions and outputs (whether fully digital workflows or a process that results in a physical object).

The event kicked off with the first Autodesk REAL Deal pitch competition. Jesse Devitte from Borealis Ventures served as the emcee for this event. A VC in his own right (as the Managing Director and Co-founder of Borealis), Jesse understands the technical computing space and has a great track record of backing companies that impact the built environment. The VC panel judging the pitches consisted of: (1) Trae Vassalo, an independent investor/consultant who was previously a general partner at Kleiner Perkins Caufield & Byers; (2) Sven Strohband, CTO of Khosla Ventures; and (3) Shahin Farshchi, a Partner with Lux Capital.

The winner of the competition will be announced, in conjunction with a VC panel discussion, at the end of the first day’s events starting at 5:00pm on the REAL Live Stage, Herbst Pavilion.

[Note: These were typed in near real time while watching the presenters and their interactions with the REAL Deal VC panelists – my apologies in advance if they don’t flow well, etc.  I’ve tried to be as accurate as possible.]

Lucid VR

Lucid VR was the first presenting company – pitching a 3D stereoscopic camera built specifically for VR. Han Jin, the CEO and co-founder, presented on behalf of the company. He started by explaining that 16M headsets will be shipped this year for VR consumption – however, VR content creation is still incredibly difficult. It is a “journey of pain” spanning time, money, huge data sets, and production and sharing difficulties. Lucid VR has created an integrated hardware device, called the LucidCam, that “captures an experience” and simplifies the production and publication of VR content, which can then be consumed by all VR headsets. Han pitched the vision of combining multiple LucidCam devices to support immersive 360 VR and real-time VR live streaming. Lucid VR hit its $100K crowdfunding campaign goal in November of 2015.

Panel Questions

Sven initially asked a two-part question: (1) which market is the company trying to attack first – consumer or enterprise; and (2) what is the technical differentiation for the hardware device (multi-camera setups have been around for a while)? Han said that the initial use cases seem to be focusing on training applications – so more of an enterprise setup. He explained that while dual camera setups have been around, they are complex, multi-part, mechanically driven solutions, whereas Lucid VR leverages GPU-based, on-device processing for real-time capture and playback – a solution based more on silicon than on mechanics. Trae then asked about market timing – how will you get to market, what will be the pricing, etc. Han said that they planned to ship at the end of the year, and that as of right now they were primarily working with consumer retailers for content creation. They expected a GTM price point of between $300 and $400 for their capture device. Trae’s follow-up: even if you capture and create the content, isn’t one of the gating factors going to be that consumers will not have the appropriate hardware/software locally to experience it?

Minds Mechanical

The next presentation was from Minds Mechanical, led by its CEO, Jacob Hockett.

Jacob explained that Minds Mechanical started as a solutions company – integrating various hardware and software to support the product development needs (primarily by providing inspection and compliance services) of some of the largest Tier 1 manufacturers in the world.   While growing and developing this services business they realized that they had identified a generalized challenge – and were working to disrupt the metrology (as opposed to meteorology, as Jacob jokingly pointed out) space.

Jacob explained that current metrology software is very expensive and is often optimized for, and paired with, specific hardware. Further compounding the problem, various third-party metrology software solutions often give different results on the same part, even when acting on the same data set. The expense of adding new seats, combined with potentially incompatible results across third-party solutions, results in limited sharing of metrology information within an organization.

They have developed a cloud-based solution called Quality to help solve these challenges – Jacob suggested that we think of it as a PLM-type solution for the manufacturing and inspection value chain, tying inspection data back into the design and build process. Jacob claims that Quality is the first truly cross-platform solution available in the industry.

Given their existing customer relationships, they were targeting the aerospace, defense and MRO markets initially, to be followed by medical and automotive later. They are actively transitioning their business from a solutions business to a software company and were seeking a $700K investment to grow the team. [Note: Jacob was previously a product manager and AE at Verisurf Software, one of the market-leading metrology software applications, prior to starting Minds Mechanical.] The lack of modern, easy-to-use tools is a barrier for the industry, and Minds Mechanical is going to try and change the entire market.

Panel Questions

Trae kicked off the questions – asking Jacob to identify who the buyer is within an organization and what the driver for purchasing is (expansion to new opportunities, cost savings, etc.). Jacob said that the buy decision was mostly a cost savings opportunity. Their pricing is low enough that it can be a credit card purchase, avoiding internal PO and purchase approval processes entirely. Trae then followed up by asking how the data was originally captured – Jacob explained that they abstract data from the various third-party metrology applications which might be used in an account and provide a publication and analytics layer on top of those content creation tools. Sven then asked about data ownership and regulatory compliance for a SaaS solution – was it a barrier to purchase? Jacob said that they understand the challenges of hosting and acting upon manufacturing data in the cloud, but that the reality was that for certain manufacturers and certain types of projects it just “wasn’t going to happen”. Trae then asked whether they were working on a locally hosted solution for those types of requirements, and Jacob said yes they were. Shahin from Lux then asked who they were selling to – was it the OEMs (trying to force them to mandate it within the value chain) or the actual supply chain participants? Jacob said that they will target the suppliers first, and not try to force the OEMs to demand use within their supply chain, focusing on a bottom-up sales approach first.

AREVO Labs

The next presentation was from Hemant Bheda, the CEO and founder of AREVO Labs. AREVO’s mission is to leverage additive manufacturing technologies to produce light and strong composite parts to replace metal parts in production applications. Hemant explained that they have ten pending patent applications, and that to execute on this vision they need: (1) high performance materials for production; (2) 3D printing software for production parts; and (3) a scalable manufacturing platform.

AREVO has created a continuous carbon fiber composite material which is five times as strong as titanium – unlocked by their proprietary software, which weaves the material together in “true” 3D space (rather than the 2.5D they claim existing FDM-based printers use). AREVO claims to transport the industry from 2.5D to true 3D by optimizing the tool path/material deposition to generate the best parts – integrating a proprietary solution to estimate post-production part strength, then optimizing the tool path for the lowest-cost, lowest-time, highest-strength solution.

Their solution is based around a robotic-arm-based manufacturing cell – and can be used for small to large parts (up to 2 meters in size). Markets range from medical (single-use applications) and aerospace/defense (lightweight structural solutions) to on-demand industrial spare parts and oil & gas applications. They have current customer engagements with Northrop, Airbus, Bombardier, J&J and Schlumberger.

[FWIW, you can see an earlier article on them at 3DPrint.com here, as well as a video of their process. MarkForged is obviously also in the market and utilizes continuous carbon fiber as part of an AM process. One of the slides in the AREVO Labs deck which was quickly clicked through was a comparison of the two processes – it would be interesting to learn more about that differentiation indeed!]

Hemant explained that they were currently seeking a Series A raise of $8M.

Panel Questions

Shahin kicked off the questions for the panel – asking whether customers were primarily interested in purchasing parts produced by the technology or whether they wanted to buy the technology so they could produce their own. Hemant said that the answer is both – some want parts produced for them, others want the tech; it depends on what their anticipated needs are over time. Sven asked Hemant how he thought the market would settle out over time between continuous fiber (as with their solution) and chopped fiber. Hemant said that they view the two technologies as complementary – but in the metals replacement market, continuous fiber is the solution for many higher value, higher materials-properties use cases; both will exist in the market.

UNYQ

The final presentation of the day during the REAL Deal pitch competition came from UNYQ – they had previously presented at the REAL 2015 event. Eythor Bender, the CEO, presented on behalf of UNYQ. UNYQ develops personalized prosthetic and orthotic devices, leveraging additive manufacturing for production. In 2016 they will be introducing the UNYQ Scoliosis Brace, having licensed the technology from 3D Systems, who are also investors. According to Crunchbase data, UNYQ has raised right around $2.5M across three funding rounds, and they expect to be profitable sometime in 2017.

UNYQ has been building a platform for 3D printing manufacturing, personalization and data integration – resulting in devices that are not only personalized using AM for production, but can also integrate various sensors so that they become IoT nodes reporting back various streams of data (performance, how long the device has been worn, etc.) which can be shared with clinicians. UNYQ uses a photogrammetry-based app to capture shape data and then leverages Autodesk technology to compute and mesh a solution. The information is captured in clinics, and the devices are primarily produced on FDM printers – going from photos to personalized products in less than four weeks. They generated roughly $500K in revenues in 2015, starting with their prosthetic covers, and have a GTM plan for their scoliosis offering which would have them generating $1M in sales within the first year after launch in May 2016.

UNYQ is currently seeking a $4M Series Seed round.

Panel Questions

Trae asked how UNYQ could accelerate this into the market – given the market need, why wasn’t adoption happening faster? Eythor said that in 2014/15 they had really been focusing on platform and partnership development – it was only at the very end of 2015 that they started building a direct sales team. Given that there are only roughly 2,000 clinics in the US, it is a known market and they have a plan of attack. The limited number of clinics, plus the opportunity to reach consumers directly via social media and other d2c marketing efforts, will only accelerate growth in 2016 and beyond. Trae followed up by asking where the resistance to adoption in the market is (is it the middleman or something else that is bogging things down). Eythor said that it is more a process resistance (it hasn’t been done this way before, and with manual labor) than resistance from the clinics themselves. Sven then asked about data comparing the treatment efficacy and patient outcomes of the UNYQ devices versus the “traditional” methods of treatment. Eythor said that while the sample set was limited, one of their strategic advisors had compared their solutions to those traditionally produced and found that the UNYQ offering was at least as good as what is in the market today – with an absolutely clear preference on the patient side. The final question came from Shahin at Lux, who asked whether there was market conflict in that the clinics (which are the primary way UNYQ gets to market) have a somewhat vested interest in continuing to do things the old way (potentially higher revenues/margins, lots of crafters involved in that value chain, reluctance to change, etc.). Eythor explained that they were focusing only on the 10-20% of the market that is progressive, landing and winning them, and then over time pulling the rest of the market forward.

Mapillary Raises $8M – Crowdsourced Street Photos

Crowdsourced Street Maps – Mapillary Raises $8M

Mapillary, a Malmö, Sweden-based company that is building a crowdsourced street-level photo mapping service, has raised $8M in its Series A round, led by Atomico, with participation from Sequoia Capital, LDV Capital and Playfair Capital. Some have commented that Mapillary wants to compete with Google Street View using crowdsourced, and then geo-located, photos (and presumably video and other assets over time). Mapillary uses Mapbox as its base mapping platform. Mapbox itself sources its underlying street mapping data from OpenStreetMap, as well as satellite imagery, terrain and places information from other commercial sources – you can see the full list here. Very interesting to see that Mapillary has a relationship with ESRI – such that ESRI ArcGIS users can access Mapillary crowdsourced photo data directly via ArcGIS Online.

I previously wrote about MapBox and OpenStreetMap in October 2013 when it closed its initial $10M Series A round led by Foundry Group.  You can see that initial blog post here.  MapBox subsequently raised a $52.6M Series B round, led by DFJ, in June of 2015.  I then examined the intersection of crowdsourced data collection and commercial use in the context of the Crunchbase dispute with Pro Populi and contrasted that with the MapBox and OpenStreetMap relationship.

I am fascinated by the opportunities that are unlocked by the continuing improvement in mobile imaging sensors.  The devices themselves are becoming robust enough for local computer vision processing (rather than sending data to the cloud) and we are perhaps a generation away (IMHO) from having an entirely different class of sensors to capture data from. That combined with significant improvements in location services makes it possible to explore some very interesting business and data services in the future.

In late 2013 I predicted that, in time, mobile 3D capture devices (and primarily passive ones) would ultimately be used to capture, and tie together a crowd sourced world model of 3D data.

What could ultimately be game changing is if we find updated and refined depth sensing technology embedded and delivered directly with the next series of smartphones and augmented reality devices. . .  In that world, everyone has a 3D depth sensor, everyone is capturing data, and the potential is limitless for applications which can harvest and act upon that data once captured.

Let the era of crowd sourced world 3D data capture begin!

It makes absolute sense that the place to start along this journey is 2D and video imagery, which can be supplemented (and ultimately supplanted) over time by richer sources of data – leveraging an infrastructure that has already been built. We still have thorny and interesting intellectual property implications to consider (Think Before You Scan That Building) – but regardless – bravo Mapillary! Bravo indeed!

IP in the Coming World of Distributed Manufacturing: Redux

Late in 2014 I wrote an article outlining why I felt that the transformational changes occurring on both “ends” of the 3D ecosystem were going to force a re-think of the ways that 3D content creators, owners and consumers would capture, interact with, and perhaps even make physical 3D data. These changes will catalyze a UGC content explosion – forever changing the ways that brands interact with consumers and the ways consumers choose to personalize, and then manufacture, the goods that are relevant to them.

There are no doubt significant technical hurdles which remain in realizing that future.   I am confident that they will be overcome in time.  In addition to the broad question of how the metes and bounds of intellectual property protection will be stretched in the face of these new technologies, I examined some key tactical issues which needed to be addressed.  These were:

[Image from the original article summarizing the key tactical issues: manufacturing file formats that do not encapsulate IP information; inconsistent licensing schemes for 3D data; and DMCA safe harbors that apply only to copyright.]

As we exit 2015, let’s take a look at each of these in turn and see what, if anything, has changed in the previous twelve months.

De-facto and proposed new manufacturing file formats do not encapsulate intellectual property information

After outlining the challenges with STL and AMF, I proposed that what was needed was:

A file format (AMF or an alternate) for manufacturing which specifically allows for metadata containers to be encapsulated in the file itself. These data containers can hold information about the content of the file such that, to a large extent, ownership and license rights could be self-describing. An example of this is the ID3 metadata tagging system for MP3 files. Of course the presence of tag information alone is not intended to prevent piracy (i.e. like a DRM implementation would be), but it certainly makes it easier for content creators and consumers alike to organize and categorize content, obtain and track license rights, etc.

In late April 2015, the 3MF Consortium was launched by seven companies in the 3D printing ecosystem (Autodesk, Dassault, FIT AG/netfabb (now part of Autodesk), HP, Microsoft, Shapeways and SLM Solutions Group), releasing the “3D Manufacturing Format (3MF) specification, which allows design applications to send full-fidelity 3D models to a mix of other applications, platforms, services and printers.” 3D Systems, Materialise and Stratasys have since joined; the most current membership list can be found here. While launched under the umbrella of an industry-wide consortium, the genesis for 3MF came from Microsoft – which concluded that none of the existing formats worked (or could be made to work in a timely fashion) sufficiently well to support a growing ecosystem of 3D content creators, materials and devices.

Adrian Lannin, the Executive Director of the 3MF Consortium (and also Group Product Manager at Microsoft) gave a great presentation (video here) on the genesis of the 3MF Consortium, and the challenges they are attempting to solve, at the TCT Show in mid-October 2015.

The specification for the 3MF format has been published here. A direct link to the published 1.01 version of the specification can be found here. In addition to attempting to solve some of the interoperability and functionality issues with the current file formats, the 3MF Specification provides a “hook” to inject IP data into the 3MF package via an extension.

The specification does provide for optional package elements including digital signatures (see figure 2-1), more fully described in Section 6.1.  An extension to the 3MF format covering materials and properties can be found here.

Table 8-1 of the 3MF Specification makes clear that in the context of a model, the following are valid metadata names:

  • Title
  • Designer
  • Description
  • Copyright
  • LicenseTerms
  • Rating
  • CreationDate
  • ModificationDate

The content block associated with any of these metadata names can be any string of data. Looks like ID3 tags for MP3 to me! A separate extension specifically addressing ownership data, license rights, etc. could be developed, providing for more granularity than the current mechanism.
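As a rough sketch of how that could look in practice (my own illustration, not sample code from the Consortium), the snippet below writes a few of the Table 8-1 metadata names into a 3MF model element using Python’s ElementTree. The namespace URI and the license URL are assumptions on my part – verify them against the published specification before relying on this.

```python
# Illustrative only: embedding rights/ownership metadata in a 3MF model part.
import xml.etree.ElementTree as ET

NS = "http://schemas.microsoft.com/3dmanufacturing/core/2015/02"  # assumed core namespace
ET.register_namespace("", NS)

model = ET.Element(f"{{{NS}}}model", {"unit": "millimeter"})
for name, value in [
    ("Title", "Bracket v3"),
    ("Designer", "Jane Maker"),
    ("Copyright", "(c) 2015 Jane Maker"),
    ("LicenseTerms", "https://example.com/licenses/bracket-v3"),  # placeholder URL
]:
    meta = ET.SubElement(model, f"{{{NS}}}metadata", {"name": name})
    meta.text = value

# The resulting XML travels inside the 3MF package (an OPC/ZIP container),
# much like ID3 tags travel inside an MP3 file.
print(ET.tostring(model, encoding="unicode"))
```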

While it will likely take time for 3MF to displace STL based workflows, the 3MF Specification seems to define the necessary container into which rights holder information can be injected and persisted throughout the manufacturing process.

Inconsistent, and perhaps even inappropriate, licensing schemes used for 3D data

After reviewing the multitude of ways that content creators and rights holders were attempting to protect and license their works, I concluded that what was needed was:

An integrated, harmonized licensing scheme addressing all of the intellectual property rights impacted in the digital manufacturing ecosystem – drafted in a way that non-lawyers can read and clearly understand them.  This is no small project, but needs to be done. Harmonization would simplify the granting and tracking of license rights (assuming stakeholders in the ecosystem helped to draft and use those terms) and could be implemented in conjunction with the file format metadata concept described earlier.

Unfortunately, not a lot of progress has yet been made in this regard.

As I first outlined in 2012, I continue to believe that there is a generalized, misplaced, and widespread reliance on the Creative Commons license framework for digital content which is to be manufactured into physical items. These licenses, while incredibly useful, only address works protected by copyright – and were originally intended to grant copyright permissions in non-software works to the public.

The Creative Commons Attribution 4.0 International Public License framework specifically excludes trademark and patent licensing (see Section 2(b)(2)) as well as the ability to collect royalties (see Section 2(b)(3)), making the framework generally inapplicable to licensing schemes where the rights holders wish to be paid upon the exercise of license rights. This shouldn’t be surprising to anyone who knows why the Creative Commons licensing scheme was originally developed – but I suspect it is nevertheless surprising to folks who may be relying on the framework as the basis for commercial transactions requiring royalties. Even those who are properly using the CC scheme within its intended purpose may have compliance challenges when licenses requiring attribution are implemented in a 3D printing workflow.

The Creative Commons, no doubt, understands the complexity, and potential ambiguities, of using the current CC licensing schemes for 3D printing workflows.

Safe-harbor provisions of the DMCA apply only to copyright infringement

It is possible, via secondary or vicarious liability, to be held legally responsible for intellectual property infringement even if you did not directly commit acts of infringement.   After examining the Digital Millennium Copyright Act (the “DMCA”) and the “safe harbor” it potentially provides to service providers for copyright infringement (assuming they comply with other elements of the law), I concluded that what was needed was an extension of the concepts in the DMCA to cover the broader bucket of intellectual property rights beyond copyright, most notably, providing protection against dubious trademark infringement claims.

On September 1st, 2015, Danny Marti, the U.S. Intellectual Property Enforcement Coordinator (USIPC) at the White House Office of Management and Budget, solicited comments from interested parties in the Federal Register on the development of the 2016-2019 Joint Strategic Plan on Intellectual Property Enforcement.   Presumably the primary goal was to solicit feedback on intellectual property infringement enforcement priorities.  Several parties used it as an opportunity to provide public comment on the necessity of extending DMCA like “safe harbor” protections to trademark infringement claims.

On October 16th, 2015, Etsy, Shapeways, Foursquare, Kickstarter, and Meetup (describing themselves as “online service providers (OSPs) that connect millions of creators, designers, and small business owners to each other, to their customers, and to the world”) provided comments in response to the USIPC request, which can be found here.   After walking through some representative examples across their businesses, and making the argument that the lack of a notice/counter-notice process for trademark infringement claims can sometimes be chilling, the commentators ultimately conclude that it is time to consider expanding safe harbors:

While the benefits of statutory safe harbors are important, they are currently limited to disputes over copyright and claims covered by section 230 of the [Communications Decency Act]. No such protection exists for similarly problematic behavior with regard to trademark. As online content grows and brings about more disputes, it is necessary to consider expanding existing safe harbors or creating new ones for trademarks.

In the Matter of Development of the Joint Strategic Plan for Intellectual Property Enforcement – Comments of Etsy, Foursquare, Kickstarter, Meetup, and Shapeways, page 6, (October 16th, 2015).  [Note: Additional background on the examples given by the commentators can be found in an article posted on 3ders.org here.]

No doubt that, in addition to benefiting UGC creators on the “wrong” side of spurious trademark infringement claims, OSPs as a class would clearly benefit from expanded safe harbors covering potential trademark infringement claims. That is certainly not a bad result either.

We are at the dawn of the UGC economy – whether we are talking purely about digital goods, or those that are ultimately made physical. While any process to change the applicable law will be long and winding – the conversation needs to be started now. OSPs that serve the UGC economy need the business model certainty and protection from illegitimate copyright and trademark infringement claims that expanded safe harbors would bring.

This article was originally published on December 15th, 2015 at 3D Printing Industry

Paracosm Seeking CV/CG Engineers and C++ Developers!

If you are a computer vision, computer graphics or C++ developer and are looking for a new opportunity with an exciting venture backed company — look no further.   Paracosm is developing an exciting software platform leveraging the newest wave of 3D hardware capture devices in order to build a perceptual map for devices (and humans!) to navigate within.

Job description provided by Paracosm follows — feel free to reach out to Amir (email below) or to me and I will pass your c.v. along:

———————-

About Paracosm:
Paracosm is solving machine perception by generating 3D maps of every interior space on earth.
We are developing a large-scale 3D reconstruction and perception platform that will enable robots and augmented reality apps to fully interact with their environment. You can see some of our fun demos here: vimeo.com/paracosm3d/demo-reel (pass: MINDBLOWN) and here: paracosm.io/nvidia
We are a venture-backed startup based in Gainesville, FL, and were original development partners on Google’s Project Tango. We are currently working closely with companies like iRobot to commercialize our technology.
Job Role:
We are looking for senior C++ developers, computer-vision engineers, and computer graphics engineers to help us implement our next-gen 3D-reconstruction algorithms. Our algorithms sit at the intersection of SLAM+Computer Vision+Computer Graphics.
As part of this sweet gig, you’ll be working alongside a team of Computer Vision PhDs to:
* design & implement & test cloud-based 3D-reconstruction algorithms
* develop real-time front-end interfaces designed to run on tablets (Google Tango, Intel RealSense) and AR headsets
* experiment with cutting edge machine-learning techniques to perform object segmentation and detection
Skills:
Proficiency with C++ is pretty critical – ideally you’ll be experienced enough to even teach us a few tricks! Familiarity with complex algorithms is a huge plus, ideally in one of the following categories:
– Surface reconstruction + meshing
– 3D dense reconstruction from depth cameras
– SLAM and global optimization techniques
– Visual odometry and sensor fusion
– Localization and place recognition
– Perception: Object segmentation and recognition
Work Environment:
Teamwork, collaboration and exploration of risky new ideas and concepts is a core part of our culture. We all work closely together to implement new approaches that push the state of the art.
We have fresh, healthy & delicious lunch catered every day by our personal chef, a kitchen full of snacks, and a backlog of crazy hard problems we need solved.
We actively encourage people on our team to publish their work and present at conferences (we also offer full stipends for attending 2 conferences each year).
Did I mention we’re big on the team work thing? The entire team has significant input into company strategy and product direction, and everyone’s opinion and voice is valued.
Work will take place at our offices in Gainesville, FL
Contact:
If you are interested, please email the CEO directly: Amir Rubin, amir@paracosm.io