Category Archives: 3D Printing

Apple’s iPhone X – Bringing PrimeSense 3D Scanning Technology to the Masses

Way back in 2013 (it feels way back given how fast the market continues to move on reality capture hardware and software, AR/VR applications, etc.) I blogged about Apple’s acquisition of PrimeSense, and what it meant for the potential future of low cost 3D capture devices.  At the time of the acquisition, PrimeSense technology was being incorporated into a host of low cost (and admittedly relatively low accuracy) 3D capture devices, almost all leveraging the Microsoft Research KinectFusion algorithms developed against the original Microsoft Kinect (which was itself based on PrimeSense tech).

I, and many others, have wondered when the PrimeSense technology would see the light of day.  After many rumored uses (e.g. driving gesture control of the Apple TV, among others), the PrimeSense technology pipeline has emerged as the core of the 3D face recognition system which replaces the fingerprint reader on the iPhone X.  Apple has branded the PrimeSense module as the “TrueDepth” camera.

It would surprise me if there wasn’t work already underway to use the PrimeSense technology in the iPhone X as a general purpose 3D object scanner – ultimately enabled by/through Apple’s ARKit.  Others, like those at Apple Insider, have come to the same conclusion. As one example, the TrueDepth camera could be used to capture higher quality models of objects to be placed within the scenes that ARKit can otherwise detect and map (surfaces, etc.). In another, the TrueDepth camera, combined with data from the onboard sensor package, known SLAM implementations, and cloud processing, could turn the iPhone X into a mapping and large scene capture device, while also enabling the device to better localize itself in environments that are currently difficult for it (e.g. a relatively featureless space). The challenge with all active sensing technologies (the Apple “TrueDepth” camera, the Intel RealSense camera, or the host of commercial data acquisition devices that are available) is that they are relatively power hungry, and therefore a poor fit for a small form factor mobile sensing device (that, oh yeah, needs to be a phone and have long battery life).

Are we at the point where new mobile sensor packages (whether consumer or professional), coupled with new algorithms, fast(er) data transmission and cloud based GPU compute solutions, will create the platform to enable crowd sourced 3D capture of the world (a Mapillary for the 3D world)?  The potential applications built against such a dataset are virtually limitless (and truly exciting!).

Facebook Acquires Source3.io

Recognizing the importance of protecting and monetizing intellectual property in the world of user-generated content – Facebook has acquired Source3.

First publicly reported by Recode, and now picked up by numerous other publications, including TechCrunch, Business Insider, Fortune, Variety and others — Facebook has acquired the Source3 technology and many of the Source3 team members will join Facebook and work out of Facebook’s NYC offices.

To all of our investors – a profound “Thank You”.

I am so incredibly proud to have been part of the early Source3 journey along with Patrick Sullivan, Scott Sellwood, Ben Cockerham, Tom Simon and Michael March.   I am also so incredibly thankful to each of them for their vision, energy and diligence in completing the first phase of this journey — and looking forward to seeing what they do next at Facebook and elsewhere!

Autodesk REAL 2016 Startup Competition

I had the opportunity to attend the Autodesk REAL 2016 event which is currently taking place at Fort Mason over March 8th and March 9th.   This event focuses on “reality computing” – the ecosystem of reality capture, modeling tools, computational solutions and outputs (whether fully digital workflows or a process that results in a physical object).

The event kicked off with the first Autodesk REAL Deal pitch competition.  Jesse Devitte from Borealis Ventures served as the emcee for this event.   A VC in his own right (as the Managing Director and Co-founder of Borealis), Jesse understands the technical computing space and has a great track record of backing companies that impact the built environment.   The VC panel judging the pitches consisted of: (1) Trae Vassalo, an independent investor/consultant who was previously a general partner of Kleiner Perkins Caufield & Byers; (2) Sven Strohband, CTO of Khosla Ventures; and (3) Shahin Farshchi, a Partner with Lux Capital.

The winner of the competition will be announced, in conjunction with a VC panel discussion, at the end of the first day’s events starting at 5:00pm on the REAL Live Stage, Herbst Pavilion.

[Note: These were typed in near real time while watching the presenters and their interactions with the REAL Deal VC panelists – my apologies in advance if they don’t flow well, etc.  I’ve tried to be as accurate as possible.]

Lucid VR

Lucid VR was the first presenting company – pitching a 3D stereoscopic camera built specifically for VR.  Han Jin, the CEO and co-founder, presented on behalf of the company.  He started by explaining that 16M headsets will be shipped this year for VR consumption – however – VR content creation is still incredibly difficult.  It is a “journey of pain” spanning time, money, huge data sets, and production and sharing difficulties.  Lucid VR has created an integrated hardware device, called the LucidCam, that “captures an experience” and simplifies the production and publication of VR content, which can then be consumed on all VR headsets.  Han pitched the vision of combining multiple LucidCam devices to support immersive 360° VR and real-time VR live streaming.  Lucid VR hit its $100K crowdfunding campaign goal in November of 2015.

Panel Questions

Sven initially asked a two-part question: (1) which market is the company trying to attack first – consumer or enterprise; and (2) what is the technical differentiation for the hardware device (multi camera setups have been around for a while)?   Han said that the initial use cases seem to be focusing on training applications – so more of an enterprise setup.  He explained that while dual camera setups have been around, they are complex, mechanically driven, multi-part solutions; Lucid VR instead leverages GPU based on-device processing for real-time capture and playback – a silicon rather than mechanical solution.  Trae then asked about market timing – how will you get to market, what will be the pricing, etc.  Han said that they planned to ship at the end of the year, and that as of right now they were primarily working with consumer retailers for content creation.  They expected a GTM price point of between $300 and $400 for their capture device.   Trae’s follow-up – even if you capture and create the content, isn’t one of the gating factors going to be that consumers will not have the appropriate hardware/software locally to experience it?

Minds Mechanical

The next presentation was from Minds Mechanical, and led by the CEO, Jacob Hockett.

Jacob explained that Minds Mechanical started as a solutions company – integrating various hardware and software to support the product development needs (primarily by providing inspection and compliance services) of some of the largest Tier 1 manufacturers in the world.   While growing and developing this services business, they identified a generalized challenge – and are now working to disrupt the metrology (as opposed to meteorology, as Jacob jokingly pointed out) space.

Jacob explained that current metrology software is very expensive and is often optimized for, and paired with, specific hardware.  Further compounding the problem, various third party metrology software solutions often give different results on the same part, even when acting on the same data set.   The expense of adding new seats, combined with potentially incompatible results across third party solutions, results in limited metrology information sharing within an organization.

They have developed a cloud-based solution called Quality to help solve these challenges – Jacob suggested we think of it as a PLM-type solution for the manufacturing and inspection value chain, tying inspection data back into the design and build process.  Jacob claims that Quality is the first truly cross platform solution available in the industry.

Given their existing customer relationships, they were targeting the aerospace, defense and MRO markets initially, to be followed by medical and automotive later.  They are actively transitioning their business from a solutions business to a software company and were seeking a $700K investment to grow the team. [Note:  Jacob was previously a product manager and AE at Verisurf Software, one of the market leading metrology software applications, prior to starting Minds Mechanical.]  The lack of modern, easy to use tools is a barrier to the industry, and Minds Mechanical is going to try and change the entire market.

Panel Questions

Trae kicked off the questions – asking Jacob to identify who the buyer is within an organization and what the driver for purchasing is (expansion to new opportunities, cost savings, etc.).  Jacob said that the buy decision was mostly a cost savings opportunity.  Their pricing is low enough that it can be a credit card purchase, avoiding internal PO and purchase approval processes entirely.  Trae then followed up by asking how the data was originally captured – Jacob explained that they abstract data from the various third party metrology applications which might be used in an account and provide a publication and analytics layer on top of those content creation tools.   Sven then asked about data ownership/regulation compliance for a SaaS solution – was it a barrier to purchase?   Jacob said that they understand the challenges of hosting/acting upon manufacturing data in the cloud; but that the reality was that for certain manufacturers and certain types of projects it just “wasn’t going to happen”.  Trae then asked whether they were working on a locally hosted solution for those types of requirements, and Jacob said yes they were.  Shahin from Lux then asked who they were selling to – was it the OEM (trying to force them to mandate it within the value chain) or the actual supply chain participants?  Jacob said that they will target the suppliers first, and not try to force the OEMs to demand use within their supply chain – a bottom-up sales approach.

AREVO Labs

The next presentation was from Hemant Bheda, the CEO and founder of AREVO Labs.  AREVO’s mission is to leverage additive manufacturing technologies to produce light and strong composite parts to replace metal parts in production applications.  Hemant explained that they have ten pending patent applications and that to execute on this vision they need: (1) high performance materials for production; (2) 3D printing software for production parts; and (3) a scalable manufacturing platform.

AREVO has created a continuous carbon fiber composite material which is five times as strong as titanium – unlocked by their proprietary software, which weaves this material together in “true” 3D space (rather than the 2.5D which they claim existing FDM based printers use).   AREVO claims to move the industry from 2.5D to true 3D by optimizing the tool path/material deposition to generate the best parts – integrating a proprietary solution to estimate post-production part strength, then optimizing the tool path for the lowest cost, lowest time, highest strength solution.

Their solution is based around a robotic arm based manufacturing cell – and can be used for small to large parts (up to 2 meters in size).  Target markets range from medical (single use applications) and aerospace/defense (lightweight structural solutions) to on-demand industrial spare parts and oil & gas applications.  They have current customer engagements with Northrop, Airbus, Bombardier, J&J and Schlumberger.

[FWIW, you can see an earlier article on them at 3DPrint.com here, as well as a video of their process.  MarkForged is obviously also in the market and utilizes continuous carbon fiber as part of an AM process.  One of the slides in the AREVO Labs deck which was quickly clicked through was a comparison of the two processes – it would be interesting to learn more about that differentiation indeed!]

Hemant explained that they were currently seeking a Series A raise of $8M.

Panel Questions

Shahin kicked off the questions for the panel – asking whether customers were primarily interested in purchasing parts produced from the technology or whether they wanted to buy the technology so they could produce their own.  Hemant said that the answer is both – some want parts produced for them, others want the tech; it depends on what their anticipated needs are over time.  Sven asked Hemant how he thought the market would settle out over time between continuous fiber (as with their solution) and chopped fiber.   Hemant said that they view the two technologies as complementary – in the metals replacement market, continuous fiber is the solution for many higher value, higher materials properties use cases, but both will exist in the market.

UNYQ

The final presentation of the day during the REAL Deal pitch competition came from UNYQ – they had previously presented at the REAL 2015 event.   Eythor Bender, the CEO, presented on behalf of UNYQ.  UNYQ develops personalized prosthetic and orthotic devices, leveraging additive manufacturing for production.  In 2016 they will be introducing the UNYQ Scoliosis Brace, having licensed the technology from 3D Systems, who are also investors.  According to Crunchbase data UNYQ has raised right around $2.5M across three funding rounds, and they expect to be profitable sometime in 2017.

UNYQ has been working on a platform for 3D printing manufacturing, personalization and data integration – resulting in devices that are not only personalized using AM for production, but that can also integrate various sensors so that they become IoT nodes reporting back various streams of data (performance, how long the device has been worn, etc.) which can be shared with clinicians.   UNYQ uses a photogrammetry based app to capture shape data and then leverages Autodesk technology to compute and mesh a solution.  The information is captured in clinics and the devices are primarily produced on FDM printers – going from photos to personalized products in less than four weeks.  They generated roughly $500K in revenues in 2015, starting with their prosthetic covers, and have a GTM plan for their scoliosis offering which would have them generate $1M in sales within the first year after launch in May 2016.

UNYQ is currently seeking a $4M Series Seed round.

Panel Questions

Trae asked how UNYQ could accelerate this into the market – given the market need, why wasn’t adoption happening faster?   Eythor said that in 2014/15 they had really been focusing on platform and partnership development – it was only at the very end of 2015 that they started building a direct sales team. Given that there are only roughly 2,000 clinics in the US it was a known market and they had a plan of attack. The limited number of clinics, plus the opportunity to reach consumers directly via social media and other d2c marketing efforts, will only accelerate growth in 2016 and beyond.  Trae followed up by asking – where is the resistance to adoption in the market (is it the middleman or something else that is bogging things down)?  Eythor said that it is more a process resistance (it hasn’t been done this way before, and with manual labor) than it is with the clinics themselves.  Sven then asked about data comparing the treatment efficacy and patient outcomes using the UNYQ devices versus the “traditional” methods of treatment.  Eythor said that while the sample set was limited, one of their strategic advisors had compared their solutions to those traditionally produced and found that the UNYQ offering was at least as good as what is in the market today – but with an absolutely clear preference on the patient side.  The final question came from Shahin at Lux, who asked whether there was market conflict in that the clinics (which are the primary way UNYQ gets to market) have a somewhat vested interest in continuing to do things the old way (potentially higher revenues/margins, lots of crafters involved in that value chain, reluctance to change, etc.).  Eythor explained that they were focusing only on the 10-20% of the market that are progressive, landing/winning them, and then over time pulling the rest of the market forward.

IP in the Coming World of Distributed Manufacturing: Redux

Late in 2014 I wrote an article outlining why I felt that the transformational changes occurring on both “ends” of the 3D ecosystem were going to force a re-think of the ways that 3D content creators, owners and consumers would capture, interact with, and perhaps even make physical, 3D data.  These changes will catalyze a UGC content explosion – forever changing the ways that brands interact with consumers and the ways consumers choose to personalize, and then manufacture, the goods that are relevant to them.

There are no doubt significant technical hurdles which remain in realizing that future.   I am confident that they will be overcome in time.  In addition to the broad question of how the metes and bounds of intellectual property protection will be stretched in the face of these new technologies, I examined some key tactical issues which needed to be addressed.  These were:

[Table: file formats which do not encapsulate IP information; inconsistent/inappropriate licensing schemes for 3D data; DMCA safe harbors limited to copyright]

As we exit 2015, let’s take a look at each of these in turn and see what, if anything, has changed in the previous twelve months.

De-facto and proposed new manufacturing file formats do not encapsulate intellectual property information

After outlining the challenges with STL and AMF, I proposed that what was needed was:

A file format (AMF or an alternate) for manufacturing which specifically allows for metadata containers to be encapsulated in the file itself.  These data containers can hold information about the content of the file such that, to a large extent, ownership and license rights could be self-describing.   An example of this is the ID3 metadata tagging system for MP3 files.   Of course the presence of tag information alone is not intended to prevent piracy (i.e. like a DRM implementation would be), but it certainly makes it easier for content creators and consumers alike to organize and categorize content, obtain and track license rights, etc.

In late April 2015, the 3MF Consortium was launched by seven companies in the 3D printing ecosystem (Autodesk, Dassault, FIT AG/netfabb (now part of Autodesk), HP, Microsoft, Shapeways and SLM Solutions Group), releasing the “3D Manufacturing Format (3MF) specification, which allows design applications to send full-fidelity 3D models to a mix of other applications, platforms, services and printers.”  3D Systems, Materialise and Stratasys have since joined; the most current membership list can be found here.   While launched under the umbrella of an industry wide consortium, the genesis for 3MF came from Microsoft – which concluded that none of the existing formats worked (or could be made to work in a timely fashion) sufficiently well to support a growing ecosystem of 3D content creators, materials and devices.

Adrian Lannin, the Executive Director of the 3MF Consortium (and also Group Product Manager at Microsoft) gave a great presentation (video here) on the genesis of the 3MF Consortium, and the challenges they are attempting to solve, at the TCT Show in mid-October 2015.

The specification for the 3MF format has been published here; a direct link to the published 1.01 version of the specification can be found here.  In addition to attempting to solve some of the interoperability and functionality issues with current file formats, the 3MF Specification provides a “hook” to inject IP data into the 3MF package via an extension.

The specification does provide for optional package elements including digital signatures (see figure 2-1), more fully described in Section 6.1.  An extension to the 3MF format covering materials and properties can be found here.

Table 8-1 of the 3MF Specification makes clear that in the context of a model, the following are valid metadata names:

  • Title
  • Designer
  • Description
  • Copyright
  • LicenseTerms
  • Rating
  • CreationDate
  • ModificationDate

The content block associated with any of these metadata names can be any string of data.  Looks like ID3 tags for MP3 to me!  A separate extension specifically addressing ownership data, license rights, etc. could be developed, providing for more granularity than the current mechanism.

While it will likely take time for 3MF to displace STL based workflows, the 3MF Specification seems to define the necessary container into which rights holder information can be injected and persisted throughout the manufacturing process.
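
To make the “container” idea concrete, here is a minimal sketch of injecting rights holder metadata into a 3MF package.  A 3MF file is a ZIP (OPC) package whose model XML conventionally lives at 3D/3dmodel.model, and the metadata element names follow Table 8-1 above – but treat the file names and the tag_3mf helper as illustrative assumptions rather than production code:

```python
# Minimal sketch: add <metadata> elements (Copyright, LicenseTerms, ...)
# to the model part of a 3MF package using only the standard library.
import zipfile
import xml.etree.ElementTree as ET

CORE_NS = "http://schemas.microsoft.com/3dmanufacturing/core/2015/02"
ET.register_namespace("", CORE_NS)
MODEL_PART = "3D/3dmodel.model"  # conventional location of the model XML

def tag_3mf(src_path, dst_path, metadata):
    """Copy a .3mf package, injecting name/value metadata into the model."""
    with zipfile.ZipFile(src_path) as zin:
        root = ET.fromstring(zin.read(MODEL_PART))
        # metadata elements precede <resources>/<build> in the model XML
        for name, value in metadata.items():
            elem = ET.Element("{%s}metadata" % CORE_NS, {"name": name})
            elem.text = value
            root.insert(0, elem)
        with zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as zout:
            for item in zin.infolist():
                if item.filename == MODEL_PART:
                    zout.writestr(item, ET.tostring(root, encoding="UTF-8"))
                else:
                    zout.writestr(item, zin.read(item.filename))

tag_3mf("part.3mf", "part_tagged.3mf",
        {"Copyright": "(c) 2015 Example Corp.",
         "LicenseTerms": "http://example.com/license"})
```

Any 3MF-aware consumer that preserves the model part will carry those name/value pairs downstream – exactly the self-describing behavior ID3 tags give MP3 files.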

Inconsistent, and perhaps even inappropriate, licensing schemes used for 3D data

After reviewing the multitude of ways that content creators and rights holders were attempting to protect and license their works, I concluded that what was needed was:

An integrated, harmonized licensing scheme addressing all of the intellectual property rights impacted in the digital manufacturing ecosystem – drafted in a way that non-lawyers can read and clearly understand them.  This is no small project, but needs to be done. Harmonization would simplify the granting and tracking of license rights (assuming stakeholders in the ecosystem helped to draft and use those terms) and could be implemented in conjunction with the file format metadata concept described earlier.

Unfortunately, not a lot of progress has yet been made in this regard.

As I first outlined in 2012, I continue to believe that there is a generalized, widespread, and misplaced reliance on the Creative Commons license framework for digital content which is to be manufactured into physical items.    These licenses, while incredibly useful, only address works protected by copyright – and were originally intended to grant copyright permissions in non-software works to the public.

The Creative Commons Attribution 4.0 International Public License framework specifically excludes trademark and patent licensing (see Section 2(b)(2)) as well as the ability to collect royalties (see Section 2(b)(3)), making the framework generally inapplicable for use in any licensing scheme where the rightsholders wish to be paid upon the exercise of license rights.   This shouldn’t be surprising to anyone who knows why the Creative Commons licensing scheme was originally developed – but I suspect it is nevertheless surprising to folks who may be relying on the framework as the basis for commercial transactions requiring royalties.  Even those who are properly using the CC scheme within its intended purpose may have compliance challenges when licenses requiring attribution are implemented in a 3D printing workflow.

The Creative Commons, no doubt, understands the complexity, and potential ambiguities, of using the current CC licensing schemes for 3D printing workflows.

Safe-harbor provisions of the DMCA apply only to copyright infringement

It is possible, via secondary or vicarious liability, to be held legally responsible for intellectual property infringement even if you did not directly commit acts of infringement.   After examining the Digital Millennium Copyright Act (the “DMCA”) and the “safe harbor” it potentially provides to service providers for copyright infringement (assuming they comply with other elements of the law), I concluded that what was needed was an extension of the concepts in the DMCA to cover the broader bucket of intellectual property rights beyond copyright, most notably, providing protection against dubious trademark infringement claims.

On September 1st, 2015, Danny Marti, the U.S. Intellectual Property Enforcement Coordinator (USIPC) at the White House Office of Management and Budget, solicited comments from interested parties in the Federal Register on the development of the 2016-2019 Joint Strategic Plan on Intellectual Property Enforcement.   Presumably the primary goal was to solicit feedback on intellectual property infringement enforcement priorities.  Several parties used it as an opportunity to provide public comment on the necessity of extending DMCA like “safe harbor” protections to trademark infringement claims.

On October 16th, 2015, Etsy, Shapeways, Foursquare, Kickstarter, and Meetup (describing themselves as “online service providers (OSPs) that connect millions of creators, designers, and small business owners to each other, to their customers, and to the world”) provided comments in response to the USIPC request, which can be found here.   After walking through some representative examples across their businesses, and making the argument that the lack of a notice/counter-notice process for trademark infringement claims can sometimes be chilling, the commentators ultimately conclude that it is time to consider expanding safe harbors:

While the benefits of statutory safe harbors are important, they are currently limited to disputes over copyright and claims covered by section 230 of the [Communications Decency Act]. No such protection exists for similarly problematic behavior with regard to trademark. As online content grows and brings about more disputes, it is necessary to consider expanding existing safe harbors or creating new ones for trademarks.

In the Matter of Development of the Joint Strategic Plan for Intellectual Property Enforcement – Comments of Etsy, Foursquare, Kickstarter, Meetup, and Shapeways, page 6, (October 16th, 2015).  [Note: Additional background on the examples given by the commentators can be found in an article posted on 3ders.org here.]

In addition to benefiting UGC creators on the “wrong” side of spurious trademark infringement claims, OSPs as a class would clearly benefit from expanded safe harbors covering potential trademark infringement claims.  That is certainly not a bad result either.

We are at the dawn of the UGC economy – whether we are talking purely about digital goods, or those that are ultimately made physical. While any process to change the applicable law will be long and winding – the conversation needs to be started now. OSPs that serve the UGC economy need the business model certainty and protection from illegitimate copyright and trademark infringement claims that expanded safe harbors would bring.

This article was originally published on December 15th, 2015 at 3D Printing Industry

The New Era of 3D Printing – Introducing Carbon3D

When Joe DeSimone takes the stage tonight in the opening gambit of TED2015 in Vancouver, he will publicly introduce the world to Carbon3D, a stealth (Sequoia backed) venture whose technology might ultimately be as impactful as Chuck Hull’s original invention of the stereolithography process for additive manufacturing.

The Carbon3D founding team of Joe DeSimone, Alex Ermoshkin, Ed Samulski and Phil DeSimone originally started the company as EIPI Systems in Chapel Hill, NC in mid-2013.  Along the way they took investment from Sequoia and others and have been joined by an incredible group of leaders from within and outside the Bay Area.


Carbon3D and their technology stack will ultimately transform the industry in several ways – driving AM as a method of manufacture into areas typically reserved for injection molding:

  • Speed – their process currently allows them to print at 50x – 150x the speed of other methods, so fast that “little” problems like heat need to be managed. It is a sight to behold.
  • Materials – given that the founders have incredible chemistry backgrounds, it shouldn’t be surprising that they are focusing as much on materials, and the science behind them, as their device. The result?  Incredible engineered materials with material strengths simply not possible with existing techniques.
  • Surface Finish – imagine if you could produce surface finishes approaching those of injection molding, without post processing?

While the Carbon3D team continues to develop their technology and expand beyond the pilot phase, the future sure does look promising.

Intellectual Property in the Coming World of Distributed Digital Manufacturing

We are certainly in the midst of a transformation in the way that 3D content creators, owners and consumers will interact with, exchange, and perhaps even make physical, 3D data.  Along the way, traditional notions of what represents content worthy of protection will be stretched (and perhaps broken) as the market works to find a solution acceptable to all participants in the ecosystem – allowing 3D content creators to properly monetize their creativity and hard work, while allowing 3D content consumers to leverage a rich universe of quality content, and perhaps even pay for it along the way.   It won’t be easy, but there is a path forward.

In early 2012 I began a series of blogs on the intersection of intellectual property with the dramatic changes influencing the 3D capture/modify/make ecosystem (of course 3D printing is but one, of many, possible outcomes of a 3D capture and design process).  My first blog in this series was The Storm Clouds on the Horizon where I wrote that I felt the next “Napster” era was upon us for digitally captured real world content.


There is a growing awareness and understanding of intellectual property considerations in the 3D ecosystem – whether we are talking about how it might impact consumers who wish to use their in-home 3D printers to produce an item or a company within a distributed digital manufacturing chain for a large consumer goods company.  These concerns have been accelerated by the transformative technical changes on both “ends” of that ecosystem.

The technological shift

Over the last few years there has been continuing acceleration in the hardware, software and services necessary to empower digital design and manufacturing processes.  Earlier in 2014 I identified the following key trends in the capture/modify/make ecosystem for object based 3D capture and manufacture:

[Figure: key 2014 market trends in the capture/modify/make ecosystem]

We are at a unique point in time – when both “ends” of the capture to make ecosystem are being impacted by dramatic technological changes.  The change is continuing, the pace is accelerating.

The last several years have seen many new market entrants on the consumer/prosumer 3D printing side.  What is, and will be in my opinion, equally or more transformative is the impact that new low cost/smaller form factor 3D capture devices will have in this space.  3D data capture is becoming more mainstream on the consumer side as we close out 2014 – Intel is adding its RealSense™ depth sensing technology to every laptop it ships (with the first expression in the Creative Senz3D), Google is progressing with Project Tango along with its software partners, and other 3D data capture solutions are being developed and distributed to consumers.  I looked at some of these market players in an earlier blog and also examined how new passive 3D capture technologies, leveraging plenoptic (a/k/a “light field”) cameras, may find their way into your next phone or tablet.

[Figure: 3D sensor progression]

A recent research paper co-authored by Microsoft Research, published at SIGGRAPH 2014 earlier in August and titled Learning to be a Depth Camera, demonstrates that 3D capture and interaction can be implemented by applying machine learning techniques and minor hardware modifications to existing single 2D camera systems.

With the convergence of technologies, it is likely we will see the growth of multifunction 3D capture and printing devices that attempt to offer “one button” reproduction (and transmission/sharing) of certain sized objects in certain materials.  Examples even exist today – like the ZEUS, marketed as the first “ALL-IN-ONE 3D Printer / Copy Machine”, as well as the Blacksmith Genesis, which started a crowd-funding campaign on Indiegogo in August.   3D Systems, Intel and Best Buy have recently collaborated on an integrated campaign called the “Intel Experience” where, in selected Best Buy stores, consumers will be exposed to 3D capture solutions leveraging Intel’s RealSense cameras alongside 3D Systems 3D printing solutions.

While I believe the ecosystem is lagging in producing software tools that make it easy for non-professional users to create, find and personalize 3D content, we are only a short time away from dramatic changes there too.

When people can more easily digitize, share, copy and reproduce real world 3D content – how will that change the landscape for content owners and consumers alike?  What existing business models will be threatened, and which new ones created, by such a transformation?

What exactly “is” Intellectual Property in the Context of Digital Manufacturing?

Many things!  It may be represented in trade secrets – the confidential, differentiated manufacturing processes used to produce something. It could be represented by copyright – in, for example, the rights a sculptor would have in their latest creation.  It might be represented by patent – in a novel, non-obvious, useful device.  In the EU, a design could be protected by registered or unregistered design rights.

What if your son broke the leg of his favorite action figure (which you purchased from a big box toy store) and you decided to repair it using something you produced on your 3D printer (or you could also print it at the Staples down the street, or have it shipped to you from Shapeways)?

What if you were able to find and download a manufacturable model (in STL format) of that action figure that someone had uploaded to one of the many model sharing sites and used that as the basis of the print job?  What if the person who uploaded the file had created the model by hand (e.g. they may have looked at the same action figure you wanted to repair but they designed it on a blank digital canvas)?  What if the person who uploaded the file created the representation (in the file) by 3D scanning an undamaged action figure?   What if you scanned, printed, and repaired the item in your own home but did not share the files with anyone else?

[Photo: lamp retaining rings]

What if it was not an action figure, but instead a retaining ring for one of the low voltage lights which keep getting run over in your front yard?

Do these differences matter?  Absolutely.

The type of content (artistic or functional), the reason for manufacture (new item, replacement part, etc.), how the content to be manufactured was generated (created from scratch, printable file obtained from a third party, the end result of a 3D reality capture process, from the manufacturer, etc.) and where the content will be manufactured (in your home, at a local store for pickup, on a third parties networked printer, at a remote service bureau and shipped, etc.) all matter.  In some instances the content might not be protected at all, in others it might touch multiple types of third party intellectual property.

There is not enough space here to give you a general primer on all of the intellectual property issues in the create/capture/modify/make ecosystem.  I would instead point you to several excellent publications and presentations as background (which principally look at the application of US law), including:

  • Michael Weinberg’s whitepapers for Public Knowledge (including It Will Be Awesome if They Don’t Screw It Up)
  • John Hornick’s articles and presentations on 3D printing and intellectual property
  • Desai and Magliocca, Patents, Meet Napster: 3D Printing and the Digitization of Things

The above is a small (but particularly useful) sample of work examining some of these issues in depth; another broader summary can be found here.  You will find that authors in this space cover a broad spectrum of opinions – from those who believe that intellectual property issues need to be understood in digital manufacturing but are generally inapplicable because many objects that would be manufactured are generally not protectable (e.g. Weinberg), to those who believe that the democratization of capture and printing technologies will utterly transform manufacturing supply chains and potentially substantially devalue the intellectual property rights all content owners will have in the future (e.g. Hornick), as well as everything in between.

I fall in the middle ground – believing that these fundamental technical and market changes will stretch the concept of intellectual property, but, as we have seen in the past with the music industry, that over time the ecosystem – including the law – will adapt.

Intellectual Property Concerns an Impediment to Continuing Growth?

Intellectual property concerns have moved beyond the theoretical – manufacturers now consider them among the most potentially disruptive impacts of the broadening reach of additive manufacturing.   In June 2014, PricewaterhouseCoopers (“PwC”) and the Manufacturing Institute published their report on 3D Printing and the New Shape of Industrial Manufacturing (the “PwC Report”).   The report is broad reaching, and well worth an extended read by itself.   One section examines the potential for additive manufacturing to shrink supply chains:

Companies are re-imagining supply chains: a world of networked printers where logistics may be more about delivering digital design files—from one continent to printer farms in another—than about containers, ships and cargo planes. In fact, 70% of manufacturers we surveyed in the PwC Innovations Survey believe that, in the next three–five years, 3DP will be used to produce obsolete parts; 57% believe it will be used for after-market parts.

Source: PwC Report, Page #1

When PwC Report survey participants were asked to identify what they felt the most disruptive impact wide adoption of additive manufacturing technologies could have on US manufacturing – the “threat to intellectual property” was second only to supply chain restructuring.

This concern should not really be all that surprising.

In October 2013 the market research firm Gartner, in conjunction with their Gartner Symposium/ITxpo, made a series of predictions impacting IT organizations and users for 2014 and beyond.   Several related to the impact that cheaper 3D capture and printing devices were predicted to have on the future creation of physical goods – predicting staggering losses from the piracy of intellectual property:

By 2018, 3D printing will result in the loss of at least $100 billion per year in intellectual property globally. Near Term Flag: At least one major western manufacturer will claim to have had intellectual property (IP) stolen for a mainstream product by thieves using 3D printers who will likely reside in those same western markets rather than in Asia by 2015.

The plummeting costs of 3D printers, scanners and 3D modeling technology, combined with improving capabilities, makes the technology for IP theft more accessible to would-be criminals. Importantly, 3D printers do not have to produce a finished good in order to enable IP theft. The ability to make a wax mold from a scanned object, for instance, can enable the thief to produce large quantities of items that exactly replicate the original.

Source: 2013 Gartner ITxpo Press Release

Now, I do not share Gartner’s dire predictions – partly because many of these hardware and software technologies have already existed for many years, but primarily because the process of creating high quality digital reproductions (either from “scratch” or from a 3D reality capture process) is still very difficult, even for experienced users.  But over time, and almost certainly in the market for certain consumer goods, if someone could manufacture something in their home at comparable cost and quality to what they could buy at a store, why wouldn’t they?

Intellectual Property Issues in Digital Manufacturing

Obviously there must be a willingness of content owners to share and distribute their intellectual property for distributed manufacturing – whether as part of a collapsing supply chain for industrial manufacturers, or to authorize someone to produce a licensed good in their own home.

We are seeing companies test the water – from the Nokia experiment in early 2013 (prior to the Microsoft acquisition) to provide STL and STEP models of certain phone cases for 3D printing, to Honda releasing their 3D “design archives” in early 2014.


Nokia Lumia 520 Shell, author: Nokia (CC BY-NC-SA 3.0)

A few months ago Hasbro licensed a handful of artists to create derivative works based on their My Little Pony line of toys, and those artist-designed customizations could then be purchased from Shapeways.  To be clear, Hasbro did not authorize just anyone to create customizations of their licensed works, but rather started with a single property, customized by a handful of artists.  Buoyed by the success of this launch, Hasbro and Shapeways are now soliciting designers to create customized 3D printable designs based on Dragonvale, Dungeons & Dragons, Monopoly, My Little Pony, Scrabble (to be sold in the US and Canada only) and Transformers – with upload instructions posted to Superfanart.com in late August 2014 (which now points as a subdomain to Shapeways.com).

What will accelerate the types of projects piloted by Nokia, Hasbro and Shapeways?

There are obviously business and technical hurdles in distributed digital manufacturing, but there are also some fundamental intellectual property issues which need to be resolved as well:

  • Issue: De-facto and proposed new manufacturing file formats do not encapsulate intellectual property information.  Potential resolution: refine the specification to make each file self-describing and/or develop a metadata wrapper like ID3 for MP3.
  • Issue: Inconsistent, and perhaps even inappropriate, licensing schemes used for 3D data.  Potential resolution: development of a harmonized, community-type licensing scheme for 3D content.
  • Issue: Safe-harbor provisions of the DMCA apply only to copyright infringement.  Potential resolution: statutory extension of these protections to all forms of intellectual property.

I’ll examine each of these issues, and potential resolutions, in more detail below.  There are clear parallels (in my mind at least) to the music industry – what lessons can be learned from the digitization and distribution of digital content there?  Which business methods are ultimately prevailing?

A manufacturing file format which encapsulates intellectual property information

The de-facto standard for digital manufacturing has been, and remains, the STL format (from “STereoLithography”, a/k/a “Standard Tessellation Language”).  STL has the benefit of being well known and computationally easy to read and process.  Most manufacturing systems require triangulated models to be sliced for processing (e.g. CAM, 3D printing, etc.).  The challenges with STL, however, are many – it does not scale well to higher resolutions, there is no native support for color or materials properties, it is unit-less, and it does not compress well (among others).
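
To see just how bare the format is, here is a minimal sketch of a binary STL reader using only the Python standard library (the file name is illustrative).  Note that there is simply nowhere in the layout to put units, color or ownership information:

```python
# Minimal sketch: parse a binary STL - an 80-byte free-form header, a
# triangle count, then 50 bytes per triangle. Nothing else is defined.
import struct

def read_binary_stl(path):
    with open(path, "rb") as f:
        header = f.read(80)                     # free-form bytes, no fields
        (count,) = struct.unpack("<I", f.read(4))
        triangles = []
        for _ in range(count):
            # 12 little-endian floats (normal + 3 vertices) plus a 2-byte
            # "attribute byte count", which is almost always zero.
            values = struct.unpack("<12fH", f.read(50))
            triangles.append((values[0:3], values[3:6],
                              values[6:9], values[9:12]))
    return header, triangles

header, tris = read_binary_stl("part.stl")
print(header.rstrip(b"\x00"), "-", len(tris), "triangles")
```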

A new standard has been proposed to replace the STL format: the AMF (for “Additive Manufacturing File Format”, a/k/a “STL2”).    Al Dean reviewed the AMF and compared it to STL in his January 2013 DEVELOP3D article Alpha-Mike-Foxtrot to STL.  More useful background can be found at the AMF Wikispace.

Without getting into a debate as to whether the current AMF specification is “good enough” to grow into the next de-facto standard, it is important to recognize that the handling of intellectual property rights is specifically excluded.  Section 1.4 of the ASTM AMF specification reads:

This standard also does not purport to address any copyright and intellectual property concerns, if any, associated with its use. It is the responsibility of the user of this standard to meet any intellectual property regulations on the use of information encoded in this file format.

Further, the AMF specification lacks support for metadata containers which would allow the file content to be self-describing at some level.

Shapeways has decided to enter the fray and announced their own voxel based file format for 3D printing called SVX at the end of September.  As with STL and AMF, the SVX specification does not address intellectual property.

What is needed?   A file format (AMF or an alternate) for manufacturing which specifically allows for metadata containers to be encapsulated in the file itself.  These data containers can hold information about the content of the file such that, to a large extent, ownership and license rights could be self-describing.   An example of this is the ID3 metadata tagging system for MP3 files.   Of course the presence of tag information alone is not intended to prevent piracy (i.e. like a DRM implementation would be), but it certainly makes it easier for content creators and consumers alike to organize and categorize content, obtain and track license rights, etc.

MP3 File Structure, user: Kim Meyrick (CC-BY/GFDL)
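
For those who have not looked at ID3 under the hood, here is a toy sketch reading the simple ID3v1 variant (a fixed 128-byte block at the end of the file; the modern frame-based ID3v2 is richer but more involved).  The file name is illustrative.  The point is that the descriptive metadata rides along inside the media file without touching the audio payload – exactly the property a manufacturing format needs:

```python
# Toy sketch: read an ID3v1 tag - the last 128 bytes of many MP3 files -
# to show self-describing metadata embedded in a media container.
import struct

def read_id3v1(path):
    with open(path, "rb") as f:
        f.seek(-128, 2)                 # the tag occupies the final 128 bytes
        block = f.read(128)
    if block[:3] != b"TAG":
        return None                     # no ID3v1 tag present
    _, title, artist, album, year, comment, genre = struct.unpack(
        "3s30s30s30s4s30sB", block)
    clean = lambda b: b.split(b"\x00")[0].decode("latin-1").strip()
    return {"title": clean(title), "artist": clean(artist),
            "album": clean(album), "year": clean(year),
            "comment": clean(comment), "genre": genre}

print(read_id3v1("song.mp3"))
```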

Inconsistent/inappropriate licensing schemes for 3D data

Most 3D printing service bureaus and model hosting sites have licensing terms which are only concerned with copyright, rather than dealing more broadly with the entire “bucket” of potential intellectual property ownership and licensing concerns.  Several rely on the Creative Commons licensing scheme (or some variation thereof) as the foundation for the licensing relationship between their content creators/contributors, content consumers/users and their own services.   Worrying only about copyright, or exclusively using the CC licensing scheme for manufacturable 3D content (via 3D printing or otherwise) is misguided.

Creative Commons (the organization behind the CC licensing scheme) is acutely aware of the problem of using the wrong license type for functional content – see the post titled CC and 3D Printing Community.  The challenge with the current CC licensing schemes is that they were never intended to cover “functional” content (that which might be covered by intellectual property rights other than copyright).    As the blog post notes –

With the exception of CC0, the Creative Commons licenses are only for granting permissions to use non-software works. The worlds of software and engineering have additional concerns outside of the scope of what is addressed by the CC licenses. 3D printing is a new medium which encompasses both the creative domains of culture and engineering, and often 3D printed works do not fall neatly into either category.

Creative Commons explored the creative/functional split in a Wiki for the 4.0 release of licenses, but did not develop a framework for a license covering both types of content.

I examined these issues previously in more detail in a two part blog The Call for a Harmonized “Community License” for 3D Content.  While dated, those materials can be useful background.

Why does this matter?  There is presently no licensing consistency among the various players in the digital manufacturing ecosystem – potentially meaning that there are tens, or even hundreds of “flavors” of a license grant, for the same content.

What is Needed?  An integrated, harmonized licensing scheme addressing all of the intellectual property rights impacted in the digital manufacturing ecosystem – drafted in a way that non-lawyers can read and clearly understand them.  This is no small project, but needs to be done. Harmonization would simplify the granting and tracking of license rights (assuming stakeholders in the ecosystem helped to draft and use those terms) and could be implemented in conjunction with the file format metadata concept described earlier.

At least one organization is working on a new model for licensing, utilizing a community approach to drafting and feedback – driven by Joris Peels, the YouMagine Community Manager (and long time participant in the 3D printing ecosystem).   You can find the current progress here.

Do the “Safe Harbor” Provisions Apply?

It is possible, via secondary or vicarious liability, to be held legally responsible for intellectual property infringement even if you did not directly commit acts of infringement.

In 1998 the Digital Millennium Copyright Act (the “DMCA”) became law in the United States.  The DMCA, among other notable things (such as criminalizing the circumvention of technical protection measures like DRM), creates limitations on the liability of online service providers for copyright infringement by third parties when engaging in certain types of activities – primarily relating to the transmission, storage and searching/indexing of data.  These have become known as the “safe harbor” provisions of the DMCA.


Wick Harbour, user: Dorcas Sinclair (CC-BY-SA-2.0)

To receive these protections, service providers must comply with the conditions in the Act, including providing clear “notice and takedown” procedures which permit the owners of licensed content to stop access to content which they allege to be infringing.

The DMCA provides a “safe harbor” to service providers for copyright infringement if, for example, it turns out that they hosted or stored content uploaded by a third party which was found to be infringing.  There are a few key limitations: (1) the content may not be modified by the service provider (if it is, the DMCA safe harbor protections do not apply); and (2) the DMCA only limits liability for copyright infringement – it does not help protect a service provider from other potential forms of infringement claims.

The first DMCA “take down” notice for 3D printed content was sent to Thingiverse (now part of Stratasys) in February 2011, for a 3D printable Penrose Triangle – likely content not protectable by copyright in the first place.  Shapeways and many others in the ecosystem commented on the notice and what it meant for the industry at large – how do you reward legitimate creators/inventors in a world of “copy paste”?

You can see examples of how companies have implemented DMCA notices on the 3D Systems Cubify site (see Section 9) and on Shapeways.  There are obviously others.

Unfortunately, in the world of distributed digital manufacturing there is the potential for more than just copyright infringement – functional items which are manufactured and used may (and I stress may) violate third party patents, trademarks, trade dress, design rights, etc.    This could open up participants in the digital manufacturing chain to claims of secondary infringement for rights other than copyright.   These are typically much more difficult claims to make (just by the nature of what needs to be demonstrated under the law) – but potentially chilling nevertheless.

What is Needed?  Extension of the concepts in the DMCA to cover the broader bucket of intellectual property rights beyond copyright.  Desai and Magliocca, in Section III(c) of the Patents, Meet Napster: 3D Printing and the Digitization of Things article I referenced earlier, reach a similar conclusion and propose a framework for implementation.  Such changes need to be considered and implemented in a way which does not create or extend secondary liability to more players in the ecosystem, but rather provides a safe harbor for certain non-copyright claims should infringement liability otherwise exist.

More Certainty Will Bring Business Model Exploration

Forward thinking content owners, like Hasbro and others, recognize that over the next several years there will be substantial transformation in the digital manufacturing ecosystem.  Intellectual property metadata in self-describing digital files, harmonized licensing schemes and revised statutory frameworks will help accelerate these changes.

Ultimately, there is a universal market need for an intellectual property licensing, clearance and payment infrastructure to support the seamless distribution of, and payment for, manufacturable content.  Hundreds of billions of dollars worth of consumer goods alone are likely to be manufactured (in the home, at a store, at a remote service bureau on demand, or by the consumer goods company themselves) on an annual basis using additive manufacturing technologies.     When content creators have an easy way to monetize their content through licensing, content consumers can find and pay for quality content which meets their needs, and simple personalization tools have been created, we will truly see a transformation in digital manufacturing.

Note: The majority of the content in this post was originally published in the September 2014 edition of DEVELOP3D Magazine; it has been updated and refreshed.

SIGGRAPH 2014 Technical Paper Round Up

As many of you already know, SIGGRAPH 2014 (#SIGGRAPH2014) is taking place this week in Vancouver, British Columbia, through 14-AUG.  SIGGRAPH has been around for more than four decades, and the presentations there consistently represent some of the most forward thinking in the fields of computer graphics, computer vision and human computer interface technologies and techniques. I am certainly jealous of those in attendance, so I will covet from afar as I make my way to a client visit this week.  The first pages of all of the SIGGRAPH 2014 technical papers can be found at the SIGGRAPH site. Here is a sampling of those papers which I personally found to be most interesting.  A few have already been profiled by others, and where I have seen them reviewed before, I will provide additional links.  These are not in any order of priority:

  • Learning to be a Depth Camera for Close-Range Human Capture and Interaction (Microsoft Research project which proposes a machine learning technique to estimate z-depth per pixel using any conventional single 2D camera in certain limited capture and interaction scenarios [hands and faces] – demonstrating results comparable to existing consumer depth cameras, with dramatically lower costs, power consumption and form factor).  This one, admittedly, blew me away.   I have been interested in the consumer reality capture space for a while, and have blogged previously about the PrimeSense powered ecosystem and plenoptic (a/k/a “light field”) computational cameras.  I argued that light field cameras made lots of sense (to me at least) as the technology platform for mobile consumer depth sensing solutions (form factor, power consumption, etc.).   This new paper from Microsoft Research proposes a low cost depth sensing system for specific capture and interaction scenarios (the geometry of hands and faces) – turning a “regular” 2D camera into a depth sensor.   Admittedly, doing so requires that you first calibrate the 2D camera by registering depth maps captured from a depth camera against intensity images; in this way the 2D camera “learns” and encodes surface geometry and reflectance, among other things (see the toy sketch after this list).   They demonstrate two prototype hardware designs – a modified web camera for desktop sensing and a modified camera for mobile applications – in both instances demonstrating hand and face tracking on par with existing consumer depth camera solutions.  This paper is a great read; in addition to describing their proposed techniques, they provide a solid overview of existing consumer depth capture solutions.

Learning to be a Depth Camera

  • Proactive 3D Scanning of Inaccessible Parts  (proposes a 3D scanning method where a user modifies/moves the object being acquired during the scanning process to capture occluded regions, using an algorithm supporting scene movement as part of the global 3D scanning process)

 

  • First-person Hyper-lapse Videos – paper + Microsoft Research site (presentation of a method to convert single camera, first-person videos into hyper-lapse videos, i.e. time lapse videos with a smoothly moving camera – overcoming limitations of prior stabilization methods).  What does this mean?  If you have ever tried to speed up a video that you shot (particularly while the camera was moving), the results are often not optimal.  Because the smoothed camera path requires frames to be “made up” to fill the gaps, any camera movement introduces blurring.   Techcrunch reviewed the Microsoft Research project here.

 

  • Color Map Optimization for 3D Reconstruction with Consumer Depth Cameras (proposes an optimization approach to map color images onto geometric reconstructions generated from range and color videos produced by consumer grade color depth cameras – demonstrating substantially improved color mapping fidelity).  Anyone who has attempted to create a 3D reconstruction of an object or a scene using consumer depth cameras knows that it is one thing to create a generally good surface map, but an entirely more challenging problem to map color, per pixel, to accurately represent the captured environment.  Because consumer depth cameras are inherently noisy, and in particular because the shutters of the RGB and depth cameras are not synchronized, color information is generally “out of phase” with the reconstructed surfaces.  Their method provides for some pretty incredible results:

Improved Color Map
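For intuition, the naive baseline is simple back-projection: project every mesh vertex into every frame that sees it and average the sampled colors. A sketch of that baseline – my own illustration with hypothetical function names, not the paper’s method (their contribution is jointly optimizing the camera poses and a non-rigid image correction, which this omits):

```python
import numpy as np

def project(vertices, K, R, t):
    """Project Nx3 world-space vertices into pixel coordinates for a
    camera with 3x3 intrinsics K and world-to-camera pose (R, t)."""
    cam = vertices @ R.T + t            # world -> camera space
    uv = cam @ K.T                      # apply intrinsics
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]

def naive_vertex_colors(vertices, frames, poses, K):
    """Average each vertex's color over all frames that see it.
    No occlusion test and no pose refinement -- the blur/ghosting this
    produces is exactly what the paper's optimization removes."""
    acc = np.zeros((len(vertices), 3))
    cnt = np.zeros(len(vertices))
    for img, (R, t) in zip(frames, poses):
        uv, z = project(vertices, K, R, t)
        h, w = img.shape[:2]
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        acc[ok] += img[v[ok], u[ok]]
        cnt[ok] += 1
    return acc / np.maximum(cnt, 1)[:, None]
```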

  • Real-time Non-rigid Reconstruction Using an RGB-D Camera (a proposed hardware and software solution, using consumer graphics cards, for markerless reconstruction in real-time (at 30 Hz) of arbitrarily shaped (e.g. faces, bodies, animals), moving/deforming physical objects).  Real-time reconstruction of objects or scenes without moving elements is the bread and butter of solutions such as Kinect Fusion.  Real-time 3D reconstruction of moving objects is much more challenging.  Imagine, for example, having your facial expressions and body movements “painted”, in real-time, onto your avatar in a virtual world.  While this solution requires a custom rig (high quality capture at close range was needed, something consumer depth cameras do not provide), it is certainly exciting to see what can be achieved with relatively modest hardware modifications.


  • Functional Map Networks for Analyzing and Exploring Large Shape Collections (proposes a new algorithm for organizing, searching and ultimately using collections of models – first by creating high quality maps connecting the models, and then using those connections for queries, reconstruction, etc.).  Much of this paper was beyond me – but the problem is certainly understood by everyone who, even today, searches for 3D content.  Most of that data is organized/categorized by metadata – and not by the characteristics of the shapes themselves.  There are some services, like 3DShap.es, which actually interpret and categorize the underlying shape data – but most model hosting sites do not.  Imagine if you could run an algorithm against a huge database of content (e.g. Trimble’s 3D Warehouse), or even shapes “discovered” on the web, and immediately build connections and relationships between shapes so that you could ask the query “Show me similar doors”.  Wow.
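Functional maps are well beyond a blog-sized code snippet, but the flavor of shape-based search can be shown with a much simpler, classic descriptor – the D2 shape distribution (a histogram of distances between random surface point pairs). A toy sketch, emphatically not the paper’s method:

```python
import numpy as np

def d2_descriptor(points, n_pairs=10000, bins=64, seed=0):
    """D2 shape distribution (Osada et al.): histogram of distances
    between random surface point pairs -- a crude, scale-normalized
    signature of a shape's geometry."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def most_similar(query_points, library):
    """Index of the library shape (each an Nx3 point array) whose
    descriptor is closest to the query's -- 'show me similar doors'."""
    dq = d2_descriptor(query_points)
    return int(np.argmin([
        np.abs(dq - d2_descriptor(p)).sum() for p in library
    ]))
```

Crude as it is, even this level of geometric signature would beat pure metadata search for a query like “Show me similar doors”.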


  • Automatic Editing of Footage from Multiple Social Cameras (presents an approach that takes footage captured by multiple “social” cameras – cameras carried/worn by those participating in the activity – and automatically produces a final, coherent cut video of that activity, represented from multiple camera views.)  The folks at Mashable recently looked at this approach.  While this is certainly cool, I’ve often wondered why, given all the mobile video camera solutions that exist, an application hasn’t been developed which allows an event to be “socially” captured on video, and then, in real or near-real time, allows interaction with that socially captured video – navigating from camera position to camera position within a 3D environment.  Sure, it is a huge data problem, but if you have gone to a concert lately you will soon realize that many folks (thousands of them, in fact) are capturing some, if not all, of the event from their unique camera positions.  The same is certainly true for many sporting events (and in most cases, youth sporting events where the parents are recording their children).  Taking the Microsoft Photosynth approach on steroids: if those camera positions were back-computed into 3D space, the video and sound could be synchronized, allowing for virtual fly-throughs to different camera locations (interpolating frames along the way if necessary).  OK, we might have to borrow all of DARPA’s computing power for a month for a five minute video clip, but boy would it be cool!  😉

LazeeEye – 3D Capture Device Phone Add-On

There has been a continuing strong push on the consumer/prosumer 3D reality capture side of the capture/modify/make ecosystem – whether that captured content is to be used in an object or scene based scanning workflow.  New processing algorithms, along with orders-of-magnitude improvements in processing power, are unlocking new capabilities.

DIY scanning solutions have been around for a while – ranging from pure photogrammetric approaches, to building structured light/laser scanning setups (e.g. see the recommendations which DAVID 3D Solutions GbR makes on the selection of scanning hardware), to leveraging commercial depth sense cameras in interesting new ways (e.g. using PrimeSense, SoftKinetic or other devices to create a 3D depth map), to utilizing light field cameras for 3D reconstructions.  Occipital raised $1M in their Kickstarter campaign to develop their Structure Sensor (which is powered by PrimeSense technology) hardware attachment for Apple devices, and 3D Systems is white labeling that solution.  Google has been working on Google Tango with its project partners (and apparently Apple – because the Google Tango prototype included PrimeSense technology)!

Early in 2014 I looked at the various market trends that were impacting the capture/modify/make ecosystem — the explosion of low cost, easy to use 3D reality capture devices (and the associated software solution stack and hardware processing platforms) was key among them –

2014 Market Trends

For a graphical view of how some of the lower cost sensors have evolved over time, see:

3D Sensor Progression

Along comes an interesting Kickstarter project from Heuristic Labs for the LazeeEye, which so far has raised roughly $67K (on a goal of $250K) to develop a laser emitter which attaches to a phone and flashes a pattern of light onto the object or scene to be captured; stereo vision processing software on the phone then creates/infers a depth map from that.  According to Heuristic Labs, the creators of the LazeeEye:

LazeeEye? Seriously? The name “LazeeEye” is a portmanteau of “laser” and “eye,” indicating that your phone’s camera (a single “eye”) is being augmented with a second, “laser eye” – thus bestowing depth perception via stereo vision, i.e., letting your smartphone camera see in 3D just like you can!

The examples provided in the funding video are pretty rough, and because it is a “single shot” solution, only those surfaces which can be seen from the camera viewpoint are captured.  In order to capture a full scene, multiple shots would need to be taken, registered and then stitched together.  This problem is not unique to the LazeeEye – it is a known limitation of all “single shot” solutions.  More from the LazeeEye Kickstarter project pages:

How does LazeeEye work? The enabling technology behind LazeeEye is active stereo vision, where (by analogy with human stereo vision) one “eye” is your existing smartphone camera and passively receives incoming light, while the other “eye” actively projects light outwards onto the scene, where it bounces back to the passive eye. The projected light is patterned in a way that is known and pre-calibrated in the smartphone; after snapping a photo, the stereo vision software on the phone can cross-reference this image with its pre-calibrated reference image. After finding feature matches between the current and reference image, the algorithm essentially triangulates to compute an estimate of the depth. It performs this operation for each pixel, ultimately yielding a high-resolution depth image that matches pixel-for-pixel with the standard 2D color image (equivalently, this can be considered a colored 3D point cloud). Note that LazeeEye also performs certain temporal modulation “magic” (the details of which we’re carefully guarding as a competitive advantage) that boosts the observed signal-to-noise ratio, allowing the projected pattern to appear much brighter against the background.

Note that a more in-depth treatment of active stereo vision can be found in the literature: e.g., http://www.willowgarage.com/sites/default/files/ptext.pdf and https://cvhci.anthropomatik.kit.edu/~manel/publications/mva2013RGBD.pdf
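The geometry behind that triangulation step is worth making concrete: for a calibrated, rectified camera/emitter pair, depth is just focal length times baseline divided by disparity. A minimal sketch with illustrative calibration numbers (not LazeeEye’s actual values):

```python
import numpy as np

# Illustrative calibration constants -- NOT LazeeEye's actual numbers.
FOCAL_PX = 600.0    # camera focal length, in pixels
BASELINE_M = 0.04   # camera-to-laser offset, in meters (~4 cm)

def depth_from_disparity(disparity_px):
    """Rectified active stereo: z = f * b / d. A feature that shifts
    12 px from its pre-calibrated reference position sits at
    600 * 0.04 / 12 = 2.0 m."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = FOCAL_PX * BASELINE_M / d
    return np.where((d > 0) & np.isfinite(z), z, 0.0)  # 0 = no match
```

Note how the baseline sits in the numerator: with only a few centimeters between camera and laser on a phone add-on, depth precision falls off quickly with range, which is consistent with the rough results in the funding video.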

[Side note, I found it interesting that Heuristic Labs is using Sketchfab to host its 3D models – yet another 3D content developer/provider who is leveraging this great technical solution for 3D content sharing.]

Depending on the funding level you select during the campaign you get different hardware – varying laser colors (which impact scan quality), whether the emitter comes aligned, SDK access, etc.  They readily acknowledge that 3D capture technologies will become more ubiquitous in the coming years with the next generations of smartphones (whether powered by active technology like the PrimeSense solutions or passive solutions such as light field cameras).  Their answer: why wait (and even if you wanted to wait, their solution is more cost effective)?

Why wait indeed.  It is an interesting application of existing technical solutions, packaged cheaply and approachably for the DIY consumer – I will be curious to see how this campaign finishes up.

[Second side note, I guess my idea of hacking the newest generation of video cameras with built-in DLP projectors (like those Sony makes) to create a structured light video solution is worth pursuing.  The concept?  Use the onboard projector to emit patterns of structured light, capture the result using the onboard CCD, and process on a laptop, in the cloud, on your camera, etc.  Voilà – a cheap 3D capture device that you take with you on your next vacation.  Heck, if you are going to do that, why not just mount a DLP pico projector directly to your phone and do the same thing. . .  ;-)]
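And since I cannot resist: the pattern-generation half of that structured light idea really is simple. A sketch of the classic Gray-code stripe sequence such a projector could emit – decoding the captured images then identifies which projector column lit each camera pixel, after which triangulation proceeds just as in the disparity example above:

```python
import numpy as np

def graycode_patterns(width=1280, height=720):
    """Vertical Gray-code stripe images for a projector to emit.
    Each pattern encodes one bit of every projector column; the bit
    sequence a camera pixel observes identifies which column lit it,
    and triangulation then recovers depth as in the disparity example."""
    n_bits = int(np.ceil(np.log2(width)))
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                     # binary -> Gray code
    return [
        np.tile(((gray >> bit) & 1).astype(np.uint8) * 255, (height, 1))
        for bit in reversed(range(n_bits))
    ]
```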