Autodesk REAL 2016 Startup Competition

I had the opportunity to attend the Autodesk REAL 2016 event which is currently taking place at Fort Mason over March 8th and March 9th.   This event focuses on “reality computing” – the ecosystem of reality capture, modeling tools, computational solutions and outputs (whether fully digital workflows or a process that results in a physical object).

The event kicked off with the first Autodesk REAL Deal pitch competition.  Jesse Devitte from Borealis Ventures served as the emcee for this event.  A VC in his own right (as the Managing Director and Co-founder of Borealis), Jesse understands the technical computing space and has a great track record of backing companies that impact the built environment.  The VC panel judging the pitches consisted of: (1) Trae Vassalo, an independent investor/consultant who was previously a general partner of Kleiner Perkins Caufield & Byers; (2) Sven Strohband, CTO of Khosla Ventures; and (3) Shahin Farshchi, a Partner with Lux Capital.

The winner of the competition will be announced, in conjunction with a VC panel discussion, at the end of the first day’s events starting at 5:00pm on the REAL Live Stage, Herbst Pavilion.

[Note: These were typed in near real time while watching the presenters and their interactions with the REAL Deal VC panelists – my apologies in advance if they don’t flow well, etc.  I’ve tried to be as accurate as possible.]

Lucid VR

Lucid VR was the first presenting company – pitching a 3D stereoscopic camera built specifically for VR.  Han Jin, the CEO and co-founder, presented on behalf of the company.  He started by explaining that 16M headsets will be shipped this year for VR consumption – however – VR content creation is still incredibly difficult.  It is a “journey of pain” spanning time, money, huge data sets, production and sharing difficulties.  Lucid VR has created an integrated hardware device, called the LucidCam, that “captures an experience” and simplifies the production and publication of VR content, which can then be consumed by all VR headsets.  Han pitched the vision of combining multiple LucidCam devices to support immersive 360° VR and real-time VR live streaming.  Lucid VR hit its $100K crowdfunding campaign goal in November of 2015.

Panel Questions

Sven initially asked a two-part question: (1) which market is the company trying to attack first – consumer or enterprise; and (2) what is the technical differentiation for the hardware device (multi-camera setups have been around for a while)?  Han said that the initial use cases seem to be focusing on training applications – so more of an enterprise setup.  He explained that while dual camera setups have been around, they are complex, multi-part, mechanically driven solutions, whereas Lucid VR leverages GPU-based, on-device processing for real-time capture and playback – a silicon rather than mechanical approach.  Trae then asked about market timing – how will you get to market, what will be the pricing, etc.  Han said that they planned to ship at the end of the year, and that as of right now they were primarily working with consumer retailers for content creation.  They expected a GTM price point of between $300 and $400 for their capture device.  Trae’s follow-up – even if you capture and create the content, isn’t one of the gating factors going to be that consumers will not have the appropriate hardware/software locally to experience it?

Minds Mechanical

The next presentation was from Minds Mechanical, and led by the CEO, Jacob Hockett.

Jacob explained that Minds Mechanical started as a solutions company – integrating various hardware and software to support the product development needs (primarily by providing inspection and compliance services) of some of the largest Tier 1 manufacturers in the world.   While growing and developing this services business they realized that they had identified a generalized challenge – and were working to disrupt the metrology (as opposed to meteorology, as Jacob jokingly pointed out) space.

Jacob explained that current metrology software is very expensive and is often optimized for and paired with specific hardware.  Further compounding the problem is that various third party metrology software solutions often give different results on the same part, even when acting on the same data set.  The expense of adding new seats, combined with potentially incompatible results across third party solutions, results in limited metrology information sharing within an organization.

They have developed a cloud-based solution called Quality to help solve these challenges – Jacob suggested that we think of it as a PLM-type solution for the manufacturing and inspection value chain, tying inspection data back into the design and build process.  Jacob claims that Quality is the first truly cross platform solution available in the industry.

Given their existing customer relationships, they were targeting the aerospace, defense and MRO markets initially, to be followed by medical and automotive later.  They are actively transitioning their business from a solutions business to a software company and were seeking a $700K investment to grow the team.  [Note: Jacob was previously a product manager and AE at Verisurf Software, one of the market leading metrology software applications, prior to starting Minds Mechanical.]  The lack of modern, easy-to-use tools is a barrier in the industry and Minds Mechanical is going to try and change the entire market.

Panel Questions

Trae kicked off the questions – asking Jacob to identify who the buyer is within an organization and what the driver for purchasing is (expansion to new opportunities, cost savings, etc.).  Jacob said that the buy decision was mostly a cost savings opportunity.  Their pricing is low enough that it can be a credit card purchase, avoiding internal PO and purchase approval processes entirely.  Trae then followed up by asking how the data was originally captured – Jacob explained that they abstract data from the various third party metrology applications which might be used in an account and provide a publication and analytics layer on top of those content creation tools.  Sven then asked about data ownership/regulation compliance for a SaaS solution – was it a barrier to purchase?  Jacob said that they understand the challenges of hosting/acting upon manufacturing data in the cloud, but that the reality was that for certain manufacturers and certain types of projects it just “wasn’t going to happen”.  Trae then asked whether they were working on a locally hosted solution for those types of requirements, and Jacob said yes they were.  Shahin from Lux then asked who they were selling to – was it the OEMs (trying to force them to mandate it within the value chain) or the actual supply chain participants?  Jacob said that they will target the suppliers first, rather than trying to force the OEMs to demand use within their supply chains – a bottom-up sales approach.


AREVO Labs

The next presentation was from Hemant Bheda, the CEO and founder of AREVO Labs.  AREVO’s mission is to leverage additive manufacturing technologies to produce light and strong composite parts to replace metal parts in production applications.  Hemant explained that they have ten pending patent applications, and that to execute on this vision they need: (1) high performance materials for production; (2) 3D printing software for production parts; and (3) a scalable manufacturing platform.

AREVO has created a continuous carbon fiber composite material which is five times as strong as titanium – unlocked by their proprietary software, which weaves this material together in “true” 3D space (rather than the 2.5D which they claim existing FDM-based printers use).  AREVO claims to have transported the industry from 2.5D to true 3D by optimizing the tool path/material deposition to generate the best parts – integrating a proprietary solution that estimates post-production part strength, then optimizes the tool path for the lowest cost, lowest time, highest strength solution.

Their solution is based around a robotic-arm-based manufacturing cell – and can be used for small to large parts (up to 2 meters in size).  Target markets range from medical (single use applications) and aerospace/defense (lightweight structural solutions) to on-demand industrial spare parts as well as oil & gas applications.  They have current customer engagements with Northrop, Airbus, Bombardier, J&J and Schlumberger.

[FWIW, you can see an earlier article on them here, as well as a video of their process.  MarkForged is obviously also in the market and utilizes continuous carbon fiber as part of an AM process.  One of the slides in the AREVO Labs deck which was quickly clicked through was a comparison of the two processes – it would be interesting to learn more about that differentiation!]

Hemant explained that they were currently seeking a Series A raise of $8M.

Panel Questions

Shahin kicked off the questions for the panel – asking whether customers were primarily interested in purchasing parts produced from the technology, or whether they wanted to buy the technology so they could produce their own?  Hemant said that the answer is both – some want parts produced for them, others want the tech; it depends on what their anticipated needs are over time.  Sven asked Hemant how he thought the market would settle out over time between continuous fiber (as with their solution) versus chopped fiber.  Hemant said that they view the two technologies as complementary – in the metals replacement market, continuous fiber is the solution for many higher value, higher materials properties use cases, but both will exist in the market.


UNYQ

The final presentation of the day during the REAL Deal pitch competition came from UNYQ – they had previously presented at the REAL 2015 event.  Eythor Bender, the CEO, presented on behalf of UNYQ.  UNYQ develops personalized prosthetic and orthotic devices leveraging additive manufacturing for production.  In 2016 they will be introducing the UNYQ Scoliosis Brace, having licensed the technology from 3D Systems, who are also investors.  According to Crunchbase data UNYQ has raised right around $2.5M across three funding rounds, and expects to be profitable sometime in 2017.

UNYQ has been working on a platform for 3D printing manufacturing, personalization and data integration – resulting in devices that are not only personalized using AM for production, but can also integrate various sensors so that they become IoT nodes reporting back various streams of data (performance, how long it has been worn, etc.) which can be shared with clinicians.  UNYQ uses a photogrammetry based app to capture shape data and then leverages Autodesk technology to compute and mesh a solution.  The information is captured in clinics and the devices are primarily produced on FDM printers – going from photos to personalized products in less than four weeks.  They generated roughly $500K in revenues in 2015, starting with their prosthetic covers, and have a GTM plan for their scoliosis offering which would have them generate $1M in sales within the first year after launch in May 2016.

UNYQ is currently seeking a $4M Series Seed round.

Panel Questions

Trae asked how UNYQ could accelerate this into market – given the market need, why wasn’t adoption happening faster?  Eythor said that in 2014/15 they had really been focusing on platform and partnership development – it was only at the very end of 2015 that they started creating a direct sales team.  Given that there are only roughly 2,000 clinics in the US it was a known market and they had a plan of attack.  The limited number of clinics, plus the opportunity to reach consumers directly via social media and other d2c marketing efforts, will only accelerate growth in 2016 and beyond.  Trae followed up by asking – where is the resistance to adoption in the market (is it the middleman or something else that is bogging things down)?  Eythor said that it is more a process resistance (it hasn’t been done this way before, and with manual labor) than it is with the clinics themselves.  Sven then asked about data comparing the treatment efficacy and patient outcomes using the UNYQ devices versus the “traditional” methods of treatment.  Eythor said that while the sample set was limited, one of their strategic advisors had compared their solutions to those traditionally produced and found that the UNYQ offering was at least as good as what is in the market today – but with an absolutely clear preference on the patient side.  The final question came from Shahin at Lux, who asked whether there was market conflict in that the clinics (which are the primary way UNYQ gets to market) have a somewhat vested interest in continuing to do things the old way (potentially higher revenues/margins, lots of crafters involved in that value chain, reluctance to change, etc.).  Eythor explained that they were focusing only on the 10-20% of the market that is progressive and landing/winning them, and then over time pulling the rest of the market forward.


Mapillary Raises $8M – Crowdsourced Street Photos

Mapillary, a Malmö, Sweden based company that is building a crowdsourced street level photo mapping service, has raised $8M in their Series A fundraising round, led by Atomico, with participation by Sequoia Capital, LDV Capital and Playfair Capital.  Some have commented that Mapillary wants to compete with Google Street View using crowdsourced, and then geo-located, photos (and presumably video and other assets over time).  Mapillary uses Mapbox as its base mapping platform.  Mapbox itself sources its underlying street mapping data from OpenStreetMap, as well as satellite imagery, terrain and places information from other commercial sources – you can see the full list here.  Very interesting to see that Mapillary has a relationship with ESRI – such that ESRI ArcGIS users can access Mapillary crowdsourced photo data directly via ArcGIS Online.

I previously wrote about MapBox and OpenStreetMap in October 2013 when it closed its initial $10M Series A round led by Foundry Group.  You can see that initial blog post here.  MapBox subsequently raised a $52.6M Series B round, led by DFJ, in June of 2015.  I then examined the intersection of crowdsourced data collection and commercial use in the context of the Crunchbase dispute with Pro Populi and contrasted that with the MapBox and OpenStreetMap relationship.

I am fascinated by the opportunities that are unlocked by the continuing improvement in mobile imaging sensors.  The devices themselves are becoming robust enough for local computer vision processing (rather than sending data to the cloud) and we are perhaps a generation away (IMHO) from having an entirely different class of sensors to capture data from. That combined with significant improvements in location services makes it possible to explore some very interesting business and data services in the future.

In late 2013 I predicted that, in time, mobile 3D capture devices (and primarily passive ones) would ultimately be used to capture, and tie together a crowd sourced world model of 3D data.

What could ultimately be game changing is if we find updated and refined depth sense technology embedded and delivered directly with the next series of smartphones and augmented reality devices. . .  In that world, everyone has a 3D depth sensor, everyone is capturing data, and the potentials are limitless for applications which can harvest and act upon that data once captured.

Let the era of crowd sourced world 3D data capture begin!

It makes absolute sense that the place to start along this journey is 2D and video imagery, which can be supplemented (and ultimately supplanted) over time by richer sources of data – leveraging an infrastructure that has already been built.  We still have thorny and interesting intellectual property implications to consider (Think Before You Scan That Building) – but regardless – bravo Mapillary! Bravo indeed!


IP in the Coming World of Distributed Manufacturing: Redux

Late in 2014 I wrote an article outlining why I felt that the transformational changes occurring on both “ends” of the 3D ecosystem were going to force a re-think of the ways that 3D content creators, owners and consumers would capture, interact with, and perhaps even make physical, 3D data.  These changes will catalyze a UGC content explosion – forever changing the ways that brands interact with consumers and the ways consumers choose to personalize, and then manufacture, the goods that are relevant to them.

There are no doubt significant technical hurdles which remain in realizing that future.   I am confident that they will be overcome in time.  In addition to the broad question of how the metes and bounds of intellectual property protection will be stretched in the face of these new technologies, I examined some key tactical issues which needed to be addressed.  These were:

  • De-facto and proposed new manufacturing file formats do not encapsulate intellectual property information
  • Inconsistent, and perhaps even inappropriate, licensing schemes used for 3D data
  • Safe-harbor provisions of the DMCA apply only to copyright infringement

As we exit 2015, let’s take a look at each of these in turn and see what, if anything, has changed in the previous twelve months.

De-facto and proposed new manufacturing file formats do not encapsulate intellectual property information

After outlining the challenges with STL and AMF, I proposed that what was needed was:

A file format (AMF or an alternate) for manufacturing which specifically allow for metadata containers to be encapsulated in the file itself.  These data containers can hold information about the content of the file such that, to a large extent, ownership and license rights could be self-describing.   An example of this is the ID3 metadata tagging system for MP3 files.   Of course the presence of tag information alone is not intended to prevent piracy (i.e. like a DRM implementation would be), but it certainly makes it easier for content creators and consumers alike to organize and categorize content, obtain and track license rights, etc.

In late April 2015, the 3MF Consortium was launched by seven companies in the 3D printing ecosystem (Autodesk, Dassault, FIT AG/netfabb (now part of Autodesk), HP, Microsoft, Shapeways and SLM Solutions Group), releasing the “3D Manufacturing Format (3MF) specification, which allows design applications to send full-fidelity 3D models to a mix of other applications, platforms, services and printers.”  3D Systems, Materialise and Stratasys have since joined; the most current membership list can be found here.  While launched under the umbrella of an industry wide consortium, the genesis for 3MF came from Microsoft – which concluded that none of the existing formats worked (or could be made to work in a timely fashion) sufficiently well to support a growing ecosystem of 3D content creators, materials and devices.

Adrian Lannin, the Executive Director of the 3MF Consortium (and also Group Product Manager at Microsoft) gave a great presentation (video here) on the genesis of the 3MF Consortium, and the challenges they are attempting to solve, at the TCT Show in mid-October 2015.

The specification for the 3MF format has been published here.  A direct link to the published 1.01 version of the specification can be found here.  In addition to attempting to solve some of the interoperability and functionality issues with the current file formats, the 3MF Specification provides a “hook” to inject IP data into the 3MF package via an extension.

The specification does provide for optional package elements including digital signatures (see figure 2-1), more fully described in Section 6.1.  An extension to the 3MF format covering materials and properties can be found here.

Table 8-1 of the 3MF Specification makes clear that in the context of a model, the following are valid metadata names:

  • Title
  • Designer
  • Description
  • Copyright
  • LicenseTerms
  • Rating
  • CreationDate
  • ModificationDate

The content block associated to any of these metadata names can be any string of data.  Looks like ID3 tags for MP3 to me!  A separate extension specifically addressing ownership data, license rights, etc. could be developed providing for more granularity than the current mechanism.

While it will likely take time for 3MF to displace STL based workflows, the 3MF Specification seems to define the necessary container into which rights holder information can be injected and persisted throughout the manufacturing process.

Inconsistent, and perhaps even inappropriate, licensing schemes used for 3D data

After reviewing the multitude of ways that content creators and rights holders were attempting to protect and license their works, I concluded that what was needed was:

An integrated, harmonized licensing scheme addressing all of the intellectual property rights impacted in the digital manufacturing ecosystem – drafted in a way that non-lawyers can read and clearly understand them.  This is no small project, but needs to be done. Harmonization would simplify the granting and tracking of license rights (assuming stakeholders in the ecosystem helped to draft and use those terms) and could be implemented in conjunction with the file format metadata concept described earlier.

Unfortunately, not a lot of progress has yet been made in this regard.

As I outlined first in 2012, I continue to believe that there is a generalized, misplaced, widespread, reliance on the Creative Commons license framework for digital content which is to be manufactured into physical items.    These licenses, while incredibly useful, only address works protected by copyright – and were originally intended to grant copyright permissions in non-software works to the public.

The Creative Commons Attribution 4.0 International Public License framework specifically excludes trademark and patent licensing (see Section 2(b)(2)) as well as the ability to collect royalties (see Section 2(b)(3)), making the framework generally inapplicable for use in licensing schemes where the rightsholders wish to be paid upon the exercise of license rights.  This shouldn’t be surprising to anyone who knows why the Creative Commons licensing scheme was originally developed – but I suspect it is nevertheless surprising to folks who may be relying on the framework as the basis for commercial transactions requiring royalties.  Even those who are properly using the CC scheme within its intended purpose may have compliance challenges when licenses requiring attribution are implemented in a 3D printing workflow.

The Creative Commons, no doubt, understands the complexity, and potential ambiguities, of using the current CC licensing schemes for 3D printing workflows.

Safe-harbor provisions of the DMCA apply only to copyright infringement

It is possible, via secondary or vicarious liability, to be held legally responsible for intellectual property infringement even if you did not directly commit acts of infringement.   After examining the Digital Millennium Copyright Act (the “DMCA”) and the “safe harbor” it potentially provides to service providers for copyright infringement (assuming they comply with other elements of the law), I concluded that what was needed was an extension of the concepts in the DMCA to cover the broader bucket of intellectual property rights beyond copyright, most notably, providing protection against dubious trademark infringement claims.

On September 1st, 2015, Danny Marti, the U.S. Intellectual Property Enforcement Coordinator (USIPC) at the White House Office of Management and Budget, solicited comments from interested parties in the Federal Register on the development of the 2016-2019 Joint Strategic Plan on Intellectual Property Enforcement.   Presumably the primary goal was to solicit feedback on intellectual property infringement enforcement priorities.  Several parties used it as an opportunity to provide public comment on the necessity of extending DMCA like “safe harbor” protections to trademark infringement claims.

On October 16th, 2015, Etsy, Shapeways, Foursquare, Kickstarter, and Meetup (describing themselves as “online service providers (OSPs) that connect millions of creators, designers, and small business owners to each other, to their customers, and to the world”) provided comments in response to the USIPC request, which can be found here.   After walking through some representative examples across their businesses, and making the argument that the lack of a notice/counter-notice process for trademark infringement claims can sometimes be chilling, the commentators ultimately conclude that it is time to consider expanding safe harbors:

While the benefits of statutory safe harbors are important, they are currently limited to disputes over copyright and claims covered by section 230 of the [Communications Decency Act]. No such protection exists for similarly problematic behavior with regard to trademark. As online content grows and brings about more disputes, it is necessary to consider expanding existing safe harbors or creating new ones for trademarks.

In the Matter of Development of the Joint Strategic Plan for Intellectual Property Enforcement – Comments of Etsy, Foursquare, Kickstarter, Meetup, and Shapeways, page 6 (October 16th, 2015).  [Note: Additional background on the examples given by the commentators can be found in an article posted here.]

In addition to the benefit to UGC creators on the “wrong” side of spurious trademark infringement claims, OSPs as a class would clearly benefit from expanded safe harbors covering potential trademark infringement claims.  That is certainly not a bad result either.

We are at the dawn of the UGC economy – whether we are talking purely about digital goods, or those that are ultimately made physical. While any process to change the applicable law will be long and winding – the conversation needs to be started now. OSPs that serve the UGC economy need the business model certainty and protection from illegitimate copyright and trademark infringement claims that expanded safe harbors would bring.

This article was originally published on December 15th, 2015 at 3D Printing Industry


Paracosm Seeking CV/CG Engineers and C++ Developers!

If you are a computer vision, computer graphics or C++ developer and are looking for a new opportunity with an exciting venture backed company — look no further.   Paracosm is developing an exciting software platform leveraging the newest wave of 3D hardware capture devices in order to build a perceptual map for devices (and humans!) to navigate within.

Job description provided by Paracosm follows — feel free to reach out to Amir (email below) or to me and I will pass your c.v. along:


About Paracosm:
Paracosm is solving machine perception by generating 3D maps of every interior space on earth.
We are developing a large-scale 3D reconstruction and perception platform that will enable robots and augmented reality apps to fully interact with their environment. You can see some of our fun demos here (pass: MINDBLOWN ).
We are a venture-backed startup based in Gainesville, FL, and were original development partners on Google’s Project Tango. We are currently working closely with companies like iRobot to commercialize our technology.
Job Role:
We are looking for senior C++ developers, computer-vision engineers, and computer graphics engineers to help us implement our next-gen 3D-reconstruction algorithms. Our algorithms sit at the intersection of SLAM+Computer Vision+Computer Graphics.
As part of this sweet gig, you’ll be working alongside a team of Computer Vision PhDs to:
* design & implement & test cloud-based 3D-reconstruction algorithms
* develop real-time front-end interfaces designed to run on tablets (Google Tango, Intel RealSense) and AR headsets
* experiment with cutting edge machine-learning techniques to perform object segmentation and detection
Proficiency with C++ is pretty critical; ideally you’ll be experienced enough to even teach us a few tricks! Familiarity with complex algorithms is a huge plus, ideally in one of the following categories:
– Surface reconstruction + meshing
– 3D dense reconstruction from depth cameras
– SLAM and global optimization techniques
– Visual odometry and sensor fusion
– Localization and place recognition
– Perception: Object segmentation and recognition
Work Environment:
Teamwork, collaboration and exploration of risky new ideas and concepts is a core part of our culture. We all work closely together to implement new approaches that push the state of the art.
We have fresh, healthy & delicious lunch catered every day by our personal chef, a kitchen full of snacks, and backlog of crazy hard problems we need solved.
We actively encourage people on our team to publish their work and present at conferences (we also offer full stipends for attending 2 conferences each year).
Did I mention we’re big on the team work thing? The entire team has significant input into company strategy and product direction, and everyone’s opinion and voice is valued.
Work will take place at our offices in Gainesville, FL
If you are interested, please email the CEO directly: Amir Rubin,



Intellectual Property in the Coming World of Distributed Digital Manufacturing

We are certainly in the midst of a transformation in the way that 3D content creators, owners and consumers will interact with, exchange, and perhaps even make physical, 3D data.  Along the way, traditional notions of what represents content worthy of protection will be stretched (and perhaps broken) as the market works to navigate and find the acceptable solution for all participants in the ecosystem – allowing 3D content creators to properly monetize their creativity and hard work, while allowing 3D content consumers to leverage a rich universe of quality content, and perhaps even paying for it along the way.   It won’t be easy, but there is a path forward.

In early 2012 I began a series of blogs on the intersection of intellectual property with the dramatic changes influencing the 3D capture/modify/make ecosystem (of course 3D printing is but one, of many, possible outcomes of a 3D capture and design process).  My first blog in this series was The Storm Clouds on the Horizon where I wrote that I felt the next “Napster” era was upon us for digitally captured real world content.

Storm Clouds

There is a growing awareness and understanding of intellectual property considerations in the 3D ecosystem – whether we are talking about how it might impact consumers who wish to use their in-home 3D printers to produce an item or a company within a distributed digital manufacturing chain for a large consumer goods company.  These concerns have been accelerated by the transformative technical changes on both “ends” of that ecosystem.

The technological shift

Over the last few years there has been continuing acceleration in the hardware, software and services necessary to empower digital design and manufacturing processes.  Earlier in 2014 I identified the following key trends in the capture/modify/make ecosystem for object based 3D capture and manufacture:

2014 Market Trends

We are at a unique point in time – when both “ends” of the capture to make ecosystem are being impacted by dramatic technological changes.  The change is continuing, the pace is accelerating.

The last several years have seen many new market entrants on the consumer/prosumer 3D printing side.  What is, and will be in my opinion, equally or more transformative is the impact that new low cost/smaller form factor 3D capture devices will have in this space.  Consumer 3D data capture is becoming more mainstream as we close out 2014 – as Intel adds their RealSense™ depth sensing technology to every laptop they ship (with the first expression in the Creative Senz3D), as Google progresses with Project Tango along with their software partners, and as other 3D data capture solutions are developed and distributed to consumers.  I looked at some of these market players in an earlier blog and also examined how new passive 3D capture technologies, leveraging plenoptic (a/k/a “light field”) cameras, may find their way into your next phone or tablet.

3D Sensor Progression

A recent research paper co-authored by Microsoft Research and published at SIGGRAPH 2014 earlier in August titled Learning to be a Depth Camera demonstrates that 3D capture and interaction can be implemented by applying machine learning techniques and minor hardware modifications to existing single 2D camera systems.

With the convergence of technologies, it is likely we will see the growth of multifunction 3D capture and printing devices that attempt to offer “one button” reproduction (and transmission/sharing) of certain sized objects in certain materials.  Examples even exist today – like the ZEUS, marketed as the first “ALL-IN-ONE 3D Printer / Copy Machine,” as well as the Blacksmith Genesis, which started a crowd-funding campaign on Indiegogo in August.  3D Systems, Intel and Best Buy have recently collaborated on an integrated campaign called the “Intel Experience” where, in selected Best Buy stores, consumers will be exposed to 3D capture solutions leveraging Intel’s RealSense cameras alongside 3D Systems’ 3D printing solutions.

While I believe the ecosystem is lagging in producing software tools that make it easy for non-professional users to create, find and personalize 3D content, we are only a short time away from dramatic changes there too.

When people can more easily digitize, share, copy and reproduce real world 3D content – how will that change the landscape for content owners and consumers alike?  What existing business models will be threatened, and which new ones created, with such a transformation?

What exactly “is” Intellectual Property in the Context of Digital Manufacturing?

Many things!  It may be represented in trade secrets – the confidential, differentiated manufacturing processes used to produce something.  It could be represented by copyright – in, for example, the rights a sculptor would have in their latest creation.  It might be represented by patent – in a novel, non-obvious, useful device.  In the EU, a design could be protected by registered or unregistered design rights.

What if your son broke the leg of his favorite action figure (which you purchased from a big box toy store) and you decided to repair it using something you produced from your 3D printer (or had printed at the Staples down the street, or shipped to you from Shapeways)?

What if you were able to find and download a manufacturable model (in STL format) of that action figure that someone had uploaded to one of the many model sharing sites and used that as the basis of the print job?  What if the person who uploaded the file had created the model by hand (e.g. they may have looked at the same action figure you wanted to repair but they designed it on a blank digital canvas)?  What if the person who uploaded the file created the representation (in the file) by 3D scanning an undamaged action figure?   What if you scanned, printed, and repaired the item in your own home but did not share the files with anyone else?

Lamp Rings

What if it was not an action figure, but instead a retaining ring for one of the low voltage lights which keep getting run over in your front yard?

Do these differences matter?  Absolutely.

The type of content (artistic or functional), the reason for manufacture (new item, replacement part, etc.), how the content to be manufactured was generated (created from scratch, printable file obtained from a third party, the end result of a 3D reality capture process, from the manufacturer, etc.) and where the content will be manufactured (in your home, at a local store for pickup, on a third party’s networked printer, at a remote service bureau and shipped, etc.) all matter.  In some instances the content might not be protected at all, in others it might touch multiple types of third party intellectual property.

There is not enough space here to give you a general primer on all of the intellectual property issues in the create/capture/modify/make ecosystem.  I would instead point you to several excellent publications and presentations as background (which principally look at the application of US law):

The above is a small (but particularly useful) sample of work examining some of these issues in depth; another broader summary can be found here.  You will find that authors in this space cover a broad spectrum of opinions – from those who believe that intellectual property issues need to be understood in digital manufacturing but are generally inapplicable because many objects that would be manufactured are generally not protectable (e.g. Weinberg), to those who believe that the democratization of capture and printing technologies will utterly transform manufacturing supply chains and potentially substantially devalue the intellectual property rights all content owners will have in the future (e.g. Hornick), as well as everything in between.

I fall in the middle ground – believing that the fundamental technical and market changing technologies will stretch the concept of intellectual property, but as we have seen in the past with the music industry, that over time the ecosystem will adapt – including the law.

Intellectual Property Concerns an Impediment to Continuing Growth?

Intellectual property concerns have moved beyond the theoretical – manufacturers now consider them among the most potentially disruptive impacts of the broadening reach of additive manufacturing.  In June 2014, PricewaterhouseCoopers (“PwC”) and the Manufacturing Institute published their report on 3D Printing and the New Shape of Industrial Manufacturing (the “PwC Report”).  The report is broad reaching, and well worth an extended read by itself.  One section examines the potential for additive manufacturing to shrink supply chains:

Companies are re-imagining supply chains: a world of networked printers where logistics may be more about delivering digital design files—from one continent to printer farms in another—than about containers, ships and cargo planes. In fact, 70% of manufacturers we surveyed in the PwC Innovations Survey believe that, in the next three–five years, 3DP will be used to produce obsolete parts; 57% believe it will be used for after-market parts.

Source: PwC Report, Page #1

When PwC Report survey participants were asked to identify what they felt the most disruptive impact wide adoption of additive manufacturing technologies could have on US manufacturing – the “threat to intellectual property” was second only to supply chain restructuring.

This concern should not really be all that surprising.

In October 2013 the market research firm Gartner, in conjunction with their Gartner Symposium/ITxpo, made a series of predictions impacting IT organizations and users for 2014 and beyond.   Several related to the impact that cheaper 3D capture and printing devices were predicted to have in the future for the creation of physical goods – predicting staggering losses from the piracy of intellectual property:

By 2018, 3D printing will result in the loss of at least $100 billion per year in intellectual property globally. Near Term Flag: At least one major western manufacturer will claim to have had intellectual property (IP) stolen for a mainstream product by thieves using 3D printers who will likely reside in those same western markets rather than in Asia by 2015.

The plummeting costs of 3D printers, scanners and 3D modeling technology, combined with improving capabilities, makes the technology for IP theft more accessible to would-be criminals. Importantly, 3D printers do not have to produce a finished good in order to enable IP theft. The ability to make a wax mold from a scanned object, for instance, can enable the thief to produce large quantities of items that exactly replicate the original.

Source: 2013 Gartner ITxpo Press Release

Now, I do not share the dire predictions of Gartner – partly because many of these hardware and software technologies have already existed for many years, but primarily because the process of creating high quality digital reproductions (either from “scratch” or from a 3D reality capture process) is still very difficult, even for experienced users.  But over time, and with almost certainty in the market for certain consumer goods, if someone could manufacture something in their home at comparable cost and quality to what they could buy at a store, why wouldn’t they?

Intellectual Property Issues in Digital Manufacturing

Obviously there must be a willingness of content owners to share and distribute their intellectual property for distributed manufacturing – whether as part of a collapsing supply chain for industrial manufacturers, or to authorize someone to produce a licensed good in their own home.

We are seeing companies test the water – from the Nokia experiment in early 2013 (prior to the Microsoft acquisition) to provide STL and STEP models of certain phone cases for 3D printing, to Honda releasing their 3D “design archives” in early 2014.


Nokia Lumia 520 Shell, author: Nokia (CC BY-NC-SA 3.0)

A few months ago Hasbro licensed a handful of artists to create derivative works based on their My Little Pony line of toys, and those artist designed customizations could then be purchased from Shapeways.  To be clear, Hasbro did not authorize anyone to create customizations of their licensed works, but rather started with a single design, customized by a handful of artists.  Buoyed by the success of this launch, Hasbro and Shapeways are now soliciting designers to create customized 3D printable designs based on Dragonvale, Dungeons & Dragons, Monopoly, My Little Pony, Scrabble (to be sold in the US and Canada only) and Transformers – with upload instructions posted in late August 2014.

What will accelerate the types of projects piloted by Nokia, Hasbro and Shapeways?

There are obviously business and technical hurdles in distributed digital manufacturing, but there are also some fundamental intellectual property issues which need to be resolved as well:

  • Issue: De-facto and proposed new manufacturing file formats do not encapsulate intellectual property information.  Potential resolution: refine specifications to make each file self-describing and/or develop a metadata wrapper like ID3 for MP3.
  • Issue: Inconsistent, and perhaps even inappropriate, licensing schemes used for 3D data.  Potential resolution: development of a harmonized community type licensing scheme for 3D content.
  • Issue: Safe-harbor provisions of the DMCA apply only to copyright infringement.  Potential resolution: statutory extension of these protections to all forms of intellectual property.


I’ll examine each of these issues, and potential resolutions, in more detail below.  There are clear parallels (in my mind at least) to the music industry – what lessons can be learned from the digitization and distribution of digital content there?  Which business methods are ultimately prevailing?

A manufacturing file format which encapsulates intellectual property information

The de-facto standard for digital manufacturing has been, and remains, the STL format (from “STereoLithography,” a/k/a “Standard Tessellation Language”).  STL has the benefit of being well known and computationally easy to read and process.  Most manufacturing systems require triangulated models to be sliced for processing (e.g. CAM, 3D printing, etc.).  The challenges with STL, however, are many – it does not scale well to higher resolutions, there is no native support for color or material properties, it is unit-less, and it does not compress well (among others).
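To make the “geometry only” limitation concrete, here is a short Python sketch (my own illustration, not any official tooling – the `write_ascii_stl` helper and the “plate” model are invented for this example) that emits a minimal ASCII STL.  Notice that the format has nowhere to record units, color, authorship or license terms – just facets:

```python
# Emit a minimal ASCII STL file: the format carries only facet geometry.
# There is no field for units, color, materials, ownership or license.
def write_ascii_stl(name, triangles):
    """triangles: list of ((nx, ny, nz), [(x, y, z), (x, y, z), (x, y, z)])"""
    lines = [f"solid {name}"]
    for normal, verts in triangles:
        lines.append("  facet normal {:e} {:e} {:e}".format(*normal))
        lines.append("    outer loop")
        for v in verts:
            lines.append("      vertex {:e} {:e} {:e}".format(*v))
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# A single triangular facet is a complete, valid model as far as STL cares.
stl_text = write_ascii_stl("plate", [((0.0, 0.0, 1.0),
                                      [(0.0, 0.0, 0.0),
                                       (1.0, 0.0, 0.0),
                                       (0.0, 1.0, 0.0)])])
print(stl_text)
```

Any rights information therefore has to travel out-of-band – in a file name, a web page, or a separate license document – and is easily lost the moment the file is copied.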

A new standard, known as the AMF (for “Additive Manufacturing File Format,” a/k/a “STL2”), has been proposed to replace the STL format.  Al Dean reviewed the AMF and compared it to STL in his January 2013 DEVELOP3D article Alpha-Mike-Foxtrot to STL.  More useful background can be found at the AMF Wikispace.

Without getting into a debate as to whether the current AMF specification is “good enough” to grow into the next de-facto standard, it is important to recognize that the handling of intellectual property rights is specifically excluded.  Section 1.4 of the ASTM AMF specification reads:

This standard also does not purport to address any copyright and intellectual property concerns, if any, associated with its use. It is the responsibility of the user of this standard to meet any intellectual property regulations on the use of information encoded in this file format.

Further, the AMF specification lacks support for metadata containers which would allow the file content to be self-describing at some level.

Shapeways has decided to enter the fray and announced their own voxel based file format for 3D printing called SVX at the end of September.  As with STL and AMF, the SVX specification does not address intellectual property.

What is needed?   A file format (AMF or an alternative) for manufacturing which specifically allows metadata containers to be encapsulated in the file itself.   These data containers could hold information about the content of the file such that, to a large extent, ownership and license rights would be self-describing.   An example of this is the ID3 metadata tagging system for MP3 files.   Of course the presence of tag information alone is not intended to prevent piracy (i.e. like a DRM implementation would be), but it certainly makes it easier for content creators and consumers alike to organize and categorize content, obtain and track license rights, etc.
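As a thought experiment, an ID3-style wrapper for manufacturing files might look something like the following Python sketch.  Everything here is hypothetical – the `3DMETA01` magic bytes, the field names and the container layout are my own invention, not a proposed or existing standard – but it illustrates how a small, self-describing metadata block could travel with the payload:

```python
import json
import struct

# Hypothetical tagged container: a JSON metadata block (author, license,
# rights URI) prepended to the raw STL payload, ID3-style.
MAGIC = b"3DMETA01"  # invented 8-byte signature for this illustration

def wrap(stl_bytes, metadata):
    """Prepend a length-prefixed JSON metadata header to the model data."""
    header = json.dumps(metadata).encode("utf-8")
    return MAGIC + struct.pack(">I", len(header)) + header + stl_bytes

def unwrap(blob):
    """Split a tagged container back into (metadata dict, model bytes)."""
    assert blob[:8] == MAGIC, "not a tagged container"
    (length,) = struct.unpack(">I", blob[8:12])
    header = json.loads(blob[12:12 + length].decode("utf-8"))
    return header, blob[12 + length:]

tagged = wrap(b"solid part\nendsolid part\n",
              {"author": "Jane Maker",
               "license": "CC-BY-4.0",
               "rights_uri": "https://example.com/license/123"})
meta, stl = unwrap(tagged)
print(meta["license"])
```

Just as with ID3, the tags do nothing to prevent copying; they simply give every downstream tool a consistent place to look for ownership and license information.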

MP3 File Structure, user: Kim Meyrick (CC-BY/GFDL)

Inconsistent/inappropriate licensing schemes for 3D data

Most 3D printing service bureaus and model hosting sites have licensing terms which are only concerned with copyright, rather than dealing more broadly with the entire “bucket” of potential intellectual property ownership and licensing concerns.  Several rely on the Creative Commons licensing scheme (or some variation thereof) as the foundation for the licensing relationship between their content creators/contributors, content consumers/users and their own services.   Worrying only about copyright, or exclusively using the CC licensing scheme for manufacturable 3D content (via 3D printing or otherwise) is misguided.

Creative Commons (the organization behind the CCL scheme) is acutely aware of the risk of using the wrong license type for functional content – see the post titled CC and 3D Printing Community.  The challenge with the current CC licensing schemes is that they were never intended to cover “functional” content (that which might be covered by intellectual property rights other than copyright).    As the post notes –

With the exception of CC0, the Creative Commons licenses are only for granting permissions to use non-software works. The worlds of software and engineering have additional concerns outside of the scope of what is addressed by the CC licenses. 3D printing is a new medium which encompasses both the creative domains of culture and engineering, and often 3D printed works do not fall neatly into either category.

Creative Commons explored the creative/functional split in a Wiki for the 4.0 release of licenses, but did not develop a framework for a license covering both types of content.

I examined these issues previously in more detail in a two part blog The Call for a Harmonized “Community License” for 3D Content.  While dated, those materials can be useful background.

Why does this matter?  There is presently no licensing consistency among the various players in the digital manufacturing ecosystem – potentially meaning that there are tens, or even hundreds of “flavors” of a license grant, for the same content.

What is Needed?  An integrated, harmonized licensing scheme addressing all of the intellectual property rights impacted in the digital manufacturing ecosystem – drafted in a way that non-lawyers can read and clearly understand them.  This is no small project, but needs to be done. Harmonization would simplify the granting and tracking of license rights (assuming stakeholders in the ecosystem helped to draft and use those terms) and could be implemented in conjunction with the file format metadata concept described earlier.

At least one organization is working on a new model for licensing, utilizing a community approach to drafting and feedback – driven by Joris Peels the YouMagine Community Manager (and long time participant in the 3D printing ecosystem).   You can find the current progress here.

Do the “Safe Harbor” Provisions Apply?

It is possible, via secondary or vicarious liability, to be held legally responsible for intellectual property infringement even if you did not directly commit acts of infringement.

In 1998 the Digital Millennium Copyright Act (the “DMCA”) became law in the United States.  The DMCA, among other notable things (such as criminalizing anti-circumvention protections such as DRM), creates limitations on the liability of online service providers for copyright infringement by third parties when engaging in certain types of activities – primarily relating to the transmission, storage and searching/indexing of data.  These have become known as the “safe harbor” provisions of the DMCA.


Wick Harbour, user: Dorcas Sinclair (CC-BY-SA-2.0)

To receive these protections, service providers must comply with the conditions in the Act, including providing clear “notice and takedown” procedures which permit content owners to stop access to content which they allege to be infringing.

The DMCA provides a “safe harbor” to service providers for copyright infringement if, for example, it turns out that they hosted or stored content uploaded by a third party which was found to be infringing.  There are a few key limitations: (1) the content may not be modified by a service provider (if it is, the DMCA safe harbor protections do not apply); and (2) the DMCA only limits liability for copyright infringement – it does not help protect a service provider from other potential forms of infringement.

The first DMCA “take down” notice for 3D printed content was sent to Thingiverse (now part of Stratasys) in February 2011 for a Penrose Triangle which could be 3D printed – likely content not protectable by copyright in the first place.  Shapeways and many others in the ecosystem commented on the notice and what it meant for the industry at large – how do you reward legitimate creators/inventors in a world of “copy and paste”?

You can see examples of how companies have implemented DMCA notice procedures on the 3D Systems Cubify site (see Section 9) and on Shapeways.  There are obviously others.

Unfortunately, in the world of distributed digital manufacturing there is the potential for more than just copyright infringement – functional items which are manufactured and used may (and I stress may) violate third party patents, trademarks, trade dress, design rights, etc.    This could open up participants in the digital manufacturing chain to claims of secondary infringement for rights other than copyright.   These are typically much more difficult claims to make (just by the nature of what needs to be demonstrated under the law) – but potentially chilling nevertheless.

What is Needed?  Extension of the concepts in the DMCA to cover the broader bucket of intellectual property rights beyond copyright.  Desai and Magliocca, in Section III(c) of the Patents Meet Napster: 3D Printing and the Digitization of Things article I referenced earlier reach a similar conclusion and propose a framework for implementation.  Such changes need to be considered and implemented in a way which does not create or extend secondary liability to more players in the ecosystem, but rather provides a safe harbor for certain non-copyright claims should infringement liability otherwise exist.

More Certainty Will Bring Business Model Exploration

Forward thinking content owners, like Hasbro and others, recognize that over the next several years there will be substantial transformation in the digital manufacturing ecosystem.  Intellectual property metadata in self-describing digital files, harmonized licensing schemes and revised statutory frameworks will help accelerate these changes.

Ultimately, there is a universal market need for an intellectual property licensing, clearance and payment infrastructure to support the seamless distribution of, and payment for, manufacturable content.  Hundreds of billions of dollars’ worth of consumer goods alone are likely to be manufactured (in the home, at a store, at a remote service bureau on demand, or by the consumer goods companies themselves) on an annual basis using additive manufacturing technologies.     When content creators have an easy way to monetize their content through licensing, content consumers can find and pay for quality content which meets their needs, and simple personalization tools have been created, we will truly see a transformation in digital manufacturing.

Note: The majority of the content in this post was originally published in the September 2014 edition of DEVELOP3D Magazine, it has been updated and refreshed. 


SIGGRAPH 2014 Technical Paper Round Up

As many of you already know, SIGGRAPH 2014 (#SIGGRAPH2014) is taking place this week in Vancouver, British Columbia, through 14-AUG.  SIGGRAPH has been around for more than four decades, and the presentations there consistently represent some of the most forward thinking in the fields of computer graphics, computer vision and human computer interface technologies and techniques.  I am certainly jealous of those in attendance, so I will covet from afar as I make my way to a client visit this week.  The first pages of all of the SIGGRAPH 2014 Technical Papers can be found at the SIGGRAPH site.  Here is a sampling of those papers which I personally found to be most interesting.  A few have already been profiled by others, and where I have seen them reviewed before, I will provide additional links.  These are not in any order of priority:

  • Learning to be a Depth Camera for Close-Range Human Capture and Interaction (Microsoft Research project which proposes a machine learning technique to estimate z-depth per pixel using any conventional single 2D camera in certain limited capture and interaction scenarios [hands and faces] – demonstrating results comparable to existing consumer depth cameras, with dramatically lower cost, power consumption and form factor).  This one, admittedly, blew me away.   I have been interested in the consumer reality capture space for a while, and have blogged previously about the PrimeSense powered ecosystem and plenoptic (a/k/a “light field”) computational cameras.  I argued that light field cameras made lots of sense (to me at least) as the technology platform for mobile consumer depth sensing solutions (form factor, power consumption, etc.).   This new paper from Microsoft Research proposes a low cost depth sensing system for specific capture and interaction scenarios (the geometry of hands and faces) – turning a “regular” 2D camera into a depth sensor.   Admittedly, doing so requires that you first calibrate the 2D camera by registering depth maps captured from a depth camera against intensity images; in this way the 2D camera “learns” and encodes such things as surface geometry and reflectance, among other things.   They demonstrate two prototype hardware designs – a modified web camera for desktop sensing and a modified camera for mobile applications – in both instances demonstrating hand and face tracking on par with existing consumer depth camera solutions.  This paper is a great read; in addition to describing their proposed techniques, the authors provide a solid overview of existing consumer depth capture solutions.

Learning to be a Depth Camera
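The calibration idea above – learn a mapping from what the 2D camera sees to what a depth camera measured, then throw the depth camera away – can be sketched with a toy regression.  This is only a conceptual illustration on synthetic data (the inverse-square “toy physics” and the feature choice are mine); the actual paper uses far richer features and learned models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration pairs: a depth camera supplies ground-truth depth,
# while the 2D camera observes intensity falling off with distance.
depth = rng.uniform(0.3, 1.5, size=1000)                  # metres
intensity = 0.9 / depth**2 + rng.normal(0.0, 0.01, 1000)  # toy sensor model

# "Learning" step: least-squares fit of depth against a simple
# intensity-derived feature plus a bias term.
X = np.column_stack([intensity**-0.5, np.ones_like(intensity)])
coef, *_ = np.linalg.lstsq(X, depth, rcond=None)

# Prediction step: estimate depth from intensity alone (no depth camera).
test_intensity = 0.9  # in the toy model this corresponds to ~1.0 m
pred = float(np.array([test_intensity**-0.5, 1.0]) @ coef)
```

In the real system the learned model is per-pixel and far more expressive, but the calibrate-then-predict structure is the same: supervision from a depth camera during calibration, depth estimates from the 2D camera alone afterwards.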

  • Proactive 3D Scanning of Inaccessible Parts  (proposes a 3D scanning method where a user modifies/moves the object being acquired during the scanning process to capture occluded regions, using an algorithm supporting scene movement as part of the global 3D scanning process)


  • First-person Hyper-lapse Videos – paper + Microsoft Research site (presentation of a method to convert single camera, first-person videos into hyper-lapse videos, i.e. time lapse videos with a smoothly moving camera – overcoming limitations of prior stabilization methods).  What does this mean?  If you have ever tried to speed up a video that you shot (particularly while the camera was moving) – the results are often not optimal.  Because frames are dropped to compress time, any camera movement is amplified into jarring shake, and simply smoothing between the remaining frames introduces blurring.   Techcrunch reviewed the Microsoft Research project here.


  • Color Map Optimization for 3D Reconstruction with Consumer Depth Cameras (proposes an optimization approach to map color images onto geometric reconstructions generated from range and color videos produced by consumer grade color depth cameras – demonstrating substantially improved color mapping fidelity).  Anyone who has attempted to create a 3D reconstruction of an object or a scene using consumer depth cameras knows that it is one thing to create a generally good surface map, but an entirely more challenging problem to map color, per pixel, to accurately represent the captured environment.  Because consumer depth cameras are inherently noisy, and in particular because the shutters of the RGB and depth cameras are not synchronized, color information is generally “out of phase” with the reconstructed surfaces.  Their method provides for some pretty incredible results:

Improved Color Map

  • Real-time Non-rigid Reconstruction Using an RGB-D Camera (a proposed hardware and software solution, using consumer graphics cards, for markerless reconstruction in real-time (at 30 Hz) of arbitrarily shaped (i.e. faces, bodies, animals), moving/deforming physical objects).  Real-time reconstruction of objects or scenes without moving elements is the bread and butter of solutions such as Kinect Fusion.  Real-time 3D reconstruction of moving objects is much more challenging.  Imagine, for example, having your facial expressions and body movements “painted,” in real-time, onto your avatar in a virtual world.   While this solution requires a custom rig (i.e. high quality capture at close range was needed, something consumer depth cameras do not provide) it is certainly exciting to see what can be achieved with relatively modest hardware modifications.


  • Functional Map Networks for Analyzing and Exploring Large Shape Collections (proposes a new algorithm for organizing, searching and ultimately using collections of models – first by creating high quality maps connecting the models, and then using those connections for queries, reconstruction, etc.).   Much of this paper was beyond me – but the problem is certainly understood by everyone who, even today, searches for 3D content.  Most of that data is organized/categorized by metadata – not by the characteristics of the shapes themselves.  There are obviously some services which actually interpret and categorize the underlying shape data – but most model hosting sites do not.  Imagine if you could run an algorithm against a huge database of content (e.g. Trimble’s 3D Warehouse), or even against shapes “discovered” on the web, and immediately build connections and relationships between shapes so that you could ask the query “Show me similar doors”.  Wow.


  • Automatic Editing of Footage from Multiple Social Cameras (presents an approach that takes footage captured by multiple “social” cameras – cameras that are carried/worn by those participating in the activity – and automatically produces a final, coherent cut video of that activity, represented from multiple camera views.)   The folks at Mashable recently looked at this approach.   While this is certainly cool, I’ve often wondered why, given all the mobile video camera solutions that exist, no application has been developed which allows an event to be “socially” captured on video, and then in real or near-real time, allows interaction with that socially captured video, navigating from camera position to camera position within a 3D environment.  Sure, it is a huge data problem, but if you have gone to a concert lately you will soon realize that many folks (thousands of them in fact) are capturing some, if not all, of the event from their unique camera position.  Certainly true for many sporting events as well (and in most cases, youth sporting events where the parents are recording their children).   Taking the Microsoft Photosynth approach on steroids, if those camera positions are back-computed into 3D space, the video and sound could be synchronized, allowing for virtual fly throughs to different camera locations (if necessary interpolating frames along the way.)  OK, we might have to borrow all of the DARPA computing power for a month for a five minute video clip, but boy would it be cool!  😉


Think Before You 3D Scan That Building

There is a growing awareness and understanding of the intellectual property considerations in the capture/modify/make ecosystem – particularly as it relates to content that is captured via a 3D scanner, modified or mashed up, and then manufactured (via 3D printing or otherwise).   I have written about this before – blogging early in 2012 in The Storm Clouds on the Horizon that I felt the next “Napster” era was upon us for digitally captured real world content.   In the last two years there have been transformative technical changes on both “ends” of that ecosystem, consumer/prosumer 3D printing solutions along with an emerging class of inexpensive 3D real world capture devices and software solutions.

Earlier in 2014 I identified the following key trends in the capture/modify/make ecosystem for object based 3D capture and reproduction:

2014 Market Trends

Intellectual Property Concerns in Scene Based 3D Capture?

The on-going discussion around the metes and bounds of intellectual property protection in the object based capture/modify/make ecosystem is interesting context for some high level issue spotting at the intersection of intellectual property and scene/world based reality capture (whether as part of a formal digital documentation process or as part of an informal, crowd-sourced creation of a 3D world model).   What you’ll find is a lot of gray areas, inconsistencies, and mind melting information.

Understanding intellectual property in the context of scene based scanning will become more relevant over the coming years for many of the same reasons that have driven change and awareness in the context of capturing and reproducing objects.

The falling cost of high accuracy scanners useful for scene based 3D capture (e.g. the FARO Focus3D family) means that more data will be captured, by more users, in the commercial context (either directly by the owner/operators themselves or by third party scanning service bureaus).  Similarly, 3D data capture will become ubiquitous on the consumer side later in 2014 and beyond – as Intel adds their RealSense™ depth sensing technology to every laptop they ship, as Google progresses with Project Tango along with their software partners, and as lightfield camera technology goes mainstream.  In my opinion, we are not too far away from the creation and continuous update of a 3D “world model” populated with data coming from various types of consumer, professional and industrial sensors.

Intellectual Property Applies to Buildings and Public Spaces?

I hate to be the bearer of bad news, but to answer the question: “yes,” intellectual property rights may be implicated if you capture 3D data of a building, a public space or another real world scene.  That’s probably surprising to many of you.  For purposes of this discussion I’m only going to focus on how the laws in the United States might apply (and in many respects they differ from those of other countries around the world – for an example of just how interesting/different that can be, see the discussion of other countries below).  Your mileage can and will vary – this is presented to help with issue spotting, and not to provide definitive guidance on any particular situation.

I had the opportunity to speak on this topic with Michael Weinberg from Public Knowledge at a FARO Digital Documentation Conference a few years ago – many of the issues and examples we discussed then are still relevant now (and even more so with the explosion of low cost 3D capture solutions).

You Can Infringe Intellectual Property By Scanning A [Building] [Lobby] [Plaza] [etc.]?

The intellectual property ramifications of capturing 3D data in the context of a scene are very muddled.  Very little case law addresses these issues, and that which does isn’t very clear.  Naturally, most buildings would not be copyrightable because, by their very nature, they are “useful” – and something that is useful is generally not afforded copyright protection (yes, useful objects can be patented, and you can actually patent certain building elements – but that’s a topic for a different day).

But think about something like a sculptural memorial that is in a public plaza.  Would the sculptor of the memorial be afforded copyright protection?  You betcha, as a sculptural work is specifically protected under the United States Copyright Act.   What if you were to take a picture of that memorial and decide to license it to the United States Postal Service for use on a postage stamp?  Would that picture, and its re-use, require you to obtain clearance (in the form of a license) from the sculptor before you could do so, and before the USPS could sell the stamps?
Korean War Veterans Memorial Sculpture

According to the Court of Appeals for the Federal Circuit, when it looked at this question in 2010 – the answer was “yes.”  The failure to get a license from the sculptor, even though the defendant had obtained one from the general contractor which installed the sculpture, constituted copyright infringement.   After remand back to the United States Court of Federal Claims, and a subsequent appeal, the sculptor, Frank Gaylord, was awarded $685,000 in the fall of 2013.

So what if you plopped your scanner in the middle of that field and captured a 3D point cloud (that mere act is likely infringing)?   What if you decided to sell that data to a third party and then printed 3D prints from the data?  What if you used that data as part of an immersive augmented reality platform to promote tourism for Washington DC?   Would you/could you be liable?

OK Sculptures Maybe, But A Building?

Empire State Building

A building is the essence of utilitarian and functional, so we are safe from copyright, right?  You might think so.  But you’d be wrong.

Cooper Union Building

Take for example the above, which is the Cooper Union New Academic Building in New York City. Construction was finished in 2009.   Still utilitarian and functional (umm, as a building) so not copyrightable, right?

Wrong. In the United States, under the Architectural Works Copyright Act of 1990, a building designed after 1990 (where that design is fixed in a tangible medium – i.e. drawings – or actually constructed) is specifically subject to copyright protection (although purely functional or utilitarian aspects of a building are not protected).  So, if you set up your 3D scanner on the sidewalk here and captured a point cloud of the Cooper Union New Academic Building, have you committed an act of copyright infringement?

Alright how about this awesome bridge –


It’s a bridge, so the essence of utilitarian and functional.  It also has significant artistic and sculptural elements. Since it’s not intended for human habitation, the Architectural Works Copyright Act does not apply.  Phew.

How about this pavilion built at the Fort Maurepas beach park, located in Ocean Springs, Mississippi and constructed by FEMA after Hurricane Katrina?


Folks aren’t supposed to live in pavilions, so we are safe, right?  Wrong.  Pavilions are specifically covered by the Architectural Works Copyright Act – but only if built after 1990.  And this one was.  Oy.

But Wait, We Have An Exception!

There is an exception to acts of infringement under the Architectural Works Copyright Act –

 The copyright in an architectural work that has been constructed does not include the right to prevent the making, distributing, or public display of pictures, paintings, photographs, or other pictorial representations of the work, if the building in which the work is embodied is located in or ordinarily visible from a public place.

United States Copyright Act, 17 U.S.C. Section 120 (as amended).

Yippee – so if we use our 3D scanner to capture a point cloud of a building, we are saved by the so-called “photographer’s exception” for buildings that have been constructed and can be seen from a public place.  Right?  Unfortunately, the honest answer is a big “I have no idea, but I don’t think so – and I couldn’t find any case law that answers this question.”

Baltimore Train Station

The above is a photo of Baltimore Penn Station.  This was certainly built before 1990 – so no protection under the AWCA.   Even if it had been built after 1990, we would still be OK because it would be covered by the photographer’s exception (assuming that exception applied to 3D data acquisition), right?

Wrong.  Even if that exception were extended to 3D data capture via scanning, that specific statutory exception does not apply to sculptures (or other objects protected by copyright) which are separable from a building.  [FWIW, the sculpture is called Male/Female by Jonathan Borofsky].

Other countries (e.g. Canada, Ireland, the UK) extend the photographer exception concept to all publicly located, but otherwise copyrightable, works (e.g. sculptures).  Not in the United States though.

Think You Are Confused Now?

What about a sculpture in a building?

Sculpture Attached to a Building

Or how about a sculpture that is attached to a building?

Sculpture in a Building

Or what about a building built by Yale in 2005, CALLED the “Sculpture Building” –

The Sculpture Building

Or how about a building that IS a sculpture (like the Walt Disney Concert Hall in Los Angeles, designed by Frank Gehry?)

Frank Gehry Concert Hall

Policy Implications of Ubiquitous 3D Data Capture of Scenes

In addition to the various intellectual property concerns that are potentially touched upon when 3D scenes are captured, I believe there are a host of other privacy and ownership issues that need to be thought through as well.  If I’m a facility owner, and I don’t want data captured – how do I prevent it as capture devices become more ubiquitous?  Sure, I can require people to leave their phones at the security desk (many secure facilities already have no-photography or data transfer processes), but what do I do about their glasses?   If I’m a contributor of 3D data to a community sourced 3D “world model,” who owns the data that I capture and upload?   Who is responsible if it is ultimately found to be infringing?   What are the policy and legal implications if Google, instead of capturing photographs for their street maps, created 3D point clouds of every place they went?

Some Practical Advice for Service Providers

So what can you do to minimize your risks if you are a commercial scanning service provider engaged to do some scene based scanning?

  • Ask questions —  Know enough to generally understand the potential risks and pitfalls of any data capture engagement.
  • Transfer liability and responsibility for clearance – As a service provider, make sure that the owner/operator or the entity which engaged you to complete the work is responsible for intellectual property clearance issues and agrees to hold you harmless (e.g. they are responsible, not you, for any potential infringements).
  • Be especially careful with artistic elements –  Creative and sculptural elements should be subject to more scrutiny.  For example, if you are asked to scan a building lobby, and there is a sculpture in the middle of it, you should specifically get clearance from the artist.
  • Know how the collected data will be used —  Be absolutely clear on data ownership and the plans for downstream use.  Is the data going to be used as part of a digital documentation process (with no broad public dissemination) or is it going to be published and made accessible as part of an augmented reality application?

About the Author

Tom Kurke is the former President and Chief Operating Officer of Geomagic, a specialist supplier of 3D reconstruction and interaction software and hardware solutions, which was acquired by 3D Systems Corporation (NYSE: DDD) earlier this year.  Prior to Geomagic he spent more than a decade with Bentley Systems, a leading provider of solutions to the designers, constructors, owners and operators of some of the largest constructed assets in the world.  He recently joined the Board of Advisors of Paracosm, whose mission is to “3D-ify the World.”  When not supporting his two sons’ various sporting activities, or writing on topics of interest in the areas of 3D printing, digital reality capture, intellectual property, AEC/GIS or unmanned aerial systems, you might see him finding new ways to crash his quadcopter.

[Note: This article was originally published on LiDAR News on April 26th, 2014.]


Trunki v. Kiddee – (a/k/a Horned Animal v. the Insect)

Horned Animal v. The Insect

My friends over at DEVELOP3D have a great June 2014 issue (click here to download, you will need to register first) – the cover story is one that is near to my heart, namely the intersection of intellectual property and 3D content.

Starting on page 20 of the DEVELOP3D June 2014 issue, Stephen Holmes details the intellectual property battle between Magmatic Ltd. and PMS International Limited surrounding travel cases for children, the potential implications for the industry, and the campaign started by Rob Law (the founder of Magmatic) to re-visit some of these issues in the UK Supreme Court.  I urge you to register, download and read this issue (and subsequent ones!) of DEVELOP3D Magazine (either online or in print).


Magmatic Ltd. develops and sells a line of children’s travel gear – including its range of Trunki travel cases, which come in different colors and graphics, but share the same surface profiles:


Magmatic had protected the Trunki family design via a Community Registered Design (a “CRD”).  While the metes and bounds of a CRD are outside the scope of this short article, the International Trademark Association (“INTA”) has published a very useful “fact sheet” on CRDs.  Applications for CRDs are not substantively reviewed, but at a minimum must contain a representation of a product design, and they protect that specific appearance.

PMS International Limited (“PMS”) subsequently developed a competitive children’s case, called the Kiddee.  Magmatic sued PMS for infringing the CRD, its UK unregistered design rights in the design of the Trunki, and its copyrights associated with the packaging for the Trunki.  The UK High Court found, in an opinion dated July 11, 2013, that PMS had infringed the CRD and the design right in four of the six designs. The copyright infringement claim was dismissed (except for one count which PMS conceded).  There is little doubt that PMS developed its line of children’s travel cases to be directly competitive with Magmatic — per Magmatic’s research, nearly 20% of all three to six year olds in the UK owned a Trunki case.

In the United States we do not have a statutory intellectual property mechanism akin to a CRD (copyright, trade dress, design patents, etc. can be used, but nothing that parallels the CRD).   Separately, you may be interested in reading how a US and a UK court examined the same set of facts and came to completely diverging opinions on whether an item was protectable by copyright (in that case, a Star Wars Stormtrooper helmet – see my earlier blog: the US court concluded that the helmets were copyrightable, while the UK court held they were not because they were “functional” items in the context of a movie).

The Appeal

Magmatic appealed the High Court’s decision. On February 28th, 2014 the UK Court of Appeal rendered its decision (the “Appeal”) overturning the lower court and holding that PMS, with its Kiddee case, had not infringed Magmatic’s CRD for the Trunki.

Infringement analysis, especially in the case of copyrights, registered designs, and design patents, is always subjective – there is simply no black and white test.    The decision on appeal here turned on the specific frame of reference the Court of Appeal used for the CRD infringement analysis: “[a]t the end of the day, the scope of the design must be determined from the [CRD] representation itself.” Appeal Finding 36.   In other words, how the products actually look in the marketplace isn’t relevant to whether a competitive product infringes rights in a Community Registered Design – what matters is the design and materials submitted as part of the application process.

The Court of Appeal reviewed prior decisions and found that:

[b]efore carrying out any comparison of the registered design with an earlier design or with the design of an alleged infringement, it is necessary to ascertain which features are actually protected by the design and so are relevant to the comparison. If a registered design comprises line drawings in monochrome and colour is not a feature of it, then it cannot avail a defendant to say that he is using the same design but in a colour or in a number of colours.

Appeal Finding 37.   The Court of Appeal concluded that the High Court had erred by concluding that the infringement analysis solely related to the shape of the suitcases – when distinctive design elements were present in the CRD beyond shape.  Appeal Finding 40.   The Court of Appeal found that the High Court was wrong in two primary respects: (1) the designs submitted were not wireframes (and so not restricted to shape), but were instead “six monochrome representations of a suitcase” . . . “which, considered as a whole, looks like a horned animal,” Appeal Finding 41; and (2) because they were submitted in monochrome, the various shadings should be interpreted as distinct design elements (e.g. Magmatic could have depicted the wheels in a similar shade as the rest of the body, but chose not to).  Appeal Finding 42.


Image Source – Annex to the Appeal (from left to right in each row, first image is the Trunki case design submitted as part of the Magmatic CRD, followed by two images of representative Kiddee cases in the market, Trunki case design, and then two more images of the Kiddee cases).

The Court of Appeal then evaluated the various Kiddee cases to decide whether they produce the same overall impression on the informed user (the CRD infringement standard of review) and concluded that they did not – the Trunki case design (as submitted in the CRD) gave the overall impression of a “horned animal,” whereas the various Kiddee cases looked like a “ladybird” with “antennae” or “a tiger with ears. It is plainly not a horned animal. Once again the accused design produces a very different impression from that of the CRD.”  Appeal Finding 47.   The Court of Appeal also found that the color contrast between the wheels and the rest of the body in the Trunki CRD was a distinctive design element which was simply not present in the Kiddee cases.  Appeal Finding 48.  Ultimately, the Court of Appeal found that:

[T]he overall impression created by the two designs is very different. The impression created by the CRD is that of a horned animal. It is a sleek and stylised design and, from the side, has a generally symmetrical appearance with a significant cut away semicircle below the ridge. By contrast the design of the Kiddee Case is softer and more rounded and evocative of an insect with antennae or an animal with floppy ears. At both a general and a detailed level the Kiddee Case conveys a very different impression.

Appeal Finding 55.

Practical Considerations

Many commentators have said the practical takeaway from this decision is that those seeking protection via a CRD should generally avoid surfaced 3D representations in their CRD filings, and instead use wireframes.   The logic is that if only wireframes are used, then surface markings, color, etc. are irrelevant in a CRD infringement analysis.  Since at least one part of the Court of Appeal’s decision focused on the purposeful difference in wheel color chosen by Magmatic, that element would have been irrelevant had wireframes been used.

I am certainly no expert in UK law, nor in that relating to CRD registrations, but I do not believe that this case represents bad law so much as a bad set of facts for the plaintiff, Magmatic.   If Magmatic had submitted wireframes as part of its CRD, then PMS would most certainly have first claimed that the CRD itself was invalid because it wasn’t novel or didn’t possess enough individual character to warrant protection – the very things that colors, surface markings, lettering, etc. can bring to a simplified shape which make it more unique and protectable as a CRD.   It could also be argued that many of the design elements were functional, and therefore not protectable (e.g. cases need wheels, they have straps, clasps, etc.) – particularly if depicted as a wireframe.

Ultimately though, if Magmatic had submitted wireframes for its CRD, wouldn’t it still have looked like a “horned animal” as opposed to an “insect” to the Court?  Look at the above images and ask yourself.  Their position might have been stronger (if the underlying CRD were deemed to be valid), but would it have changed the outcome?
