Trunki v. Kiddee – (a/k/a Horned Animal v. the Insect)


My friends over at DEVELOP3D have a great June 2014 issue (click here to download, you will need to register first) – the cover story is one that is near to my heart, namely the intersection of intellectual property and 3D content.

Starting on page 20 of the DEVELOP3D June 2014 issue, Stephen Holmes details the history of the intellectual property battle between Magmatic Ltd. and PMS International Limited surrounding travel cases for children, the potential implications for the industry, and the campaign started by Rob Law (the founder of Magmatic) to re-visit some of these issues in the UK Supreme Court.  I urge you to register, download and read this issue (and subsequent ones!) of DEVELOP3D Magazine (either online or in print).

Background

Magmatic Ltd. develops and sells a line of children's travel gear – including its range of Trunki travel cases, which come in different colors and with different graphics, but share the same surface profiles:

[Images: Trunki case designs – LadyBug, Terrance and Tipu]

Magmatic had protected the Trunki family design via a Community Registered Design (a “CRD”).  While the metes and bounds of a CRD are outside the scope of this short article, the International Trademark Association (“INTA”) has published a very useful “fact sheet” on the CRD. Applications for CRDs are not substantively reviewed, but at a minimum must contain a representation of a product design; the registration then protects that specific appearance.

PMS International Limited (“PMS”) subsequently developed a competitive children’s case, called the Kiddee.  Magmatic sued PMS for infringing the CRD, its UK unregistered design rights in the design of the Trunki and its copyrights associated with the packaging for the Trunki.  The UK High Court found, in an opinion dated July 11, 2013, that PMS had infringed the CRD and the design right in four of the six designs. The copyright infringement claim was dismissed (except for one count which PMS conceded).  There is little doubt that PMS developed its line of children’s travel cases to be directly competitive with Magmatic — nearly 20% of all three-to-six-year-olds in the UK owned a Trunki case (per Magmatic’s research).

In the United States we do not have a statutory intellectual property method akin to a CRD (copyright, trade dress, design patents, etc. can be used, but nothing that parallels the CRD).   Separately, you may be interested in reading how a US and a UK court examined the same set of facts and came to completely diverging opinions on whether an item was protectable by copyright (in that case, a Star Wars Stormtrooper helmet).  See my earlier blog – http://3dsolver.com/the-40-or-20-million-helmet-or-not/ (the US court concluded that the helmets were copyrightable; the UK court held they were not because they were “functional” items in the context of a movie).

The Appeal

Magmatic appealed the High Court’s decision. On February 28th, 2014 the UK Court of Appeal rendered its decision (the “Appeal”) overturning the lower court and holding that PMS, with its Kiddee case, had not infringed Magmatic’s CRD for the Trunki.

Infringement analysis, especially in the case of copyrights, registered designs, and design patents, is always subjective – there is simply no black and white test.    The decision on appeal here turned on the specific frame of reference the Court of Appeal used for the CRD infringement analysis, “[a]t the end of the day, the scope of the design must be determined from the [CRD] representation itself.” Appeal Finding 36.   In other words, how the products actually look in the marketplace isn’t relevant to whether a competitive product infringes rights in a Community Registered Design – what matters is the design and materials submitted as part of the application process.

The Court of Appeal reviewed prior decisions and found that:

[b]efore carrying out any comparison of the registered design with an earlier design or with the design of an alleged infringement, it is necessary to ascertain which features are actually protected by the design and so are relevant to the comparison. If a registered design comprises line drawings in monochrome and colour is not a feature of it, then it cannot avail a defendant to say that he is using the same design but in a colour or in a number of colours.

Appeal Finding 37.   The Court of Appeal concluded that the High Court had erred by concluding that the infringement analysis related solely to the shape of the suitcases – when distinctive design elements were present in the CRD beyond shape.  Appeal Finding 40.   The Court of Appeal found that the High Court was wrong in two primary respects: (1) the designs submitted were not wireframes (and so not restricted to shape), but were instead “six monochrome representations of a suitcase”. . .”which, considered as a whole, looks like a horned animal” Appeal Finding 41; and (2) because the designs were submitted in monochrome, the various shadings should be interpreted as distinct design elements (e.g. Magmatic could have depicted the wheels in a similar shade to the rest of the body, but chose not to).  Appeal Finding 42.


Image Source – Annex to the Appeal (in each row, from left to right: the Trunki case design submitted as part of the Magmatic CRD, followed by two images of representative Kiddee cases in the market; then the Trunki design again, followed by two more images of the Kiddee cases).

The Court of Appeal then evaluated the various Kiddee cases to decide whether they produced the same overall impression on the informed user (the CRD infringement standard of review) and concluded that they did not – the Trunki case design (as submitted in the CRD) gave the overall impression of a “horned animal,” whereas the various Kiddee cases looked like a “ladybird” with “antennae” and “a tiger with ears. It is plainly not a horned animal. Once again the accused design produces a very different impression from that of the CRD.”  Appeal Finding 47.   The Court of Appeal also found that the color contrast between the wheels and the rest of the body in the Trunki CRD was a distinctive design element which was simply not present in the Kiddee cases.  Appeal Finding 48.  Ultimately, the Court of Appeal found that:

[T]he overall impression created by the two designs is very different. The impression created by the CRD is that of a horned animal. It is a sleek and stylised design and, from the side, has a generally symmetrical appearance with a significant cut away semicircle below the ridge. By contrast the design of the Kiddee Case is softer and more rounded and evocative of an insect with antennae or an animal with floppy ears. At both a general and a detailed level the Kiddee Case conveys a very different impression.

Appeal Finding 55.

Practical Considerations

Many commentators have said the practical takeaway from this decision is that those seeking protection via a CRD should generally avoid surfaced 3D representations in their CRD filings, and instead use wireframes.   The logic is that if only wireframes are used, then surface markings, color, etc. are irrelevant in a CRD infringement analysis.  At least one part of the Court of Appeal’s decision focused on the purposeful difference in wheel color chosen by Magmatic – a point that would have been irrelevant had wireframes been used.

I am certainly no expert in UK law, nor in that relating to CRD registrations, but I do not believe that this case represents bad law so much as a bad set of facts for the Plaintiff, Magmatic.   If Magmatic had submitted wireframes as part of their CRD, then PMS would have most certainly first claimed that the CRD itself was invalid because it wasn’t novel or didn’t possess enough individual character to warrant protection – the very things that colors, surface markings, lettering, etc. can bring to a simplified shape to make it more unique and protectable as a CRD.   It could also be argued that many of the design elements were functional, and therefore not protectable (e.g. cases need wheels, they have straps, clasps, etc.) – particularly if depicted as a wireframe.

Ultimately though, if Magmatic had submitted wireframes for its CRD, wouldn’t it still have looked like a “horned animal” as opposed to an “insect” to the Court?  Look at the above images and ask yourself.  Their position might have been stronger (if the underlying CRD were deemed to be valid), but would it have changed the outcome?

LazeeEye – 3D Capture Device Phone Add-On

There has been a continuing strong push on the consumer/prosumer 3D reality capture side of the capture/modify/make ecosystem – whether that captured content is to be used in an object or scene based scanning workflow.  New processing algorithms along with orders of magnitude improvement in processing power are unlocking new capabilities.

DIY scanning solutions have been around for a while – ranging from pure photogrammetric approaches, to building structured light/laser scanning setups (e.g. see the recommendations which DAVID 3D Solutions GbR makes on the selection of scanning hardware), to leveraging commercial depth sense cameras in interesting new ways (e.g. using PrimeSense, SoftKinetic or other devices to create a 3D depth map), or utilizing light field cameras for 3D reconstructions.  Occipital raised $1M in their Kickstarter campaign to develop their Structure Sensor (which is powered by PrimeSense technology) hardware attachment for Apple devices, and 3D Systems is white labeling that solution.   Google has been working on Google Tango with its project partners (and apparently Apple – because the Google Tango prototype included PrimeSense technology)!

Early in 2014 I looked at the various market trends that were impacting the capture/modify/make ecosystem — the explosion of low cost, easy to use 3D reality capture devices (and the associated software solution stacks and hardware processing platforms) was key among them –

2014 Market Trends

For a graphical evolution of how some of the lower cost sensors have developed over time, see:

3D Sensor Progression

Along comes an interesting Kickstarter project from Heuristic Labs for the LazeeEye, which so far has raised roughly $67K (on a goal of $250K) to develop a laser emitter which attaches to a phone and flashes a pattern of light onto the object or scene to be captured; stereo vision processing software on the phone then creates/infers a depth map from that.  According to Heuristic Labs, the creators of the LazeeEye:

LazeeEye? Seriously? The name “LazeeEye” is a portmanteau of “laser” and “eye,” indicating that your phone’s camera (a single “eye”) is being augmented with a second, “laser eye” – thus bestowing depth perception via stereo vision, i.e., letting your smartphone camera see in 3D just like you can!

The examples provided in the funding video are pretty rough, and because it is a “single shot” solution, only those surfaces which can be seen from the camera viewpoint are captured.  In order to capture a full scene, multiple shots would need to be captured, registered and then stitched together.   This is not a problem that is unique to this solution (it is a known limitation of “single shot” solutions).  More from the LazeeEye Kickstarter project pages:

How does LazeeEye work? The enabling technology behind LazeeEye is active stereo vision, where (by analogy with human stereo vision) one “eye” is your existing smartphone camera and passively receives incoming light, while the other “eye” actively projects light outwards onto the scene, where it bounces back to the passive eye. The projected light is patterned in a way that is known and pre-calibrated in the smartphone; after snapping a photo, the stereo vision software on the phone can cross-reference this image with its pre-calibrated reference image. After finding feature matches between the current and reference image, the algorithm essentially triangulates to compute an estimate of the depth. It performs this operation for each pixel, ultimately yielding a high-resolution depth image that matches pixel-for-pixel with the standard 2D color image (equivalently, this can be considered a colored 3D point cloud). Note that LazeeEye also performs certain temporal modulation “magic” (the details of which we’re carefully guarding as a competitive advantage) that boosts the observed signal-to-noise ratio, allowing the projected pattern to appear much brighter against the background.

Note that a more in-depth treatment of active stereo vision can be found in the literature: e.g., http://www.willowgarage.com/sites/default/files/ptext.pdf and https://cvhci.anthropomatik.kit.edu/~manel/publications/mva2013RGBD.pdf
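To make the triangulation step described above a bit more concrete, here is a minimal sketch (my own illustration with assumed camera parameters, not Heuristic Labs’ code) of converting a per-pixel disparity, measured between the captured image and the pre-calibrated reference pattern, into depth:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic stereo triangulation: Z = f * B / d.

    disparity_px:    per-pixel disparity between captured and reference images (pixels)
    focal_length_px: camera focal length expressed in pixels
    baseline_m:      distance between the camera and the laser emitter (meters)
    """
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0  # zero disparity corresponds to a point at infinity
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Hypothetical example: 640x480 disparity map, 600 px focal length, 7.5 cm baseline
disparity_map = np.random.uniform(1.0, 60.0, size=(480, 640))
depth_map = depth_from_disparity(disparity_map, focal_length_px=600.0, baseline_m=0.075)
print(depth_map.min(), depth_map.max())
```

The practical takeaway is that depth resolution falls off with distance and depends heavily on the baseline and calibration, which is part of why the raw examples in the funding video look rough.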

[Side note, I found it interesting that Heuristic Labs is using Sketchfab to host its 3D models – yet another 3D content developer/provider who is leveraging this great technical solution for 3D content sharing.]

Depending on the funding level you select during the campaign, you get different hardware – varying laser colors (which impact scan quality), whether it comes aligned, SDK access, etc.  They readily acknowledge that 3D capture technologies will become more ubiquitous in the coming years with the next generations of smartphones (whether powered by active technology like the PrimeSense solutions or passive solutions such as light field cameras).  Their answer: why wait (and even if you wanted to wait, their solution is more cost-effective).

Why wait indeed.  It is an interesting application of existing technical solutions, packaged in a cheap, approachable package for the DIY consumer – I will be curious to see how this campaign finishes up.

[Second side note, I guess my idea of hacking the newest generation of video cameras with built-in DLP projectors (like those Sony makes) to create a structured light video solution is worth pursuing.  The concept?  Use the onboard projector to emit patterns of structured light, capture that using the onboard CCD, process on a laptop, in the cloud, on your camera, etc.  Voilà, a cheap 3D capture device that you take with you on your next vacation.  Heck, if you are going to do that, why not just mount a DLP pico projector directly to your phone and do the same thing. . .  ;-)]
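For what it’s worth, the decoding half of that DIY idea is well understood.  A heavily simplified sketch (assuming a stack of Gray-code stripe patterns has already been projected and captured, and ignoring calibration and the final triangulation step) might look like this:

```python
import numpy as np

def decode_gray_code(captured_stack, threshold=0.5):
    """Decode captured Gray-code stripe images into projector column indices.

    captured_stack: array of shape (num_bits, H, W) normalized to [0, 1],
    where image i is the scene lit by the i-th Gray-code bit plane (MSB first).
    Returns an (H, W) array of projector columns, one per camera pixel, which
    can then be triangulated against the camera ray to recover depth.
    """
    gray_bits = (np.asarray(captured_stack) > threshold).astype(np.uint32)
    # Convert Gray code to binary: b[0] = g[0], b[i] = b[i-1] XOR g[i]
    binary_bits = np.zeros_like(gray_bits)
    binary_bits[0] = gray_bits[0]
    for i in range(1, gray_bits.shape[0]):
        binary_bits[i] = np.bitwise_xor(binary_bits[i - 1], gray_bits[i])
    # Pack the bit planes (MSB first) into an integer column index per pixel
    columns = np.zeros(gray_bits.shape[1:], dtype=np.uint32)
    for plane in binary_bits:
        columns = (columns << 1) | plane
    return columns
```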

 

My Radio Shack Recovery Plan

I admit it, I’m a geek.    When the weather wasn’t good, or if I was especially bored of playing manhunt, between the ages of 10 and 12 I spent a lot of time at my local Radio Shack.   Doing what?  Buying breadboard kits and building a radio.  Having a conversation with Eliza on a TRS-80 (and oh my, the hours spent playing Zork).  Buying my very own TRS Color Computer.  Physically hacking it to increase the RAM to 32KB by stacking chips and soldering wire.   Writing some software to remap the game cartridge memory to the memory space occupied by RAM (and then dumping that out to the state-of-the-art tape cassette drive).   POKEing a memory address to double (that’s right, DOUBLE) the clock of the MC6809 chip to a whopping 1.9MHz!   All within a context and community environment that nurtured geeks (no question was stupid) and provided help (with regular meet-up sessions while the store was open and after it closed).

When do I go to Radio Shack now?   Hardly ever.  Only if I need something “right now” and I’m willing to pay those “right now” inflated prices (e.g. $10 for a splitter that I could get from Amazon via Prime for $.99 if I could wait two days).   If I’m going to buy a computer, I’m not shopping there.   A mobile phone?  Nope.  A TV? Certainly not.  Batteries (probably not, unless it falls into the “right now” category and it is a non-standard size).  Are my kids going to shop there?  Are my 12 and 11 year old boys going to ask “Hey can we go to Radio Shack?”   Not a chance.  You get my point.

I’m obviously not alone.  A few weeks ago Radio Shack announced that it is closing 1,100 stores nationwide after same-store sales plummeted 19%.  They obviously recognize that they have a “brand” image challenge (their Super Bowl ad was actually quite funny).  I would love to see a re-invigorated and vibrant community of Radio Shack stores – and so I offer the following Radio Shack “recovery plan.”

Return to your roots – You didn’t become successful because you sold all sorts of consumer goods to all kinds of people.  Admittedly, the selling environment has changed entirely (big box retail stores, discount stores, online availability of everything), but who your customer is (or should be) really hasn’t changed.  More on that later.

Start a conversation, build a community – It is difficult to survive in the low-margin, high-volume business that is today’s consumer electronics market.   You will not now (nor ever) make that tech-savvy purchaser buy a TV from you.  You can engage certain types of prospects.  Sales is a process.  It is a conversation.  Re-create the environment to have good, meaningful conversations about (high margin, yet to be commoditized) tech which interests them.  Those conversations may be with you, but most likely they will be with others. Hold meet-ups.  Let folks play with things in the store.  Make it become a place (again) that certain folks want to go.  And who might those folks be?

Target makers and the makers to be –  Look no further than the community of “makers” and “doers” who are building things, programming things, flying things and printing things.  They exist everywhere. These were the folks you sold to before. These are the folks you should sell to again.  Concentrate on STEM engagement with children – partner with your local elementary and middle schools to show and demonstrate cool technologies.  Become a partner for Lego Mindstorms.  Let kids play Minecraft in the store.  Put it up on monitors for people to see.  Sell Raspberry Pi dev kits, and hold in-store programming courses.  Target all kinds of robotics and RC hobbyists, including of course those who are flying all types of unmanned aerial platforms (single rotor, multi-rotor, fixed wing, etc.).  Partner with 3D Robotics and/or Airware to take their tech directly to consumers.  Explain/help folks to get their projects on Quirky or start a campaign on Kickstarter or Indiegogo.   Sell AR Drones as an entry point for folks to get into unmanned aerial systems.  Go beyond offering 3D printers by offering classes on how to make them work most effectively (what software to use, what 3D scanners to buy, etc.).  You already know this – admittedly, this is one pretty funny Radio Shack ad featuring 3D printing.  Partner with folks like Shapeways to allow people to capture/design items in the store and then have them drop-shipped to their homes.   Show folks how to do it.  Nurture the entire 3D printing ecosystem (not just the printers as an end in themselves).   And with all of this, plug them back into a growing community of makers/doers and users.

Hire people who are makers and geeks – Hire people who are advocates for your target markets and consumers.  No disrespect meant, but the folks who I have come across at Radio Shack recently (admittedly a very small sample size) didn’t look like they wanted to be there and certainly weren’t makers themselves.  This is obviously difficult (because it is a self-reinforcing system), but make the “next/first” hire somebody who identifies with the target communities you are selling to.   Why would I want to buy a 3D printer from somebody who really wishes they were working at Best Buy instead (and who, regardless, has no idea what a water-tight STL is. . .)?

Consider the policy perspective – Go to Washington and start lobbying on behalf of makers, doers, builders and flyers.  Help shape policy around thorny issues relating to 3D printing, unmanned aerial systems and robotics.  Partner with existing organizations that share similar views.  Become a positive voice in Washington for the community (of buyers) who you represent.

Result: Selling to a high margin/non-commoditized market – Following the above would get you right back to where you were at the beginning, selling high margin technology to the early-adopters, before things got commoditized.   In many cases you are selling solutions where a community of others (and their knowledge) is required to get things “right” – like in the earlier days of the personal computing market, when you sold TRS-80s and CoCos.  And breadboards.  And capacitors.  And wires.  And motors.  And a community.  You get the picture.

I’ll bet if you did the above many folks will start visiting and communicating in your stores again – my kids might even ask to stop by, to play Minecraft at the very least. 😉

Administrative Law Judge Decides that Commercial Drone Use is Not Prohibited by FAA Rules

UPDATE ON 3/7 : Not surprisingly, the FAA has appealed, and of course taking the position that this appeal “stays” the ALJ’s decision on the “ban” and that the “ban” is still in effect.  This is of course the view that the FAA should take.  An alternate view is that there never was a valid “ban” at all – so the ALJ’s decision solely relating to Pirker’s fine is stayed (e.g. the Motion to Dismiss).

——————–

The FAA attempted to fine Raphael Pirker $10,000 for “illegally” flying his plane at the University of Virginia, gathering film for a commercial.

His defense?  Quite simple.  Pirker argued that the FAA had no basis for fining him because the FAA had never gone through the rulemaking process and attempted to regulate model aircraft.  In other words, his activity wasn’t illegal, and a 2007 FAA policy notice wasn’t binding.

On March 6th, 2014, Patrick Geraghty, an Administrative Law Judge with the National Transportation Safety Board, ruled in favor of Pirker and dismissed the FAA’s fine.   In reviewing the applicable law, he held that while the FAA certainly had valid regulations pertaining to “aircraft”, they did not extend to “model aircraft” – that the FAA had itself historically distinguished between those devices, and couldn’t now argue that regulations relating to aircraft encompassed models as well.

It is concluded that, as [the FAA]: has not issued an enforceable FAR regulatory rule governing model aircraft operation; has historically exempted model aircraft from the statutory FAR definitions of “aircraft” by relegating model aircraft operations to voluntary compliance with the guidance expressed in AC 91-57, Respondent’s model, aircraft operation was not subject to FAR regulation, and enforcement.

Decisional Order, Page 3.

Judge Geraghty also concluded that Congress, at least in 2012, must not have believed that there were any rules in place relating to the commercial use of unmanned aerial systems.  Why?  Because when it passed the FAA Modernization and Reform Act of 2012, specifically Subtitle B, Unmanned Aircraft Systems, Congress directed the FAA to define acceptable standards for operation and certification of civil UAS.  Why do that if rules already existed?

Because the FAA had never completed the rulemaking process for “model aircraft” or “unmanned aerial systems”, and because his model was not covered by FAR rules governing “aircraft”, Pirker’s actions (flying his plane for commercial use) were not prohibited by law.

The entire Decisional Order can be found here ALJ Pirker Decision (3.7.14).

3D Printing Talk at UNCW CIE

I was fortunate yesterday to spend some time with a great crowd at the UNCW Center for Innovation and Entrepreneurship to talk about 3D Printing — sharing the time with an awesome team of presenters from GE Hitachi Nuclear Energy.  Jim Roberts, the Director of the UNCW CIE and a friend of mine since moving to North Carolina, invited me to see his impressive incubator space located at the edge of the UNC Wilmington campus – and I was glad to do so.  He has an impressive facility, and some great partner/tenant companies already working hard; I am excited to see what will be “hatched” under Jim’s leadership.  While there I also had the chance to meet with some great local entrepreneurs as well as spend some time with the Wired Wizard Robotics Team — an incredibly impressive group of young, talented, future scientists, engineers, technologists and mathematicians.   They were planning how to integrate 3D printing into their next design, and I came away again believing how much STEM and the entire “capture to make” ecosystem should be intertwined.

One of the things I talked about yesterday was the interesting correlation between the performance of the publicly traded 3D printing companies and the relative rise of “3D Printing” as opposed to the technical term of “additive manufacturing”.  The upper left inserted graph is a Google Trends chart showing those search terms over time — if you haven’t used Google Trends — this data is normalized relative to all search volume over time.   In other words, a flat line would show that as a % of overall search, that term has stayed consistent (even as volume grows).  What you can see from this graph is the explosion of the rise of “3D Printing” as opposed to small, incremental growth of “additive manufacturing.”  Compare the rise of “3D Printing” to the stock charts and you see an interesting correlation indeed.  During the rest of my time I gave some reasons for why I believed this happened — looking at the macro level trends on both “sides” of the content to make ecosystem that may have unlocked this opportunity.
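As a toy illustration of that normalization (hypothetical numbers, not actual Google Trends data): a term’s “interest” is its share of all searches in each period, rescaled so its peak equals 100, which is why a flat line means a constant share even as absolute volume grows.

```python
# Hypothetical query counts per period for one term, and total search volume per period
term_searches  = [120, 150, 400, 900]
total_searches = [1.0e6, 1.2e6, 1.5e6, 2.0e6]

shares = [t / total for t, total in zip(term_searches, total_searches)]
trend  = [round(100 * s / max(shares), 1) for s in shares]
print(trend)  # [26.7, 27.8, 59.3, 100.0] -- rising share despite growing total volume
```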

3D Printing + Additive Manufacturing

For those who have interest, you can download the slides I delivered here. TMK Presentation for UNCW on 3D Printing Opportunity (1.17.14 – FOR DISTRIBUTION)

Have a great weekend!

Light Field Cameras for 3D Imaging

Thanks for reading Part I of this article published at LiDAR News.  Below I examine some of the plenoptic technology providers as well as provide some predictions about 3D imaging in 2014 and beyond.  If you have been directed here from LiDAR News certainly skip ahead to the section starting with Technology Providers below.  Happy Holidays!

Light Field Cameras for 3D Capture and Reconstruction

Plenoptic cameras, or light field cameras, use an array of individual lenses (a microlens array) to capture the 4D light field of a scene.   This lens arrangement means that multiple light rays can be associated with each sensor pixel, and synthetic cameras (created via software) can then process that information.

Phew, that’s a mouthful, right?  It’s actually easier to visualize –

Raytrix Plenoptic Camera Example

Image from Raytrix GmbH Presentation delivered at NVIDIA GTC 2012

This light field information can be used to help solve various computer vision challenges – for example, allowing images to be refocused after they are taken, substantially improving low light performance with an acceptable signal to noise ratio, or even creating a 3D depth map of a scene.   Of course the plenoptic approach is not restricted to single images; plenoptic “video” cameras (with a corresponding increase in data captured) have been developed as well.
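As a rough illustration of how refocusing after capture works, here is a heavily simplified shift-and-add sketch (my own toy example, not any vendor’s pipeline), assuming the raw light field has already been decoded into a grid of sub-aperture views:

```python
import numpy as np

def refocus(sub_aperture_views, alpha):
    """Synthetically refocus a light field by shift-and-add.

    sub_aperture_views: array of shape (U, V, H, W) -- a grid of views,
    one per lenslet sub-aperture position (u, v).
    alpha: relative focal depth; each view is shifted in proportion to its
    offset from the central view, then all views are averaged.
    """
    U, V, H, W = sub_aperture_views.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(sub_aperture_views[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Sweeping alpha moves the synthetic focal plane through the scene; the same per-view shifts are also what make depth estimation from a light field possible.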

The underlying algorithms and concepts behind a plenoptic camera have been around for quite some time.   A great technical backgrounder on this technology can be found in Dr. Ren Ng’s 2005 Stanford publication titled Light Field Photography with a Hand-Held Plenoptic Camera.   He reviews the (then) current state of the art before proposing his solution targeted at synthetic image formation.  Dr. Ng ultimately went on to commercialize his research by founding Lytro, which I discuss later.    Another useful backgrounder is the technical presentation prepared by Raytrix (profiled below) and delivered at the NVIDIA GPU Technology Conference 2012.

In late 2010 at the NVIDIA GPU Conference, Adobe demonstrated a plenoptic camera system (hardware and software) they had been working on – while dated, it is a useful video to watch as it explains both the hardware and software technologies involved with light field imaging as well as the computing horsepower required.  Finally, another interesting source of information and recent news on developments in the light field technology space can be found at the Light Field Forum.

Light field cameras have only become truly practical because of advances in lens and sensor manufacturing techniques coupled with the massive computational horsepower unlocked by GPU compute based solutions.  To me, light field cameras represent a very interesting step in the evolution of digital imaging – which until now – has really been focused on improving what had been a typical analog workflow.

Light Field Cameras and 3D Reconstructions

Much of the recent marketing around the potential of plenoptic synthetic cameras focuses on the ability of a consumer to interact with and share images in an entirely different fashion (i.e. changing the focal point of a captured scene).  While that is certainly interesting in its own right, I am personally much more excited about the potential of extracting depth map information from light field cameras, and then using that depth map to create 3D surface reconstructions.

Pelican Imaging (profiled below) recently published a paper at SIGGRAPH Asia 2013 detailing exactly that — the creation of a depth map, which was then surfaced, using their own plenoptic hardware and software solution called the PiCam.  This paper is published in full at the Pelican Imaging site, see especially pages 10-12.

There is a lot of on-going research in this space; some of it uses traditional stereo imaging methods acting upon the data generated from the plenoptic lens array, but other work uses entirely different technical approaches for depth map extraction.   A very interesting recent paper presented at ICCV 2013 in early December 2013, titled Depth from Combining Defocus and Correspondence Using Light Field Cameras and authored by researchers from the University of California, Berkeley and Adobe, proposes a novel method for extracting depth data from light field cameras by combining two methods of depth estimation.  The authors of this paper have made available their sample code and representative examples and note in the Introduction:

 The images in this paper were captured from a single passive shot of the $400 consumer Lytro camera in different scenarios, such as high ISO, outdoors and indoors. Most other methods for depth acquisition are not as versatile or too expensive and difficult for ordinary users; even the Kinect is an active sensor that does not work outdoors. Thus, we believe our paper takes a step towards democratizing creation of depth maps and 3D content for a range of real-world scenes.
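To convey the intuition only (the paper’s actual algorithm uses a more sophisticated global optimization, so treat this as a toy sketch with made-up inputs), a per-pixel fusion of a defocus-based and a correspondence-based depth estimate, weighted by their confidences, could look like:

```python
import numpy as np

def fuse_depth_cues(z_defocus, conf_defocus, z_corresp, conf_corresp, eps=1e-6):
    """Toy confidence-weighted average of two per-pixel depth estimates.

    z_*:    (H, W) depth estimates from the defocus and correspondence cues
    conf_*: (H, W) non-negative confidence maps for each cue
    """
    w1 = np.asarray(conf_defocus, dtype=np.float64)
    w2 = np.asarray(conf_corresp, dtype=np.float64)
    return (w1 * np.asarray(z_defocus) + w2 * np.asarray(z_corresp)) / (w1 + w2 + eps)
```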

Technology Providers

Let’s take a look at a non-exhaustive list of light field technology manufacturers – this is in no way complete, nor does it even attempt to cover all of the handset manufacturers and others who are incorporating plenoptic technologies – nor those who are developing “proxy” solutions to replicate some of the functionality which true plenoptic solutions offer (e.g. Nokia’s ReFocus app).  Apple recently entered the fray of plenoptic technologies when it was reported in late November that it had been granted a range of patents (originally filed in 2011) covering a “hybrid” light field camera setup which can be switched between traditional and plenoptic imaging.

Lytro

Lytro (@Lytro) was founded in 2010 by Dr. Ren Ng, building on research he started at Stanford in 2004.  Lytro has raised a total of $90M, with an original $50M round in mid-2011 from Andreessen Horowitz (@a16z, @cdixon), NEA (@NEAVC) and Greylock (@GreylockVC), and a new $40M round adding North Bridge Venture Partners (@North_Bridge).   In early 2012 Lytro began shipping its consumer focused light field camera system; later that year Dr. Ng stepped down as CEO (he remains the Chairman), with the current CEO, Jason Rosenthal, joining in March 2013.

Lytro camera inside

Inside the Lytro Camera from Lytro

I would suspect that Lytro is pivoting from focusing purely on a consumer camera to instead developing an imaging platform and infrastructure stack (including cloud services for interaction) that it, along with third party developers, can leverage.  This may also have been the strategy all along – in many cases, to market a platform you have to first demonstrate to the market how the platform can be expressed in an application.  Jason Rosenthal seems to acknowledge as much in a recent interview published in the San Francisco Chronicle’s SF Gate blog in August 2013 (prior to their most recent round being publicly announced), where he is quoted as saying that the long term Lytro vision is to become “the new software and hardware stack for everything with a lens and sensor. That’s still cameras, video cameras, medical and industrial imaging, smartphones, the entire imaging ecosystem.”  Jonathan Heiliger, a general partner at North Bridge Venture Partners, echoed that vision in his quote supporting their participation in the latest $40M round: “[t]he fun you experience when using a Lytro camera comes from the ability to engage with your photos in ways you never could before.  But powering that interactivity is some great software and hardware technology that can be used for everything with a lens and a sensor.”

I am of course intrigued by the suggestion from Rosenthal that Lytro could be developing solutions useful for medical and industrial imaging.  If you are Pelican Imaging, you are of course focusing on the comments relating to “smartphones.”

Pelican Imaging

Pelican Imaging

Image from Pelican Imaging

Pelican Imaging (@pelicanimaging) was founded in 2008 and its current investors include Qualcomm (@Qualcomm), Nokia Growth Partners, Globespan Capital Partners (@Globespancap), Granite Ventures (@GraniteVentures), InterWest Partners (@InterwestVC) and IQT.  Pelican Imaging has raised more than $37M since inception and recently extended its Series C round by adding an investment (undisclosed amount) from Panasonic in August 2013.   Interesting to me is of course the large number of handset manufacturers who have participated in earlier funding rounds, as well as early investment support from In-Q-Tel (IQT), an investment arm aligned with the United States Central Intelligence Agency.

Pelican Imaging has been pretty quiet from a marketing perspective until recently, but no doubt with their recent additional investment from Panasonic and other hardware manufacturers they are making a push to become the embedded plenoptic sensor platform.

Raytrix

Raytrix is a German developer of plenoptic cameras, and has been building them since 2009.  They have, up until now, primarily focused on using this technology for a host of industrial imaging solutions, and they offer a range of complete plenoptic camera solutions.  A detailed presentation explaining their solutions can also be found on their site, and a very interesting video demonstration of the possibilities of a plenoptic video approach for creating 3D videos is hosted at the NVIDIA GPU Technology Conference website.  Raytrix has posted a nice example of how they created a depth map and 3D reconstruction using their camera here.   Raytrix plenoptic video cameras can be used for particle image velocimetry (PIV), a method of measuring velocity fields in fluids by tracking how particles move across time.  Raytrix has a video demonstrating these capabilities here.
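For background, the core of classic 2D PIV is a windowed cross-correlation between two successive frames; a minimal sketch (my own illustration, not Raytrix’s implementation, which additionally recovers the third dimension from the light field) is below:

```python
import numpy as np

def piv_window_displacement(frame_a_window, frame_b_window):
    """Estimate particle displacement between two interrogation windows
    using FFT-based cross-correlation (classic 2D PIV, integer-pixel only).

    Returns (dy, dx): the shift from frame A to frame B that maximizes
    the correlation, i.e. the average particle motion inside the window.
    """
    a = frame_a_window - frame_a_window.mean()
    b = frame_b_window - frame_b_window.mean()
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts past the midpoint wrap around to negative displacements
    dy = peak_y if peak_y <= a.shape[0] // 2 else peak_y - a.shape[0]
    dx = peak_x if peak_x <= a.shape[1] // 2 else peak_x - a.shape[1]
    return dy, dx
```

Repeating this over a grid of windows yields the velocity field; the light field adds the depth of each particle, turning the result into a 3D flow measurement.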

The Future

For 2014, I believe we will see the following macro-level trends develop in the 3D capture space (these were originally published here).

  • Expansion of light field cameras – Continued acceleration in 3D model and scene reconstruction (both small and large scale) using depth sense and time of flight cameras, but with an expansion into light field cameras (i.e. like Lytro, Pelican Imaging, Raytrix, and as proposed by Apple).
  • Deprecation of 3D capture hardware in lieu of solutions – We will see many companies which had been focusing mostly on data capture pivot more towards a vertical applications stack, deprecating the 3D capture hardware (as it becomes more and more ubiquitous – i.e. plenoptic cameras combined with RTK GPS accurate smartphones).
  • More contraction in the field due to M&A – Continued contraction of players in the capture/modify/make ecosystem, with established players in the commercial 3D printing and scanning market moving into the consumer space (e.g. Stratasys acquiring Makerbot, getting both a 3D scanner and a huge consumer ecosystem with Thingiverse) and with both ends of the market collapsing in to offer more complete solutions from capture to print (e.g. 3D printing companies buying 3D scanner hardware and software companies, vice versa, etc.)
  • Growing open source software alternatives – Redoubled effort on community sourced 3D reconstruction libraries and application software (e.g. Point Cloud Libraries and Meshlab), with perhaps even an attempt made to commercialize these offerings (like the Red Hat model).
  • 3D Sensors everywhere – Starting in 2014, but really accelerating in the years that follow, 3D sensors everywhere (phones, augmented reality glasses, in our cars) which will constantly capture, record and report depth data – the beginnings of a crowd sourced 3D world model.

Over time, I believe that light field cameras will grow to have a significant place in the consumer acquisition of 3D scene information via mobile devices.  They have the benefit of a relatively small form factor, are a passive imaging system, and can be used in a workflow which consumers already know and understand.   They are of course not a panacea, and currently suffer similar limitations as photogrammetry and stereo reconstruction when targets are not used (e.g. difficulty in accurately computing depth data in scenes without a lot of texture, accuracy dependent on the depth of the scene from the camera, etc.), but novel approaches to extract more information from a 4D light field hold promise for capturing more accurate 3D depth data from light field cameras.

For consumers, and consumer applications driven from mobile, I predict that light field technologies will take a significant share of sensor technologies, where accuracy is a secondary consideration (at best) and the ease of use, form factor and the “eye candy” quality of the results are most compelling.   Active imaging systems, like those which Apple acquired from PrimeSense, certainly have a strong place in the consumer acquisition of 3D data, but in mobile their usefulness may be limited by the very nature of the sensing technology (e.g. relatively large power draw and form factor, sensor confusion in the presence of multiple other active devices, etc.).

 

Disruptive Trends in the Content to Make Ecosystem

Back in April I prepared a presentation covering various aspects of the capture/modify/make ecosystem — covering what I thought were (and were going to be) the disruptive forces that would impact 3D scanning and 3D printing over the coming months and years.

I outlined the following disruptive trends:

  • Democratization of low cost 3D capture devices and solutions
  • Commoditization of high accuracy 3D capture devices
  • Democratization of 3D printing along with the Makers movement
  • “Gamified” content capture, creation and modification tools
  • Leveraging crowd sourced design and open source 3D content communities
  • Accelerating investment in 3D capture and creation technologies
  • New processing and interaction paradigms
  • Overarching policy issues

Disruptive Trends

I’ve posted the full presentation (minus embedded videos, sorry!) if you have interest.  Disruptive Trends in Capture to Make (4.5.13).  These trends continue to evolve and hold true – I intend to update these trends with new representative examples which have popped up in the last half of 2013.

Apple Buys Tech Behind Microsoft Kinect (PrimeSense) – 3D Scanning Impact?

[Update: Apple has confirmed the acquisition of PrimeSense for roughly $350M, when originally published the acquisition was still only rumored.]

It has been reported that Apple (@Apple) has acquired PrimeSense (@GoPrimeSense) for $345M.

I have been long on PrimeSense’s depth sensing cameras for a while – I started following them in the months leading up to the original launch of the Microsoft Kinect in the “Project Natal” days (late 2009).  Photogrammetry was always interesting to me as an approach to create 3D models – but the reconstructions tended to fail frequently (and without warning) and always required post-processing.

My interest in PrimeSense technology was primarily twofold: (1) to find a way to leverage the installed base of Microsoft Kinect devices as 3D capture devices (as well as the Xbox Live payment infrastructure) and (2) to build an inexpensive stand-alone 3D scanner based on PrimeSense technology.  I was only more interested after Microsoft published their real-time scene reconstruction research known as KinectFusion.  Hacks like the Harvard “Drill of Depth” (a Kinect made mobile by attaching it to a battery powered drill, screen and software, circa early 2011) only further piqued my interest about the possibilities.

Drill of Depth
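As an aside, the core idea of KinectFusion (mentioned above) is to integrate every incoming depth frame into a voxel grid storing a truncated signed distance function (TSDF); a heavily simplified, per-ray sketch of that running-average update (my own illustration, not Microsoft’s code) looks roughly like this:

```python
import numpy as np

def integrate_depth_along_ray(tsdf, weights, voxel_depths, measured_depth, trunc=0.03):
    """Fuse one depth measurement into the TSDF voxels that lie along a camera ray.

    tsdf, weights:  running per-voxel averages (updated in place)
    voxel_depths:   distance of each voxel from the camera along this ray (meters)
    measured_depth: depth reported by the sensor for this ray (meters)
    """
    sdf = measured_depth - voxel_depths            # positive in front of the surface
    d = np.clip(sdf / trunc, -1.0, 1.0)            # truncate to [-1, 1]
    mask = sdf > -trunc                            # skip voxels far behind the surface
    w_new = 1.0
    tsdf[mask] = (tsdf[mask] * weights[mask] + d[mask] * w_new) / (weights[mask] + w_new)
    weights[mask] += w_new
    return tsdf, weights
```

Averaging many noisy frames this way is what lets a cheap depth sensor produce a smooth, watertight-looking surface in real time.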

The writing was on the wall for PrimeSense after Microsoft decided to abandon PrimeSense technology and develop their own depth sensing devices for use with the new Xbox One.  PrimeSense had to transition from a lucrative relationship with one large customer (~30M+ units) to being a developer of hardware and firmware solutions seeking broader markets.  The OpenNI initiative (an open source project, primarily sponsored by PrimeSense, to develop a middleware SDK for 3D sensors) was an attempt to broaden the potential pool of third party developers who would ultimately build solutions around PrimeSense technologies.

There are many PrimeSense powered 3D scanners in the market today – it will be interesting to see whether this pool expands or contracts after the planned Apple acquisition (e.g. will the direction be inward, focusing PrimeSense technology to be delivered only with Apple devices, or will they continue to court third-party developers across all types of hardware and software solutions).   The new PrimeSense Capri form factor already allows for entirely new deployment paradigms for this technology; with one more generation the sensor will have shrunk so much that it can be comfortably embedded directly in phone and tablet devices (but with a trade-off in data quality if the sensor shrinks too much).

Here is a quick run-down on a non-exhaustive list of PrimeSense powered 3D scanner hardware technology and vendors (note, this isn’t a profile of the universe of software companies that offer solutions around 3D model and scene reconstruction – as there are many):

Standard Microsoft Kinect – the initial movement for using the PrimeSense technology as a 3D scene reconstruction device came from hacks to the original Microsoft Kinect.  The Kinect was hacked to run independently of the Xbox, and ultimately Microsoft decided to embrace these hacks and release a standalone Kinect SDK.

Microsoft Kinect for PC – Microsoft began selling a Kinect which would directly interface with Windows devices; it also enabled a “near” mode for the depth camera.

Asus XTION (Pro) – This is an Asus OEM of the PrimeSense technology which provides essentially the same functional specifications as delivered in the Microsoft Kinect (they use the same PrimeSense chipset and reference design).

Matterport – Matterport (@Matterport) has raised $10M since the middle of 2012 to develop a camera system, software and cloud infrastructure for scanning interior spaces.  The camera system itself is built around PrimeSense technologies (along with 2D cameras to capture higher quality images to be referenced to the 3D reconstruction created from the PrimeSense cameras).   Most interesting to me is that Matterport counts Red Swan and Felicis Ventures as investors, both of which are also invested in Floored (see below).  A few days ago Forbes profiled the use of the Matterport system; the article is worth a read.

Floored – Floored (@Floored3D), formerly known as Lofty, concentrates primarily on developing software to help visualize interior spaces and is focusing first on the commercial real estate industry.  Floored has raised at least a little over $1M to date, including from common investors with Matterport.  For more on the relationship between Matterport and Floored, see this TechCrunch article.  Floored’s CEO is Dave Eisenberg, and he gave a great presentation at the TechCrunch NYC Startup Battlefield in late April 2013 explaining Floored’s value proposition.   Floored is definitely filled with brilliant minds, and obviously a whole lot of computer vision folks who understand how difficult it is to attempt to automatically generate 3D models of interior spaces from scan data (of any quality).  To get a sense of what they are currently thinking about, check out the Floored blog.

Lynx A – This was an offering from a start-up in Austin, Texas known as Lynx Labs (@LynxLabsATX), which launched an early 2013 Kickstarter campaign for an “all in one” point and shoot 3D camera.  This device was a sensor combined with a computing device and software, allowing for the real time capturing and rendering of 3D scenes.  The first round of devices shipped in the middle of September 2013.   I do not know for sure, but my assumption is that this device is PrimeSense powered.

DotProduct (@DotProduct3D) with their DPI-7 scanner.   As with the Lynx A camera, this is a PrimeSense powered device, combined with a Google Nexus, and their scene reconstruction software called Phi.3D.  DotProduct claims 2-4mm accuracy at 1m, achieved through a combination of individual sensor calibration, their software, and rejecting sensors which do not achieve spec.  DotProduct announced in late October 2013, at the Intel Capital Global Summit, that Intel Capital (@IntelCapital) had made a seed investment into DotProduct, spearheaded by Intel’s Perceptual Computing Group.

Occipital Structure Sensor – Occipital (@occipital) is an extremely interesting company based in Boulder and San Francisco, filled with amazing computer vision expertise.  After cutting their teeth on some computer vision applications for generating panoramas on Apple devices, they have bridged into a complete hardware and software stack for 3D data capture and model creation.  Occipital counts the Foundry Group (@foundrygroup) as one of its investors (having invested roughly $7M into Occipital in late 2011).   Occipital completed a very successful Kickstarter campaign for its Structure Sensor, raising nearly $1.3M.

Occipital Structure Sensor

The Structure Sensor is a PrimeSense powered device which is officially supported on later generation Apple iPad devices.  What is compelling is Occipital’s approach to create an entire developer ecosystem around this device – no doubt building on the Skanect (@Skanect) technology they acquired from ManCTL in June of 2013.  Skanect was one of the best third party applications to have implemented and made available Microsoft’s KinectFusion technology (allowing for real time 3D scene reconstruction from depth cameras).   If it is true, and Apple in fact does buy PrimeSense, then that is potentially problematic for Occipital’s current development direction if Apple has aspirations for embedding this technology in mobile devices (as opposed to Apple TV).  Even if Apple did want to embed it in their iDevices, it would seem that Occipital becomes an immediately interesting acquisition target (in one swoop you get hardware and, most importantly, computer vision software expertise).  Given the depth of talent at Occipital, I’m sure things are going to work out just fine.

Sense™ 3D Scanner by 3D Systems – This is the newest 3D scanner entrant in this space (announced a few weeks ago) delivered by 3D Systems (@3dsystemscorp), which acquired my former company, Geomagic.  The Sense uses the new PrimeSense Carmine sensor – a further evolution of the PrimeSense depth camera technology, allowing for greater depth accuracy across more pixels in the field (and ultimately reconstruction quality).  PrimeSense has a case study on the Sense.

What Are Competitive/Replacement Technologies for PrimeSense Depth Sensors?

In my opinion, the closest competitor in the market today to PrimeSense technologies are made by a company called SoftKinetic (@softkinetic) with their line of DepthSense cameras, sensors, middleware and software.

SoftKinetic

On paper, the functional specifications of these devices stack up well against the PrimeSense reference designs.  Unlike PrimeSense, SoftKinetic sells complete cameras, as well as modules and associated software and middleware.  SoftKinetic uses a time of flight (ToF) approach to capture depth data (which is different from PrimeSense’s approach).  SoftKinetic has provided middleware to Sony for the PS4, giving third party developers a layer for creating gesture tracking applications using the PlayStation(R)Camera for PS4.   SoftKinetic announced a similar middleware deal with Intel to accelerate perceptual computing in the early summer of 2013 too.
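As background on the time of flight principle itself (a simplified sketch of continuous-wave, phase-based ToF with assumed numbers, not SoftKinetic’s implementation):

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_depth_from_phase(phase_rad, modulation_freq_hz):
    """Depth from a continuous-wave ToF phase measurement.

    The sensor measures the phase shift between the emitted and returned
    modulated light; distance = c * phase / (4 * pi * f_mod), with an
    unambiguous range of c / (2 * f_mod).
    """
    phase = np.asarray(phase_rad, dtype=np.float64)
    return SPEED_OF_LIGHT * phase / (4.0 * np.pi * modulation_freq_hz)

# Hypothetical example: 30 MHz modulation, quarter-cycle phase shift -> ~1.25 m
print(tof_depth_from_phase(np.pi / 2, 30e6))
```

The phase-based measurement is also why multiple active devices pointed at the same scene can confuse one another, a limitation I return to in the predictions below.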

There are other companies in the industrial imaging space (who presently develop machine vision cameras or other time of flight scanners) which could provide consumer devices if they chose to (e.g. such as PMD Technologies in Germany).

I believe the true replacement technology for 3D data acquisition and reconstruction, at least in the consumer space, will come from light field cameras as a class, providing range data (e.g. z depth), and not necessarily from active imaging solutions.  See my thoughts on this below.

Predictions for 2014 and Beyond

Early in 2013, when I was asked by my friends at Develop3D to predict what 2013 would bring, I said:

In 2013 we will move through the tipping point of the create/modify/make ecosystem.

Low cost 3D content acquisition, combined with simple, powerful tools will create the 3D content pipeline required for more mainstream 3D printing adoption.  

Sensors, like the Microsoft Kinect, the LeapMotion device, and [Geomagic, now 3D Systems’] Sensable haptic devices, will unlock new interaction paradigms with reality, once digitized.  

Despite the innovation, intellectual property concerns will abound, as we are at the dawn of the next ‘Napster’ era, this one for 3D content.

I believe much of that prediction has come/is coming true.

For 2014 I believe we will see the following macro-level trends in the 3D capture space:

  • Expansion of light field cameras – Continued acceleration in 3D model and scene reconstruction (both small and large scale) using depth sense and time of flight cameras, but with an expansion into light field cameras (i.e. like Lytro (@Lytro) and Pelican Imaging (@pelicanimaging)).
  • Deprecation of 3D capture hardware in lieu of solutions – We will see many companies which had been focusing mostly on data capture pivot more towards a vertical applications stack, deprecating the 3D capture hardware (as it becomes more and more ubiquitous).
  • More contraction in the field due to M&A – Continued contraction of players in the capture/modify/make ecosystem, with established players in the commercial 3D printing and scanning market moving into the consumer space (e.g. Stratasys acquiring Makerbot, getting both a 3D scanner and a huge consumer ecosystem with Thingiverse) and with both ends of the market collapsing in to offer more complete solutions from capture to print (e.g. 3D printing companies buying 3D scanner hardware and software companies, vice versa, etc.)
  • Growing open source alternatives – Redoubled effort on community sourced 3D reconstruction libraries and application software (e.g. Point Cloud Libraries and Meshlab), with perhaps even an attempt made to commercialize these offerings (like the Red Hat model).
  • 3D Sensors everywhere – Starting in 2014, but really accelerating in the years that follow, 3D sensors everywhere (phones, augmented reality glasses, in our cars) which will constantly capture, record and report depth data – the beginnings of a crowd sourced 3D world model.

The Use of Light Field Cameras for 3D Data Acquisition and Reconstruction Will Explode

While the use of light field cameras to create 3D reconstructions is just in its infancy, just like the PrimeSense technology (which was designed for an interaction paradigm, not for capturing depth data), I can see (no pun intended) this one coming.  Light field cameras have the strong benefit of being a passive approach to 3D data acquisition (like photogrammetry).  For what is possible in depth map creation from these types of camera systems, check out this marketing video from Pelican Imaging (note the 3D Systems Cube 3D printer) and a more technical one here.

Pelican Imaging Sensor

Image from Pelican Imaging.

I will have a separate post looking in more depth at light field cameras as a class, including Lytro’s recent new $40M round of funding and the addition of North Bridge.  I believe that, after refinement, they will ultimately become a strong solution for consumer mobile devices for 3D content capture because of their size, power needs, passive approach, etc.  In the interim, if you have interest in this space you should read the Pelican Imaging presentation recently made at SIGGRAPH Asia on the PiCam and reproduced in full at the Pelican Imaging site.  Fast forward to pages 10-12 in this technical presentation for an example of using the Pelican Imaging camera to produce a depth map which is then surfaced.

What could ultimately be game changing is if we find updated and refined depth sense technology embedded and delivered directly with the next series of smartphones and augmented reality devices (e.g. Google Glass).  In that world, everyone has a 3D depth sensor, everyone is capturing data, and the potential is limitless for applications which can harvest and act upon that data once captured.

Let the era of crowd sourced world 3D data capture begin!

(. . . but wait, who owns that 3D world database once created. . .)

This article was originally published on DEVELOP3D on November 18th, 2013; it has been modified since that original posting.