Category Archives: Venture Capital

Apple Buys Tech Behind Microsoft Kinect (PrimeSense) – 3D Scanning Impact?

[Update: Apple has confirmed the acquisition of PrimeSense for roughly $350M. When this post was originally published, the acquisition was still only rumored.]

It has been reported that Apple (@Apple) has acquired PrimeSense (@GoPrimeSense) for $345M.

I have been long on PrimeSense’s depth sensing cameras for a while – I started following them in the months leading up to the original launch of the Microsoft Kinect in the “Project Natal” days (late 2009).  Photogrammetry was always interesting to me as an approach to create 3D models – but the reconstructions tended to fail frequently (and without warning) and always required post-processing.

My interest in PrimeSense technology was primarily twofold: (1) to find a way to leverage the installed base of Microsoft Kinect devices as 3D capture devices (as well as the Xbox Live payment infrastructure) and (2) to build an inexpensive stand-alone 3D scanner based on PrimeSense technology.  I was only more interested after Microsoft published their real-time scene reconstruction research known as KinectFusion.  Hacks like the Harvard “Drill of Depth” (a Kinect made mobile by attaching it to a battery-powered drill, screen and software, circa early 2011) only further piqued my interest in the possibilities.

Drill of Depth
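KinectFusion’s core trick – fusing each noisy depth frame into a voxel grid holding a truncated signed distance function (TSDF) via a per-voxel weighted running average – is simple to sketch. What follows is a deliberately simplified illustration along a single camera ray (known pose, 1D voxel grid), not Microsoft’s implementation:

```python
import numpy as np

def tsdf_update(tsdf, weights, voxel_depths, measured_depth, trunc=0.03):
    """Fuse one depth measurement into a 1D TSDF along a camera ray.

    tsdf, weights  : per-voxel running signed distance and weight
    voxel_depths   : depth of each voxel center along the ray (meters)
    measured_depth : depth reported by the sensor for this ray (meters)
    trunc          : truncation band around the surface (meters)
    """
    # Signed distance from each voxel to the observed surface,
    # clamped to the truncation band.
    sdf = np.clip(measured_depth - voxel_depths, -trunc, trunc)
    # Only update voxels in front of, or just behind, the surface.
    visible = voxel_depths <= measured_depth + trunc
    # Weighted running average: new noisy frames refine, not replace.
    w_new = weights[visible] + 1.0
    tsdf[visible] = (tsdf[visible] * weights[visible] + sdf[visible]) / w_new
    weights[visible] = w_new
    return tsdf, weights

# Fuse several noisy depth readings of a surface at 1.00 m.
voxels = np.linspace(0.9, 1.1, 21)     # voxel centers every 1 cm
tsdf = np.zeros_like(voxels)
weights = np.zeros_like(voxels)
for d in [1.01, 0.99, 1.00, 1.002]:
    tsdf, weights = tsdf_update(tsdf, weights, voxels, d)

# The zero crossing of the fused TSDF (among observed voxels)
# estimates the surface position.
observed = weights > 0
surface = voxels[observed][np.argmin(np.abs(tsdf[observed]))]
print(round(float(surface), 2))        # near the true surface at 1.0 m
```

Each additional frame nudges the stored distances toward the true surface, which is why fused reconstructions look far cleaner than any single Kinect frame.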

The writing was on the wall for PrimeSense after Microsoft decided to abandon PrimeSense technology and develop their own depth sensing devices for use with the new Xbox One.  PrimeSense had to transition from a lucrative relationship with one large customer (~30M+ units) to a developer of hardware and firmware solutions seeking broader markets.  The OpenNI initiative (an open source project, primarily sponsored by PrimeSense, to develop a middleware SDK for 3D sensors) was an attempt to broaden the potential pool of third-party developers who would ultimately build solutions around PrimeSense technologies.

There are many PrimeSense powered 3D scanners in the market today – it will be interesting to see whether this pool expands or contracts after the planned Apple acquisition (i.e. will Apple turn the technology inward, delivering it only with Apple devices, or will it continue to court third-party developers across all types of hardware and software solutions?).  The new PrimeSense Capri form factor already allows for entirely new deployment paradigms for this technology; with one more generation the sensors will have shrunk enough to be comfortably embedded directly in phones and tablets (though with a trade-off in data quality if the sensor shrinks too much).

Here is a quick run-down on a non-exhaustive list of PrimeSense powered 3D scanner hardware technology and vendors (note, this isn’t a profile of the universe of software companies that offer solutions around 3D model and scene reconstruction – as there are many):

Standard Microsoft Kinect – the initial movement for using the PrimeSense technology as a 3D scene reconstruction device came from hacks to the original Microsoft Kinect.  The Kinect was hacked to run independently of the Xbox, and ultimately Microsoft decided to embrace these hacks and develop a standalone Kinect SDK.

Microsoft Kinect for PC – Microsoft began selling a Kinect which would directly interface with Windows devices; it also enabled a “near” mode for the depth camera.

Asus XTION (Pro) – This is an Asus OEM of the PrimeSense technology which provides essentially the same functional specifications as delivered in the Microsoft Kinect (they use the same PrimeSense chipset and reference design).

Matterport – Matterport (@Matterport) has raised $10M since the middle of 2012 to develop a camera system, software and cloud infrastructure for scanning interior spaces.  The camera system itself is built around PrimeSense technologies (along with 2D cameras to capture higher quality images to be referenced to the 3D reconstruction created from the PrimeSense cameras).  Most interesting to me is that Matterport counts Red Swan and Felicis Ventures as investors, both of which are also invested in Floored (see below).  A few days ago Forbes profiled the use of the Matterport system; the article is worth a read.

Floored – Floored (@Floored3D), formerly known as Lofty, concentrates primarily on developing software to help visualize interior spaces and is targeting the commercial real estate industry first.  Floored has raised a little over $1M to date, and shares common investors with Matterport.  For more on the relationship between Matterport and Floored, see this TechCrunch article.  Floored’s CEO is Dave Eisenberg, and he gave a great presentation at the TechCrunch NYC Startup Battlefield in late April 2013 explaining Floored’s value proposition.  Floored is definitely filled with brilliant minds, including a whole lot of computer vision folks who understand how difficult it is to automatically generate 3D models of interior spaces from scan data (of any quality).  To get a sense of what they are currently thinking about, check out the Floored blog.

Lynx A – This was an offering from a start-up in Austin, Texas known as Lynx Labs (@LynxLabsATX) who launched an early 2013 KickStarter campaign for an “all in one” point and shoot 3D camera.  This device was a sensor, combined with a computing device and software which would allow for the real time capturing and rendering of 3D scenes.  The first round of devices shipped in the middle of September 2013.   I do not know for sure, but my assumption is that this device is PrimeSense powered.

DotProduct (@DotProduct3D) with their DPI-7 scanner.   As with the Lynx A camera, this is a PrimeSense powered device, combined with a Google Nexus, and their scene reconstruction software called Phi.3D.  DotProduct claims 2-4mm accuracy at 1m, achieved through a combination of individual sensor calibration, their software, and rejecting sensors which do not achieve spec.  DotProduct announced in late October 2013, at the Intel Capital Global Summit, that Intel Capital (@IntelCapital) had made a seed investment into DotProduct, spearheaded by Intel’s Perceptual Computing Group.
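Specs like “2-4mm at 1m” are easier to interpret with a quick error model. Triangulation-based sensors (the PrimeSense approach) recover depth as z = f·b/d, so a fixed disparity error maps to a depth error that grows with the square of the range – which is why per-sensor calibration and rejecting out-of-spec sensors matter so much. A back-of-the-envelope sketch using assumed, Kinect-class parameters (not vendor figures):

```python
def depth_error(z, focal_px=580.0, baseline_m=0.075, disparity_err_px=0.1):
    """Approximate 1-sigma depth error for a triangulation depth sensor.

    From z = f * b / d, a small disparity error sigma_d gives
    sigma_z ~= z**2 * sigma_d / (f * b).  All parameter defaults are
    assumed Kinect-class figures, not published vendor specs.
    """
    return (z ** 2) * disparity_err_px / (focal_px * baseline_m)

# Error roughly quadruples every time the range doubles.
for z in (0.5, 1.0, 2.0, 4.0):
    print(f"{z:.1f} m -> ~{depth_error(z) * 1000:.1f} mm")
```

With these assumed numbers the model lands in the low single-digit millimeters at 1m – the same ballpark as DotProduct’s claim – and degrades quickly beyond a couple of meters.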

Occipital Structure Sensor – Occipital (@occipital) is an extremely interesting company based in Boulder and San Francisco, filled with amazing computer vision expertise.  After cutting their teeth on computer vision applications for generating panoramas on Apple devices, they have bridged into a complete hardware and software stack for 3D data capture and model creation.  Occipital counts the Foundry Group (@foundrygroup) as one of its investors (having invested roughly $7M into Occipital in late 2011).  Occipital completed a very successful KickStarter campaign for its Structure Sensor, raising nearly $1.3M.

Occipital Structure Sensor

The Structure Sensor is a PrimeSense powered device which is officially supported on later generation Apple iPad devices.  What is compelling is Occipital’s approach to create an entire developer ecosystem around this device – no doubt building on the Skanect (@Skanect) technology they acquired from ManCTL in June of 2013.  Skanect was one of the best third party applications to implement the Microsoft KinectFusion technology (allowing for real time 3D scene reconstruction from depth cameras).  If Apple does in fact buy PrimeSense, that is potentially problematic for Occipital’s current development direction if Apple has aspirations for embedding this technology in mobile devices (as opposed to Apple TV).  Even if Apple did want to embed the technology in its iDevices, Occipital would then seem to become an immediately interesting acquisition target (in one swoop you get hardware, and most importantly the computer vision software expertise).  Given the depth of talent at Occipital, I’m sure things are going to work out just fine.

Sense™ 3D Scanner by 3D Systems – This is the newest 3D scanner entrant in this space (announced a few weeks ago) delivered by 3D Systems (@3dsystemscorp), which acquired my former company, Geomagic.  The Sense uses the new PrimeSense Carmine sensor – a further evolution of the PrimeSense depth camera technology, allowing for greater depth accuracy across more pixels in the field (and ultimately reconstruction quality).  PrimeSense has a case study on the Sense.

What Are Competitive/Replacement Technologies for PrimeSense Depth Sensors?

In my opinion, the closest competitor in the market today to PrimeSense is a company called SoftKinetic (@softkinetic), with their line of DepthSense cameras, sensors, middleware and software.

SoftKinetic

On paper, the functional specifications of these devices stack up well against the PrimeSense reference designs.  Unlike PrimeSense, SoftKinetic sells complete cameras, as well as modules and associated software and middleware.  SoftKinetic uses a time of flight (ToF) approach to capture depth data (a different approach from PrimeSense’s structured light).  SoftKinetic has provided middleware to Sony for the PS4, giving third party developers a layer for creating gesture tracking applications using the PlayStation(R)Camera.  SoftKinetic announced a similar middleware deal with Intel, to accelerate perceptual computing, in the early summer of 2013.
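The time-of-flight principle itself is simple: light travels out and back, so distance is half the round trip. Continuous-wave ToF sensors recover that round trip from the phase shift of a modulated signal, which also caps their unambiguous range. A sketch of the generic math (not SoftKinetic’s implementation; the 30 MHz modulation frequency is an assumed, typical order of magnitude):

```python
import math

C = 299792458.0  # speed of light, m/s

def tof_distance(phase_rad, mod_freq_hz):
    """Distance from the measured phase shift of a CW-modulated signal.
    The light covers the distance twice (out and back), so
    d = c * phase / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

def max_unambiguous_range(mod_freq_hz):
    """Phase wraps at 2*pi, so depth aliases beyond c / (2 * f_mod)."""
    return C / (2.0 * mod_freq_hz)

f_mod = 30e6  # assumed 30 MHz modulation
print(round(max_unambiguous_range(f_mod), 2))   # ambiguity range in meters
print(round(tof_distance(math.pi, f_mod), 2))   # a pi phase shift: half of it
```

Raising the modulation frequency improves precision but shrinks the unambiguous range, one of the core trade-offs ToF designers juggle against structured-light systems.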

There are other companies in the industrial imaging space (who presently develop machine vision cameras or other time of flight scanners) which could provide consumer devices if they chose to (e.g. such as PMD Technologies in Germany).

I believe the true replacement technology, at least in the consumer space, for 3D data acquisition and reconstruction will come from light field cameras as a class providing range data (i.e. z depth), and not necessarily from active imaging solutions.  See my thoughts on this below.

Predictions for 2014 and Beyond

Early in 2013, when I was asked by my friends at Develop3D to predict what 2013 would bring, I said:

In 2013 we will move through the tipping point of the create/modify/make ecosystem.

Low cost 3D content acquisition, combined with simple, powerful tools will create the 3D content pipeline required for more mainstream 3D printing adoption.  

Sensors, like the Microsoft Kinect, the LeapMotion device, and [Geomagic, now 3D Systems’] Sensable haptic devices, will unlock new interaction paradigms with reality, once digitized.  

Despite the innovation, intellectual property concerns will abound, as we are at the dawn of the next ‘Napster’ era, this one for 3D content.

I believe much of that prediction has come/is coming true.

For 2014 I believe we will see the following macro-level trends in the 3D capture space:

  • Expansion of light field cameras – Continued acceleration in 3D model and scene reconstruction (both small and large scale) using depth sense and time of flight cameras, with an expansion into light field cameras (e.g. Lytro (@Lytro) and Pelican Imaging (@pelicanimaging)).
  • Deprecation of 3D capture hardware in lieu of solutions – We will see many companies which had been focusing mostly on data capture pivot more towards a vertical applications stack, deprecating the 3D capture hardware (as it becomes more and more ubiquitous).
  • More contraction in the field due to M&A – Continued contraction of players in the capture/modify/make ecosystem, with established players in the commercial 3D printing and scanning market moving into the consumer space (e.g. Stratasys acquiring Makerbot, getting both a 3D scanner and a huge consumer ecosystem with Thingiverse) and with both ends of the market collapsing inward to offer more complete solutions from capture to print (e.g. 3D printing companies buying 3D scanner hardware and software companies, and vice versa).
  • Growing open source alternatives – Redoubled effort on community sourced 3D reconstruction libraries and application software (e.g. the Point Cloud Library and MeshLab), with perhaps even an attempt made to commercialize these offerings (like the Red Hat model).
  • 3D Sensors everywhere – Starting in 2014, but really accelerating in the years that follow, 3D sensors everywhere (phones, augmented reality glasses, in our cars) which will constantly capture, record and report depth data – the beginnings of a crowd sourced 3D world model.

The Use of Light Field Cameras and 3D Data Acquisition and Reconstruction Will Explode

While the use of light field cameras to create 3D reconstructions is just in its infancy – much as PrimeSense technology was originally designed for an interaction paradigm, not for capturing depth data – I can see (no pun intended) this one coming.  Light field cameras have the strong benefit of being a passive approach to 3D data acquisition (like photogrammetry).  For what is possible in depth map creation from these types of camera systems, check out this marketing video from Pelican Imaging (note the 3D Systems Cube 3D printer) and a more technical one here.

Pelican Imaging Sensor

Image from Pelican Imaging.
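To make the light field idea concrete: neighboring lenslets (or array cameras) see the same scene point shifted by a disparity, and depth falls out of z = f·b/disparity, just as in stereo. A toy sketch of passive disparity recovery via brute-force block matching – illustrative numbers, not Pelican Imaging’s algorithm:

```python
import numpy as np

def match_disparity(left, right, patch=3, max_disp=10):
    """Brute-force 1D block matching: for each pixel in `left`, find the
    shift into `right` that minimizes sum-of-absolute-differences."""
    disp = np.zeros(len(left), dtype=int)
    for x in range(patch, len(left) - patch):
        window = left[x - patch:x + patch + 1]
        best_d, best_err = 0, np.inf
        for d in range(0, min(max_disp, x - patch) + 1):
            cand = right[x - d - patch:x - d + patch + 1]
            err = np.abs(window - cand).sum()
            if err < best_err:
                best_d, best_err = d, err
        disp[x] = best_d
    return disp

# Synthetic pair: the right image is the left shifted by 4 pixels.
rng = np.random.default_rng(0)
left = rng.random(64)
true_shift = 4
right = np.roll(left, -true_shift)

disp = match_disparity(left, right)
print(int(np.median(disp[8:-8])))       # recovers the 4-pixel shift

# Depth from disparity, with an assumed focal length and lenslet baseline:
f_px, baseline_m = 500.0, 0.004
print(round(f_px * baseline_m / 4, 3))  # meters
```

Real array cameras do this per pixel with sub-pixel refinement and occlusion handling, but the passive nature of the measurement – no projected pattern, no emitted light – is what makes the approach so attractive for mobile devices.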

I will have a separate post looking in more depth at light field cameras as a class including Lytro’s recent new $40M round of funding and the addition of North Bridge.  I believe, after refinement, that they ultimately become a strong solution for consumer mobile devices for 3D content capture because of their size, power needs, passive approach, etc.  In the interim, if you have interest in this space you should read the Pelican Imaging presentation recently made at SIGGRAPH Asia on the PiCam and reproduced in full at the Pelican Imaging site.  Fast forward to pages 10-12 in this technical presentation for an example of using the Pelican Imaging camera to produce a depth map which is then surfaced.

What could ultimately be game changing is finding updated and refined depth sense technology embedded and delivered directly with the next series of smartphones and augmented reality devices (e.g. Google Glass).  In that world, everyone has a 3D depth sensor, everyone is capturing data, and the potential is limitless for applications which can harvest and act upon that data once captured.

Let the era of crowd sourced world 3D data capture begin!

(. . . but wait, who owns that 3D world database once it is created? . . .)

This article was originally published on DEVELOP3D on November 18th, 2013; it has been modified since that original posting.

littleBits Raises An Additional $11.1M in Series B Funding

littleBits, the New York City based open hardware startup, has raised an $11.1M Series B round of funding led by True Ventures (@trueventures) and Foundry Group (@foundrygroup), joined by new investors Two Sigma Ventures (which also just led an $11.5M investment in Rethink Robotics, and is invested in Floored (@Floored), which I have blogged about before) and Vegas Tech Fund (@VegasTechFund).  Returning investors Khosla Ventures (@vkhosla), Mena Ventures, Neoteny Labs, O’Reilly AlphaTech (@OATV), Lerer Ventures (also invested in Floored) (@lererventures) and new and returning angel investors also participated.  littleBits had previously raised $3.65M in Series A funding and $850K in seed funding, bringing its total raised to date to over $15M.

littleBits’ mission is to “turn everyone into an inventor by making electronics accessible as a material.”  littleBits makes “Bits modules” that snap together magnetically, making it easy for children and adults to build simple circuits and inventive projects in seconds.  littleBits and its CEO Ayah Bdeir (@ayahbdeir) have won numerous awards and are viewed as leaders in the maker movement.

I previously profiled littleBits in my two part blog series in November and December 2012 examining the intersection of the maker movement with the “Minecraft generation” in my own house – as I try to get my own kids to focus more on the world of atoms instead of bits.  You can find those two posts here: (1) http://3dsolver.com/the-makers-movement-intersects-with-the-minecraft-generation/ and (2) http://3dsolver.com/the-maker-in-the-minecraft-generation-part-duex/

Congratulations to littleBits!

CrunchBase: Using Crowdsourced Data for Commercial Purposes

For those of you who don’t know about CrunchBase (@crunchbase), it is a crowdsourced database of information about startups, people and investors.  CrunchBase describes itself as “the free database of technology companies, people, and investors that anyone can edit. Our mission is to make information about the startup world available to everyone and maintainable by anyone.”  AOL acquired CrunchBase and TechCrunch in 2010 from Michael Arrington.

CrunchBase has been very successful in sourcing data, and has established strong relationships with many of the leading venture capital firms, who regularly share data about their portfolio companies (fundraising, people, etc.).  CrunchBase has even developed an Excel Data Exporter, in addition to its API access, to allow for the broader distribution of the information contained in its databases.
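To give a flavor of what building on that API access looks like in practice, here is a hedged sketch of the kind of aggregation an application might perform over CrunchBase-style records. The payload shape and field names below are illustrative assumptions, not CrunchBase’s actual schema:

```python
import json

# Illustrative payload, loosely modeled on a crowdsourced company record;
# the field names are assumptions, not the actual CrunchBase schema.
payload = json.loads("""
{
  "name": "ExampleCo",
  "funding_rounds": [
    {"round_code": "seed", "raised_amount": 850000,   "raised_currency_code": "USD"},
    {"round_code": "a",    "raised_amount": 3650000,  "raised_currency_code": "USD"},
    {"round_code": "b",    "raised_amount": 11100000, "raised_currency_code": "USD"}
  ]
}
""")

def total_raised(company, currency="USD"):
    """Sum a company's funding rounds in one currency."""
    return sum(r["raised_amount"]
               for r in company.get("funding_rounds", [])
               if r.get("raised_currency_code") == currency)

print(f'{payload["name"]} raised ${total_raised(payload):,}')
```

An application doing this at scale quickly accumulates its own derived database of the underlying content – which is exactly why the licensing terms around that content matter so much.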

The current CrunchBase Terms of Service, Privacy Policy, and Licensing Policy govern the use and access of CrunchBase data.

As of the date of this blog, the Licensing Policy provides that:

We permit anyone to republish our content in accordance with this licensing policy.

We provide CrunchBase’s content under the Creative Commons Attribution License [CC-BY]. Our content includes structured data, overviews and media files associated with companies and people. Our schema, and documentation are also offered under the Creative Commons license.

We ask that API users link back to CrunchBase from any pages that use CrunchBase data. We want to make sure that everyone is able to find the source of the content to keep the service up-to-date and accurate.

This Licensing Policy may be updated from time to time as our services change and grow. If you have any questions about this policy please contact us at licensing@crunchbase.com.

CrunchBase provides a specific licensing contract for services that charge for the use of their data. Contact licensing@crunchbase.com

The CrunchBase Terms of Service provide further restrictions on how the API may be used:

We provide access to portions of the Site and Service through an API thereby enabling people to build applications on top of the CrunchBase platform. For purposes of this Terms of Service, any use of the API constitutes use of the Site and Service. You agree only to use the API as outlined in the documentation provided by us on the Site.

On any Web page or Application where you display CrunchBase company or people results, each page must include a hypertext link to the appropriate company or person profile Web page on CrunchBase.com. Additional CrunchBase Branding Requirements can be found on the following Web page: http://info.crunchbase.com/docs/licensing-policy/. CrunchBase may grant exceptions on a case-by-case basis. Contact us at licensing@crunchbase.com for special branding requests, which must be approved in advance in writing.

CrunchBase will utilize commercially reasonable efforts to provide the CrunchBase API on a 24/7 basis but it shall not be responsible for any disruption, regardless of length. Furthermore, CrunchBase shall not be liable for losses or damages you may incur due to any errors or omissions in any CrunchBase Content, or due to your inability to access data due to disruption of the CrunchBase API.

CrunchBase reserves the right to continually review and evaluate all uses of the API, including those that appear more competitive than complementary in nature.

CrunchBase provides a specific licensing contract for services that charge for the use of their data. Contact licensing@crunchbase.com

CrunchBase reserves the right in its sole discretion (for any reason or for no reason) and at anytime without notice to You to change, suspend or discontinue the CrunchBase API and/or suspend or terminate your rights under these General Terms of Service to access, use and/or display the CrunchBase API, Brand Features and any CrunchBase content.

I previously reviewed various licensing schemes, including the Creative Commons scheme, in an earlier two part blog series, The Call for a Harmonized Community License for 3D Content, where I proposed a harmonized “community” type license for content which could be produced on 3D printers (arguing that the existing license types do not “fit” content which can mix copyright, patent, trade dress and other rights).

For those of you who are not aware, the CC-BY license type is a very broad license grant – providing for the “maximum dissemination of licensed materials”.  You can find the existing CC license types here, and specifically the summary of the CC-BY license.

CrunchBase was careful to make clear that uploaded material which it links to or provides along with the company information might be licensed differently (e.g. not under the CC-BY license), and specifically made clear that:

The graphical layout of the CrunchBase website and other elements of the Site, Content or Service not described above are the copyright of CrunchBase, and may not be reproduced without permission.

 

Enter Pro Populi and People+

Pro Populi, a small three person startup, has been developing applications utilizing the CrunchBase dataset, including an app called People+.  Pro Populi has apparently been accessing the CrunchBase data (originally via the API, but apparently also through other means) to populate its own database of content, and then serving that content (and other content) from its applications.

Wired (@Wired) reporter David Kravets (@dmkravets) broke the story on November 5th in a story titled AOL Smacks Startup for Using CrunchBase Content It Gave Away.  If you click through the link to the original Wired article, you can review some of the correspondence gathered by David Kravets in support of the story.

Pro Populi was served with a cease and desist letter from AOL (the parent company of CrunchBase).  Quoting from the Wired article, an AOL Assistant General Counsel apparently sent the following in an email to the Pro Populi CEO after a meeting with the President of CrunchBase last Friday:

On the chance that you may have misinterpreted Matt’s willingness to discuss the matter with you last week, and our reference to this as a ‘request,’ let me make clear, in more formal language, that we demand that People+ immediately cease and desist from its current violation and infringement of AOL’s/TechCrunch’s proprietary rights and other rights to CrunchBase, by removing the CrunchBase content from your People+ product and by ceasing any other use of CrunchBase-provided content.

But if CrunchBase didn’t want to allow others to use the data, why does it license its content under the CC-BY scheme?

Hopefully CrunchBase and Pro Populi can come to an agreement which works for both of them and their interests.

While CrunchBase can likely legitimately restrict access to its content via its API (licensed separately, not covered by the CC-BY scheme, and with separate terms), once content covered by the CC-BY license has been accessed and copied in a manner consistent with CC-BY, can CrunchBase assert rights to “get it back”?  That seems an incredibly tough row to hoe, and inconsistent with the very broad terms of the CC-BY license grant.  Worse yet for CrunchBase, according to the Wired article, the General Counsel of the Creative Commons Corporation doesn’t think so either.  The Electronic Frontier Foundation represents Pro Populi.

Oops.

CrunchBase could have stayed within the CC license scheme and chosen a different CC license type for the underlying data – one which specifically prohibits the use of the content for commercial purposes, prohibits the creation of derivative works, and requires specific attribution: CC BY-NC-ND.  On a case by case basis CrunchBase could then have authorized or waived the restrictions contained in the license.  CrunchBase could also have changed the license grant for content accessed via the API.  This is solvable.

For an interesting view of this dispute from TechCrunch (a sister company to CrunchBase), see their take on the dispute.

 

Impact On Other “Hybrid” Commercial Use of Crowdsourced Data?

While CrunchBase and Pro Populi resolve their dispute, I am most interested in thinking about how this potentially impacts other crowdsourced data platforms and the applications built on top of them.  It is an interesting dilemma: how can/should crowdsourced data platforms commercially benefit from their efforts – including restricting potential competitors from copying the data for their own purposes, commercial or otherwise?  Sourcing, filtering, vetting, editing and organizing thousands or millions of data points is a complex undertaking.  It takes time, effort, people and ultimately money.  Unless that vetting is also done from a crowdsourced perspective (or mostly so – like the Wikipedia model), allowing potential competitors (commercial or otherwise) to copy that structured content is a potential death knell.  In that instance, openness needs to be balanced against a commercial purpose.

CrunchBase President Matt Kaufmann blogged about the CrunchBase dispute with Pro Populi and the EFF.  He essentially acknowledges the challenge of openness in the context of trying to build a commercial business – but reaffirms his belief that CrunchBase’s current licensing terms restrict the use of its data (via the API or otherwise) for commercial purposes.

[T]o invest in CrunchBase’s constant improvement requires building a business around CrunchBase in a way that successfully takes into account our terms of service and our openness. We are confident that this is possible, and that’s what we are on the path to figuring out.

This is of course the challenge – adding enough value in the stack above the “open” content that it can be commercialized.  As an example, take a look at MapBox – a cloud-based platform which allows developers to embed geo rich content into their web and mobile offerings.  They recently took $10M from Foundry Group, an investment I blogged about in MapBox, Geo Software Platform, Maps $10M from Foundry Group.

MapBox relies on data sourced from OpenStreetMap, the “free wiki world map.”  OpenStreetMap licenses its content in two ways – the underlying data is licensed as open data under the Open Data Commons Open Database License (ODbL), while the cartography and documentation are licensed under the CC BY-SA license.  BTW, Kevin Scofield likes the MapBox interface too.

It would be a difficult commercial business model indeed if MapBox had to go through the effort of building the infrastructure to source, collect and organize all kinds of mapping data – data open for others to use – while also building an application layer on top of it.  Instead, MapBox focuses on creating a great platform layer on top of the otherwise “open” content (others are free to do so as well).  This model works because there is enough community interest to support an undertaking like OpenStreetMap to begin with.  Can the same be said for the data underlying CrunchBase?

MapBox, Geo Software Platform, Maps $10M from Foundry Group

It is great to see continuing venture capital and public market interest in areas such as data acquisition, unmanned aerial systems, manufacturing, AEC and GIS solutions providers.

MapBox (@MapBox) announced yesterday that it had taken a Series A investment of $10M from Foundry Group (@FoundryGroup).  After three years of bootstrapping the MapBox business, in the words of Eric Gundersen (@ericg), “funding lets us plan for years of building the future of geo software, from the ground up.”

MapBox is a cloud-based platform which allows developers to embed geo rich content into their web and mobile offerings.  MapBox sources its mapping data from OpenStreetMap, keeping its operating costs low and avoiding ties to proprietary back end mapping databases.  It will be interesting to see how MapBox navigates the GIS/geo software playing field over the coming years – but more developer choice, relying on crowd-sourced mapping data, could be quite transformational indeed.

Foundry Group continues its string of investments in the technical solutions space.  It was part of a team which invested $30M into Chris Anderson’s (@chr1sa) unmanned aerial systems company 3D Robotics (@3DRobotics) a few weeks ago, which I blogged about here, and was also invested in Makerbot (@Makerbot), which was acquired by the 3D printing company Stratasys (@Stratasys) in mid-August 2013 for $403M (plus up to $201M in earn-outs).  Seth Levine (@sether) explained some of Foundry Group’s rationale for the MapBox investment here.

Foundry Group is also currently invested in Occipital (@Occipital), which has recently developed a 3D capture device which connects to an iPad, called the Structure Sensor.  Occipital currently has a KickStarter campaign going for the Structure Sensor, and as of today they are only a few thousand dollars shy of the $1M mark.  In June 2013 Occipital acquired ManCTL, adding a strong team to an already deep computer vision bench – in this case one that had the chops to do real time 3D scene reconstruction from PrimeSense powered (a/k/a Microsoft Kinect) devices.  Foundry Group put $8M into Occipital in August of 2011.

I am very excited to ultimately see what comes from both MapBox and Occipital!

It will be interesting to see whether Andreessen Horowitz (@a16z) looks for a big data, geo-centric sector investment as well.