Category Archives: Venture Capital

Autodesk REAL 2016 Startup Competition

I had the opportunity to attend the Autodesk REAL 2016 event which is currently taking place at Fort Mason over March 8th and March 9th.   This event focuses on “reality computing” – the ecosystem of reality capture, modeling tools, computational solutions and outputs (whether fully digital workflows or a process that results in a physical object).

The event kicked off with the first Autodesk REAL Deal pitch competition.  Jesse Devitte from Borealis Ventures served as the emcee for this event.  A VC in his own right (as the Managing Director and Co-founder of Borealis), Jesse understands the technical computing space and has a great track record of backing companies that impact the built environment.  The VC panel judging the pitches consisted of: (1) Trae Vassalo, an independent investor/consultant who was previously a general partner of Kleiner Perkins Caufield & Byers; (2) Sven Strohband, CTO of Khosla Ventures; and (3) Shahin Farshchi, a Partner with Lux Capital.

The winner of the competition will be announced, in conjunction with a VC panel discussion, at the end of the first day’s events starting at 5:00pm on the REAL Live Stage, Herbst Pavilion.

[Note: These were typed in near real time while watching the presenters and their interactions with the REAL Deal VC panelists – my apologies in advance if they don’t flow well, etc.  I’ve tried to be as accurate as possible.]

Lucid VR

Lucid VR was the first presenting company – pitching a 3D stereoscopic camera built specifically for VR.  Han Jin, the CEO and co-founder, presented on behalf of the company.  He started by explaining that 16M headsets will be shipped this year for VR consumption – however – VR content is still incredibly difficult to produce.  It is a “journey of pain” spanning time, money, huge data sets, and production and sharing difficulties.  Lucid VR has created an integrated hardware device, called the LucidCam, that “captures an experience” and simplifies the production and publication of VR content, which can then be consumed on all VR headsets.  Han pitched the vision of combining multiple LucidCam devices to support immersive 360° VR and real-time VR live streaming.  Lucid VR hit its $100K crowdfunding campaign goal in November of 2015.

Panel Questions

Sven initially asked a two-part question: (1) which market is the company trying to attack first – consumer or enterprise; and (2) what is the technical differentiation for the hardware device (multi-camera setups have been around for a while)?   Han said that the initial use cases seem to be focusing on training applications – so more of an enterprise setup.  He explained that while dual camera setups have been around, they are complex, multi-part, mechanically driven solutions, whereas Lucid VR leverages a GPU-based solution to complete on-device processing for real-time capture and playback – a silicon rather than mechanical approach.  Trae then asked about market timing – how will you get to market, what will be the pricing, etc.  Han said that they planned to ship at the end of the year, and that as of right now they were primarily working with consumer retailers for content creation.  They expected a GTM price point of between $300 and $400 for their capture device.   Trae’s follow-up – even if you capture and create the content, isn’t one of the gating factors going to be that consumers will not have the appropriate hardware/software locally to experience it?

Minds Mechanical

The next presentation was from Minds Mechanical, led by its CEO, Jacob Hockett.

Jacob explained that Minds Mechanical started as a solutions company – integrating various hardware and software to support the product development needs (primarily by providing inspection and compliance services) of some of the largest Tier 1 manufacturers in the world.   While growing and developing this services business they realized that they had identified a generalized challenge – and were working to disrupt the metrology (as opposed to meteorology, as Jacob jokingly pointed out) space.

Jacob explained that current metrology software is very expensive and is often optimized for and paired with specific hardware.  Further compounding the problem, various third-party metrology software solutions often give different results on the same part, even when acting on the same data set.   The expense of adding new seats, combined with potentially incompatible results across third-party solutions, results in limited metrology information sharing within an organization.

They have developed a cloud-based solution called Quality to help solve these challenges – Jacob suggested that we think of it as a PLM-type solution for the manufacturing and inspection value chain, tying inspection data back into the design and build process.  Jacob claims that Quality is the first truly cross-platform solution available in the industry.

Given their existing customer relationships, they were targeting the aerospace, defense and MRO markets initially, to be followed by medical and automotive later.  They are actively transitioning their business from a solutions business to a software company and were seeking a $700K investment to grow the team. [Note:  Jacob was previously a product manager and AE at Verisurf Software, one of the market-leading metrology software applications, prior to starting Minds Mechanical.]  The lack of modern, easy-to-use tools is a barrier for the industry, and Minds Mechanical is going to try and change the entire market.

Panel Questions

Trae kicked off the questions – asking Jacob to identify who the buyer is within an organization and what the driver for purchasing is (expansion to new opportunities, cost savings, etc.).  Jacob said that the buy decision was mostly a cost savings opportunity.  Their pricing is low enough that it can be a credit card purchase, avoiding internal PO and purchase approval processes entirely.  Trae then followed up by asking how the data was originally captured – Jacob explained that they abstract data from the various third-party metrology applications which might be used in an account and provide a publication and analytics layer on top of those content creation tools.   Sven then asked about data ownership/regulation compliance for a SaaS solution – was it a barrier to purchase?   Jacob said that they understand the challenges of hosting/acting upon manufacturing data in the cloud, but that the reality was that for certain manufacturers and certain types of projects it just “wasn’t going to happen”.  Trae then asked whether they were working on a locally hosted solution for those types of requirements, and Jacob said yes they were.  Shahin from Lux then asked who they were selling to – was it the OEMs (trying to force them to mandate it within the value chain) or the actual supply chain participants?  Jacob said that they will target the suppliers first, and not try to force the OEMs to demand use within their supply chains, focusing initially on a bottom-up sales approach.

AREVO Labs

The next presentation was from Hemant Bheda, the CEO and founder of AREVO Labs.  AREVO’s mission was to leverage additive manufacturing technologies to produce light and strong composite parts to replace metal parts in production applications.  Hemant explained that they have ten pending patent applications, and that to execute on this vision they need: (1) high performance materials for production, (2) 3D printing software for production parts; and (3) a scalable manufacturing platform.

AREVO has created a continuous carbon fiber composite material which is five times as strong as titanium – unlocked by their proprietary software, which weaves this material together in “true” 3D space (rather than the 2.5D which they claim existing FDM-based printers use).   AREVO claims to have transported the industry from 2.5D to true 3D by optimizing the tool path/material deposition to generate the best parts – integrating a proprietary solution to estimate post-production part strength, then optimizing the tool path for the lowest cost, lowest time, highest strength solution.

Their solution is based around a robotic-arm-based manufacturing cell – and could be used for small to large parts (up to 2 meters in size).  Target markets range from medical (single-use applications) and aerospace/defense (lightweight structural solutions) to on-demand industrial spare parts and oil & gas applications.  They have current customer engagements with Northrop, Airbus, Bombardier, J&J and Schlumberger.

[FWIW, you can see an earlier article on them at 3DPrint.com here, as well as a video of their process.  MarkForged is obviously also in the market and utilizes continuous carbon fiber as part of an AM process.  One of the slides in the AREVO Labs deck which was quickly clicked through was a comparison of the two processes – it would be interesting to learn more about that differentiation indeed!]

Hemant explained that they were currently seeking a Series A raise of $8M.

Panel Questions

Shahin kicked off the questions for the panel – asking whether customers were primarily interested in purchasing parts produced from the technology or whether they wanted to buy the technology so they could produce their own.  Hemant said that the answer is both – some want parts produced for them, others want the tech; it depends on what their anticipated needs are over time.  Sven asked Hemant how he thought the market would settle out over time between continuous fiber (as with their solution) versus chopped fiber.   Hemant said that they view the two technologies as complementary – in the metals replacement market, continuous fiber is the solution for many higher value, higher materials properties use cases, but both will exist in the market.

UNYQ

The final presentation of the day during the REAL Deal pitch competition came from UNYQ – they had previously presented at the REAL 2015 event.   Eythor Bender, the CEO, presented on behalf of UNYQ.  UNYQ develops personalized prosthetic and orthotic devices, leveraging additive manufacturing for production.  In 2016 they will be introducing the UNYQ Scoliosis Brace, having licensed the technology from 3D Systems, who are also investors.  According to Crunchbase data UNYQ has raised right around $2.5M across three funding rounds, and they expect to be profitable sometime in 2017.

UNYQ has been working on a platform for 3D printing manufacturing, personalization and data integration – resulting in devices that are not only personalized using AM for production, but can also integrate various sensors so that they become IoT nodes reporting back various streams of data (performance, how long the device has been worn, etc.) which can be shared with clinicians.   UNYQ uses a photogrammetry-based app to capture shape data and then leverages Autodesk technology to compute and mesh a solution.  The information is captured in clinics and the devices are primarily produced on FDM printers – going from photos to personalized products in less than four weeks.  They generated roughly $500K in revenues in 2015, starting with their prosthetic covers, and have a GTM plan for their scoliosis offering which would have them generate $1M in sales within the first year after launch in May 2016.

UNYQ is currently seeking a $4M Series Seed round.

Panel Questions

Trae asked how UNYQ could accelerate this into the market – given the market need, why wasn’t adoption happening faster?   Eythor said that in 2014/15 they had really been focusing on platform and partnership development – it was only at the very end of 2015 that they started building a direct sales team. Given that there are only roughly 2,000 clinics in the US, it was a known market and they had a plan of attack. The limited number of clinics, plus the opportunity to reach consumers directly via social media and other d2c marketing efforts, will only accelerate growth in 2016 and beyond.  Trae followed up by asking – where is the resistance to adoption in the market (is it the middleman or something else that is bogging things down)?  Eythor said that it is more a process resistance (it hasn’t been done this way before, and with manual labor) than it is with the clinics themselves.  Sven then asked about data comparing the treatment efficacy and patient outcomes of the UNYQ devices versus the “traditional” methods of treatment.  Eythor said that while the sample set was limited, one of their strategic advisors had compared their solutions to those traditionally produced and found that the UNYQ offering was at least as good as what is in the market today – but with an absolutely clear preference on the patient side.  The final question came from Shahin at Lux, who asked whether there was market conflict in that the clinics (which are the primary way UNYQ gets to market) have a somewhat vested interest in continuing to do things the old way (potentially higher revenues/margins, lots of crafters involved in that value chain, reluctance to change, etc.).  Eythor explained that they were focusing only on the 10-20% of the market that are progressive and landing/winning them, and then over time pulling the rest of the market forward.


Mapillary Raises $8M – Crowdsourced Street Photos

Crowdsourced Street Maps – Mapillary Raises $8M

Mapillary, a Malmö, Sweden based company that is building a crowdsourced street-level photo mapping service, has raised $8M in its Series A fundraising round, led by Atomico, with participation by Sequoia Capital, LDV Capital and Playfair Capital.   Some have commented that Mapillary wants to compete with Google Street View using crowdsourced, and then geo-located, photos (and presumably video and other assets over time).  Mapillary uses Mapbox as its base mapping platform.  Mapbox itself sources its underlying street mapping data from OpenStreetMap, as well as satellite imagery, terrain and places information from other commercial sources – you can see the full list here.   Very interesting to see that Mapillary has a relationship with ESRI – such that ESRI ArcGIS users can access Mapillary crowdsourced photo data directly via ArcGIS Online.

I previously wrote about MapBox and OpenStreetMap in October 2013 when it closed its initial $10M Series A round led by Foundry Group.  You can see that initial blog post here.  MapBox subsequently raised a $52.6M Series B round, led by DFJ, in June of 2015.  I then examined the intersection of crowdsourced data collection and commercial use in the context of the Crunchbase dispute with Pro Populi and contrasted that with the MapBox and OpenStreetMap relationship.

I am fascinated by the opportunities that are unlocked by the continuing improvement in mobile imaging sensors.  The devices themselves are becoming robust enough for local computer vision processing (rather than sending data to the cloud), and we are perhaps a generation away (IMHO) from having an entirely different class of sensors to capture data from. That, combined with significant improvements in location services, makes it possible to explore some very interesting business and data services in the future.

In late 2013 I predicted that, in time, mobile 3D capture devices (and primarily passive ones) would ultimately be used to capture, and tie together a crowd sourced world model of 3D data.

What could ultimately be game changing is if we find updated and refined depth sense technology embedded and delivered directly with the next series of smartphones and augmented reality devices. . .  In that world, everyone has a 3D depth sensor, everyone is capturing data, and the potentials are limitless for applications which can harvest and act upon that data once captured.

Let the era of crowd sourced world 3D data capture begin!

It makes absolute sense that the place to start along this journey is 2D and video imagery, which can be supplemented (and ultimately supplanted) over time by richer sources of data – leveraging an infrastructure that has already been built.  We still have thorny and interesting intellectual property implications to consider (Think Before You Scan That Building) – but regardless – bravo Mapillary! Bravo indeed!


Paracosm Seeking CV/CG Engineers and C++ Developers!

If you are a computer vision, computer graphics or C++ developer and are looking for a new opportunity with an exciting venture backed company — look no further.   Paracosm is developing an exciting software platform leveraging the newest wave of 3D hardware capture devices in order to build a perceptual map for devices (and humans!) to navigate within.

Job description provided by Paracosm follows — feel free to reach out to Amir (email below) or to me and I will pass your c.v. along:

———————-

About Paracosm:
Paracosm is solving machine perception by generating 3D maps of every interior space on earth.
We are developing a large-scale 3D reconstruction and perception platform that will enable robots and augmented reality apps to fully interact with their environment. You can see some of our fun demos here: vimeo.com/paracosm3d/demo-reel (pass: MINDBLOWN) and here: paracosm.io/nvidia
We are a venture-backed startup based in Gainesville, FL, and were original development partners on Google’s Project Tango. We are currently working closely with companies like iRobot to commercialize our technology.
Job Role:
We are looking for senior C++ developers, computer-vision engineers, and computer graphics engineers to help us implement our next-gen 3D-reconstruction algorithms. Our algorithms sit at the intersection of SLAM+Computer Vision+Computer Graphics.
As part of this sweet gig, you’ll be working alongside a team of Computer Vision PhDs to:
* design & implement & test cloud-based 3D-reconstruction algorithms
* develop real-time front-end interfaces designed to run on tablets (Google Tango, Intel RealSense) and AR headsets
* experiment with cutting edge machine-learning techniques to perform object segmentation and detection
Skills:
Proficiency with C++ is pretty critical – ideally you’ll be experienced enough to even teach us a few tricks! Familiarity with complex algorithms is a huge plus, ideally in one of the following categories:
– Surface reconstruction + meshing
– 3D dense reconstruction from depth cameras
– SLAM and global optimization techniques
– Visual odometry and sensor fusion
– Localization and place recognition
– Perception: Object segmentation and recognition
Work Environment:
Teamwork, collaboration and exploration of risky new ideas and concepts is a core part of our culture. We all work closely together to implement new approaches that push the state of the art.
We have fresh, healthy & delicious lunch catered every day by our personal chef, a kitchen full of snacks, and a backlog of crazy hard problems we need solved.
We actively encourage people on our team to publish their work and present at conferences (we also offer full stipends for attending 2 conferences each year).
Did I mention we’re big on the team work thing? The entire team has significant input into company strategy and product direction, and everyone’s opinion and voice is valued.
Work will take place at our offices in Gainesville, FL
Contact:
If you are interested, please email the CEO directly: Amir Rubin, amir@paracosm.io

 


The New Era of 3D Printing – Introducing Carbon3D

When Joe DeSimone takes the stage at TED2015 in Vancouver tonight, his opening gambit will publicly introduce the world to Carbon3D, a stealth, Sequoia-backed venture company whose technology and impact might ultimately be as significant as Chuck Hull’s original invention of the stereolithography process for additive manufacturing.

The Carbon3D founding team of Joe DeSimone, Alex Ermoshkin, Ed Samulski and Phil DeSimone originally started the company as EIPI Systems in Chapel Hill, NC in mid-2013.  Along the way they took investment from Sequoia and others and have been joined by an incredible group of leaders from within and outside the Bay Area.

Carbon3D Tweet

Carbon3D and their technology stack will ultimately transform the industry in several ways – driving AM as a method of manufacture into areas typically reserved for injection molding:

  • Speed – their process currently allows them to print at 50x – 150x the speed of other methods, so fast that “little” problems like heat need to be managed. It is a sight to behold.
  • Materials – given that the founders have incredible chemistry backgrounds, it shouldn’t be surprising that they are focusing as much on materials, and the science behind them, as their device. The result?  Incredible engineered materials with material strengths simply not possible with existing techniques.
  • Surface Finish – imagine if you could produce surface finishes approaching those of injection molding without post-processing?

While the Carbon3D team continues to develop their technology and expand beyond the pilot phase, the future sure does look promising.


3D Printing Talk at UNCW CIE

I was fortunate yesterday to spend some time with a great crowd at the UNCW Center for Innovation and Entrepreneurship to talk about 3D Printing — sharing the time with an awesome team of presenters from GE Hitachi Nuclear Energy.  Jim Roberts, the Director of the UNCW CIE and a friend of mine since moving to North Carolina, invited me to see his impressive incubator space located at the edge of the UNC Wilmington campus – and I was glad to do so.  He has an impressive facility and some great partner/tenant companies already working hard; I am excited to see what will be “hatched” under Jim’s leadership.  While there I also had the chance to meet with some great local entrepreneurs as well as spend some time with the Wired Wizard Robotics Team — an incredibly impressive group of young, talented, future scientists, engineers, technologists and mathematicians.   They were planning how to integrate 3D printing into their next design, and I came away again believing how much STEM and the entire “capture to make” ecosystem should be intertwined.

One of the things I talked about yesterday was the interesting correlation between the performance of the publicly traded 3D printing companies and the relative rise of “3D Printing” as opposed to the technical term “additive manufacturing”.  The upper left inserted graph is a Google Trends chart showing those search terms over time — if you haven’t used Google Trends, its data is normalized relative to all search volume over time.   In other words, a flat line would show that, as a % of overall search, a term has stayed consistent (even as total volume grows).  What you can see from this graph is the explosion of “3D Printing” as opposed to the small, incremental growth of “additive manufacturing.”  Compare the rise of “3D Printing” to the stock charts and you see an interesting correlation indeed.  During the rest of my time I gave some reasons for why I believed this happened — looking at the macro-level trends on both “sides” of the content-to-make ecosystem that may have unlocked this opportunity.

3D Printing + Additive Manufacturing
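
For anyone who hasn’t internalized what that normalization means, here is a tiny illustrative sketch in Python (with completely made-up numbers) showing why a term whose raw query count more than triples can still draw a flat line on Google Trends:

```python
import pandas as pd

# Made-up monthly numbers purely for illustration: raw searches for a term
# and total Google search volume, both growing over time.
df = pd.DataFrame({
    "term_queries":  [1_000, 1_500, 2_250, 3_400],         # e.g. "3D printing"
    "total_queries": [100_000, 150_000, 225_000, 340_000],  # all searches
}, index=pd.period_range("2011-01", periods=4, freq="M"))

# Google Trends reports interest as a share of total search volume,
# rescaled so the peak value in the window equals 100.
share = df["term_queries"] / df["total_queries"]
trends_index = (share / share.max() * 100).round(1)
print(trends_index)
# The share is constant, so the Trends line stays flat at 100 even though
# the raw query count more than tripled.
```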

For those who have interest, you can download the slides I delivered here. TMK Presentation for UNCW on 3D Printing Opportunity (1.17.14 – FOR DISTRIBUTION)

Have a great weekend!


Light Field Cameras for 3D Imaging

Thanks for reading Part I of this article published at LiDAR News.  Below I examine some of the plenoptic technology providers as well as provide some predictions about 3D imaging in 2014 and beyond.  If you have been directed here from LiDAR News certainly skip ahead to the section starting with Technology Providers below.  Happy Holidays!

Light Field Cameras for 3D Capture and Reconstruction

Plenoptic cameras, or light field cameras, use an array of individual lenses (a microlens array) to capture the 4D light field of a scene.   This lens arrangement means that multiple light rays can be associated with each sensor pixel, and synthetic cameras (created via software) can then process that information.

Phew, that’s a mouthful, right?  It’s actually easier to visualize –

Raytrix Plenoptic Camera Example

Image from Raytrix GmbH Presentation delivered at NVIDIA GTC 2012

This light field information can be used to help solve various computer vision challenges – for example, allowing images to be refocused after they are taken, substantially improving low-light performance at an acceptable signal-to-noise ratio, or even creating a 3D depth map of a scene.   Of course the plenoptic approach is not restricted to single images; plenoptic “video” cameras (with a corresponding increase in data captured) have been developed as well.
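
To make the “synthetic camera” idea concrete, here is a minimal shift-and-add refocusing sketch in Python/NumPy.  It assumes the 4D light field has already been decoded from the raw microlens image into a grid of sub-aperture views, and it illustrates the general technique only – it is not Lytro’s or Raytrix’s actual pipeline, and the function name and parameters are mine:

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add synthetic refocus of a 4D light field.

    light_field : (U, V, H, W) array of sub-aperture views decoded from the
                  microlens array (grayscale here for simplicity).
    alpha       : relative focal depth; shifting each view in proportion to
                  its (u, v) aperture offset and averaging brings a different
                  depth plane into focus.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Integer-pixel shifts keep the sketch short; a real pipeline
            # would interpolate sub-pixel shifts.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Toy example: a random 5x5 grid of 64x64 sub-aperture views.
lf = np.random.rand(5, 5, 64, 64)
near_focus = refocus(lf, alpha=1.0)
far_focus = refocus(lf, alpha=-1.0)
```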

The underlying algorithms and concepts behind a plenoptic camera have been around for quite some time.   A great technical backgrounder on this technology can be found in Dr. Ren Ng’s 2005 Stanford publication titled Light Field Photography with a Hand-Held Plenoptic Camera.   He reviews the (then) current state of the art before proposing his solution targeted at synthetic image formation.  Dr. Ng ultimately went on to commercialize his research by founding Lytro, which I discuss later.    Another useful backgrounder is the technical presentation prepared by Raytrix (profiled below) and delivered at the NVIDIA GPU Technology Conference 2012.

In late 2010 at the NVIDIA GPU Conference, Adobe demonstrated a plenoptic camera system (hardware and software) they had been working on – while dated, it is a useful video to watch as it explains both the hardware and software technologies involved with light field imaging as well as the computing horsepower required.  Finally, another interesting source of information and recent news on developments in the light field technology space can be found at the Light Field Forum.

Light field cameras have only become truly practical because of advances in lens and sensor manufacturing techniques coupled with the massive computational horsepower unlocked by GPU compute based solutions.  To me, light field cameras represent a very interesting step in the evolution of digital imaging – which until now – has really been focused on improving what had been a typical analog workflow.

Light Field Cameras and 3D Reconstructions

 Much of the recent marketing around the potential of plenoptic synthetic cameras focuses on the ability of a consumer to interact and share images in an entirely different fashion (i.e. changing the focal point of a captured scene).  While that is certainly interesting in its own right, I am personally much more excited about the potential of extracting depth map information from light field cameras, and then using that depth map to create 3D surface reconstructions.

Pelican Imaging (profiled below) recently published a paper at SIGGRAPH Asia 2013 detailing exactly that — the creation of a depth map, which was then surfaced, using their own plenoptic hardware and software solution called the PiCam.  This paper is published in full at the Pelican Imaging site, see especially pages 10-12.

There is a lot of on-going research in this space; some of it uses traditional stereo imaging methods acting upon the data generated from the plenoptic lens array, but other work uses entirely different technical approaches for depth map extraction.   A very interesting recent paper presented at ICCV 2013 in early December 2013, titled Depth from Combining Defocus and Correspondence Using Light Field Cameras and authored by researchers from the University of California, Berkeley and Adobe, proposes a novel method for extracting depth data from light field cameras by combining two methods of depth estimation.  The authors of this paper have made available their sample code and representative examples and note in the Introduction:

 The images in this paper were captured from a single passive shot of the $400 consumer Lytro camera in different scenarios, such as high ISO, outdoors and indoors. Most other methods for depth acquisition are not as versatile or too expensive and difficult for ordinary users; even the Kinect is an active sensor that does not work outdoors. Thus, we believe our paper takes a step towards democratizing creation of depth maps and 3D content for a range of real-world scenes.
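
As a rough feel for the defocus half of that approach (the paper combines it with a correspondence cue and global regularization, which this sketch omits entirely), the snippet below takes a stack of refocused images – for example built with the shift-and-add sketch above – and simply picks, per pixel, the refocus depth that maximizes local contrast.  It is deliberately simple, and the names and parameters are mine, not the authors’:

```python
import numpy as np

def depth_from_defocus(refocus_stack, alphas, patch=7):
    """Crude defocus-only depth cue over a refocus stack.

    refocus_stack : (A, H, W) array of images refocused at each alpha value.
    For each pixel we pick the alpha whose image has the highest local
    variance around that pixel: in-focus regions are sharp, blurred ones are
    not.  Deliberately unoptimized (plain Python loops).
    """
    a_count, h, w = refocus_stack.shape
    r = patch // 2
    best = np.zeros((h, w), dtype=int)
    score = np.full((h, w), -np.inf)
    for i in range(a_count):
        for y in range(r, h - r):
            for x in range(r, w - r):
                contrast = refocus_stack[i, y - r:y + r + 1, x - r:x + r + 1].var()
                if contrast > score[y, x]:
                    score[y, x] = contrast
                    best[y, x] = i
    return np.asarray(alphas)[best]   # per-pixel focal-depth (alpha) estimate
```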

Technology Providers

Let’s take a look at a non-exhaustive list of light field technology manufacturers – this is in no way complete, nor does it even attempt to cover all of the handset manufacturers and others who are incorporating plenoptic technologies – nor those who are developing “proxy” solutions to replicate some of the functionality which true plenoptic solutions offer (e.g. Nokia’s ReFocus app software).  Apple recently entered the fray of plenoptic technologies when it was reported in late November that it had been granted a range of patents (originally filed in 2011) covering a “hybrid” light field camera setup which can be switched between traditional and plenoptic imaging.

Lytro

Lytro (@Lytro) was founded in 2010 by Dr. Ren Ng, building on research he started at Stanford in 2004.  Lytro has raised a total of $90M, with an original $50M round in mid-2011 from Andreessen Horowitz (@a16z, @cdixon), NEA (@NEAVC) and Greylock (@GreylockVC), and a new $40M round adding North Bridge Venture Partners (@North_Bridge).   In early 2012 Lytro began shipping its consumer-focused light field camera system; later that year Dr. Ng stepped down as CEO (he remains the Chairman), with the current CEO, Jason Rosenthal, joining in March 2013.

Lytro camera inside

Inside the Lytro Camera from Lytro

I would suspect that Lytro is pivoting from focusing purely on a consumer camera to the development of an imaging platform and infrastructure stack (including cloud services for interaction) that it, along with third party developers, can leverage.  This may also have been the strategy all along – in many cases to market a platform you have to first demonstrate to the market how the platform can be expressed in an application.  Jason Rosenthal seems to acknowledge as much in an interview published in the San Francisco Chronicle’s SF Gate blog in August 2013 (prior to their most recent round being publicly announced), where he is quoted as saying that the long term Lytro vision is to become “the new software and hardware stack for everything with a lens and sensor. That’s still cameras, video cameras, medical and industrial imaging, smartphones, the entire imaging ecosystem.”  Jonathan Heiliger, a general partner at North Bridge Venture Partners, supports that vision in his quote backing their participation in the latest $40M round – “[t]he fun you experience when using a Lytro camera comes from the ability to engage with your photos in ways you never could before.  But powering that interactivity is some great software and hardware technology that can be used for everything with a lens and a sensor.”

I am of course intrigued by the suggestion from Rosenthal that Lytro could be developing solutions useful for medical and industrial imaging.  If you are Pelican Imaging, you are of course focusing on the comments relating to “smartphones.”

Pelican Imaging

Pelican Imaging

Image from Pelican Imaging

Pelican Imaging (@pelicanimaging) was founded in 2008 and its current investors include Qualcomm (@Qualcomm), Nokia Growth Partners, Globespan Capital Partners (@Globespancap), Granite Ventures (@GraniteVentures), InterWest Partners (@InterwestVC) and IQT.  Pelican Imaging has raised more than $37M since inception and recently extended its Series C round by adding an investment (undisclosed amount) from Panasonic in August 2013.   Interesting to me is of course the large number of handset manufacturers who have participated in earlier funding rounds, as well as early investment support from In-Q-Tel (IQT), an investment arm aligned with the United States Central Intelligence Agency.

Pelican Imaging has been pretty quiet from a marketing perspective until recently, but no doubt with their recent additional investment from Panasonic and other hardware manufacturers they are making a push to become the embedded plenoptic sensor platform.

Raytrix

Raytrix is a German developer of plenoptic cameras and has been building them since 2009.  They have, up until now, primarily focused on using this technology for a host of industrial imaging solutions.   They offer a range of complete plenoptic camera solutions.  A detailed presentation explaining their solutions can also be found on their site, and a very interesting video demonstration of the possibilities of a plenoptic video approach for creating 3D videos can be found hosted at the NVIDIA GPU Technology Conference website.  Raytrix has posted a nice example of how they created a depth map and 3D reconstruction using their camera here.   Raytrix plenoptic video cameras can be used for particle image velocimetry (PIV), a method of measuring velocity fields in fluids by tracking how particles move across time.  Raytrix has a video demonstrating these capabilities here.
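
For readers unfamiliar with PIV, the basic idea is easy to sketch: divide consecutive frames into small interrogation windows and locate the peak of their cross-correlation to estimate how far the particles moved.  The Python snippet below is a toy, single-window illustration of that idea under my own assumptions – it is not Raytrix’s implementation, which additionally exploits the depth information available from the plenoptic camera:

```python
import numpy as np
from scipy.signal import fftconvolve

def piv_displacement(frame_a, frame_b, y, x, win=32):
    """Estimate particle displacement between two frames for one
    interrogation window whose top-left corner is at (y, x)."""
    a = frame_a[y:y + win, x:x + win].astype(float)
    b = frame_b[y:y + win, x:x + win].astype(float)
    a -= a.mean()
    b -= b.mean()
    # Convolving b with the flipped window a gives the cross-correlation
    # surface; the offset of its peak is the mean particle displacement.
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak[0] - (win - 1), peak[1] - (win - 1)

# Toy example: frame_b is frame_a shifted by (2, 3) pixels.
rng = np.random.default_rng(0)
frame_a = rng.random((128, 128))
frame_b = np.roll(frame_a, shift=(2, 3), axis=(0, 1))
print(piv_displacement(frame_a, frame_b, 32, 32))  # expect roughly (2, 3)
```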

The Future

For 2014, I believe we will see the following macro-level trends develop in the 3D capture space (these were originally published here).

  • Expansion of light field cameras – Continued acceleration in 3D model and scene reconstruction (both small and large scale) using depth sense and time of flight cameras, but with an expansion into light field cameras (e.g. Lytro, Pelican Imaging, Raytrix, and as proposed by Apple).
  • Deprecation of 3D capture hardware in lieu of solutions – We will see many companies which had been focusing mostly on data capture pivot more towards a vertical applications stack, deprecating the 3D capture hardware (as it becomes more and more ubiquitous – i.e. plenoptic cameras combined with RTK GPS accurate smartphones).
  • More contraction in the field due to M&A – Continued contraction of players in the capture/modify/make ecosystem, with established players in the commercial 3D printing and scanning market moving into the consumer space (e.g. Stratasys acquiring Makerbot, getting both a 3D scanner and a huge consumer ecosystem with Thingiverse) and with both ends of the market collapsing in to offer more complete solutions from capture to print (e.g. 3D printing companies buying 3D scanner hardware and software companies, vice versa, etc.)
  • Growing open source software alternatives – Redoubled effort on community sourced 3D reconstruction libraries and application software (e.g. Point Cloud Libraries and Meshlab), with perhaps even an attempt made to commercialize these offerings (like the Red Hat model).
  • 3D Sensors everywhere – Starting in 2014, but really accelerating in the years that follow, 3D sensors everywhere (phones, augmented reality glasses, in our cars) which will constantly capture, record and report depth data – the beginnings of a crowd sourced 3D world model.

Over time, I believe that light field cameras will grow to have a significant place in the consumer acquisition of 3D scene information via mobile devices.  They have the benefit of a relatively small form factor, are a passive imaging system, and can be used in a workflow which consumers already know and understand.   They are of course not a panacea, and they currently suffer limitations similar to those of photogrammetry and stereo reconstruction when targets are not used (e.g. difficulty in accurately computing depth data in scenes without a lot of texture, accuracy dependent on the depth of the scene from the camera, etc.), but novel approaches to extract more information from a 4D light field hold promise for capturing more accurate 3D depth data from light field cameras.

For consumers, and consumer applications driven from mobile, I predict that light field technologies will take a significant share of sensor technologies, where accuracy is a secondary consideration (at best) and the ease of use, form factor and the “eye candy” quality of the results are most compelling.   Active imaging systems, like those which Apple acquired from PrimeSense, certainly have a strong place in the consumer acquisition of 3D data, but in mobile their usefulness may be limited by the very nature of the sensing technology (e.g. relatively large power draw and form factor, sensor confusion in the presence of multiple other active devices, etc.).

 


Apple Buys Tech Behind Microsoft Kinect (PrimeSense) – 3D Scanning Impact?

[Update: Apple has confirmed the acquisition of PrimeSense for roughly $350M, when originally published the acquisition was still only rumored.]

It has been reported that Apple (@Apple) has acquired PrimeSense (@GoPrimeSense) for $345M.

I have been long on PrimeSense’s depth sensing cameras for a while – I started following them in the months leading up to the original launch of the Microsoft Kinect in the “Project Natal” days (late 2009).  Photogrammetry was always interesting to me as an approach to create 3D models – but the reconstructions tended to fail frequently (and without warning) and always required post-processing.

My interest in PrimeSense technology was primarily twofold: (1) to find a way to leverage the installed base of Microsoft Kinect devices as 3D capture devices (as well as the Xbox Live payment infrastructure) and (2) to build an inexpensive stand-alone 3D scanner based on PrimeSense technology.  I was only more interested after Microsoft published their real-time scene reconstruction research known as KinectFusion.  Hacks like the Harvard “Drill of Depth” (a Kinect made mobile by attaching it to a battery-powered drill, screen and software, circa early 2011) only further piqued my interest about the possibilities.

Drill of Depth

The writing was on the wall for PrimeSense after Microsoft decided to abandon PrimeSense technology and develop their own depth sensing devices for use with the new Xbox One.  PrimeSense had to transition from a lucrative relationship with one large customer (~30M+ units) to being a developer of hardware and firmware solutions seeking broader markets.  The OpenNI initiative (an open source project to develop a middleware SDK for 3D sensors, primarily sponsored by PrimeSense) was an attempt to broaden the potential pool of third party developers who would ultimately build solutions around PrimeSense technologies.
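
For a sense of what that middleware looks like from a developer’s perspective, here is a minimal depth-frame grab using the open-source OpenNI2 Python bindings (the primesense package).  Treat it as a sketch: the library path is a placeholder and exact call names can vary between OpenNI2 versions:

```python
import numpy as np
from primesense import openni2   # Python bindings published around OpenNI2

# Path to the OpenNI2 redistributable (libOpenNI2.so / OpenNI2.dll) is a
# placeholder; point it at your local install.
openni2.initialize("/usr/lib/OpenNI2/")

dev = openni2.Device.open_any()            # first compatible PrimeSense sensor
depth_stream = dev.create_depth_stream()
depth_stream.start()

frame = depth_stream.read_frame()
# Depth arrives as 16-bit millimeter values, one per pixel.
depth_mm = np.frombuffer(frame.get_buffer_as_uint16(), dtype=np.uint16)
depth_mm = depth_mm.reshape(frame.height, frame.width)
print("center pixel depth (mm):", depth_mm[frame.height // 2, frame.width // 2])

depth_stream.stop()
openni2.unload()
```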

There are many PrimeSense powered 3D scanners in the market today – it will be interesting to see whether this pool expands or contracts after the planned Apple acquisition (e.g. will the direction be inward, focusing PrimeSense technology on delivery directly with Apple-only devices, or will they continue to court third-party developers across all types of hardware and software solutions).   The new PrimeSense Capri form factor already allows for entirely new deployment paradigms for this technology; with one more generation the sensors will have shrunk so much that they can be comfortably embedded directly in phone and tablet devices (but with a trade-off in data quality if the sensor shrinks too much).

Here is a quick run-down on a non-exhaustive list of PrimeSense powered 3D scanner hardware technology and vendors (note, this isn’t a profile of the universe of software companies that offer solutions around 3D model and scene reconstruction – as there are many):

Standard Microsoft Kinect – the initial movement for using the PrimeSense technology as a 3D scene reconstruction device came from hacks to the original Microsoft Kinect.  The Kinect was hacked to run independently of the Xbox, and ultimately Microsoft decided to embrace these hacks and develop a standalone Kinect SDK.

Microsoft Kinect for PC – Microsoft began selling a Kinect which would directly interface with Windows devices, it also enabled a “near” mode for the depth camera.

Asus XTION (Pro) – This is an Asus OEM of the PrimeSense technology which provides essentially the same functional specifications as delivered in the Microsoft Kinect (they use the same PrimeSense chipset and reference design).

Matterport – Matterport (@Matterport) has raised $10M since the middle of 2012 to develop a camera system, software and cloud infrastructure for scanning interior spaces.  The camera system itself is built around PrimeSense technologies (along with 2D cameras to capture higher quality images to be referenced to the 3D reconstruction created from the PrimeSense cameras).   Most interesting to me is that Matterport counts Red Swan and Felicis Ventures as investors, both of which are also invested in Floored (see below).  A few days ago Forbes profiled the use of the Matterport system; the article is worth a read.

Floored – Floored (@Floored3D), formerly known as Lofty, concentrates primarily on developing software to help visualize interior spaces and is concentrating first on the commercial real estate industry.  Floored has raised a little over $1M to date, including common investors with Matterport.  For more on the relationship between Matterport and Floored, see this TechCrunch article.  Floored’s CEO is Dave Eisenberg, and he gave a great presentation at the TechCrunch NYC Startup Battlefield in late April 2013 explaining Floored’s value proposition.   Floored is definitely filled with brilliant minds, and obviously a whole lot of computer vision folks who understand how difficult it is to attempt to automatically generate 3D models of interior spaces from scan data (of any quality).  To get a sense of what they are currently thinking about, check out the Floored blog.

Lynx A – This was an offering from a start-up in Austin, Texas known as Lynx Labs (@LynxLabsATX) who launched an early 2013 KickStarter campaign for an “all in one” point and shoot 3D camera.  This device was a sensor, combined with a computing device and software which would allow for the real time capturing and rendering of 3D scenes.  The first round of devices shipped in the middle of September 2013.   I do not know for sure, but my assumption is that this device is PrimeSense powered.

DotProduct (@DotProduct3D) with their DPI-7 scanner.   As with the Lynx A camera, this is a PrimeSense powered device, combined with a Google Nexus, and their scene reconstruction software called Phi.3D.  DotProduct claims 2-4mm accuracy at 1m, achieved through a combination of individual sensor calibration, their software, and rejecting sensors which do not achieve spec.  DotProduct announced in late October 2013, at the Intel Capital Global Summit, that Intel Capital (@IntelCapital) had made a seed investment into DotProduct, spearheaded by Intel’s Perceptual Computing Group.

Occipital Structure Sensor – Occipital (@occipital) is an extremely interesting company based in Boulder and San Francisco, filled with amazing computer vision expertise.  After cutting their teeth on some computer vision applications for generating panoramas on Apple devices, they have bridged into a complete hardware and software stack for 3D data capture and model creation.  Occipital counts the Foundry Group (@foundrygroup) as one of its investors (having invested roughly $7M into Occipital in late 2011).   Occipital completed a very successful KickStarter campaign for its Structure Sensor, raising nearly $1.3M.

Occipital Structure Sensor

The Structure Sensor is a PrimeSense powered device which is officially supported on later generation Apple iPad devices.  What is compelling is Occipital’s approach to creating an entire developer ecosystem around this device – no doubt building on the Skanect (@Skanect) technology they acquired from ManCTL in June of 2013.  Skanect was one of the best third party applications which had implemented and made available Microsoft’s KinectFusion technology (allowing for real time 3D scene reconstruction from depth cameras).   If it is true, and Apple in fact does buy PrimeSense, then that is potentially problematic for Occipital’s current development direction if Apple has aspirations for embedding this technology in mobile devices (as opposed to Apple TV).  Even if Apple did want to embed it in their iDevices, it would seem that Occipital becomes an immediately interesting acquisition target (in one swoop you get the hardware, and most importantly the computer vision software expertise).  Given the depth of talent at Occipital, I’m sure things are going to work out just fine.
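
For readers curious what the “Fusion” part actually does, the sketch below shows the core of a KinectFusion-style integration step: each incoming depth frame is projected into a truncated signed distance (TSDF) voxel volume and blended with a running weighted average, from which a surface can later be extracted.  This is a simplified NumPy illustration of the published idea, not Microsoft’s or Occipital’s code, and all names, intrinsics and parameters are assumptions of mine:

```python
import numpy as np

def integrate_depth_frame(tsdf, weights, depth_mm, K, cam_to_world,
                          voxel_size=0.01, trunc=0.03, origin=(0.0, 0.0, 0.0)):
    """Fuse one depth frame into a TSDF voxel volume, KinectFusion-style.

    tsdf, weights : (X, Y, Z) float arrays updated in place (assumed contiguous).
    depth_mm      : (H, W) depth image in millimeters (0 = no reading).
    K             : 3x3 pinhole intrinsics; cam_to_world : 4x4 camera pose.
    """
    H, W = depth_mm.shape
    world_to_cam = np.linalg.inv(cam_to_world)

    # Voxel centers in world coordinates, then in camera coordinates.
    xs, ys, zs = np.indices(tsdf.shape)
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3) * voxel_size + np.asarray(origin)
    cam = (world_to_cam @ np.c_[pts, np.ones(len(pts))].T).T[:, :3]
    z = cam[:, 2]

    # Project each voxel into the depth image.
    with np.errstate(divide="ignore", invalid="ignore"):
        u = np.round(K[0, 0] * cam[:, 0] / z + K[0, 2])
        v = np.round(K[1, 1] * cam[:, 1] / z + K[1, 2])
    valid = (z > 1e-6) & np.isfinite(u) & np.isfinite(v)
    u = np.where(valid, u, 0).astype(int)
    v = np.where(valid, v, 0).astype(int)
    valid &= (u >= 0) & (u < W) & (v >= 0) & (v < H)

    d = np.zeros(len(pts))
    d[valid] = depth_mm[v[valid], u[valid]] / 1000.0     # meters
    valid &= d > 0

    sdf = np.where(valid, d - z, 0.0)                    # signed distance along the ray
    valid &= sdf > -trunc                                # drop voxels far behind the surface
    obs = np.clip(sdf / trunc, -1.0, 1.0)

    # Weighted running average: the core of KinectFusion's fusion step.
    t, w = tsdf.reshape(-1), weights.reshape(-1)
    t[valid] = (t[valid] * w[valid] + obs[valid]) / (w[valid] + 1.0)
    w[valid] += 1.0

# Toy usage: a 64^3 volume (0.64 m cube), identity pose, a flat wall at 1.5 m.
vol, wts = np.zeros((64, 64, 64)), np.zeros((64, 64, 64))
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 1500, dtype=np.uint16)
integrate_depth_frame(vol, wts, depth, K, np.eye(4), origin=(-0.32, -0.32, 1.2))
```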

Sense™ 3D Scanner by 3D Systems – This is the newest 3D scanner entrant in this space (announced a few weeks ago) delivered by 3D Systems (@3dsystemscorp), which acquired my former company, Geomagic.  The Sense uses the new PrimeSense Carmine sensor – a further evolution of the PrimeSense depth camera technology, allowing for greater depth accuracy across more pixels in the field (and ultimately reconstruction quality).  PrimeSense has a case study on the Sense.

What Are Competitive/Replacement Technologies for PrimeSense Depth Sensors?

In my opinion, the closest competitor in the market today to PrimeSense technologies is a company called SoftKinetic (@softkinetic), with their line of DepthSense cameras, sensors, middleware and software.

SoftKinetic

On paper, the functional specifications of these devices stack up well against the PrimeSense reference designs.  Unlike PrimeSense, SoftKinetic sells complete cameras, as well as modules and associated software and middleware.  SoftKinetic uses a time of flight (ToF) approach to capture depth data (which is different from PrimeSense’s approach).  SoftKinetic has provided middleware to Sony for the PS4, giving third party developers a layer for creating gesture tracking applications using the PlayStation(R)Camera for PS4.   SoftKinetic announced a similar middleware deal with Intel to accelerate perceptual computing in the early summer of 2013 as well.

There are other companies in the industrial imaging space (who presently develop machine vision cameras or other time of flight scanners) which could provide consumer devices if they chose to (e.g. such as PMD Technologies in Germany).

I believe the true replacement technology, at least in the consumer space, for 3D data acquisition and reconstruction will come from light field cameras as a class in order to provide range data (e.g. z depth), and not necessarily from active imaging solutions.  See my thoughts on this below.

Predictions for 2014 and Beyond

Early in 2013, when I was asked by my friends at Develop3D to predict what 2013 would bring, I said:

In 2013 we will move through the tipping point of the create/modify/make ecosystem.

Low cost 3D content acquisition, combined with simple, powerful tools will create the 3D content pipeline required for more mainstream 3D printing adoption.  

Sensors, like the Microsoft Kinect, the LeapMotion device, and [Geomagic, now 3D Systems’] Sensable haptic devices, will unlock new interaction paradigms with reality, once digitized.  

Despite the innovation, intellectual property concerns will abound, as we are at the dawn of the next ‘Napster’ era, this one for 3D content.

I believe much of that prediction has come/is coming true.

For 2014 I believe we will see the following macro-level trends in the 3D capture space:

  • Expansion of light field cameras – Continued acceleration in 3D model and scene reconstruction (both small and large scale) using depth sense and time of flight cameras, but with an expansion into light field cameras (e.g. Lytro (@Lytro) and Pelican Imaging (@pelicanimaging)).
  • Deprecation of 3D capture hardware in lieu of solutions – We will see many companies which had been focusing mostly on data capture pivot more towards a vertical applications stack, deprecating the 3D capture hardware (as it becomes more and more ubiquitous).
  • More contraction in the field due to M&A – Continued contraction of players in the capture/modify/make ecosystem, with established players in the commercial 3D printing and scanning market moving into the consumer space (e.g. Stratasys acquiring Makerbot, getting both a 3D scanner and a huge consumer ecosystem with Thingiverse) and with both ends of the market collapsing in to offer more complete solutions from capture to print (e.g. 3D printing companies buying 3D scanner hardware and software companies, vice versa, etc.)
  • Growing open source alternatives – Redoubled effort on community sourced 3D reconstruction libraries and application software (e.g. Point Cloud Libraries and Meshlab), with perhaps even an attempt made to commercialize these offerings (like the Red Hat model).
  • 3D Sensors everywhere – Starting in 2014, but really accelerating in the years that follow, 3D sensors everywhere (phones, augmented reality glasses, in our cars) which will constantly capture, record and report depth data – the beginnings of a crowd sourced 3D world model.

The Use of Light Field Cameras and 3D Data Acquisition and Reconstruction Will Explode

While the use of light field cameras to create 3D reconstructions is just in its infancy – much like the PrimeSense technology (which was designed for an interaction paradigm, not for capturing depth data) – I can see (no pun intended) this one coming.  Light field cameras have a strong benefit of being a passive approach to 3D data acquisition (like photogrammetry).  For what is possible in depth map creation from these types of camera systems, check out this marketing video from Pelican Imaging (note the 3D Systems Cube 3D printer) and a more technical one here.

Pelican Imaging Sensor

Image from Pelican Imaging.

I will have a separate post looking in more depth at light field cameras as a class, including Lytro’s recent $40M round of funding and the addition of North Bridge.  I believe that, after refinement, they will ultimately become a strong solution for consumer mobile devices for 3D content capture because of their size, power needs, passive approach, etc.  In the interim, if you have interest in this space you should read the Pelican Imaging presentation recently made at SIGGRAPH Asia on the PiCam and reproduced in full at the Pelican Imaging site.  Fast forward to pages 10-12 in this technical presentation for an example of using the Pelican Imaging camera to produce a depth map which is then surfaced.

What could ultimately be game changing is if we find updated and refined depth sense technology embedded and delivered directly with the next series of smartphones and augmented reality devices (e.g. Google Glass).  In that world, everyone has a 3D depth sensor, everyone is capturing data, and the potentials are limitless for applications which can harvest and act upon that data once captured.

Let the era of crowd sourced world 3D data capture begin!

(. . . but wait, who owns that 3D world database once created? . . .)

This article was originally published on DEVELOP3D on November 18th, 2013; it has been modified since that original posting.


littleBits Raises An Additional $11.1M in Series B Funding

littleBits, the New York City based open hardware startup, has raised an $11.1M Series B round of funding led by True Ventures (@trueventures) and Foundry Group (@foundrygroup), and including new investors Two Sigma Ventures (who had also just led an $11.5M investment in Rethink Robotics, and is invested in Floored (@Floored), which I have blogged about before) and Vegas Tech Fund (@VegasTechFund).   Returning investors Khosla Ventures (@vkhosla), Mena Ventures, Neoteny Labs, O’Reilly AlphaTech (@OATV), Lerer Ventures (also invested in Floored) (@lererventures) and new and returning angel investors also participated.  littleBits had previously raised $3.65M in Series A funding and $850K in seed funding, bringing its total raised to date to over $15M.

littleBits’ mission is to “turn everyone into an inventor by making electronics accessible as a material.” littleBits makes “Bits modules” that snap together magnetically to make it easy for children and adults to build simple circuits and inventive projects in seconds. littleBits, and its CEO Ayah Bdeir (@ayahbdeir), have won numerous awards and are viewed as leaders in the maker movement.

I previously profiled littleBits in my two-part blog series in November and December 2012 examining the intersection of the maker movement with the “Minecraft generation” in my own house – as I try to get my own kids to focus more on the world of atoms instead of bits.  You can find those two posts here: (1) http://3dsolver.com/the-makers-movement-intersects-with-the-minecraft-generation/  and (2) http://3dsolver.com/the-maker-in-the-minecraft-generation-part-duex/

Congratulations to littleBits!
