Category Archives: Companies

Autodesk REAL 2016 Startup Competition

I had the opportunity to attend the Autodesk REAL 2016 event which is currently taking place at Fort Mason over March 8th and March 9th.   This event focuses on “reality computing” – the ecosystem of reality capture, modeling tools, computational solutions and outputs (whether fully digital workflows or a process that results in a physical object).

The event kicked off with the first Autodesk REAL Deal pitch competition.  Jesse Devitte from Borealis Ventures served as the emcee for this event.  A VC in his own right (as the Managing Director and Co-founder of Borealis), Jesse understands the technical computing space and has a great track record of backing companies that impact the built environment.  The VC panel judging the pitches consisted of: (1) Trae Vassalo, an independent investor/consultant who was previously a general partner of Kleiner Perkins Caufield & Byers; (2) Sven Strohband, CTO of Khosla Ventures; and (3) Shahin Farshchi, a Partner with Lux Capital.

The winner of the competition will be announced, in conjunction with a VC panel discussion, at the end of the first day's events starting at 5:00pm on the REAL Live Stage, Herbst Pavilion.

[Note: These were typed in near real time while watching the presenters and their interactions with the REAL Deal VC panelists – my apologies in advance if they don’t flow well, etc.  I’ve tried to be as accurate as possible.]

Lucid VR

Lucid VR was the first presenting company – pitching a 3D stereoscopic camera built specifically for VR.  Han Jin, the CEO and co-founder, presented on behalf of the company.  He started by explaining that 16M headsets will be shipped this year for VR consumption – however – VR content is still incredibly difficult to produce.  It is a "journey of pain" spanning time, money, huge data sets, and production and sharing difficulties.  Lucid VR has created an integrated hardware device, called the LucidCam, that "captures an experience" and simplifies the production and publication of VR content, which can then be consumed by all VR headsets.  Han pitched the vision of combining multiple LucidCam devices to support immersive 360 VR and real-time VR live streaming.  Lucid VR hit its $100K crowdfunding campaign goal in November of 2015.

Panel Questions

Sven initially asked a two-part question: (1) which market is the company trying to attack first – consumer or enterprise; and (2) what is the technical differentiation for the hardware device (multi-camera setups have been around for a while)?  Han said that the initial use cases seem to be focusing on training applications – so more of an enterprise setup.  He explained that while dual-camera setups have been around, they are complex, multi-part, mechanically driven solutions, whereas Lucid VR leverages GPU-based, on-device processing for real-time capture and playback – a silicon rather than mechanical approach.  Trae then asked about market timing – how will you get to market, what will be the pricing, etc.  Han said that they planned to ship at the end of the year, and that as of right now they were primarily working with consumer retailers for content creation.  They expected a GTM price point of between $300 and $400 for their capture device.  Trae's follow-up – even if you capture and create the content, isn't one of the gating factors going to be that consumers will not have the appropriate hardware/software locally to experience it?

Minds Mechanical

The next presentation was from Minds Mechanical, and led by the CEO, Jacob Hockett.

Jacob explained that Minds Mechanical started as a solutions company – integrating various hardware and software to support the product development needs (primarily by providing inspection and compliance services) of some of the largest Tier 1 manufacturers in the world.   While growing and developing this services business they realized that they had identified a generalized challenge – and were working to disrupt the metrology (as opposed to meteorology, as Jacob jokingly pointed out) space.

Jacob explained that current metrology software is very expensive and is often optimized for and paired with specific hardware.  Further compounding the problem, various third-party metrology software solutions often give different results on the same part, even when acting on the same data set.  The expense of adding new seats, combined with potentially incompatible results across third-party solutions, results in limited sharing of metrology information within an organization.

They have developed a cloud-based solution called Quality to help solve these challenges – Jacob suggested that we think of it as a PLM-type solution for the manufacturing and inspection value chain, tying inspection data back into the design and build process.  Jacob claims that Quality is the first truly cross-platform solution available in the industry.

Given their existing customer relationships, they were targeting the aerospace, defense and MRO markets initially, to be followed by medical and automotive later.  They are actively transitioning their business from a solutions business to a software company and were seeking a $700K investment to grow the team. [Note:  Jacob was previously a product manager and AE at Verisurf Software, one of the market-leading metrology software applications, prior to starting Minds Mechanical.]  The lack of modern, easy-to-use tools is a barrier for the industry, and Minds Mechanical is going to try to change the entire market.

Panel Questions

Trae kicked off the questions – asking Jacob to identify who the buyer is within an organization and what the driver for purchasing is (expansion to new opportunities, cost savings, etc.).  Jacob said that the buy decision was mostly a cost-savings opportunity.  Their pricing is low enough that it can be a credit card purchase, avoiding internal PO and purchase approval processes entirely.  Trae then followed up by asking how the data was originally captured – Jacob explained that they abstract data from the various third-party metrology applications which might be used in an account and provide a publication and analytics layer on top of those content creation tools.  Sven then asked about data ownership/regulation compliance for a SaaS solution – was it a barrier to purchase?  Jacob said that they understand the challenges of hosting/acting upon manufacturing data in the cloud, but that the reality was that for certain manufacturers and certain types of projects it just "wasn't going to happen".  Trae then asked whether they were working on a locally hosted solution for those types of requirements, and Jacob said yes they were.  Shahin from Lux then asked who they were selling to – was it the OEMs (trying to force them to mandate use within the value chain) or the actual supply chain participants?  Jacob said that they will target the suppliers first, and not try to force the OEMs to demand use within their supply chains, focusing on a bottom-up sales approach.

AREVO Labs

The next presentation was from Hemant Bheda, the CEO and founder of AREVO Labs.  AREVO's mission was to leverage additive manufacturing technologies to produce light and strong composite parts to replace metal parts in production applications.  Hemant explained that they have ten pending patent applications and that to execute on this vision they need: (1) high-performance materials for production, (2) 3D printing software for production parts; and (3) a scalable manufacturing platform.

AREVO has created a continuous carbon fiber composite material which is five times as strong as titanium – unlocked by their proprietary software, which weaves this material together in "true" 3D space (rather than the 2.5D which they claim existing FDM-based printers use).  AREVO claims to transport the industry from 2.5D to true 3D by optimizing the tool path/material deposition to generate the best parts – integrating a proprietary solution to estimate post-production part strength, and then optimizing the tool path for the lowest cost, shortest time, and highest strength.

Their solution is based around a robotic-arm manufacturing cell and can be used for small to large parts (up to 2 meters in size).  Target markets range from medical (single-use applications) and aerospace/defense (lightweight structural solutions) to on-demand industrial spare parts and oil & gas applications.  They have current customer engagements with Northrop, Airbus, Bombardier, J&J and Schlumberger.

[FWIW, you can see an earlier article on them at 3DPrint.com here, as well as a video of their process.  MarkForged is obviously also in the market and utilizes continuous carbon fiber as part of an AM process.  One of the slides in the AREVO Labs deck which was quickly clicked through was a comparison of the two processes – it would be interesting to learn more about that differentiation indeed!]

Hemant explained that they were currently seeking a Series A raise of $8M.

Panel Questions

Shahin kicked off the questions for the panel – asking whether customers were primarily interested in purchasing parts produced from the technology or whether they wanted to buy the technology so they could produce their own.  Hemant said that the answer is both – some want parts produced for them, others want the tech; it depends on what their anticipated needs are over time.  Sven asked Hemant how he thought the market would settle out over time between continuous fiber (as with their solution) versus chopped fiber.  Hemant said that they view the two technologies as complementary – in the metals replacement market, continuous fiber is the solution for many higher-value, higher-materials-properties use cases, but both will exist in the market.

UNYQ

The final presentation of the day during the REAL Deal Pitch competition came from UNYQ – they had previously presented at the REAL 2015 event.  Eythor Bender, the CEO, presented on behalf of UNYQ.  UNYQ develops personalized prosthetic and orthotic devices, leveraging additive manufacturing for production.  In 2016 they will be introducing the UNYQ Scoliosis Brace, having licensed the technology from 3D Systems, who are also investors.  According to Crunchbase data, UNYQ has raised right around $2.5M across three funding rounds, and they expect to be profitable sometime in 2017.

UNYQ has been working on a platform for 3D printing manufacturing, personalization and data integration – resulting in devices that are not only personalized using AM for production, but can also integrate various sensors so that they become IoT nodes reporting back various streams of data (performance, how long the device has been worn, etc.) which can be shared with clinicians.  UNYQ uses a photogrammetry-based app to capture shape data and then leverages Autodesk technology to compute and mesh a solution.  The information is captured in clinics and the devices are primarily produced on FDM printers – going from photos to personalized products in less than four weeks.  They generated roughly $500K in revenues in 2015, starting with their prosthetic covers, and have a GTM plan for their scoliosis offering which would have them generating $1M in sales within the first year after launch in May 2016.

UNYQ is currently seeking a $4M Series Seed round.

Panel Questions

Trae asked how UNYQ could accelerate this into market – given the market need, why wasn't adoption happening faster?  Eythor said that in 2014/15 they had really been focusing on platform and partnership development – it was only at the very end of 2015 that they started creating a direct sales team. Given that there are only roughly 2,000 clinics in the US, it is a known market and they had a plan of attack. The limited number of clinics, plus the opportunity to reach consumers directly via social media and other d2c marketing efforts, will only accelerate growth in 2016 and beyond.  Trae followed up by asking where the resistance to adoption in the market is (is it the middleman or something else that is bogging things down).  Eythor said that it is more a process resistance (it hasn't been done this way before, and it has been done with manual labor) than it is resistance from the clinics themselves.  Sven then asked about data comparing treatment efficacy and patient outcomes using the UNYQ devices versus the "traditional" methods of treatment.  Eythor said that while the sample set was limited, one of their strategic advisors had compared their solutions to those traditionally produced and found that the UNYQ offering was at least as good as what is in the market today – but with an absolutely clear preference on the patient side.  The final question came from Shahin at Lux, who asked whether there was market conflict in that the clinics (which are the primary way UNYQ gets to market) have a somewhat vested interest in continuing to do things the old way (potentially higher revenues/margins, lots of crafters involved in that value chain, reluctance to change, etc.).  Eythor explained that they were focusing only on the 10-20% of the market that is progressive, landing and winning them, and then over time pulling the rest of the market forward.

3D Printing Talk at UNCW CIE

I was fortunate yesterday to spend some time with a great crowd at the UNCW Center for Innovation and Entrepreneurship to talk about 3D Printing — sharing the time with an awesome team of presenters from GE Hitachi Nuclear Energy.  Jim Roberts, the Director of the UNCW CIE and a friend of mine since moving to North Carolina, invited me to see his incubator space located at the edge of the UNC Wilmington campus – and I was glad to do so.  He has an impressive facility, and some great partner/tenant companies already working hard; I am excited to see what will be "hatched" under Jim's leadership.  While there I also had the chance to meet with some great local entrepreneurs as well as spend some time with the Wired Wizard Robotics Team — an incredibly impressive group of young, talented, future scientists, engineers, technologists and mathematicians.  They were planning how to integrate 3D printing into their next design, and I came away again believing how much STEM and the entire "capture to make" ecosystem should be intertwined.

One of the things I talked about yesterday was the interesting correlation between the performance of the publicly traded 3D printing companies and the relative rise of "3D Printing" as opposed to the technical term "additive manufacturing".  The upper left inserted graph is a Google Trends chart showing those search terms over time — if you haven't used Google Trends, this data is normalized relative to all search volume over time.  In other words, a flat line would show that, as a % of overall search, that term has stayed consistent (even as volume grows).  What you can see from this graph is the explosive rise of "3D Printing" as opposed to the small, incremental growth of "additive manufacturing."  Compare the rise of "3D Printing" to the stock charts and you see an interesting correlation indeed.  During the rest of my time I gave some reasons for why I believe this happened — looking at the macro-level trends on both "sides" of the content-to-make ecosystem that may have unlocked this opportunity.

3D Printing + Additive Manufacturing
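To make that normalization concrete, here is a minimal sketch with entirely made-up numbers (not real Google data) showing how a raw query count becomes a Trends-style index:

```python
# Hypothetical monthly query counts -- illustration only, not real Google data.
term_queries  = [120, 300, 900]                      # searches for "3D printing"
total_queries = [1_000_000, 1_200_000, 1_500_000]    # all searches in the same months

# Google Trends reports the term's share of total search volume,
# rescaled so that the peak month equals 100.
shares = [t / total for t, total in zip(term_queries, total_queries)]
peak = max(shares)
trend_index = [round(100 * s / peak) for s in shares]

print(trend_index)  # [20, 42, 100] -- growth relative to all search, not raw volume
```

The point of the rescaling is that a rising line means the term is taking a growing share of everyone's searches, not merely that overall internet usage grew.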

For those who have interest, you can download the slides I delivered here. TMK Presentation for UNCW on 3D Printing Opportunity (1.17.14 – FOR DISTRIBUTION)

Have a great weekend!

Light Field Cameras for 3D Imaging

Thanks for reading Part I of this article published at LiDAR News.  Below I examine some of the plenoptic technology providers as well as provide some predictions about 3D imaging in 2014 and beyond.  If you have been directed here from LiDAR News certainly skip ahead to the section starting with Technology Providers below.  Happy Holidays!

Light Field Cameras for 3D Capture and Reconstruction

Plenoptic cameras, or light field cameras, use an array of individual lenses (a microlens array) to capture the 4D light field of a scene.  This lens arrangement means that multiple light rays can be associated with each sensor pixel, and synthetic cameras (created via software) can then process that information.

Phew, that’s a mouthful, right?  It’s actually easier to visualize –

Raytrix Plenoptic Camera Example

Image from Raytrix GmbH Presentation delivered at NVIDIA GTC 2012

This light field information can be used to help solve various computer vision challenges – for example, allowing images to be refocused after they are taken, to substantially improve low light performance with an acceptable signal to noise ratio or even to create a 3D depth map of a scene.   Of course the plenoptic approach is not restricted to single images, plenoptic “video” cameras (with a corresponding increase in data captured) have been developed as well.
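As a rough illustration of the refocusing idea, here is a minimal shift-and-sum sketch. It assumes the raw light field has already been decoded into a grid of sub-aperture views (one image per lenslet offset); real pipelines from Lytro, Raytrix and others are considerably more sophisticated, so treat this purely as a conceptual sketch:

```python
import numpy as np

def refocus(subaperture_views, shift_per_view):
    """Shift-and-sum synthetic refocusing over sub-aperture views.

    subaperture_views: dict mapping (u, v) lens offsets to 2-D grayscale arrays.
    shift_per_view: pixels of shift applied per unit of (u, v); sweeping this
    value brings different depth planes into focus after capture.
    """
    acc = None
    for (u, v), view in subaperture_views.items():
        # Shift each view in proportion to its position in the lens array,
        # then accumulate; points on the chosen plane align and stay sharp.
        dy = int(round(v * shift_per_view))
        dx = int(round(u * shift_per_view))
        shifted = np.roll(view.astype(np.float64), (dy, dx), axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(subaperture_views)
```

Sweeping shift_per_view and noting where each pixel appears sharpest is also one (very simplified) way to think about how a depth map can fall out of the same data.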

The underlying algorithms and concepts behind a plenoptic camera have been around for quite some time.   A great technical backgrounder on this technology can be found in Dr. Ren Ng’s 2005 Stanford publication titled Light Field Photography with a Hand-Held Plenoptic Camera.   He reviews the (then) current state of the art before proposing his solution targeted at synthetic image formation.  Dr. Ng ultimately went on to commercialize his research by founding Lytro, which I discuss later.    Another useful backgrounder is the technical presentation prepared by Raytrix (profiled below) and delivered at the NVIDIA GPU Technology Conference 2012.

In late 2010 at the NVIDIA GPU Conference, Adobe demonstrated a plenoptic camera system (hardware and software) they had been working on – while dated, it is a useful video to watch as it explains both the hardware and software technologies involved with light field imaging as well as the computing horsepower required.  Finally, another interesting source of information and recent news on developments in the light field technology space can be found at the Light Field Forum.

Light field cameras have only become truly practical because of advances in lens and sensor manufacturing techniques coupled with the massive computational horsepower unlocked by GPU compute based solutions.  To me, light field cameras represent a very interesting step in the evolution of digital imaging – which until now – has really been focused on improving what had been a typical analog workflow.

Light Field Cameras and 3D Reconstructions

 Much of the recent marketing around the potential of plenoptic synthetic cameras focuses on the ability of a consumer to interact and share images in an entirely different fashion (i.e. changing the focal point of a captured scene).  While that is certainly interesting in its own right, I am personally much more excited about the potential of extracting depth map information from light field cameras, and then using that depth map to create 3D surface reconstructions.

Pelican Imaging (profiled below) recently published a paper at SIGGRAPH Asia 2013 detailing exactly that — the creation of a depth map, which was then surfaced, using their own plenoptic hardware and software solution called the PiCam.  This paper is published in full at the Pelican Imaging site, see especially pages 10-12.

There is a lot of on-going research in this space; some of it uses traditional stereo imaging methods acting upon the data generated from the plenoptic lens array, but other work uses entirely different technical approaches for depth map extraction.  A very interesting recent paper presented at ICCV 2013 in early December 2013, titled Depth from Combining Defocus and Correspondence Using Light Field Cameras and authored by researchers from the University of California, Berkeley and Adobe, proposes a novel method for extracting depth data from light field cameras by combining two methods of depth estimation.  The authors of this paper have made available their sample code and representative examples, and note in the Introduction:

 The images in this paper were captured from a single passive shot of the $400 consumer Lytro camera in different scenarios, such as high ISO, outdoors and indoors. Most other methods for depth acquisition are not as versatile or too expensive and difficult for ordinary users; even the Kinect is an active sensor that does not work outdoors. Thus, we believe our paper takes a step towards democratizing creation of depth maps and 3D content for a range of real-world scenes.
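The actual algorithm in that paper is considerably more involved (it builds both cues from the light field and regularizes the combined result), but the core idea of fusing two noisy per-pixel depth estimates by their confidences can be sketched very simply. The function below is a hypothetical illustration of that idea only, not the authors' method:

```python
import numpy as np

def fuse_depth(depth_defocus, conf_defocus, depth_corresp, conf_corresp):
    """Confidence-weighted fusion of two per-pixel depth estimates.

    All arguments are 2-D arrays of the same shape; confidences are assumed
    to be non-negative.  Where neither cue is confident, the defocus estimate
    is kept as a fallback.
    """
    w1 = np.asarray(conf_defocus, dtype=np.float64)
    w2 = np.asarray(conf_corresp, dtype=np.float64)
    total = w1 + w2
    weighted = (w1 * depth_defocus + w2 * depth_corresp) / np.maximum(total, 1e-9)
    return np.where(total > 0, weighted, depth_defocus)
```

The appeal of combining the cues is that they fail in different places: defocus tends to be stable but coarse, while correspondence is sharper but unreliable in textureless or occluded regions.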

Technology Providers

Let's take a look at a non-exhaustive list of light field technology manufacturers – this is in no way complete, nor does it even attempt to cover all of the handset manufacturers and others who are incorporating plenoptic technologies – nor those who are developing "proxy" solutions to replicate some of the functionality which true plenoptic solutions offer (e.g. Nokia's ReFocus app software).  Apple recently entered the fray of plenoptic technologies when it was reported in late November that it had been granted a range of patents (originally filed in 2011) covering a "hybrid" light field camera setup which can be switched between traditional and plenoptic imaging.

Lytro

Lytro (@Lytro) was founded in 2010 by Dr. Ren Ng, building on research he started at Stanford in 2004.  Lytro has raised a total of $90M, with an original $50M round in mid-2011 from Andreessen Horowitz (@a16z, @cdixon), NEA (@NEAVC) and Greylock (@GreylockVC), and a new $40M round adding North Bridge Venture Partners (@North_Bridge).  In early 2012 Lytro began shipping its consumer-focused light field camera system; later that year Dr. Ng stepped down as CEO (he remains the Chairman), with the current CEO, Jason Rosenthal, joining in March 2013.

Lytro camera inside

Inside the Lytro Camera from Lytro

I would suspect that Lytro is pivoting from focusing purely on a consumer camera to the development of an imaging platform and infrastructure stack (including cloud services for interaction) that it, along with third-party developers, can leverage.  This may also have been the strategy all along – in many cases, to market a platform you have to first demonstrate to the market how the platform can be expressed in an application.  Jason Rosenthal seems to acknowledge as much in an interview published in the San Francisco Chronicle's SF Gate blog in August 2013 (prior to their most recent round being publicly announced), where he is quoted as saying that the long-term Lytro vision is to become "the new software and hardware stack for everything with a lens and sensor. That's still cameras, video cameras, medical and industrial imaging, smartphones, the entire imaging ecosystem."  Jonathan Heiliger, a general partner at North Bridge Venture Partners, supports that vision in his quote accompanying their participation in the latest $40M round: "[T]he fun you experience when using a Lytro camera comes from the ability to engage with your photos in ways you never could before.  But powering that interactivity is some great software and hardware technology that can be used for everything with a lens and a sensor."

I am of course intrigued by the suggestion from Rosenthal that Lytro could be developing solutions useful for medical and industrial imaging.  If you are Pelican Imaging, you are of course focusing on the comments relating to “smartphones.”

Pelican Imaging

Pelican Imaging

Image from Pelican Imaging

Pelican Imaging (@pelicanimaging) was founded in 2008 and its current investors include Qualcomm (@Qualcomm), Nokia Growth Partners, Globespan Capital Partners (@Globespancap), Granite Ventures (@GraniteVentures), InterWest Partners (@InterwestVC) and IQT.  Pelican Imaging has raised more than $37M since inception and recently extended its Series C round by adding an investment (undisclosed amount) from Panasonic in August 2013.   Interesting to me is of course the large number of handset manufacturers who have participated in earlier funding rounds, as well as early investment support from In-Q-Tel (IQT), an investment arm aligned with the United States Central Intelligence Agency.

Pelican Imaging has been pretty quiet from a marketing perspective until recently, but no doubt with their recent additional investment from Panasonic and other hardware manufacturers they are making a push to become the embedded plenoptic sensor platform.

Raytrix

Raytrix is a German developer of plenoptic cameras and has been building them since 2009.  They have, up until now, primarily focused on using this technology for a host of industrial imaging solutions.  They offer a range of complete plenoptic camera solutions.  A detailed presentation explaining their solutions can also be found on their site, and a very interesting video demonstration of the possibilities of a plenoptic video approach for creating 3D videos can be found hosted at the NVIDIA GPU Technology Conference website.  Raytrix has posted a nice example of how they created a depth map and 3D reconstruction using their camera here.  Raytrix plenoptic video cameras can be used for particle image velocimetry (PIV), a method of measuring velocity fields in fluids by tracking how particles move across time.  Raytrix has a video demonstrating these capabilities here.

The Future

For 2014, I believe we will see the following macro-level trends develop in the 3D capture space (these were originally published here).

  • Expansion of light field cameras – Continued acceleration in 3D model and scene reconstruction (both small and large scale) using depth sense and time-of-flight cameras, but with an expansion into light field cameras (e.g. Lytro, Pelican Imaging, Raytrix, as proposed by Apple, etc.).
  • Deprecation of 3D capture hardware in lieu of solutions – We will see many companies which had been focusing mostly on data capture pivot more towards a vertical applications stack, deprecating the 3D capture hardware (as it becomes more and more ubiquitous – i.e. plenoptic cameras combined with RTK GPS accurate smartphones).
  • More contraction in the field due to M&A – Continued contraction of players in the capture/modify/make ecosystem, with established players in the commercial 3D printing and scanning market moving into the consumer space (e.g. Stratasys acquiring Makerbot, getting both a 3D scanner and a huge consumer ecosystem with Thingiverse) and with both ends of the market collapsing in to offer more complete solutions from capture to print (e.g. 3D printing companies buying 3D scanner hardware and software companies, vice versa, etc.)
  • Growing open source software alternatives – Redoubled effort on community-sourced 3D reconstruction libraries and application software (e.g. the Point Cloud Library and MeshLab), with perhaps even an attempt made to commercialize these offerings (like the Red Hat model).
  • 3D Sensors everywhere – Starting in 2014, but really accelerating in the years that follow, 3D sensors everywhere (phones, augmented reality glasses, in our cars) which will constantly capture, record and report depth data – the beginnings of a crowd sourced 3D world model.

Over time, I believe that light field cameras will grow to have a significant place in the consumer acquisition of 3D scene information via mobile devices.  They have the benefit of a relatively small form factor, are a passive imaging system, and can be used in a workflow which consumers already know and understand.  They are of course not a panacea, and currently suffer limitations similar to photogrammetry and stereo reconstruction when targets are not used (e.g. difficulty in accurately computing depth data in scenes without a lot of texture, accuracy dependent on the depth of the scene from the camera, etc.), but novel approaches to extracting more information from a 4D light field hold promise for capturing more accurate 3D depth data from light field cameras.

For consumers, and consumer applications driven from mobile, I predict that light field technologies will take a significant share of sensor technologies, where accuracy is a secondary consideration (at best) and the ease of use, form factor and "eye candy" quality of the results are most compelling.  Active imaging systems, like those which Apple acquired from PrimeSense, certainly have a strong place in the consumer acquisition of 3D data, but in mobile their usefulness may be limited by the very nature of the sensing technology (e.g. relatively large power draw and form factor, sensor confusion in the presence of multiple other active devices, etc.).

 

Apple Buys Tech Behind Microsoft Kinect (PrimeSense) – 3D Scanning Impact?

[Update: Apple has confirmed the acquisition of PrimeSense for roughly $350M, when originally published the acquisition was still only rumored.]

It has been reported that Apple (@Apple) has acquired PrimeSense (@GoPrimeSense) for $345M.

I have been long on PrimeSense’s depth sensing cameras for a while – I started following them in the months leading up to the original launch of the Microsoft Kinect in the “Project Natal” days (late 2009).  Photogrammetry was always interesting to me as an approach to create 3D models – but the reconstructions tended to fail frequently (and without warning) and always required post-processing.

My interest in PrimeSense technology was primarily twofold: (1) to find a way to leverage the installed base of Microsoft Kinect devices as 3D capture devices (as well as the Xbox Live payment infrastructure) and (2) to build an inexpensive stand-alone 3D scanner based on PrimeSense technology.  I was only more interested after Microsoft published their real-time scene reconstruction research known as KinectFusion.  Hacks like the Harvard "Drill of Depth" (a Kinect made mobile by attaching it to a battery powered drill, screen and software, circa early 2011) only further piqued my interest about the possibilities.

Drill of Depth

The writing was on the wall for PrimeSense after Microsoft decided to abandon PrimeSense technology and develop their own depth sensing devices for use with the new Xbox One.  PrimeSense had to transition from a lucrative relationship with one large customer (~30M+ units) to a developer of hardware and firmware solutions seeking broader markets.  The OpenNI initiative (an open source project to develop middleware SDK for 3D sensors which was primarily sponsored by PrimeSense) was an attempt to broaden the potential pool of third party developers who would ultimately build solutions around PrimeSense technologies.

There are many PrimeSense-powered 3D scanners in the market today – it will be interesting to see whether this pool expands or contracts after the planned Apple acquisition (e.g. will the direction be inward, focusing the PrimeSense technology to be delivered directly with Apple-only devices, or will they continue to court third-party developers across all types of hardware and software solutions).  The new PrimeSense Capri form factor already allows for entirely new deployment paradigms for this technology; with one more generation the sensor will have shrunk so much that it can be comfortably embedded directly in phone and tablet devices (but with a trade-off in data quality if the sensor shrinks too much).

Here is a quick run-down on a non-exhaustive list of PrimeSense powered 3D scanner hardware technology and vendors (note, this isn’t a profile of the universe of software companies that offer solutions around 3D model and scene reconstruction – as there are many):

Standard Microsoft Kinect – the initial movement for using the PrimeSense technology as a 3D scene reconstruction device came from hacks to the original Microsoft Kinect.  The Kinect was hacked to run independently of the Xbox, and ultimately Microsoft decided to embrace these hacks and develop a standalone Kinect SDK.

Microsoft Kinect for PC – Microsoft began selling a Kinect which would directly interface with Windows devices, it also enabled a “near” mode for the depth camera.

Asus XTION (Pro) – This is an Asus OEM of the PrimeSense technology which provides essentially the same functional specifications as delivered in the Microsoft Kinect (they use the same PrimeSense chipset and reference design).

Matterport – Matterport (@Matterport) has raised $10M since the middle of 2012 to develop a camera system, software and cloud infrastructure for scanning interior spaces.  The camera system itself is built around PrimeSense technologies (along with 2D cameras to capture higher quality images to be referenced to the 3D reconstruction created from the PrimeSense cameras).  Most interesting to me is that Matterport counts Red Swan and Felicis Ventures as investors, both of which are also invested in Floored (see below).  A few days ago Forbes profiled the use of the Matterport system; the article is worth a read.

Floored – Floored (@Floored3D), formerly known as Lofty, focuses primarily on developing software to help visualize interior spaces, concentrating first on the commercial real estate industry.  Floored has raised a little over $1M to date, including common investors with Matterport.  For more on the relationship between Matterport and Floored, see this TechCrunch article.  Floored's CEO is Dave Eisenberg, and he gave a great presentation at the TechCrunch NYC Startup Battlefield in late April 2013 explaining Floored's value proposition.  Floored is definitely filled with brilliant minds, and obviously a whole lot of computer vision folks who understand how difficult it is to attempt to automatically generate 3D models of interior spaces from scan data (of any quality).  To get a sense of what they are currently thinking about, check out the Floored blog.

Lynx A – This was an offering from a start-up in Austin, Texas known as Lynx Labs (@LynxLabsATX) who launched an early 2013 KickStarter campaign for an “all in one” point and shoot 3D camera.  This device was a sensor, combined with a computing device and software which would allow for the real time capturing and rendering of 3D scenes.  The first round of devices shipped in the middle of September 2013.   I do not know for sure, but my assumption is that this device is PrimeSense powered.

DotProduct (@DotProduct3D) with their DPI-7 scanner.   As with the Lynx A camera, this is a PrimeSense powered device, combined with a Google Nexus, and their scene reconstruction software called Phi.3D.  DotProduct claims 2-4mm accuracy at 1m, achieved through a combination of individual sensor calibration, their software, and rejecting sensors which do not achieve spec.  DotProduct announced in late October 2013, at the Intel Capital Global Summit, that Intel Capital (@IntelCapital) had made a seed investment into DotProduct, spearheaded by Intel’s Perceptual Computing Group.

Occipital Structure Sensor – Occipital (@occipital) is an extremely interesting company based in Boulder and San Francisco, filled with amazing computer vision expertise.  After cutting their teeth on some computer vision applications for generating panoramas on Apple devices, they have bridged into a complete hardware and software stack for 3D data capture and model creation.  Occipital counts the Foundry Group (@foundrygroup) as one of its investors (having invested roughly $7M into Occipital in late 2011).  Occipital completed a very successful Kickstarter campaign for its Structure Sensor, raising nearly $1.3M.

Occipital Structure Sensor

The Structure Sensor is a PrimeSense powered device which is officially supported on later generation Apple iPad devices.  What is compelling is Occipital’s approach to create an entire developer ecosystem around this device – no doubt building on the Skanect (@Skanect) technology they acquired from ManCTL in June of 2013.  Skanect was one of the best third party applications available which had implemented and made available the Microsoft Fusion technology (allowing for real time 3D scene reconstruction from depth cameras).   If it is true, and Apple in fact does buy PrimeSense, then that is potentially problematic for Occipital’s current development direction if Apple has aspirations for embedding this technology in mobile devices (as opposed to Apple TV).  Even if Apple did want to embed in their iDevices, it would seem then that Occipital becomes an immediately interesting acquisition target (in one swoop you get hardware, and most importantly the computer vision software expertise).  Given the depth of talent at Occipital, I’m sure things are going to work out just fine.

Sense™ 3D Scanner by 3D Systems – This is the newest 3D scanner entrant in this space (announced a few weeks ago) delivered by 3D Systems (@3dsystemscorp), which acquired my former company, Geomagic.  The Sense uses the new PrimeSense Carmine sensor – a further evolution of the PrimeSense depth camera technology, allowing for greater depth accuracy across more pixels in the field (and ultimately reconstruction quality).  PrimeSense has a case study on the Sense.
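Common to every scanner in the rundown above (and to KinectFusion-style pipelines generally) is the same geometric first step: back-projecting each depth frame into a 3D point cloud using the sensor's pinhole intrinsics, before frames are aligned and fused into a model.  A minimal sketch of that step (the intrinsic values are hypothetical placeholders; real values come from device calibration):

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an N x 3 point cloud.

    fx, fy, cx, cy are the depth camera's pinhole intrinsics, obtained from
    calibration.  Pixels with no depth reading (zero) are dropped.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.astype(np.float64)
    x = (u - cx) * z / fx          # standard pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Hypothetical example for a 640 x 480 depth frame; the intrinsic values are
# placeholders, not the spec of any particular sensor.
depth = np.random.uniform(0.5, 3.0, size=(480, 640))
cloud = depth_to_points(depth, fx=570.0, fy=570.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```

Everything that differentiates the products above sits on top of this step: sensor calibration, frame-to-frame alignment, fusion, meshing, and the surrounding software experience.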

What Are Competitive/Replacement Technologies for PrimeSense Depth Sensors?

In my opinion, the closest competitor in the market today to PrimeSense technologies are made by a company called SoftKinetic (@softkinetic) with their line of DepthSense cameras, sensors, middleware and software.

SoftKinetic

On paper, the functional specifications of these devices stack up well against the PrimeSense reference designs.  Unlike PrimeSense, SoftKinetic sells complete cameras, as well as modules and associated software and middleware.  SoftKinetic uses a time-of-flight (ToF) approach to capture depth data (which is different than PrimeSense's structured light approach).  SoftKinetic has provided middleware to Sony for the PS4, giving third-party developers a layer for creating gesture tracking applications using the PlayStation(R)Camera for PS4.  SoftKinetic announced a similar middleware deal with Intel, to accelerate perceptual computing, in the early summer of 2013 too.
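The core relationship behind pulsed time-of-flight sensing is simple even if the engineering is not: measure how long emitted light takes to bounce back, multiply by the speed of light, and halve it because the light travels out and back.  (Many ToF sensors actually infer this time indirectly from the phase shift of a modulated signal rather than timing a pulse directly.)  A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth(round_trip_seconds):
    """Distance implied by the measured round-trip time of an emitted light pulse."""
    # Divide by two because the light travels to the object and back.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a round trip of ~6.67 nanoseconds corresponds to roughly one metre,
# which is why ToF sensors need picosecond-scale timing to reach mm-level accuracy.
print(tof_depth(6.67e-9))  # ~1.0
```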

There are other companies in the industrial imaging space (who presently develop machine vision cameras or other time-of-flight scanners) which could provide consumer devices if they chose to (such as PMD Technologies in Germany).

I believe the true replacement technology, at least in the consumer space, for 3D data acquisition and reconstruction will come from light field cameras as a class in order to provide range data (e.g. z depth), and not necessarily from active imaging solutions.  See my thoughts on this below.

Predictions for 2014 and Beyond

Early in 2013, when I was asked by my friends at Develop3D to predict what 2013 would bring, I said:

In 2013 we will move through the tipping point of the create/modify/make ecosystem.

Low cost 3D content acquisition, combined with simple, powerful tools will create the 3D content pipeline required for more mainstream 3D printing adoption.  

Sensors, like the Microsoft Kinect, the LeapMotion device, and [Geomagic, now 3D Systems’] Sensable haptic devices, will unlock new interaction paradigms with reality, once digitized.  

Despite the innovation, intellectual property concerns will abound, as we are at the dawn of the next ‘Napster’ era, this one for 3D content.

I believe much of that prediction has come/is coming true.

For 2014 I believe we will see the following macro-level trends in the 3D capture space:

  • Expansion of light field cameras – Continued acceleration in 3D model and scene reconstruction (both small and large scale) using depth sense and time-of-flight cameras, but with an expansion into light field cameras (e.g. Lytro (@Lytro) and Pelican Imaging (@pelicanimaging)).
  • Deprecation of 3D capture hardware in lieu of solutions – We will see many companies which had been focusing mostly on data capture pivot more towards a vertical applications stack, deprecating the 3D capture hardware (as it becomes more and more ubiquitous).
  • More contraction in the field due to M&A – Continued contraction of players in the capture/modify/make ecosystem, with established players in the commercial 3D printing and scanning market moving into the consumer space (e.g. Stratasys acquiring Makerbot, getting both a 3D scanner and a huge consumer ecosystem with Thingiverse) and with both ends of the market collapsing in to offer more complete solutions from capture to print (e.g. 3D printing companies buying 3D scanner hardware and software companies, vice versa, etc.)
  • Growing open source alternatives – Redoubled effort on community-sourced 3D reconstruction libraries and application software (e.g. the Point Cloud Library and MeshLab), with perhaps even an attempt made to commercialize these offerings (like the Red Hat model).
  • 3D Sensors everywhere – Starting in 2014, but really accelerating in the years that follow, 3D sensors everywhere (phones, augmented reality glasses, in our cars) which will constantly capture, record and report depth data – the beginnings of a crowd sourced 3D world model.

The Use of Light Field Cameras for 3D Data Acquisition and Reconstruction Will Explode

While the use of light field cameras to create 3D reconstructions is just in its infancy – just as the PrimeSense technology once was (it was designed for an interaction paradigm, not for capturing depth data) – I can see (no pun intended) this one coming.  Light field cameras have a strong benefit of being a passive approach to 3D data acquisition (like photogrammetry).  For what is possible in depth map creation from these types of camera systems, check out this marketing video from Pelican Imaging (note the 3D Systems Cube 3D printer) and a more technical one here.

Pelican Imaging Sensor

Image from Pelican Imaging.

I will have a separate post looking in more depth at light field cameras as a class, including Lytro's recent new $40M round of funding and the addition of North Bridge.  I believe, after refinement, that they will ultimately become a strong solution for consumer mobile devices for 3D content capture because of their size, power needs, passive approach, etc.  In the interim, if you have interest in this space you should read the Pelican Imaging presentation recently made at SIGGRAPH Asia on the PiCam and reproduced in full at the Pelican Imaging site.  Fast forward to pages 10-12 of this technical presentation for an example of using the Pelican Imaging camera to produce a depth map which is then surfaced.

What could ultimately be game changing is if we find updated and refined depth sense technology embedded and delivered directly with the next series of smartphones and augmented reality devices (e.g. Google Glass).  In that world, everyone has a 3D depth sensor, everyone is capturing data, and the potential is limitless for applications which can harvest and act upon that data once captured.

Let the era of crowd sourced world 3D data capture begin!

(. . . but wait, who owns that 3D world database once created. . .)

This article was originally published on DEVELOP3D on November 18th, 2013; it has been modified since that original posting.

littleBits Raises An Additional $11.1M in Series B Funding

littleBits, the New York City based open hardware startup, has raised an $11.1M Series B round of funding led by True Ventures (@trueventures) and Foundry Group (@foundrygroup), which includes new investors Two Sigma Ventures (who had also just led an $11.5M investment in Rethink Robotics, and is invested in Floored (@Floored), who I have blogged about before) and Vegas Tech Fund (@VegasTechFund).  Returning investors Khosla Ventures (@vkhosla), Mena Ventures, Neoteny Labs, O'Reilly AlphaTech (@OATV), Lerer Ventures (also invested in Floored) (@lererventures) and new and returning angel investors also participated.  littleBits had previously raised $3.65M in Series A funding and $850K in seed funding, bringing its total raised to date to over $15M.

littleBits mission is to “turn everyone into an inventor by making electronics accessible as a material.” littleBits makes “Bits modules” that snap together magnetically to make it easy for children and adults to build simple circuits and inventive projects in seconds. littleBits, and its CEO Ayah Bdeir (@ayahbdeir) have won numerous awards and are viewed as leaders in the maker movement.

I previously profiled littleBits in my two-part blog series in November and December 2012 examining the intersection of the makers movement with the "Minecraft generation" in my own house – as I try to get my own kids to focus more on the world of atoms instead of bits.  You can find those two posts here: (1) http://3dsolver.com/the-makers-movement-intersects-with-the-minecraft-generation/  and (2) http://3dsolver.com/the-maker-in-the-minecraft-generation-part-duex/

Congratulations to littleBits!

CrunchBase: Using Crowdsourced Data for Commercial Purposes

For those of you who don’t know about CrunchBase (@crunchbase), it is a crowdsourced database of information about startups, people and investors.  Crunchbase describes themselves as “the free database of technology companies, people, and investors that anyone can edit. Our mission is to make information about the startup world available to everyone and maintainable by anyone.”  AOL acquired Crunchbase and TechCrunch in 2010 from Michael Arrington.

Crunchbase has been very successful in sourcing data, and has established strong relationships with many of the leading venture capital firms who regularly share data about their portfolio companies (fundraising, people, etc.).  CrunchBase has even developed an Excel Data Exporter, in addition to its API access, to allow for the broader distribution of the information contained in its databases.

The current Crunchbase Terms of Service, Privacy Policy, and Licensing Policy govern the use and access of Crunchbase data.

As of the date of this blog, the Licensing Policy provides that:

We permit anyone to republish our content in accordance with this licensing policy.

We provide CrunchBase’s content under the Creative Commons Attribution License [CC-BY]. Our content includes structured data, overviews and media files associated with companies and people. Our schema, and documentation are also offered under the Creative Commons license.

We ask that API users link back to CrunchBase from any pages that use CrunchBase data. We want to make sure that everyone is able to find the source of the content to keep the service up-to-date and accurate.

This Licensing Policy may be updated from time to time as our services change and grow. If you have any questions about this policy please contact us at licensing@crunchbase.com.

CrunchBase provides a specific licensing contract for services that charge for the use of their data. Contact licensing@crunchbase.com

The CrunchBase Terms of Service provide further restrictions on how the API may be used:

We provide access to portions of the Site and Service through an API thereby enabling people to build applications on top of the CrunchBase platform. For purposes of this Terms of Service, any use of the API constitutes use of the Site and Service. You agree only to use the API as outlined in the documentation provided by us on the Site.

On any Web page or Application where you display CrunchBase company or people results, each page must include a hypertext link to the appropriate company or person profile Web page on CrunchBase.com. Additional CrunchBase Branding Requirements can be found on the following Web page: http://info.crunchbase.com/docs/licensing-policy/. CrunchBase may grant exceptions on a case-by-case basis. Contact us at licensing@crunchbase.com for special branding requests, which must be approved in advance in writing.

CrunchBase will utilize commercially reasonable efforts to provide the CrunchBase API on a 24/7 basis but it shall not be responsible for any disruption, regardless of length. Furthermore, CrunchBase shall not be liable for losses or damages you may incur due to any errors or omissions in any CrunchBase Content, or due to your inability to access data due to disruption of the CrunchBase API.

CrunchBase reserves the right to continually review and evaluate all uses of the API, including those that appear more competitive than complementary in nature.

CrunchBase provides a specific licensing contract for services that charge for the use of their data. Contact licensing@crunchbase.com

CrunchBase reserves the right in its sole discretion (for any reason or for no reason) and at anytime without notice to You to change, suspend or discontinue the CrunchBase API and/or suspend or terminate your rights under these General Terms of Service to access, use and/or display the CrunchBase API, Brand Features and any CrunchBase content.

I previously reviewed various licensing schemes, including the Creative Commons scheme, in an earlier two-part blog series, The Call for a Harmonized Community License for 3D Content, where I proposed a harmonized "community" type license for content which could be produced on 3D printers (arguing that the existing license types do not "fit" content which can mix copyright, patent, trade dress and other rights).

For those of you who are not aware, the CC-BY license type is a very broad license grant – providing for the "maximum dissemination of licensed materials".  You can find the existing CC license types here, and specifically the summary of the CC-BY license.

Crunchbase was careful to make clear that uploaded material which they link or provide along with the company information might be licensed differently (e.g. not under the CC-BY license) and specifically made clear that:

The graphical layout of the CrunchBase website and other elements of the Site, Content or Service not described above are the copyright of CrunchBase, and may not be reproduced without permission.

 

Enter Pro Populi and People+

Pro Populi, a small three-person startup, has been developing applications utilizing the CrunchBase dataset, including an app called People+.  Pro Populi has apparently been accessing the CrunchBase data (originally via the API, but also through other means) to populate their own database of content and then accessing that content (and other content) from their applications.

Wired (@Wired) reporter David Kravets (@dmkravets) broke the story on November 5th in an article titled AOL Smacks Startup for Using CrunchBase Content It Gave Away.  If you click through the link to the original Wired article, you can review some of the correspondence gathered by David Kravets in support of the story.

Pro Populi was served with a cease and desist letter from AOL (the parent company of CrunchBase).  Quoting from the Wired article, an AOL Assistant General Counsel apparently sent the following in an email to the Pro Populi CEO after a meeting with the President of CrunchBase last Friday:

On the chance that you may have misinterpreted Matt’s willingness to discuss the matter with you last week, and our reference to this as a ‘request,’ let me make clear, in more formal language, that we demand that People+ immediately cease and desist from its current violation and infringement of AOL’s/TechCrunch’s proprietary rights and other rights to CrunchBase, by removing the CrunchBase content from your People+ product and by ceasing any other use of CrunchBase-provided content.

But if CrunchBase didn't want to allow others to use the data, why does it license its content under the CC-BY scheme?

Hopefully CrunchBase and Pro Populi can come to an agreement which works for both of them and their interests.

While CrunchBase can likely legitimately claim to restrict access to their content via their API (licensed separately, not covered by the CC-BY scheme, and with separate terms), once content covered by the CC-BY license has been accessed and copied in a manner consistent with CC-BY, can CrunchBase assert rights to "get it back?"  That seems to be an incredibly difficult row to hoe, and inconsistent with the very broad terms of the CC-BY license grant.  Worse yet, according to the Wired article, the General Counsel of the Creative Commons Corporation doesn't think so.  The Electronic Frontier Foundation represents Pro Populi.

Oops.

CrunchBase could have stayed within the CC license scheme and chosen a different CC license type for the underlying data – including one which specifically prohibits the use of the content for commercial purposes, which prohibits the creation of derivative works, and which requires specific attribution to them.  That license type is CC BY-NC-ND.  On a case by case basis they could have authorized/waived the restrictions contained in the license.  CrunchBase could have also changed the license grant for content accessed via the API.   This is solvable.

For an interesting view of this dispute from TechCrunch (a sister company to CrunchBase), see their take on the dispute.

 

Impact On Other “Hybrid” Commercial Use of Crowdsourced Data?

While CrunchBase and Pro Populi resolve their dispute, I am most interested in thinking about how this potentially impacts other crowdsourced data platforms and the applications built on top of them.   It is an interesting dilemma and question – how can/should crowdsourced data platforms be able to commercially benefit from their efforts – including restricting other potential competitors from a copying of data for their own purposes – commercial or otherwise?  Sourcing, filtering, vetting, editing, organizing, etc. hundreds, thousands and millions of data points is a complex undertaking.  It takes time, effort, people and ultimately money.  Unless that vetting is also done from a crowdsourced perspective (or mostly so – like the Wikipedia model), allowing potential competitors (commercial or otherwise) to copy that structured content is a potential death knell.  In that instance, openness needs to be balanced against a commercial purpose.

CrunchBase President Matt Kaufman blogged about the CrunchBase dispute with Pro Populi and the EFF.  He essentially acknowledges the challenge of openness in the context of trying to build a commercial business – but reaffirms his belief that CrunchBase had restricted the use of its data (via the API or otherwise) for commercial purposes under its current licensing terms.

[T]o invest in CrunchBase’s constant improvement requires building a business around CrunchBase in a way that successfully takes into account our terms of service and our openness. We are confident that this is possible, and that’s what we are on the path to figuring out.

This is of course the challenge – adding enough value in the stack above the "open" content that it can be commercialized.  As an example, take a look at MapBox – MapBox is a cloud-based platform which allows developers to embed geo-rich content into their web and mobile offerings.  They recently took $10M from Foundry Group, and I blogged about that investment – MapBox, Geo Software Platform, Maps $10M from Foundry Group.

MapBox relies on data sourced from OpenStreetMap, the "free wiki world map."  OpenStreetMap licenses its content in two ways – the underlying data is licensed as open data under the Open Data Commons Open Database License (ODbL), while the cartography and documentation are licensed under the CC BY-SA license (a close relative of the CC-BY license selected by CrunchBase).  BTW, Kevin Scofield likes the MapBox interface too.

It would be a difficult commercial business model indeed for MapBox to go through the effort of building an infrastructure to help source, collect and organize all kinds of mapping data, which was open for other uses, as well as building an application layer on top of it.   MapBox instead focuses on creating a great platform layer on top of the otherwise “open” content (others are free to do so as well).  This model works because there is enough community interest to support an undertaking like OpenStreetMap to begin with.  Can the same be said for the data underlying CrunchBase?

Rainbow Loom App for 3D Printing – What Will It Be?

If you don’t know what a Rainbow Loom® is – you probably don’t come into regular contact with kids between the ages of 7 – 12.   It is one of the hottest little trends out there, and it is yet another example of how we are all born makers, and how children in particular are driven with an innate ability to express themselves and make creative objects.   As I blogged about before The Makers Movement Intersects with the Minecraft Generation, I truly do believe that we are all born “makers” as Chris Anderson writes about in his book “Makers: The New Industrial Revolution” and as I profile in that blog.

So what is a Rainbow Loom (formerly known as Twist Banz), and what do you do with it, you might ask?  It is, as you might have guessed, a crafting kit that makes it (somewhat) easy for folks to knit together colored rubber bands (some even glow in the dark!) to create all kinds of wearable items, but mostly bracelets.  Here is a great NY Times article profiling Cheong Choon Ng, the Detroit entrepreneur/founder who developed the idea for the Rainbow Loom craft kit in the basement of his home in 2011.

As with Minecraft, the Kurke household is participating in this crafting trend – my youngest son has been creating all kinds of bands and bracelets for trade (and yes, gulp, even sale!) in his elementary school.   I just ordered another 3,000 (yes, that’s right, 3,000) rubber bands from Amazon today so that he can continue with his “making”.  I watch him with fascination as he spends his free time on these intricate creations (there is a whole community of folks who have posted how-to videos on YouTube, as with Minecraft).  I am amazed again when I realize that many of the posted videos have been made by kids – for kids – teaching each other how to use their Rainbow Loom.

[Image: Kurke Rainbow Loom]

Pictured above is a Kurke WIP bracelet on a Rainbow Loom.

What will it take for a Rainbow Loom-like “app” to unlock the potential demand for consumer 3D printers?   The Rainbow Loom phenomenon certainly makes clear that something challenging (i.e. you don’t just turn it on out of the box) is not an impediment to kids.  So, while I continue to believe that substantial work needs to be done to simplify the capture/modify/make ecosystem (e.g. easier 3D model capture, simplified printing workflows that don’t require seven different software applications, etc.),  Minecraft, Rainbow Loom, and the like seem to show that something which is hard, but has a low barrier to initial entry, can still make a market with kids.   In fact, you can even argue that the challenge is part of the reason why the Rainbow Loom is taking off – kids show (and trade) their more advanced creations as badges of honor – and teach others how to make them for themselves.

If kids could go home and 3D print a bracelet, would it have the same impact or social worth?  Part of the inherent value of the Rainbow Loom bracelets that get created and traded is the understanding that it took time (and in some cases significant time and learning) to “make” them.  Would it be the same if you could go home and push the “print” button?

Unmanned Aerial Systems: Global Trends 2030 Part Deux

I have previously blogged about the late 2012 publication from the United States National Intelligence Council titled Global Trends 2030: Alternative Worlds.  This is the fifth in a series of publications from the NIC examining future scenarios (the first was published in 1996/97) – but this is the first time that the authors have included sections devoted to potentially disruptive future technologies.

Global Trends 2030 covers a wide range of topics and represents a framework for thinking about the future – identifying critical trends and the potential discontinuities or breaks that might occur.  The report identifies “megatrends” (those trends which will likely occur under any future scenario) and “game-changers” (variables which may significantly impact or change any of the future scenarios).   As before, I would highly suggest that you download and read a copy of this publication on your own – it represents the considered, critical thinking of hundreds, if not thousands, of the best analysts in the world – and, if you have the inclination (and time!), look at the source materials hosted on the National Intelligence Council’s website. It would seem that the study authors have identified many potentially disruptive future applications of technology to unmet market needs – this should be on every venture capitalist’s reading list!

In my earlier blog, Global Trends 2030: Is 3D Printing the Catalyst for a Worldwide Industrial Revolution, I gave a general overview of the Global Trends 2030 content (which I will not repeat here, but encourage you to read) and then concentrated on the future trends identified by the authors in advanced manufacturing, including 3D printing.  Today I am going to review the sections of the report covering the potential future impacts of remote and autonomous vehicles, including unmanned aerial systems (a/k/a “drones”).

Within Game Changer #5 (the impact of New Technologies), the authors discuss the potential impacts of various automation and advanced manufacturing technologies, concentrating on the transformation of robotics from an industrial process enhancer (with various military uses) to consumer (and health) markets, the use of remote or autonomous vehicles (including unmanned aerial systems), and the impact of additive manufacturing/3D printing technologies (which I covered previously).  The authors first differentiate between remote vehicles (which are human-operated and controlled, via telepresence or otherwise) and autonomous vehicles (mobile platforms that can operate without any direct human control, relying on sensors and software to navigate, avoid obstacles, and perform their mission).

[Chart: UAS Global Trends 2030]

Chart from Global Trends 2030, page 91.  The authors note that “Low-cost UAVs with cameras and other types of sensors could support wide-area geo-prospecting, support precision farming, or inspect remote power lines.”  Global Trends 2030, page 92.

The democratization of UAS products and platforms will no doubt bring substantial societal benefits – but as with all new and emerging technologies, there are areas of legitimate concern as well – ranging from the risks of UAS collisions (with other aerial systems, ground-based assets, or people), to privacy concerns (UAS overflights capturing all kinds of surveillance data, whether done by individuals or government agencies), to a UAS being used as a platform for terrorism, among others.  Consider the “what if” of an individual or small team having access to disruptive UAS technologies that were formerly reserved to nation states (e.g. a UAS with centimeter-accurate GPS). For what the Global Trends 2030 authors think about the negatives of this kind of “individual empowerment,” see Global Trends 2030, pages 67-70.

I am personally excited about the tremendous potential that UAS platforms will provide for future generations – there are many near-term potential opportunities. I am confident that the risks of UAS platforms can be managed and minimized through the smart application of technology, best practices and process, and an appropriate regulatory framework.  It is always important to recognize that UAS devices are not new – individual hobbyists and makers have been flying all types of devices (fixed wing, single rotor, multi-rotor) for many years.  What is new is the potential democratization of this technology through lower price points, broader access to technology (via crowdfunding platforms like Kickstarter), and the commercial successes of existing devices (like the Parrot AR Drone series).

While the current regulatory framework within the United States has limited the commercial application of UAS here, it is only a matter of time (and very little time at that) before these types of sensor platforms are used and exploited within our borders, as they already are elsewhere in the world.  Precision agriculture (using a UAS for the precise localization and application of fertilizer, insecticide, water, etc.), first responder/emergency/humanitarian use (deploying a UAS to hover on station immediately upon a 911 call, assisting in searches, flying to and hovering over avalanche beacons, etc.), and infrastructure maintenance and management (using a UAS to inspect large constructed assets such as bridges and pipe/power lines) are first among many in my book.

I am particularly excited about the use of unmanned aerial systems as sensor platforms – coupled with high-precision cameras, z-depth cameras, or even laser scanners – to complete real-time 3D scene reconstruction.  The combination of highly accurate GPS location with such sensor platforms would allow for the capture of highly accurate 3D representations of real-world assets (constructed or otherwise), supporting all types of markets and functions (modeling, inspection, enterprise asset maintenance, etc.).
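As a rough illustration of why the GPS fix matters, here is a minimal sketch of georeferencing: taking a point measured in a UAS sensor’s local frame and placing it at a latitude/longitude using the platform’s GPS fix and heading.  The types, function names, frame conventions, and the simple spherical-earth approximation are all my own illustrative assumptions – a production reconstruction pipeline would use full attitude (roll/pitch/yaw), lever-arm offsets, and a proper geodetic model.

```typescript
// Minimal georeferencing sketch: place a sensor-local 3D point into an
// earth-fixed frame using a GPS-derived origin and the vehicle heading.

type Point3 = { x: number; y: number; z: number };        // metres, sensor frame
type GeoFix = { lat: number; lon: number; alt: number };  // degrees / metres

// Approximate metres-per-degree at a given latitude (spherical-earth model).
function metresPerDegree(latDeg: number): { east: number; north: number } {
  const latRad = (latDeg * Math.PI) / 180;
  return { east: 111_320 * Math.cos(latRad), north: 110_540 };
}

// Rotate a sensor-frame point by the platform heading (degrees clockwise from
// north) and offset it by the GPS fix, returning lat/lon/alt of the point.
// Assumed sensor convention: x = right, y = forward, z = up.
function georeference(p: Point3, fix: GeoFix, headingDeg: number): GeoFix {
  const h = (headingDeg * Math.PI) / 180;
  const east = p.x * Math.cos(h) + p.y * Math.sin(h);
  const north = -p.x * Math.sin(h) + p.y * Math.cos(h);
  const scale = metresPerDegree(fix.lat);
  return {
    lat: fix.lat + north / scale.north,
    lon: fix.lon + east / scale.east,
    alt: fix.alt + p.z,
  };
}

// Example: a point 5 m ahead and 2 m right of a UAS hovering over Fort Mason.
console.log(georeference({ x: 2, y: 5, z: 0 }, { lat: 37.806, lon: -122.431, alt: 30 }, 90));
```

The point of the sketch is simply that any error in the GPS fix propagates directly into the reconstructed 3D asset – which is why centimeter-accurate positioning is so interesting for these capture workflows.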