Following the official launch of the Samsung Galaxy Note 10, consumers and media outlets assessed a unique feature powered by the phone’s depth-sensing camera array: the ability to generate a seemingly photo-realistic 3D scan of a real-world object in under a minute.
The company touts a wide variety of applications for the new “quick scanning” feature. Standouts include simplified asset generation for 3D video and interactive experiences, including traditional virtual and augmented reality, as well as integration with 3D printers. 3D models created by the Note 10 can also be rigged for animation: the model is paired to a human subject, whose movements it then mirrors. This animation feature should further streamline content creation pipelines for developers of avatar-driven virtual experiences and other forms of computer-generated imagery. Developers like Niantic could also incorporate scanning into existing games or experiences that already leverage some form of AR.
The notion of rapid 3D scanning in a compact, multi-lens form factor is by no means new to the market. Sensor maker Occipital’s tablet-mounted device established a precedent for the commercialization of discrete mass-market scanning hardware as early as 2014. Furthermore, the underlying technology that powers such hardware, known as structured-light 3D scanning, has long underpinned various forms of computer vision. Examples such as Microsoft’s Kinect platform and Google’s Tango platform for simultaneous localization and mapping (SLAM) employed structured-light 3D scanning as one of several data sources for high-quality AR.
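The principle behind structured-light scanning can be sketched in a few lines: a projector casts known stripe patterns onto the scene, the camera records which stripe lands on each pixel, and depth follows from projector-camera triangulation. The sketch below is purely illustrative, not a description of any vendor's implementation; the Gray-code decoding scheme and all numeric parameters are assumptions chosen for clarity.

```python
# Illustrative sketch of structured-light depth recovery (hypothetical
# parameters; not Samsung's, Occipital's, or Microsoft's implementation).

def gray_to_index(bits):
    """Decode a projector column index from observed Gray-code stripe bits.

    Each projected binary pattern contributes one bit per pixel; Gray
    coding ensures adjacent columns differ by a single bit, which makes
    decoding robust to stripe-boundary noise.
    """
    prev = bits[0]
    index = prev
    for b in bits[1:]:
        prev ^= b                    # Gray -> binary, bit by bit
        index = (index << 1) | prev  # accumulate the binary value
    return index

def depth_from_correspondence(f_px, baseline_m, cam_col, proj_col):
    """Triangulate depth (metres) for one pixel.

    Assumes a rectified projector-camera pair: f_px is the focal length
    in pixels, baseline_m the projector-camera separation, and the
    disparity is the horizontal offset between where the stripe was
    projected from and where the camera saw it.
    """
    disparity = cam_col - proj_col
    if disparity <= 0:
        raise ValueError("non-positive disparity: no valid triangulation")
    return f_px * baseline_m / disparity
```

For example, with an assumed 1000 px focal length and 5 cm baseline, a 10-pixel disparity triangulates to 5 m. Real devices refine this with sub-pixel decoding and lens-distortion correction, but the depth-from-disparity relation is the core of the technique.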
The remainder of this analysis is restricted to Intelligence Service subscribers. Contact us to receive our Intelligence Services.