On January 15th, media outlets reported that Apple had acquired edge computing startup Xnor.ai. The Seattle-based company is known for developing AI and machine learning tools for efficient, low-power computing environments deployable at the edge. The purchase is the latest in a long series of technology-focused acquisitions by Apple, and Xnor.ai's technology is expected to significantly reduce the iOS platform's reliance on networking and data transfer with the cloud for certain compute-intensive tasks. Xnor.ai had raised just over $14 million in seed and venture funding as of May 2018.
The acquisition has already invited comparisons to Apple's 2016 purchase of Turi, reportedly for a similar sum. Turi's technology portfolio was likewise expected to aid Apple's development of computer vision algorithms and other machine learning capabilities.
Deployment of machine learning at the edge has rapidly emerged over the last nine months as a critical goal for major mobile computing stakeholders. Advancements in on-device ML from Google have enabled applications such as local transcription and live captioning. Mobile AR in particular stands to benefit greatly from improvements to on-device ML: functions such as SLAM-based tracking will become more accurate, and more efficient on-device processing will help mitigate the significant battery drain associated with mobile AR in general.
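For context, the sketch below shows how on-device SLAM-based tracking is already exposed to iOS developers through Apple's ARKit; the tracking loop runs entirely on the local SoC with no cloud round trip. The view-controller wiring is illustrative only, not a prescribed integration.

```swift
import UIKit
import ARKit

// Minimal sketch: run ARKit world tracking, which performs SLAM entirely on
// the device's SoC by fusing camera frames with IMU data.
final class TrackingViewController: UIViewController, ARSessionDelegate {
    private let session = ARSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        session.delegate = self

        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        session.run(configuration)
    }

    // Called roughly once per frame; tracking quality (and the battery cost
    // of maintaining it) is reported by the on-device estimator.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        switch frame.camera.trackingState {
        case .normal:
            break // pose estimates are reliable
        case .limited(let reason):
            print("Tracking limited: \(reason)")
        case .notAvailable:
            print("Tracking not available")
        }
    }
}
```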
Crucially, platform providers such as Apple, Google, and Facebook have further interest in reducing the amount of data shared between handsets and the cloud. Allowing functions such as tracking, transcription, and image processing to be handled exclusively by local SoCs and other forms of edge computing hardware will lessen the load on proprietary data servers and networks, yielding further cost savings for platform providers along with ancillary benefits for user privacy and security.
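As a concrete illustration of keeping one such workload off the cloud, the sketch below uses Apple's Speech framework with its requiresOnDeviceRecognition flag (available since iOS 13) to force transcription to run on the local SoC. The file URL and locale are placeholders, and speech-recognition authorization is assumed to have been granted.

```swift
import Speech

// Sketch: transcribe an audio file entirely on-device. Setting
// requiresOnDeviceRecognition keeps audio off the network; the locale and
// file URL are placeholder values.
func transcribeLocally(fileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else {
        print("On-device recognition unavailable for this locale/device")
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    request.requiresOnDeviceRecognition = true // audio never leaves the handset

    _ = recognizer.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        } else if let error = error {
            print("Recognition failed: \(error)")
        }
    }
}
```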
Overall, innovation in computing at the edge represents an extremely important long-term goal for the entire spatial computing industry; technology like Xnor.ai's will afford developers of xR software and systems significantly more computing power for processing, graphics rendering, and tracking. AR stakeholders should be aware of the specific computing tasks currently finding success with machine learning, as these will be best positioned for deployment in AR environments. For instance, live AR translation enabling rapid bilingual conversation has long been proposed as an augmented reality use case, but its readiness will be more immediately affected by the deployment of edge AI than by other major technology drivers such as 5G.