
New ARCore Capabilities Announced At Google I/O 2021


Google’s augmented reality platform for developers, ARCore, has powered compelling experiences across gaming, navigation, e-commerce and social media since its launch in 2018.

ARCore, used to develop AR experiences for Android, powers AR experiences for more than 850 million smartphone users and is installed on over one billion devices globally. To enable seamless integration of the virtual and the real, Google has been building out the ARCore skill tree, empowering developers with foundational capabilities around realism, perception and asynchronous interaction.

On the second day of Google I/O, Google announced new capabilities for ARCore. The session was hosted by Rajat Paharia, Product Lead of ARCore, and Jared Finder, Engineering Manager of ARCore.

Realism in AR 

The realism branch of the ARCore skill tree includes foundational AR capabilities around motion tracking, lighting, and depth.

  • Realism capabilities allow developers to put digital content into the physical world so that users cannot tell the difference between what is real and what is not. 
  • Motion tracking enables virtual objects to stay in place even when the camera is moving. 

“In the last year, we made ARCore’s motion tracking even more robust with an up to 36 percent decrease in CPU usage and a dramatic reduction in the number of tracking resets, in many cases eliminating them completely,” Rajat said. 

  • Lighting: ARCore’s Environmental HDR understands the lighting in the scene so that virtual objects can look and behave like the real ones (a short configuration sketch follows this list). 
  • Depth API: It uses a single standard smartphone camera to provide a depth map, making experiences more immersive. “Your virtual objects can now interact with the real world, rather than being stickers on the screen,” Rajat said. 
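
As an illustration of how the lighting piece looks in practice, here is a minimal Kotlin sketch based on the publicly documented ARCore Android SDK rather than anything shown in the session; the helper function names are illustrative.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.LightEstimate
import com.google.ar.core.Session

// Illustrative helper: switch an existing ARCore session to Environmental HDR
// light estimation.
fun enableEnvironmentalHdr(session: Session) {
    session.configure(
        session.config.apply {
            lightEstimationMode = Config.LightEstimationMode.ENVIRONMENTAL_HDR
        }
    )
}

// Illustrative helper: read the per-frame light estimate and hand the values
// to the renderer so virtual objects are lit like the real scene.
fun readLightEstimate(frame: Frame) {
    val estimate: LightEstimate = frame.lightEstimate
    if (estimate.state != LightEstimate.State.VALID) return

    val mainLightDirection = estimate.environmentalHdrMainLightDirection            // float[3]
    val mainLightIntensity = estimate.environmentalHdrMainLightIntensity            // float[3], RGB
    val ambientSphericalHarmonics = estimate.environmentalHdrAmbientSphericalHarmonics // float[27]
    // Feed these values into the shading model of your renderer.
}
```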

With the Depth API, ARCore supports particle effects, interactivity, occlusion and lighting effects. 
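
For developers, the Depth API is exposed through a depth mode on the session configuration and a per-frame depth image. The sketch below assumes the documented ARCore Android SDK; the helper names are illustrative.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException

// Illustrative helper: enable the Depth API only on devices that support it.
fun enableDepth(session: Session) {
    session.configure(
        session.config.apply {
            if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
                depthMode = Config.DepthMode.AUTOMATIC
            }
        }
    )
}

// Illustrative helper: grab the latest depth image each frame. The renderer can
// sample it to occlude virtual objects that sit behind real-world geometry.
fun sampleDepth(frame: Frame) {
    try {
        frame.acquireDepthImage().use { depthImage ->
            // 16-bit depth values in millimetres, one per pixel.
            // Upload to a texture and compare against virtual depth in the shader.
        }
    } catch (e: NotYetAvailableException) {
        // Depth is not available for the first few frames after enabling the mode.
    }
}
```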

Perception 

Perception capabilities enable an app to detect certain objects in a scene and help the user augment them. They include Instant Placement, Augmented Images, ML models, Augmented Faces, and ARCore’s latest addition, the Raw Depth API.

  • Instant Placement enables users to place virtual content without first having to pan and scan to find a plane (see the sketch after this list). “With Instant Placement, we have seen a 16 percent increase in placement success and a 26 percent decrease in placement time,” Jared said. 
  • The Augmented Images API tracks moving images, keeping the attached content in place. For users who want to create face effects, ARCore offers the Augmented Faces API, which provides a high-quality, 468-point 3D mesh for adding effects. This feature is available on both Android and iOS.
  • ML models: ARCore allows developers to integrate their own ML models into AR experiences.
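
The Instant Placement flow mentioned above boils down to one configuration flag and a dedicated hit test. Here is a minimal sketch, assuming the documented ARCore Android SDK; the helper names and the 2.0 metre distance guess are illustrative.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.InstantPlacementPoint
import com.google.ar.core.Session

// Illustrative helper: turn on Instant Placement so content can be placed
// before plane detection has finished.
fun enableInstantPlacement(session: Session) {
    session.configure(
        session.config.apply {
            instantPlacementMode = Config.InstantPlacementMode.LOCAL_Y_UP
        }
    )
}

// Illustrative helper: place content at a screen tap immediately; ARCore refines
// the pose as it learns more about the scene.
fun placeAtTap(frame: Frame, tapX: Float, tapY: Float): InstantPlacementPoint? {
    // 2.0f is an app-chosen guess of how far away the content should appear.
    val hits = frame.hitTestInstantPlacement(tapX, tapY, 2.0f)
    return hits.firstOrNull()?.trackable as? InstantPlacementPoint
}
```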

Asynchronous interaction 

This branch lets developers build AR experiences in which users can interact with places and with each other, across space and time. The capabilities include Cloud Anchors, and Recording and Playback. 

  • Cloud Anchors lets users annotate the world with AR content, building constantly evolving layers on top of the real world and creating location-based experiences. These annotations can be experienced on both Android and iOS (a hosting and resolving sketch follows). 
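
In code, this maps onto hosting a local anchor to Google’s cloud service and later resolving it by ID on another device. A minimal sketch, assuming the documented ARCore Cloud Anchors API for Android; helper names are illustrative.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Config
import com.google.ar.core.Session

// Illustrative helper: Cloud Anchors must be enabled on the session config first.
fun enableCloudAnchors(session: Session) {
    session.configure(
        session.config.apply {
            cloudAnchorMode = Config.CloudAnchorMode.ENABLED
        }
    )
}

// Illustrative helper: host a local anchor so other devices (Android or iOS)
// can later resolve the same physical spot from its cloud anchor ID.
fun hostAnchor(session: Session, localAnchor: Anchor): Anchor =
    session.hostCloudAnchor(localAnchor)

// Illustrative helper: resolve a previously hosted anchor on another device.
fun resolveAnchor(session: Session, cloudAnchorId: String): Anchor =
    session.resolveCloudAnchor(cloudAnchorId)

// Hosting and resolving are asynchronous; poll the anchor each frame until done.
fun isReady(anchor: Anchor): Boolean =
    anchor.cloudAnchorState == Anchor.CloudAnchorState.SUCCESS
```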

New releases 

Google announced the launch of the Raw Depth API, which enables developers to build more accurate ‘measurement, reconstruction and interaction apps’ by giving them access to a more detailed point-cloud representation than the standard Depth API provides. 
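
In the Android SDK, raw depth is requested through a dedicated depth mode, and every raw depth image comes with a confidence image so apps can discard unreliable pixels. A minimal sketch, assuming the documented ARCore API; helper names are illustrative.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException

// Illustrative helper: request raw (unsmoothed) depth where the device supports it.
fun enableRawDepth(session: Session) {
    session.configure(
        session.config.apply {
            if (session.isDepthModeSupported(Config.DepthMode.RAW_DEPTH_ONLY)) {
                depthMode = Config.DepthMode.RAW_DEPTH_ONLY
            }
        }
    )
}

// Illustrative helper: read raw depth plus its confidence image for one frame,
// e.g. to build a filtered point cloud for measurement or reconstruction.
fun sampleRawDepth(frame: Frame) {
    try {
        frame.acquireRawDepthImage().use { rawDepth ->
            frame.acquireRawDepthConfidenceImage().use { confidence ->
                // rawDepth: 16-bit depth in millimetres; confidence: one byte per pixel.
                // Keep only high-confidence pixels before converting them to 3D points.
            }
        }
    } catch (e: NotYetAvailableException) {
        // No raw depth is available for this frame yet.
    }
}
```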

The tech giant also announced the launch of the ARCore Recording and Playback API. “For your developer velocity, you can now record a video and then play it back through ARCore,” Jared said. 

He said developers or businesses looking to build an experience in a shopping mall need not go to the mall every time they want to test a change; they can record their visit once and test from the comfort of their own desk. Enterprises can also enable post-capture AR experiences for users, he added.
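
A minimal sketch of that workflow, assuming the documented ARCore Recording and Playback API for Android (helper names and file handling are illustrative): record a session to an MP4 dataset once, then replay it through ARCore instead of the live camera.

```kotlin
import com.google.ar.core.RecordingConfig
import com.google.ar.core.Session

// Illustrative helper: record the running session (camera and sensor data)
// to an MP4 dataset on device storage.
fun startRecording(session: Session, mp4Path: String) {
    val recordingConfig = RecordingConfig(session)
        .setMp4DatasetFilePath(mp4Path)   // e.g. a file in the app's external files directory
        .setAutoStopOnPause(true)
    session.startRecording(recordingConfig)
}

fun stopRecording(session: Session) {
    session.stopRecording()
}

// Illustrative helper: replay a recorded dataset through ARCore instead of the
// live camera feed, so the same "visit" can be tested repeatedly from a desk.
fun playBack(session: Session, mp4Path: String) {
    // Must be set while the session is paused, before resuming it.
    session.setPlaybackDataset(mp4Path)
    session.resume()
}
```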

PS: The story was written using a keyboard.

Debolina Biswas

After diving deep into the Indian startup ecosystem, Debolina is now a Technology Journalist. When not writing, she is found reading or playing with paint brushes and palette knives. She can be reached at debolina.biswas@analyticsindiamag.com