With the release of the Vision Pro headset, Apple entered the VR market with a splash. In classic Apple fashion, the headset was presented as the next step in computing, with the tagline ‘Apple’s first spatial computer’. Even though this move might seem a bit out of left field for Apple, the reality is that the tech giant has slowly been moving towards a mixed-reality headset for years.
Building on many facets of its research, the Vision Pro brings together advancements from across Apple’s product line into a single device. Looking at what the Vision Pro brings to the table, one thing is abundantly clear — no one could have done it but Apple.
VR Isn’t a Cakewalk
Magic Leap, an XR company established in 2010, reached a working product only after more than a decade of constant research and development. The product also had to be refined over multiple iterations, exemplifying the difficulties involved in building a working headset. Its latest device, the Magic Leap 2, required the company to file 1,763 patents to bring it to market.
On Monday, while announcing Vision Pro, Apple revealed that it has filed over 5,000 patents to make the VR headset a reality.
Not only has Apple broken new ground with the product, but it has also combined expertise from all of its verticals — OS design, chipmaking, graphics, cameras, machine learning, and secure payments. All of these have been focal points of Apple’s devices for the past decade. Bringing them together into XR is not only fitting, but the natural next step for the tech giant.
XR, the umbrella term for augmented, mixed, and virtual reality, is a natural fit for Apple. It is arguably the most technologically demanding vertical to compete in, as it requires research and development across many discrete fields to work in concert.
Looking at what Apple has created, the company has not only embraced vertical integration, but perfected it. Unbeknownst to the world, Apple has been building the tech stack for the Vision Pro for the past 10 years.
The Vision Pro’s Family Tree
If one were to map the feature set of the Vision Pro, we would find family roots in unlikely places. For example, the M-series chip at the heart of the headset can be traced back to the A11 Bionic. Powering the iPhone 8 and iPhone X, this chip marked Apple’s first custom-designed GPU and Neural Engine, and laid the foundation for the M1 processor in MacBooks.
Apple also has considerable expertise in integrating the hardware and software stack for its devices — from its MacBooks and iPhones to its iPads and watches. Each of these products has its own OS, and the Vision Pro is no different. Building on this experience, Apple launched visionOS alongside the Vision Pro, creating an operating system for spatial computing.
Machine learning and Apple now go hand-in-hand. While the company has steered clear of ‘AI’ as a buzzword, its products are chock-full of ML. Ever since Apple adopted an ML-first approach, marked by the hiring of ex-Googler John Giannandrea, it has integrated ML in places people would never have thought possible, or even necessary. From image processing to FaceID to sound recognition, Apple is no stranger to ML features. Now, this has evolved into ML-powered facial reconstruction in the Vision Pro, along with other features yet to be revealed.
Similarly, the depth-sensing technology that makes full 3D tracking possible in the Vision Pro, dubbed TrueDepth by Apple, made its debut with the iPhone X. That phone’s killer feature, FaceID, was made possible by the TrueDepth sensor. Now, the Vision Pro carries two of them.
Cameras have always been one of the biggest selling points of Apple’s phones. What started with high-quality lenses and sensors evolved into a custom image signal processor and video encoder baked into the SoC. Undoubtedly, this technology also serves the Vision Pro’s 12 cameras, especially for 4K video pass-through in near real-time.
This upgrade path is mirrored in the iPhone’s growing emphasis on augmented reality. Over successive generations, iPhones, and even iPads, added LiDAR sensors to their camera arrays. Along with building a robust developer ecosystem for AR through ARKit, this laid the groundwork for the Vision Pro’s LiDAR.
Apple inevitably faced the problem of latency when wiring so many sensors and cameras into the device, but it came up with a distinctly Apple solution. Employing its chipmaking expertise, the company created the R1 chip specifically to process information from the sensors. By streaming new images to the displays with a blazing-fast latency of 12ms, the Apple Silicon-powered R1 all but removes the possibility of motion sickness.
Other honourable mentions include the creation and eventual perfection of spatial audio, eye tracking via the user-facing sensor array, integration with Apple Pay for secure payments, and ML-powered 3D camera technology.
When looking at the various advancements Apple has brought to the table, it is clear that the golden thread of innovation can be followed through the labyrinth of products. With the launch of the Vision Pro, Apple has taken this thread, wrapped it up in a bow, and presented it as its crown jewel, in what might be the next big step in computing.