Inside Info From an Apple Vision Pro Developer

2023-06-15
4 mins read

Sterling Crispin is an AR/VR specialist who worked at Apple as one of Vision Pro's neurotechnologists. In a public post, Crispin elaborates on the fascinating technologies and capabilities of the Vision Pro, as well as the expected evolution of VR/AR devices (especially Apple's). The post touches on intriguing concepts like biofeedback, brain-computer interfaces, neurotechnology, mindfulness indications, and mental-state prediction. Read on.

Apple Vision Pro. A pure anomaly

Insights from a Vision Pro developer

One of our recent articles deals with 'spatial editing.' Titled The Era of Spatial Editing: Vision Pro and Final Cut Pro, it argues that the newly announced Apple Vision Pro, combined with Final Cut Pro for iPad, will make spatial editing possible: using Vision Pro's numerous sensors to refine your timeline while editing professionally. That may sound complicated, or like overkill. However, reading about Apple's AR/VR plans for the Vision Pro can convince you that this is the future of editing. Sterling Crispin is an AR/VR specialist who worked at Apple as one of Vision Pro's neurotechnologists. In a public post, he reveals some of the development processes behind Vision Pro and what is planned for the near future. Here are his words:

I spent 10% of my life contributing to the development of the #VisionPro while I worked at Apple as a Neurotechnology Prototyping Researcher in the Technology Development Group. It’s the longest I’ve ever worked on a single effort. I’m proud and relieved that it’s finally announced. I’ve been working on AR and VR for ten years, and in many ways, this is a culmination of the whole industry into a single product. I’m thankful I helped make it real, and I’m open to consulting and taking calls if you’re looking to enter the space or refine your strategy.

The work I did supported the foundational development of Vision Pro, the mindfulness experiences, ▇▇▇▇▇▇ products, and also more ambitious moonshot research with neurotechnology. Like, predicting you’ll click on something before you do, basically mind reading. I was there for 3.5 years and left at the end of 2021, so I’m excited to experience how the last two years brought everything together. I’m really curious about what made the cut and what will be released later on.

Specifically, I’m proud of contributing to the initial vision, strategy, and direction of the ▇▇▇▇▇▇ program for Vision Pro. The work I did on a small team helped green light that product category, and I think it could have a significant global impact one day.

The large majority of work I did at Apple is under NDA and was spread across a wide range of topics and approaches. But a few things have become public through patents which I can cite and paraphrase below.

Generally as a whole, a lot of the work I did involved detecting the mental state of users based on data from their body and brain when they were in immersive experiences.

So, a user is in a mixed reality or virtual reality experience, and AI models are trying to predict if you are feeling curious, mind wandering, scared, paying attention, remembering a past experience, or some other cognitive state. And these may be inferred through measurements like eye tracking, electrical activity in the brain, heartbeats and rhythms, muscle activity, blood density in the brain, blood pressure, skin conductance, etc.

There were a lot of tricks involved to make specific predictions possible, which the handful of patents I’m named on go into detail about. One of the coolest results involved predicting a user was going to click on something before they actually did. That was a ton of work and something I’m proud of. Your pupil reacts before you click in part because you expect something will happen after you click. So you can create biofeedback with a user’s brain by monitoring their eye behavior and redesigning the UI in real time to create more of this anticipatory pupil response. It’s a crude brain-computer interface via the eyes, but very cool. And I’d take that over invasive brain surgery any day.

Other tricks to infer cognitive state involved quickly flashing visuals or sounds to a user in ways they may not perceive, and then measuring their reaction to it.

Another patent goes into detail about using machine learning and signals from the body and brain to predict how focused, or relaxed you are, or how well you are learning. And then updating virtual environments to enhance those states. So, imagine an adaptive immersive environment that helps you learn, or work, or relax by changing what you’re seeing and hearing in the background.

All of these details are publicly available in patents and were carefully written to not leak anything. There was a ton of other stuff I was involved with, and hopefully, more of it will see the light of day eventually.

A lot of people have waited a long time for this product. But it’s still one step forward on the road to VR. And it’s going to take until the end of this decade for the industry to fully catch up to the grand vision of this tech.

Again, I’m open to consulting work and taking calls if your business is looking to enter the space or refine your strategy. Mostly, I’m proud and relieved this has finally been announced. It’s been over five years since I started working on this, and I spent a significant portion of my life on it, as did an army of other designers and engineers. I hope the whole is greater than the sum of the parts and Vision Pro blows your mind.

Sterling Crispin, AR/VR specialist who worked at Apple as one of Vision Pro’s neurotechnologists
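To make the "anticipatory pupil response" idea from the quote above a bit more concrete, here is a minimal toy sketch. To be clear: every name, number, and threshold below is our own invention for illustration only; this is not Apple's method, nor any real Vision Pro API, just a crude stand-in for the concept of flagging a click before it happens when the pupil dilates above its recent baseline.

```python
# Toy illustration of anticipatory-pupil click prediction.
# All values are invented for illustration; nothing here reflects
# Apple's actual implementation or any real Vision Pro API.

from collections import deque

BASELINE_WINDOW = 30       # samples in the rolling pupil-size baseline
DILATION_THRESHOLD = 0.15  # relative dilation treated as "anticipatory"

def predict_click(pupil_samples, window=BASELINE_WINDOW,
                  threshold=DILATION_THRESHOLD):
    """Return True if any sample is dilated well above the rolling
    baseline of the preceding samples (a crude 'about to click' flag)."""
    history = deque(maxlen=window)
    prediction = False
    for sample in pupil_samples:
        if len(history) == window:
            baseline = sum(history) / window
            if sample > baseline * (1 + threshold):
                prediction = True
        history.append(sample)
    return prediction

# A steady pupil stream vs. one ending in a sharp dilation
steady = [3.0] * 30
dilated = steady + [3.6]
print(predict_click(steady))   # False: no dilation above baseline
print(predict_click(dilated))  # True: sharp dilation past threshold
```

A real system would obviously fuse many noisy signals with machine-learning models, as Crispin describes; the point here is only the shape of the idea, comparing the current reading against a short rolling baseline.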
Welcome to the Era of Spatial Editing: Vision Pro and Final Cut Pro

Final thoughts

There are a lot of unusual words and concepts in Crispin’s post: biofeedback, brain-computer interface, neurotechnology, mindfulness indications, mental-state prediction, and more. In the near future, Vision Pro may be able to utilize those physiological states to enable more precise editing in Final Cut Pro. We’d guess this is one of the reasons Apple has stopped investing significant effort in further developing FCP for desktops and is now focused on sharpening Vision Pro’s capabilities in this regard. Anyway, these are our two cents. Let us know your thoughts.

Get the best of filmmaking!

Subscribe to Y.M.Cinema Magazine to get the latest news and insights on cinematography and filmmaking!

Yossy is a filmmaker who specializes mainly in action sports cinematography. Yossy also lectures about the art of independent filmmaking in leading educational institutes, academic programs, and festivals, and his independent films have garnered international awards and recognition.
Yossy is the founder of Y.M.Cinema Magazine.

2 Comments

  1. Why not create a machine that will edit even before we are shooting? Or one that will create the content we want without the need to shoot anything (I know it’s already possible)? And what if the editor is an ironic man, like me, or a frustrated one, or simply a liar, thinking one thing but doing another? I know someone is dreaming of creating a machine that will be even more creative than an artist, but then what? I’m afraid I’m only scared by all this. Do you really want some machine to be able to predict what you want to do when you make a click? I know it’s (almost) already happening; at least now they know your preferences according to what you click…

  2. Brainwave products like Muse and NeuroSky have been around for almost a decade. It won’t be long before these technologies are incorporated into VR and AR headsets. Imagine controlling your computer with your thoughts.


