Apple Vision Pro Feedback

I originally wrote this as feedback to the most excellent Accidental Tech Podcast, but I thought it was worth sharing. It’s been modified slightly to make it more accessible.


As a photographer, a Computer Graphics Supervisor at ILM Sydney, and the developer of a simple Apple Vision Pro app that I managed to get approved for launch day, I have a few thoughts on some of the reactions to Apple's latest product launch.

P3 Colour Space Coverage

Some commentators have pointed out that the AVP covers only 92% of the DCI-P3 colour gamut. P3 was originally developed to provide a wider gamut for projects shot on CMYK film stock. You can see the differences in this diagram comparing print film, DCI-P3 and ITU-709/sRGB:

I was the lighting supervisor on the film Happy Feet, which was designed to be viewed primarily on film (it came out in 2006, after all). It has lots of colours in the cyan/aqua range, where, as you can see in the diagram, there is a big mismatch even between DCI-P3 and print film. We struggled to get accurate colour representation with the tools at the time.

Now, with most (but not all) content originating in RGB rather than CMYK, this is less of an issue. However, the wider gamut of DCI-P3, and the even wider BT.2020, allow for better reproduction of those extremely saturated cyans and reds:

In practice though, most people would not notice the absence of those colours. In my experience, contrast is far more important than colour. Personally, I'd much rather have solid blacks, good contrast and no local dimming issues than a wider gamut.

If you have a wide-gamut display (eg. any modern Apple display), you can see some gamut comparisons here. In many cases, to see the benefit of P3 you have to oversaturate the image to the point of it looking unnatural.
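To put a rough number on the size difference: gamuts are usually drawn as triangles in CIE xy chromaticity space, and you can compare their areas directly from the published primaries. This is a crude measure (xy space is not perceptually uniform, so triangle area doesn't map neatly to "number of visible colours"), and the sketch below is my own, not output from any colour-management library:

```python
# Compare sRGB/Rec.709 and DCI-P3 gamut triangle areas in CIE xy space.
# Vertices are the published (x, y) chromaticities of the R, G, B primaries.
SRGB = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
DCI_P3 = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]

def triangle_area(pts):
    """Shoelace formula for the area of a triangle given three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

coverage = triangle_area(SRGB) / triangle_area(DCI_P3)
print(f"sRGB covers {coverage:.0%} of the DCI-P3 triangle")  # roughly 74%
```

The caveat matters: a percentage of triangle area in xy space tells you little about what you'd actually perceive, which is part of why the AVP's missing 8% of P3 is so hard to notice in practice.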

Resolution

I have some experience shooting spherical panoramic photographs. This means I have a supply of 200-megapixel equirectangular images I can test as immersive environments. You can definitely tell the difference between a spherical panorama at 8k x 4k and one at 16k x 8k. I had issues getting the AVP to display my 16k images without problems (more on that later), so the initial release of my app only uses uncompressed 8k.
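A quick way to see why the difference is visible is angular resolution. An equirectangular panorama wraps 360 degrees horizontally, so pixels-per-degree is just width divided by 360 (I'm assuming "8k" means 8192 pixels here; the exact figures shift a little with the real widths):

```python
# Angular resolution of a full 360-degree equirectangular panorama.
def pixels_per_degree(width_px: int) -> float:
    """Horizontal pixels per degree of view for a 360-degree image."""
    return width_px / 360.0

for width in (8192, 16384):
    print(f"{width:>6} px wide -> {pixels_per_degree(width):4.1f} px/degree")
```

At roughly 23 px/degree for 8k versus 45 px/degree for 16k, both sit below the ~60 px/degree often quoted for 20/20 acuity, so the jump from 8k to 16k is still comfortably within the range the eye can distinguish.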

This benefit would apply to any 360-degree content, including video. I don't know of any streaming platforms that support 16k delivery (and the bandwidth would probably be prohibitive), so it will be interesting to see how this develops.
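For a sense of why the bandwidth would be prohibitive, here's a back-of-envelope estimate. The compressed bits-per-pixel figure is a loose assumption on my part (modern codecs at streaming quality often land somewhere around 0.05-0.1 bits per pixel); none of these numbers come from any actual platform:

```python
# Back-of-envelope bitrate for 16k x 8k 360-degree video.
WIDTH, HEIGHT = 16384, 8192
FPS = 30
BITS_PER_PIXEL = 0.07  # assumed codec efficiency at streaming quality

pixels_per_frame = WIDTH * HEIGHT
mbps = pixels_per_frame * BITS_PER_PIXEL * FPS / 1e6
print(f"{pixels_per_frame / 1e6:.0f} MP/frame -> roughly {mbps:.0f} Mbit/s")
```

Even under generous codec assumptions that's hundreds of megabits per second, an order of magnitude beyond what a typical 4k stream uses today.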

Depth in 3D Movies

People talk about 3D content having 'depth' or 'depth information'. I've also worked on a number of 3D movies, including CG animated films like Legend Of The Guardians and live-action post-converted films like Harry Potter and the Deathly Hallows: Part 2. To clarify - no 3D movie has any 'depth' information. They just have two independent streams of video - one for each eye. The 'depth' comes purely from the viewer's brain interpreting the disparity between those two streams. From what I've read, this is no different on the AVP. Spatial video is not really 'spatial'; it's just stereoscopic 3D.
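The "depth comes from disparity" point can be made concrete with a little similar-triangles geometry. This is a generic stereoscopy model, not anything AVP-specific:

```python
# Apparent depth of a fused point from its on-screen disparity.
# Eyes of separation e sit at distance D from the screen; a point drawn
# with horizontal disparity d (right-eye position minus left-eye position)
# fuses at depth z = D * e / (e - d) by similar triangles.
def apparent_depth(screen_distance_m, eye_separation_m, disparity_m):
    return screen_distance_m * eye_separation_m / (eye_separation_m - disparity_m)

D, e = 2.0, 0.065  # 2 m screen, ~65 mm interpupillary distance
print(apparent_depth(D, e, 0.0))     # zero disparity: point sits on the screen
print(apparent_depth(D, e, -0.065))  # crossed disparity: point floats in front
```

Note that the two video streams carry no z values at all: change the viewing geometry (screen size, seat position) and every 'depth' in the film changes with it.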

3D Conversion

There is a big difference between films that were shot with 3D cameras (eg. Avatar) and ones that are converted afterwards. Shooting in 3D still gives a better result, but the conversion process has improved. It involves a combination of lots of rotoscoping, reprojection of the 2D frame onto proxy 3D geometry, and (increasingly) ML-driven depth estimation.
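The reprojection step can be sketched, very naively, as shifting each pixel horizontally by a depth-proportional disparity and patching the holes that open up behind foreground objects. Real conversion pipelines are vastly more sophisticated - this toy ignores occlusion ordering, sub-pixel accuracy and proper inpainting, and every name in it is my own:

```python
import numpy as np

def synthesize_right_eye(frame, depth, max_disparity_px=20):
    """Toy depth-image-based rendering: shift each pixel horizontally by an
    amount proportional to its normalized depth (1.0 = nearest), then fill
    the disoccluded holes with the nearest value to the left.
    frame: (H, W, 3) array; depth: (H, W) array in [0, 1]."""
    h, w = depth.shape
    out = np.zeros_like(frame)
    filled = np.zeros((h, w), dtype=bool)
    shifts = np.round(depth * max_disparity_px).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - shifts[y, x]  # nearer pixels are displaced further
            if 0 <= nx < w:
                out[y, nx] = frame[y, x]
                filled[y, nx] = True
        for x in range(1, w):       # crude hole fill, left to right
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out
```

With a flat depth map the output is just the input frame; the interesting (and expensive) work in real pipelines is producing a clean depth estimate and inventing plausible pixels for the regions only one eye can see.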

Depth-of-field in 3D movies

Other people who have tried the hardware have mentioned that limited depth of field is an issue in 3D movies. It's common practice for the 3D version of an animated film to have reduced defocus (i.e. increased depth of field) to alleviate the frustration a viewer might feel in not being able to focus on different parts of the screen. Also, if the story is being told correctly, the viewer should be creating a similar effect by converging their eyes on the point in the frame the director intends, so defocus/bokeh is not as necessary.

Differences between the AVP simulator and the hardware

I can't say too much about this without breaking the Developer Lab NDA, but this was my first time developing for any of Apple's platforms. Overall, I found the experience really positive, the simulator worked well, and I got good feedback from App Review. However, I did run into issues where I had a bug that manifested on the hardware but not in the simulator (and vice-versa) and it was only by attending an AVP lab that I managed to resolve it (it's also related to why I don't have 16k images in the app). In fact, the current, approved version of my app does not run properly in the simulator.

I’m not sure if this is common, but I’ve heard independent developers talk about having to own many test devices, so I assume it is. Being in Australia makes future AVP development challenging. I have one in transit from the US, so hopefully it will arrive safely.
