Thank you again for pushing the web forward in visionOS 2, super exciting!
The latest WWDC24 video touched on VR experiences for visionOS 2.0 using WebXR; however, there was no mention of passthrough AR experiences.
Samples such as this one are not supported:
https://immersive-web.github.io/webxr-samples/immersive-ar-session.html
In Settings > Safari, there is a feature flag for the AR WebXR module, but enabling it did not seem to change anything.
Is this the expected behavior at this time? Any developer preview(s) we could try?
The new Mac virtual display feature on visionOS 2 offers a curved/panoramic window. I was wondering if this is simply a property that can be applied to a window, or if it involves an immersive mode or SceneKit/RealityKit?
Game Controller Input Limitations in visionOS Volumetric Windows
Hello Apple Developer Community,
I'm developing a game for visionOS and have encountered significant limitations with game controller input when using volumetric windows (WindowGroup with .volumetric style). I'd appreciate clarification on whether this is expected behavior and any guidance on best practices.
🧩 Issue Summary
When using a DualSense controller with a volumetric window in visionOS, only a subset of controller inputs are available to the app. The remaining inputs appear to be reserved by the system for UI navigation.
✅ Working Inputs (Volumetric Window)
D-Pad (all directions)
L3 (left thumbstick button click)
R3 (right thumbstick button click)
Menu button
Options button
❌ Not Working Inputs (Volumetric Window)
Left thumbstick analog movement (used for UI scrolling instead)
Right thumbstick analog movement (used for UI scrolling instead)
Face buttons (Cross, Circle, Square, Triangle / A, B, X, Y)
Shoulder buttons (L1, R1)
Triggers (L2, R2)
Key observation: When moving the left thumbstick in a volumetric window, the window's UI scrolls vertically instead of sending input to my app's GameController handlers. Similarly, face buttons seem to be reserved for system UI interactions.
⚙️ Implementation Details
I'm using the standard GameController framework (a condensed sketch of this setup follows the list below):
Connect to controller via GCController.controllers()
Access extendedGamepad profile
Set up valueChangedHandler and pressedChangedHandler for all inputs
Handlers confirmed registered via logging
Working inputs (D-Pad, L3, R3) trigger immediately and consistently
Non-working inputs (thumbsticks, face buttons) never trigger
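For reference, here is that registration in condensed form; the ControllerInput class and its print-based logging are illustrative, not my full implementation:

import GameController

final class ControllerInput {
    func start() {
        // Observe controllers that connect later as well as ones already present.
        NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { [weak self] note in
            if let controller = note.object as? GCController {
                self?.register(controller)
            }
        }
        GCController.controllers().forEach(register)
    }

    private func register(_ controller: GCController) {
        guard let gamepad = controller.extendedGamepad else { return }

        // Analog sticks: valueChangedHandler reports x/y in -1...1.
        gamepad.leftThumbstick.valueChangedHandler = { _, x, y in
            print("L-stick", x, y)        // never fires in a volumetric window
        }
        gamepad.rightThumbstick.valueChangedHandler = { _, x, y in
            print("R-stick", x, y)        // never fires in a volumetric window
        }

        // Face buttons, shoulders, and triggers.
        gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
            print("Cross/A", pressed)     // never fires in a volumetric window
        }
        gamepad.leftShoulder.pressedChangedHandler = { _, _, pressed in print("L1", pressed) }
        gamepad.leftTrigger.valueChangedHandler = { _, value, _ in print("L2", value) }

        // D-pad and stick clicks: these do fire in a volumetric window.
        gamepad.dpad.valueChangedHandler = { _, x, y in print("D-pad", x, y) }
        gamepad.leftThumbstickButton?.pressedChangedHandler = { _, _, pressed in print("L3", pressed) }
    }
}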
🧠 Critical Finding: ImmersiveSpace Works Perfectly
When testing the exact same code in an ImmersiveSpace (.mixed immersion style), all controller inputs work perfectly:
✅ Both thumbsticks provide full analog input
✅ All face buttons trigger their handlers
✅ All shoulder buttons and triggers work correctly
✅ 100% success rate with no intermittent issues
This suggests the issue isn't with my code, but rather how visionOS handles controller input differently between Volumetric Windows and ImmersiveSpace.
🧪 Test Environment
I created a minimal test project (Controller-Playground) to isolate the issue:
A simple ControllerTester class that registers all GameController handlers
A visual UI showing real-time input state
No game logic, RealityKit physics, or other complexity
Results
In volumetric window: Only D-Pad, L3, R3, Menu, Options work
In ImmersiveSpace: All inputs work perfectly
This confirms the limitation exists at the visionOS platform level, not in app code.
🧰 Attempted Workarounds
I tried the following without success:
Setting GCSupportsControllerUserInteraction = false in Info.plist
Setting UIRequiresFullScreen = true
Changing window styles (.plain, .volumetric)
Polling vs. handler-based input approaches
Various threading models (MainActor, separate thread)
Result: The only way to enable full controller support is to switch to ImmersiveSpace.
❓ Questions for Apple
Is this input reservation behavior in volumetric windows intended and documented?
Are game controllers expected to have limited functionality in volumetric windows while full functionality is reserved for ImmersiveSpace?
Is there a way to request full controller input access in a volumetric window, or is ImmersiveSpace the only option for complete controller support?
Where can I find official documentation about controller input differences between window types?
Are there any APIs or configuration options to disable system controller shortcuts in volumetric windows?
🎯 Impact
This limitation has a significant effect on game design and architecture:
Volumetric windows offer a multitasking-friendly, less immersive experience
ImmersiveSpace provides full controller support but may be more immersive than some games require
Games that only need basic D-Pad and button input can work fine in volumetric windows
Games requiring analog sticks or face buttons must currently use ImmersiveSpace
It would be very helpful if Apple could clarify or reference existing documentation regarding controller input handling in different visionOS window types. If such documentation doesn't exist yet, it might be valuable to include this information in future developer guides or best-practice documents.
🕹 Current Workaround
For now, I'm using:
D-Pad for character movement (digital 8-direction)
R3 (right stick click) as a substitute for the "X" button
This setup allows the game to function within a volumetric window, though full controller support still requires ImmersiveSpace.
📄 Request
If this is expected behavior, I may have simply missed the relevant documentation — could you please point me to any existing resources that explain this design?
If there isn't one yet, it would be great if future visionOS documentation could:
Clearly outline controller input behavior across window types
Provide guidance on when to use Volumetric Windows vs. ImmersiveSpace for games
Consider adding an API option to request full controller access when appropriate
If this is not expected behavior, I'm happy to file a detailed bug report with sample code.
💻 System Information
visionOS: Latest Simulator
Xcode: Latest version
Controller: Sony DualSense
Framework: GameController (standard extendedGamepad profile)
Test project: Minimal reproducible example available
Thank you for any clarification or guidance you can provide. This information would be valuable for many developers working on visionOS games.
Hi. I am mixing content destined for Vision Pro, locked to video. I have the AAX installer, but the ASAF video player demonstrated in the QuickTime videos is not included in the install package for Pro Tools. Would it be possible to post a link?
When assigning a ManipulationComponent to an Entity, SceneEvents.WillRemoveEntity is called for that Entity.
Expected behavior: the Entity is not removed from the Scene (even temporarily), and no SceneEvents are triggered as a result of assigning a ManipulationComponent.
FB20872220
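To make the repro concrete, here is a minimal sketch of what I mean (assuming a RealityView; the box entity and names are placeholders):

RealityView { content in
    let entity = ModelEntity(mesh: .generateBox(size: 0.1))
    entity.name = "box"
    content.add(entity)

    // Log any entity that is about to be removed from the scene.
    _ = content.subscribe(to: SceneEvents.WillRemoveEntity.self) { event in
        print("WillRemoveEntity fired for", event.entity.name)
    }

    // Expected: no SceneEvents fire as a side effect of adding the component.
    // Observed: WillRemoveEntity fires for the entity above.
    entity.components.set(ManipulationComponent())
}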
I want to let users place 2D/3D “artworks” on detected walls and have them reappear in exactly the same real‑world spot after quitting and relaunching the app (like widgets do, but for my own entities).
Environment:
Xcode 26, visionOS 2.0, RealityKit + ARKitSession/WorldTrackingProvider
Entities are parented to a holder that’s aligned to a wall via plane/mesh raycasts
What I’ve tried:
Create a WorldAnchor at placement, save UUID + full 4×4 transform
On next launch, re-create the WorldAnchor (or set the saved transform) and attach the entity
Gate restore on relocalization/mesh updates and disable all raycast/search after restore
Issue:
After relaunch, placement still resolves relative to current device pose, not the same wall position.
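For context, here is a condensed sketch of the save/restore flow described above (session setup simplified; wallTransform, store, holder, and artworkID are placeholders):

import ARKit
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()
try await session.run([worldTracking])

// Placement: anchor the artwork to the wall and remember the anchor's ID.
let anchor = WorldAnchor(originFromAnchorTransform: wallTransform)
try await worldTracking.addAnchor(anchor)
store.save(anchorID: anchor.id, for: artworkID)

// Next launch: wait for the saved anchor to be redelivered, then re-attach the holder.
for await update in worldTracking.anchorUpdates {
    guard update.anchor.id == store.savedAnchorID(for: artworkID) else { continue }
    switch update.event {
    case .added, .updated:
        holder.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
    case .removed:
        break
    }
}

As far as I understand, WorldAnchors added through WorldTrackingProvider should be persisted by the system and redelivered via anchorUpdates after relaunch, but that is not what I'm seeing.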
Questions:
Is there a public API in visionOS 2.0 to persist app‑managed world anchors across sessions (room‑fixed), e.g., AnchorStore or equivalent?
If not, what’s the recommended pattern to reliably restore wall‑anchored content?
Are persistence features mentioned for widgets/windows available to third‑party RealityKit entities?
Hi, I know it's possible to play equirectangular VR180 video, either SBS or MV-HEVC. For fisheye video, the only way I know of is to convert it into an AIVU for playback.
Is there any way to directly play fisheye video using AVPlayer? Thanks a lot!
Most models are only available as glb or fbx, so I usually re-export them to usdz using Blender.
When I import them into Reality Composer Pro, the mesh, textures, etc. look great, but in the Animation Library subsection all I can see is one default subtree animation.
In Blender I can see all available animations and play them individually. The default subtree animation just plays the default idle animation.
In fact, when I open the Nonlinear Animation view in Blender and select a different animation as the default, the exported usdz shows the newly selected animation as the default subtree animation.
I can see in the Apple sample apps that models can have multiple animations in their Animation Library.
I'm using the latest Blender (4.5); shouldn't the usdz exporter be handling this properly?
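In case it helps with debugging, a quick way to check how many animations actually made it into the exported USDZ is to load it in RealityKit and inspect availableAnimations (the file name here is a placeholder):

import RealityKit

let entity = try await Entity(named: "ExportedModel")
// If the exporter only wrote one clip, this prints 1 regardless of what Blender shows.
print("Animation count:", entity.availableAnimations.count)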
I'm currently implementing 180° / 360° immersive video for my app.
I easily implemented 360° by just applying a VideoMaterial to a flipped sphere.
But I'm stuck on 180°. I'm trying to implement it by applying a VideoMaterial to a hemisphere (half sphere), so that the material is visible on the front half of the sphere while the back half is transparent/clear.
Is there any advice / information / idea on how to implement this? Your help would be greatly appreciated.
Hi, I'm currently implementing 180° / 360° support for immersive video in my app.
I was able to implement 360° easily by just applying a VideoMaterial to a flipped sphere.
However, I'm a bit stuck on 180°. I want to implement it by applying the VideoMaterial to a hemisphere mesh, but since RealityKit doesn't provide a default function such as MeshResource.generateHemisphere yet, my idea is to make the VideoMaterial visible on the front half of the sphere and transparent on the back half, which I thought would make the sphere look like a hemisphere.
But I can't find a way to implement this. I would appreciate any advice / idea / information that might help.
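Since MeshResource doesn't offer a hemisphere generator, one workaround I'm considering is building the half sphere myself with MeshDescriptor. A rough sketch (UV orientation and triangle winding may need flipping for specific footage):

import Foundation
import RealityKit

// Generate a hemisphere covering 180° of longitude, meant to be viewed from inside.
func makeHemisphere(radius: Float, slices: Int = 64, stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var normals: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for stack in 0...stacks {
        let v = Float(stack) / Float(stacks)
        let phi = v * .pi                      // polar angle: 0 (top) ... π (bottom)
        for slice in 0...slices {
            let u = Float(slice) / Float(slices)
            let theta = u * .pi                // only 180° of longitude
            let x = radius * sin(phi) * cos(theta)
            let y = radius * cos(phi)
            let z = radius * sin(phi) * sin(theta)
            positions.append([x, y, z])
            normals.append(-SIMD3(x, y, z) / radius)   // inward-facing normals
            uvs.append([u, 1 - v])
        }
    }

    let columns = UInt32(slices + 1)
    for stack in 0..<UInt32(stacks) {
        for slice in 0..<UInt32(slices) {
            let a = stack * columns + slice
            let b = a + columns
            // Wind triangles so the inside of the hemisphere is the front face.
            indices += [a, a + 1, b, a + 1, b + 1, b]
        }
    }

    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.normals = MeshBuffers.Normals(normals)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

A ModelEntity using this mesh with a VideoMaterial would then only render video on the front half, with nothing drawn behind the viewer.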
I’m using ARKit + SceneKit (Swift) with ARWorldTrackingConfiguration and detectionImages to place a 3D object (USDZ via SCNScene(named:)) when a reference image is detected. While the image is tracked, the object stays correctly aligned.
Goal: When the tracked image is no longer visible, I want the placed node to remain visible and fixed at its last known pose (no drifting) as I move the camera.
What works so far: detect image → add node → track updates. When the image disappears → keep showing the node at its last pose.
Problem: After the image is no longer tracked, the node drifts as I move the device/camera. It looks like it’s still influenced by the (now unreliable) image anchor or accumulating small world-tracking errors.
Question: What’s the correct way in ARKit to “freeze” the node at its last known world transform once ARImageAnchor stops tracking, so it doesn’t drift?
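For reference, here is a minimal sketch of the "freeze" approach I have in mind, reparenting the content to the scene root at its last known world transform once the anchor reports isTracked == false (ViewController, the delegate wiring, and the "placedObject" node name are placeholders from my project):

import ARKit
import SceneKit

extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor,
              !imageAnchor.isTracked,
              let contentNode = node.childNode(withName: "placedObject", recursively: false)
        else { return }

        // Capture the last known world transform before detaching from the anchor's node.
        let frozenTransform = contentNode.simdWorldTransform
        contentNode.removeFromParentNode()
        renderer.scene?.rootNode.addChildNode(contentNode)
        contentNode.simdWorldTransform = frozenTransform
    }
}

Once reparented like this, any remaining movement would have to come from world-tracking drift itself rather than from the image anchor, so I'm unsure whether this is the intended pattern or if something else is recommended.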
In Reality Composer Pro, why is the Sky Sphere so much larger than the Sky Dome?
By my estimate, the Sky Sphere has a radius of 100 m, while the Sky Dome has a radius of only 12 m.
When using the new RealityKit Manipulation Component on Entities, indirect input will never translate the entity - no matter what settings are applied. Direct manipulation works as expected for both translation and rotation.
Is this intended behaviour? This is different from how indirect manipulation works on Model3D. How else can we get translation from this component?
visionOS 26 Beta 2
Build from macOS 26 Beta 2 and Xcode 26 Beta 2
Attached is reproducible sample code; I have tried this in other projects with the same results.
var body: some View {
    RealityView { content in
        // Add the initial RealityKit content
        if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
            ManipulationComponent.configureEntity(
                immersiveContentEntity,
                allowedInputTypes: .all,
                collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)]
            )
            immersiveContentEntity.position.y = 1
            immersiveContentEntity.position.z = -0.5

            var mc = ManipulationComponent()
            mc.releaseBehavior = .stay
            immersiveContentEntity.components.set(mc)

            content.add(immersiveContentEntity)
        }
    }
}
We were having an issue where the system rotate and scale gestures (two-handed gestures / RotateGesture3D and MagnifyGesture) were extremely difficult to register (make work) in the visionOS simulator.
The solution we found was to:
Launch your app in the simulator
Move the pointer on top of the 3D object for which you are testing rotation and scaling gestures.
Press and hold the Option key to display touch points (i.e., the two-handed gesture points).
While keeping the Option key pressed, release the pointer and re-engage it. I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and re-engage it" simply means lifting the three fingers and placing them on the trackpad again.
If you have kept the Option key pressed, you should now be able to rotate and scale the 3D object.
Context if you are interested:
Our issue also occurred in Apple's own gesture sample project, "Transforming RealityKit entities using gestures", at the link below.
Apple's article "Interacting with your app in the visionOS simulator" (link below) states, for two-handed gestures: "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points."
This simply did not work anymore for rotation and scaling gestures.
These gestures used to be a lot more responsive in Sonoma. Either the article should be updated to what I described above, or there is an issue. Our colleague who is using macOS Sonoma 14.6.1 with the latest release of Xcode is not having these issues.
Here is the list of configurations (troubleshooting we tried!) where it is difficult to achieve rotation and scaling gestures in the visionOS simulator:
macOS Sequoia 15.1 Beta, Xcode 16.1 RC with visionOS 2.1
macOS Sequoia 15.1 Beta, Xcode 16.1 RC with visionOS 2.0
macOS Sequoia 15.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.1
macOS Sequoia 15.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.0
macOS Sequoia 15.1 Beta, removed all Xcodes and installed the build from the App Store (Xcode 16.1)
macOS Sequoia 15.1 Beta, Xcode 16.0 with visionOS 2.0
Completely wiped and reset the entire development machine, then re-installed the latest releases of Sequoia (15.1) and Xcode (15.1)
Throughout this troubleshooting I often:
restarted both Xcode and the simulator
erased all derived data
erased all contents and settings from the simulators
performed fresh git clones
None of the above worked; only the workaround described above works at the moment. As you can probably deduce, finding the workaround was very time consuming, and we also wasted some development effort thinking our gesture code was at fault.
Hopefully this will help other devs.
Article Link:
https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator
Gesture sample project link:
https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
I've been experimenting with the Muse pen and understand that it can be accessed by my app through a SpatialTrackingSession, but is there any current or planned support for devices like this as general UI input, the way game controllers are? For example, using the button as a tap analogue for SwiftUI views.
I am building a 360° photo viewer in visionOS 26 that lets the user choose a 2:1 JPG and renders it onto a sphere mesh entity. To load the texture I use: TextureResource(contentsOf: url, options: options).
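For reference, this is roughly how the texture gets loaded and applied (the .color semantic, UnlitMaterial, and sphereEntity here are illustrative):

var options = TextureResource.CreateOptions(semantic: .color)
options.mipmapsMode = .allocateAndGenerateAll   // or .none

let texture = try await TextureResource(contentsOf: url, options: options)
var material = UnlitMaterial()
material.color = .init(texture: .init(texture))
sphereEntity.model?.materials = [material]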
I noticed two different behaviors here depending on the mipmapsMode option.
When setting "mipmapsMode: .none":
The graphic quality within the "gaze area" looks sharp and clear
The two poles (top and bottom) are perfectly rendered
Massive shimmer around the "gaze area"
When setting "mipmapsMode: .allocateAndGenerateAll":
The graphic looks slightly blurrier than in ".none" within the "gaze area"
The two poles are very blurry and hard to recognize the texture
Much less shimmer around the "gaze area"
My question: is there a way to get the sharp graphic quality of ".none" without the massive shimmer?
Thank you!
Screenshots:
mipmapsMode: .none
mipmapsMode: .allocateAndGenerateAll
Hello,
If you add a ManipulationComponent to a RealityKit entity and then continue adding entities, sooner or later you will encounter a crash with the following error message:
Attempting to move entity “%s” (%p) under “%s” (%p), but the new parent entity is currently being removed. Changing the parent/child entities of an entity in an event handler while that entity is already being reassigned is not supported.
CoreSimulator 1048 – Device: Apple Vision Pro 4K (B87DD32A-E862-4791-8B71-92E50CE6EC06) – Runtime: visionOS 26.0 (23M336) – Device Type: Apple Vision Pro
The problem occurs precisely with this code:
ManipulationComponent.configureEntity(object)
I adapted Apple's ObjectPlacementExample and made the changes available via GitHub.
The desired behavior is that I can add a ManipulationComponent to entities and RealityKit then runs stably and does not crash randomly.
GitHub Repo
Thanks
Andre
Hi everyone,
We’re developing a Unity project for Apple Vision Pro that connects PSVR2 Sense controllers for advanced interaction and input.
We’ve encountered a major limitation:
when the controller is not held close to the designated hand (e.g., resting on a table or held by the non-designated hand), the Sense controller enters a low-power or reduced-update mode. This results in noticeably reduced tracking update frequency and responsiveness until the controller is held again.
For certain use cases, this behavior is undesirable. In our case, it prevents continuous real-time tracking of the controller even when it’s stationary or being tracked externally.
Request:
Please consider exposing an API flag or developer option in ARKit to disable, or optionally delay, the low-power mode when the app requires full-rate updates regardless of proximity or hand-pose detection.
Hi, I have a hand model in FBX that I'm exporting to USD in Blender. I get a skinned mesh, and while I can track the whole hand, how do I track each joint, assign it, and animate the skinned mesh itself? Everything I've tried suggests this is not currently possible in RealityKit. Is that true?