Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post

Replies

Boosts

Views

Activity

RCP Scene issues at runtime (visionOS 26 / Xcode 26 Beta 4)
I have a scene that has been assembled in RCP, but I'm losing the correct hierarchy and transforms when running the scene in the headset or the simulator. This is in RCP: This is at runtime with the debugger: As you can see, the "MAIN_WAGON" entity is gone and parts of the hierarchy are now children of "TRAIN_ROOT" instead. Another issue is that not only does part of the hierarchy disappear, the transforms also revert back to their default values instead of what is set in RCP: This is in RCP: This is in the simulator/headset: I'm filing a feedback ticket too and will post the number here. Has anyone had a similar issue and found a fix or workaround?
5
0
362
Aug ’25
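In case it helps anyone comparing against the post above, here is a minimal debugging sketch (the entity name and the RealityKitContent bundle are assumptions from the post, not a fix) that dumps the runtime hierarchy and transforms so they can be diffed against what Reality Composer Pro shows:

```swift
import RealityKit

/// Recursively prints an entity tree with each entity's local transform,
/// so the runtime hierarchy can be compared against the one authored in RCP.
func dumpHierarchy(_ entity: Entity, indent: String = "") {
    let t = entity.transform
    print("\(indent)\(entity.name.isEmpty ? "<unnamed>" : entity.name) " +
          "pos=\(t.translation) scale=\(t.scale) rot=\(t.rotation)")
    for child in entity.children {
        dumpHierarchy(child, indent: indent + "  ")
    }
}

// Usage, for example inside a RealityView make closure:
// if let train = try? await Entity(named: "TRAIN_ROOT", in: realityKitContentBundle) {
//     dumpHierarchy(train)
// }
```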
WWDC 25 RemoteImmersiveSpace - Support for Passthrough Mode? RealityKit?
This is related to the WWDC presentation "What's new in Metal rendering for immersive apps", specifically the macOS spatial streaming to visionOS feature. For reference: the page in the docs. The presentation demonstrates it using a fully immersive space and Metal rendering via Compositor Services. I'd like clarity on a few things: Is the remote device wireless, or must the visionOS device be connected via a wired connection? Is there a limit to the number of remote devices, and if not, could macOS render different things per remote device simultaneously? Can I also use mixed mode with passthrough enabled, instead of just a fully immersive mode? Can I use RealityKit instead of Metal? If so, may I have an example, or would someone point me to an example?
5
0
675
Sep ’25
Photogrammetry Session - new model?
Hi Apple Team, We noticed the following exciting changelog in the latest macOS 26 beta: "A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn't available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)" However, after trying this on the latest beta and running some tests, we do not see any differences on objects with low textures such as single-coloured surfaces. Is there anything we are missing? The machine is definitely connected to the internet, but we have no way of knowing from the logs whether the new model is being used. Thanks.
5
1
669
Jul ’25
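For anyone else trying to verify this, a minimal baseline sketch (the input folder and output URL are placeholders you supply) that runs a plain PhotogrammetrySession, so reconstruction quality on a low-texture object can be compared before and after the beta update:

```swift
import RealityKit

/// Runs a plain PhotogrammetrySession on a folder of images and writes a USDZ,
/// so results can be compared across macOS versions.
func runBaselineReconstruction(inputFolder: URL, outputURL: URL) throws {
    let session = try PhotogrammetrySession(input: inputFolder)

    Task {
        for try await output in session.outputs {
            switch output {
            case .requestProgress(_, let fraction):
                print("Progress: \(fraction)")
            case .requestComplete(_, let result):
                print("Done: \(result)")
            case .requestError(_, let error):
                print("Failed: \(error)")
            case .processingComplete:
                print("All requests finished.")
            default:
                break
            }
        }
    }

    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])
}
```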
Buttons become unresponsive after using .windowStyle(.plain) with auto-hiding menu
I'm developing a visionOS panorama viewer app where I need to implement an auto-hiding floating menu in immersive space. The menu should: Show for 3 seconds when entering immersive mode Auto-hide after 3 seconds, Reappear when user taps anywhere (using SpatialTapGesture). Buttons should respond to gaze + pinch interaction The Problem: When I add .windowStyle(.plain) to achieve transparent window background for the auto-hide effect, all buttons in the menu become completely unresponsive to gaze + pinch interaction. The buttons only respond to direct finger touch (poking). Without .windowStyle(.plain): Buttons work correctly with gaze + pinch, but I cannot achieve transparent window background for hiding. With .windowStyle(.plain): Window can be transparent, but buttons lose gaze + pinch interaction. Code: App.swift: @main struct MyApp: App { @StateObject private var model = AppModel() var body: some Scene { WindowGroup(id: "MainWindow") { ContentView() .environmentObject(model) } .defaultSize(width: 900, height: 700) .windowResizability(.contentSize) .windowStyle(.plain) // <-- This causes the interaction issue ImmersiveSpace(id: "ImmersiveSpace") { ImmersiveView() .environmentObject(model) } } } ContentView.swift (simplified): struct ContentView: View { @EnvironmentObject var model: AppModel @State private var isMenuVisible: Bool = true var body: some View { VStack { if model.isImmersiveViewActive { if isMenuVisible { // This menu's buttons don't respond to gaze+pinch immersiveControlMenu } } else { mainMenuButtons } } .glassBackgroundEffect() } private var immersiveControlMenu: some View { HStack { Button("Exit") { exitImmersiveSpace() } .buttonStyle(.bordered) // Also tried .plain, same issue } .padding() .glassBackgroundEffect() } } ImmersiveView.swift: struct ImmersiveView: View { @EnvironmentObject var model: AppModel var body: some View { RealityView { content in // Panorama sphere let sphere = ModelEntity(mesh: .generateSphere(radius: 1000), materials: [material]) content.add(sphere) // Tap detector for menu toggle let tapDetector = Entity() tapDetector.components.set(CollisionComponent(shapes: [.generateSphere(radius: 900)])) tapDetector.components.set(InputTargetComponent()) content.add(tapDetector) } .gesture( SpatialTapGesture() .targetedToAnyEntity() .onEnded { _ in model.shouldShowMenu = true } ) } } Environment: Xcode 26.2 visionOS 26.3 Vision Pro device Questions: Is .windowStyle(.plain) expected to affect button interaction behavior? What is the recommended approach to achieve a transparent/hidden window in immersive mode while maintaining button interactivity? Is there an alternative to .windowStyle(.plain) for hiding window chrome in visionOS? Thank you for any guidance!
5
0
1.1k
Feb ’26
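One workaround that might be worth trying for the post above (a sketch, not an official recommendation): host the control menu as a RealityView attachment inside the immersive space instead of a separate plain-styled window, so visibility can be toggled without touching window styling while the SwiftUI buttons keep normal gaze + pinch behavior. `AppModel` and `shouldShowMenu` mirror names from the post and are assumptions here.

```swift
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @EnvironmentObject var model: AppModel

    var body: some View {
        RealityView { content, attachments in
            // ... add the panorama sphere and tap detector as in the post ...

            // Place the SwiftUI menu inside the immersive space itself.
            if let menu = attachments.entity(for: "controlMenu") {
                menu.position = [0, 1.4, -1.5]   // roughly eye height, 1.5 m ahead
                content.add(menu)
            }
        } update: { _, attachments in
            // Toggle visibility instead of relying on a transparent window.
            attachments.entity(for: "controlMenu")?.isEnabled = model.shouldShowMenu
        } attachments: {
            Attachment(id: "controlMenu") {
                HStack {
                    Button("Exit") { /* dismiss the immersive space here */ }
                        .buttonStyle(.bordered)
                }
                .padding()
                .glassBackgroundEffect()
            }
        }
    }
}
```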
RC Pro Timeline Notification Not Received in Xcode
I'm having a heck of a time getting this to work. I'm trying to add an event notification at the end of a timeline animation to trigger something in code but I'm not receiving the notification from RC Pro. I've watched that Compose Interactive 3D Content video quite a few times now and have tried many different ways. RC Pro has the correct ID names on the notifications. I'm not a programmer at all. Just a lowly 3D artist. Here is my code... import SwiftUI import RealityKit import RealityKitContent extension Notification.Name { static let button1Pressed = Notification.Name("button1pressed") static let button2Pressed = Notification.Name("button2pressed") static let button3Pressed = Notification.Name("button3pressed") } struct MainButtons: View { @State private var transitionToNextSceneForButton1 = false @State private var transitionToNextSceneForButton2 = false @State private var transitionToNextSceneForButton3 = false @Environment(AppModel.self) var appModel @Environment(\.dismissWindow) var dismissWindow // Notification publishers for each button private let button1PressedReceived = NotificationCenter.default.publisher(for: .button1Pressed) private let button2PressedReceived = NotificationCenter.default.publisher(for: .button2Pressed) private let button3PressedReceived = NotificationCenter.default.publisher(for: .button3Pressed) var body: some View { ZStack { RealityView { content in // Load your RC Pro scene that contains the 3D buttons. if let immersiveContentEntity = try? await Entity(named: "MainButtons", in: realityKitContentBundle) { content.add(immersiveContentEntity) } } // Optionally attach a gesture if you want to debug a generic tap: .gesture( TapGesture().targetedToAnyEntity().onEnded { value in print("3D Object tapped") _ = value.entity.applyTapForBehaviors() // Do not post a test notification here—rely on RC Pro timeline events. } ) } .onAppear { dismissWindow(id: "main") // Remove any test notification posting code. } // Listen for distinct button notifications. .onReceive(button1PressedReceived) { (output) in print("Button 1 pressed notification received") transitionToNextSceneForButton1 = true } .onReceive(button2PressedReceived.receive(on: DispatchQueue.main)) { _ in print("Button 2 pressed notification received") transitionToNextSceneForButton2 = true } .onReceive(button3PressedReceived.receive(on: DispatchQueue.main)) { _ in print("Button 3 pressed notification received") transitionToNextSceneForButton3 = true } // Present next scenes for each button as needed. For example, for button 1: .fullScreenCover(isPresented: $transitionToNextSceneForButton1) { FacilityTour() .environment(appModel) } // You can add additional fullScreenCover modifiers for button 2 and 3 transitions. } }
5
0
535
Sep ’25
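For what it's worth on the post above: as far as I know, Reality Composer Pro timeline Notification actions do not post under the custom names in that code; they arrive as a single "RealityKit.NotificationTrigger" notification whose userInfo carries the identifier typed in RCP. A minimal sketch of listening for it (the identifier strings are whatever was set in RCP):

```swift
import SwiftUI
import Combine
import RealityKit

// RCP timeline "Notification" actions arrive as one NotificationCenter notification;
// the identifier set in RCP is carried in the userInfo dictionary.
let rcpNotification = NotificationCenter.default.publisher(
    for: Notification.Name("RealityKit.NotificationTrigger")
)

struct TimelineListener: ViewModifier {
    let onTrigger: (String) -> Void

    func body(content: Content) -> some View {
        content.onReceive(rcpNotification) { notification in
            guard let identifier = notification.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String else { return }
            onTrigger(identifier)   // e.g. "button1pressed", "button2pressed", ...
        }
    }
}

// Usage on the RealityView:
// .modifier(TimelineListener { id in
//     if id == "button1pressed" { transitionToNextSceneForButton1 = true }
// })
```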
Portal crossing causes inconsistent lighting and visual artifacts between virtual and real spaces (visionOS 2.0)
Hello, I'm working with the new PortalComponent introduced in visionOS 2.0, and I've encountered some issues when transitioning entities between virtual and real-world spaces using crossingMode. Specifically: Lighting inconsistency: When CG content (ModelEntities with PhysicallyBasedMaterial) crosses the portal from virtual space into the real environment, the way light reflects on the objects changes noticeably. This causes a jarring visual effect, as the same material appears differently depending on the space it's in. Unnatural transition visuals: During the transition, the CG models often appear to "emerge from the wall," especially when crossing from virtual to real. This ruins the immersive illusion and feels visually unnatural. IBL adjustment attempts: I’ve tried adding an ImageBasedLightComponent to the world entity, and while it slightly improves the lighting consistency, the issue still remains to a noticeable degree. My goal is to create a seamless visual experience when CG entities cross between spaces, without sudden lighting shifts or immersion-breaking geometry reveals. Has anyone else experienced similar issues? Is there a recommended setup or workaround to better control lighting and visual fidelity when using crossingMode with portals in visionOS 2.0? Any guidance would be greatly appreciated. Thank you!
5
0
319
Jul ’25
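A sketch of an IBL setup that may help with the post above (assuming an EnvironmentResource named "Environment" in the RealityKitContent bundle; names are placeholders): give the crossing entity its own image-based light and a matching receiver, so its shading comes from the same source on both sides of the portal instead of switching between the virtual scene's lighting and the real-world probe.

```swift
import RealityKit
import RealityKitContent

/// Attach a fixed image-based light to `model` so its shading stays
/// consistent while it crosses the portal.
func applyConsistentLighting(to model: Entity) async throws {
    // Assumes an EnvironmentResource named "Environment" exists in the content bundle.
    let environment = try await EnvironmentResource(named: "Environment", in: realityKitContentBundle)

    // The entity carries its own light source and also receives from it,
    // so shading no longer depends on which side of the portal it is on.
    model.components.set(ImageBasedLightComponent(source: .single(environment)))
    model.components.set(ImageBasedLightReceiverComponent(imageBasedLight: model))
}
```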
Developer Strap Gen 2 - Only USB2 Speeds
I am testing out the Gen 2 developer strap on my Vision Pro (M2) and I have only been able to get USB 2 speeds when connecting it to my M3 Max MacBook Pro. I used the official Apple Thunderbolt 4 cable, which does get Thunderbolt speeds on my T7 Touch drive. Has anyone figured out a solution for this issue? The Gen 2 developer strap does advertise 20 Gb/s speeds.
5
3
1.3k
Nov ’25
Request File Access from Unity for Apple Vision Pro
Hi, I am trying to load files from the Apple Vision Pro's storage into a Unity App (using Apple visionOS XR Plugin and not PolySpatial package). So far, I've tried using UnitySimpleFileBrowser and UnityStandaloneFileBrowser (both aren't made for the Vision Pro and don't work there), and then implemented my own naive file browser that at least allows me to view directories (that I can see from the App Sandbox). This is of course very limited: Gray folders can't be accessed, the only 3 available ones don't contain anything where a user would put files through the "Files" app. I know that an app can request access to these "Files & Folders": So my question is: Is there a way to request this access for a Unity-built app at the moment? If yes, what do I need to do? I've looked into the generated Xcode project's "Capabilities", but did not find anything related to file access. Any help is appreciated!
5
0
427
Oct ’25
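Not a full answer to the post above, but one piece that can be configured without PolySpatial: exposing the app's own Documents folder in the Files app so users have somewhere to drop content. This is plain Info.plist configuration in the Unity-generated Xcode project (the keys below are standard iOS/visionOS keys, not Unity-specific); access to arbitrary user folders would still need a document picker presented from a native plugin.

```xml
<!-- Info.plist additions in the generated Xcode project -->
<!-- Together these make the app's Documents folder visible in the Files app. -->
<key>UIFileSharingEnabled</key>
<true/>
<key>LSSupportsOpeningDocumentsInPlace</key>
<true/>
```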
Build Vision Pro failed
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults") error: Tool exited with code 1
5
0
693
Jul ’25
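The error above suggests the content package still declares visionOS 1.0 as its minimum while the scene uses a component introduced later. A sketch of the relevant Package.swift change (the package name and product layout are from the default RealityKitContent template; raise the platform to whatever OS version the component requires):

```swift
// swift-tools-version: 6.0
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        // The error says the component is not available for 'xros 1.0',
        // so raise the minimum deployment target of the content package.
        .visionOS(.v2)
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)
```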
Digital Crown press when both immersive space and additional windows are presented
I have been experimenting with the Hello World sample app from https://developer.apple.com/documentation/visionos/world and I came across behavior that appears inconsistent with the user-facing documentation describing the device controls at https://support.apple.com/en-gb/guide/apple-vision-pro/tan1e2a29e00/visionos I tried pressing the simulator's "Home" button while the "Objects in Orbit" immersive space was presented alongside the main application window. According to the user documentation, pressing the Digital Crown should take the user directly to the Home View. In my test a single press only dismissed the immersive space; I needed another press to "exit" the app and go to the Home View. Is this behavior expected? I am assuming that the "Home" button in the simulator behaves as if the user pressed the Digital Crown on the device; I don't have access to the actual hardware.
5
0
439
Apr ’25
Blender to Reality Composer Pro 2.0 to SwiftUI + RealityKit visionOS Best Practices
Hi, I'm very new to 3D and am currently porting a SwiftUI iOS app to visionOS 2.0. I saw WWDC24 feature Blender in multiple spatial videos, and have begun integrating Blender models and animations into my VisionOS app (I would also like to integrate skeletons and programmatic rigging, more on that later). I'm wondering if there are “Best Practices” for this workflow - from Blender to USD to RCP 2.0 to visionOS 2 in Xcode. I’ve cobbled together the following that has some obvious holes: I’ve been able to find some pre-rigged and pre-animated models online that can serve as a great starting point. As a reference, here is a free model from SketchFab - a simple rigged skeleton with 6 built in animations: https://sketchfab.com/3d-models/skeleton-character-low-poly-8856e0138f424d68a8e0b40e185951f6 When exporting to USD from Blender, I haven’t been able to export more than one animation per USD file. Is there a workflow to export multiple animations in a single USDC file, or is this just not possible? As a temporary workaround, here is a python script I’ve been using to loop through all Blender animations, and export a model for each animation: import bpy import os # Set the directory where you want to save the USD files output_directory = “/path/to/export” # Ensure the directory exists if not os.path.exists(output_directory): os.makedirs(output_directory) # Function to export current scene as USD def export_scene_as_usd(output_path, start_frame, end_frame): bpy.context.scene.frame_start = start_frame bpy.context.scene.frame_end = end_frame # Export the scene as a USD file bpy.ops.wm.usd_export( filepath=output_path, export_animation=True ) # Save the current scene name original_scene = bpy.context.scene.name # Iterate through each action and export it as a USD file for action in bpy.data.actions: # Create a new scene for each action bpy.context.window.scene = bpy.data.scenes[original_scene].copy() new_scene = bpy.context.scene # Link the action to all relevant objects for obj in new_scene.objects: if obj.animation_data is not None: obj.animation_data.action = action # Determine the frame range for the action start_frame, end_frame = action.frame_range # Export the scene as a USD file output_path = os.path.join(output_directory, f"{action.name}.usdc") export_scene_as_usd(output_path, int(start_frame), int(end_frame)) # Delete the temporary scene to free memory bpy.data.scenes.remove(new_scene) print("Export completed.") I have also been able to successfully export rigging armatures as a single Skeleton - each “bone” showing getting imported into Reality Composer Pro 2.0 when exporting/importing manually. I would like to have all of these animations available in a single scene to be used in a RealityView in visionOS - so I have placed all animation models in a RCP scene and created named Timeline Action animations for each, showing the correct model and hiding the rest when triggering specific animations. I apply materials/textures to each so they appear the same, using Shader Graph. Then in SwiftUI I use notifications (as shown here - https://forums.developer.apple.com/forums/thread/756978) to trigger each RCP Timeline Action animation from code. Two questions: Is there a better way than to have multiple models of the same skeleton - each with a different animation - in a scene to be able to trigger multiple animations? Or would this require recreating Blender animations using skeleton rigging and keyframes from within RCP Timelines? 
If I want to programmatically create custom animations and move parts of the skeleton/armatures - do I need to do this by defining custom components in RCP, using IKRig and define movement of each of the “bones” in Xcode? I’m looking for any tips/tricks/workflow from experienced engineers or 3D artists that can create a more efficient/optimized workflow using Blender, USD, RCP 2 and visionOS 2 with SwiftUI. Thanks so much, I appreciate any help! I am very excited about all the new tools that keep evolving to make spatial apps really fun to build!
4
2
1.3k
Apr ’25
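On the multiple-animations question in the post above, one pattern worth trying (a sketch under the assumption that the per-animation USDZ files exported by that script share the same rig as the base model and are added to the RealityKitContent bundle; the clip names are placeholders): load each exported file once, keep only its AnimationResource, and play the selected clip on a single visible model instead of keeping one hidden model per animation.

```swift
import RealityKit
import RealityKitContent

/// Loads the first animation from each per-clip USDZ exported from Blender
/// and stores it by name, so one base skeleton can play any of the clips.
@MainActor
func loadAnimationLibrary(clipNames: [String]) async throws -> [String: AnimationResource] {
    var library: [String: AnimationResource] = [:]
    for name in clipNames {
        let clipEntity = try await Entity(named: name, in: realityKitContentBundle)
        if let animation = clipEntity.availableAnimations.first {
            library[name] = animation
        }
    }
    return library
}

// Usage: play a clip on the single visible model (rigs must match for this to work).
// let library = try await loadAnimationLibrary(clipNames: ["Idle", "Walk", "Run"])
// if let walk = library["Walk"] {
//     baseModel.playAnimation(walk.repeat(), transitionDuration: 0.3, startsPaused: false)
// }
```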
Access Main Camera not working in VisionOS 26.1
I downloaded the official sample project “Accessing the Main Camera”, but I found that it’s not able to retrieve the camera feed on visionOS 26.1. After checking the debug logs, it seems the issue is caused by the system being unable to find the expected format. I tested on a device running visionOS 2, and the camera feed worked correctly — but only when using the sample code from the visionOS 2 version, not the current one. I also noticed that some of the APIs have changed between versions. Has anyone managed to successfully access the camera feed on visionOS 26.1?
4
0
875
Nov ’25
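When format-related failures come up with main camera access, enumerating the formats the running OS actually reports, rather than assuming one, can help narrow it down. A sketch along the lines of the sample project's CameraFrameProvider usage (requires the enterprise main-camera entitlement; API details here are my recollection of the visionOS 2-era sample, so treat them as assumptions):

```swift
import ARKit   // visionOS ARKit

/// Lists the supported main-camera formats and streams frames from the first one.
@MainActor
func startMainCamera() async throws {
    let provider = CameraFrameProvider()
    let session = ARKitSession()
    try await session.run([provider])

    // Ask the current OS what it actually supports before picking a format.
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    print("Supported formats: \(formats)")

    guard let format = formats.first,
          let updates = provider.cameraFrameUpdates(for: format) else {
        print("No usable camera format on this OS version.")
        return
    }

    for await frame in updates {
        if let sample = frame.sample(for: .left) {
            print("Got frame: \(sample.pixelBuffer)")
        }
    }
}
```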
Metal (Compositor Services) or RealityKit on visionOS
I am developing a visionOS app. I am now very interested in Metal and Compositor Services, but I have not explored them in depth. I know that Metal offers a higher degree of control. I am wondering whether using Compositor Services gives me fewer AR capabilities than RealityKit (such as scene reconstruction and understanding, hover effects, etc.).
4
0
286
Jun ’25
Alternatives to SceneView
Hey there, since SceneView has been marked as "deprecated" for SwiftUI, I'm wondering which alternatives should be considered for the following situation: I have a SwiftUI app (for iOS and iPadOS) where users can view (with rotate, scale, move gestures) 3D models (USDZ) in a scene. The models will be downloaded from a web backend and loaded via local URL paths. What I tested: I've tried ARView in .nonAR mode and RealityView, however I didn't get the expected result (the user being able to rotate and scale the 3D models in a virtual space). ARView in .nonAR mode still shows the object like in normal AR mode, just without the camera stream. I tried to add gestures to the RealityView on iOS: loading USDZ 3D models worked, but the gestures didn't. Model3D is only available for visionOS (it would be amazing to have it for iOS). I also checked QuickLook Preview, however it works pretty strangely via a file picker etc., which is not the way the user should load the 3D models in my app. Maybe I missed something; I couldn't find anything that helps me. I'm pretty much stuck adopting the latest and greatest frameworks/APIs in my app and taking the next steps porting my app to visionOS. Long story short 😃: Does someone have an idea what the alternative to SceneView is for USDZ 3D models? I appreciate your support!! Thanks in advance!
4
0
269
Jul ’25
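In case it helps with the post above: on iOS 18, RealityView is available outside visionOS, and entity-targeted gestures only fire when the model has collision shapes and an InputTargetComponent, which may be why the gestures in that test did nothing. A small sketch (the model URL is a placeholder):

```swift
import SwiftUI
import RealityKit

struct ModelViewer: View {
    let modelURL: URL   // local file URL of the downloaded USDZ

    var body: some View {
        RealityView { content in
            if let model = try? await ModelEntity(contentsOf: modelURL) {
                // Gestures only target entities with collision + input components.
                model.generateCollisionShapes(recursive: true)
                model.components.set(InputTargetComponent())
                content.add(model)
            }
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Simple turntable rotation driven by horizontal drag.
                    let angle = Float(value.translation.width) * 0.005
                    value.entity.orientation = simd_quatf(angle: angle, axis: [0, 1, 0])
                }
        )
    }
}
```

It may also be worth looking at the camera-control modifier that shipped alongside RealityView for iOS 18, which is meant to provide built-in orbit/zoom viewing without hand-rolled gestures.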
Presenting images in RealityKit sample No Longer Builds
After updating to the latest visionOS beta, visionOS 26 Beta 4 (23M5300g), the 'Presenting images in RealityKit' sample from the following link no longer builds due to an error. https://developer.apple.com/documentation/RealityKit/presenting-images-in-realitykit Expected / Previous: Application builds and runs on device, working as described in the documentation. Reality: Application builds, but does not run on device due to an error (shown in screenshot): "Thread 1: EXC_BAD_ACCESS (code=1, address=0xb)". The application still runs on the simulator, but not on device. When launching the app from Xcode, it builds and installs correctly but hangs with the above error. When loading the app from the Home Screen, the app does not load and immediately returns to the Home Screen. This Xcode project previously ran with no changes to code; the only change was updating the visionOS system software to the latest version. visionOS 26 Beta 4 (23M5300g) Is anyone else experiencing this issue?
4
0
241
Aug ’25
What is the reason the hand-tracking joints have these axes? visionOS
What is the reason the hand-tracking joints have these axes? I'm trying to create a virtual hand model, and the joint axis orientations are making that a mess.
5
0
1.5k
Dec ’25
Having trouble with USD material not showing correct color
I exported some USD assets from IsaacSim, but they are not showing up correctly on my Apple Vision Pro. Even though the mesh looks to be the correct color in Finder and the Diffuse Color value looks correct, the object is still just gray. It should be green!
5
0
665
Dec ’25
Curved/panorama window in visionOS 2?
The new Mac virtual display feature on visionOS 2 offers a curved/panoramic window. I was wondering if this is simply a property that can be applied to a window, or if it involves an immersive mode or SceneKit/RealityKit?
5
0
1.5k
Nov ’25
Please provide bounds for ViewAttachmentComponent
FB17999226
4
0
206
Jun ’25
RealityView content.add Called Twice Automatically
I’m working with RealityView in visionOS and noticed that the content closure seems to run twice, causing content.add to be called twice automatically. This results in duplicate entities being added to the scene unless I manually check for duplicates. How can I fix that? Thanks.
4
0
244
Aug ’25
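A common mitigation for the behavior in the post above (a sketch; "SceneRoot" and the scene name are placeholders): make the make closure idempotent by checking whether the root entity has already been added before adding it again.

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    var body: some View {
        RealityView { content in
            // The make closure can run more than once (for example when the
            // enclosing view's identity changes), so bail out if the scene
            // has already been added.
            guard !content.entities.contains(where: { $0.name == "SceneRoot" }) else { return }

            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                scene.name = "SceneRoot"
                content.add(scene)
            }
        }
    }
}
```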