Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

visionOS pushWindow being dismissed on app foreground
We seem to have found an issue when using the pushWindow action on visionOS. The issue occurs if the app is backgrounded and then reopened by selecting the app's icon on the home screen. Any window that was opened via the pushWindow action is then dismissed. We've been able to replicate the issue in a small sample project. Replication steps: Open the app. Open a window via the push action. Press the Digital Crown. On the home screen, select the app's icon again. The pushed window will now be dismissed. There is a sample project linked here that demonstrates the issue, including a video of the bug in progress.
3
1
898
Jan ’26
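A minimal repro sketch of the setup described above (window IDs are hypothetical), using the visionOS pushWindow environment action:

import SwiftUI

@main
struct PushWindowRepro: App {
    var body: some Scene {
        WindowGroup(id: "main") {
            MainView()
        }
        WindowGroup(id: "pushed") {
            Text("Backgrounding the app and relaunching from the home screen dismisses this window.")
        }
    }
}

struct MainView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push window") {
            // Presents "pushed" in place of "main" until it is dismissed.
            pushWindow(id: "pushed")
        }
    }
}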
Game Controller Input Limitations in visionOS Volumetric Windows - Need Clarification
Game Controller Input Limitations in visionOS Volumetric Windows Hello Apple Developer Community, I'm developing a game for visionOS and have encountered significant limitations with game controller input when using volumetric windows (WindowGroup with .volumetric style). I'd appreciate clarification on whether this is expected behavior and any guidance on best practices. 🧩 Issue Summary When using a DualSense controller with a volumetric window in visionOS, only a subset of controller inputs are available to the app. The remaining inputs appear to be reserved by the system for UI navigation. ✅ Working Inputs (Volumetric Window) D-Pad (all directions) L3 (left thumbstick button click) R3 (right thumbstick button click) Menu button Options button ❌ Not Working Inputs (Volumetric Window) Left thumbstick analog movement (used for UI scrolling instead) Right thumbstick analog movement (used for UI scrolling instead) Face buttons (Cross, Circle, Square, Triangle / A, B, X, Y) Shoulder buttons (L1, R1) Triggers (L2, R2) Key observation: When moving the left thumbstick in a volumetric window, the window's UI scrolls vertically instead of sending input to my app's GameController handlers. Similarly, face buttons seem to be reserved for system UI interactions. ⚙️ Implementation Details I'm using the standard GameController framework: Connect to controller via GCController.controllers() Access extendedGamepad profile Set up valueChangedHandler and pressedChangedHandler for all inputs Handlers confirmed registered via logging Working inputs (D-Pad, L3, R3) trigger immediately and consistently Non-working inputs (thumbsticks, face buttons) never trigger 🧠 Critical Finding: ImmersiveSpace Works Perfectly When testing the exact same code in an ImmersiveSpace (.mixed immersion style), all controller inputs work perfectly: ✅ Both thumbsticks provide full analog input ✅ All face buttons trigger their handlers ✅ All shoulder buttons and triggers work correctly ✅ 100% success rate with no intermittent issues This suggests the issue isn't with my code, but rather how visionOS handles controller input differently between Volumetric Windows and ImmersiveSpace. 🧪 Test Environment I created a minimal test project (Controller-Playground) to isolate the issue: A simple ControllerTester class that registers all GameController handlers A visual UI showing real-time input state No game logic, RealityKit physics, or other complexity Results In volumetric window: Only D-Pad, L3, R3, Menu, Options work In ImmersiveSpace: All inputs work perfectly This confirms the limitation exists at the visionOS platform level, not in app code. 🧰 Attempted Workarounds I tried the following without success: Setting GCSupportsControllerUserInteraction = false in Info.plist Setting UIRequiresFullScreen = true Changing window styles (.plain, .volumetric) Polling vs. handler-based input approaches Various threading models (MainActor, separate thread) Result: The only way to enable full controller support is to switch to ImmersiveSpace. ❓ Questions for Apple Is this input reservation behavior in volumetric windows intended and documented? Are game controllers expected to have limited functionality in volumetric windows while full functionality is reserved for ImmersiveSpace? Is there a way to request full controller input access in a volumetric window, or is ImmersiveSpace the only option for complete controller support? Where can I find official documentation about controller input differences between window types? 
Are there any APIs or configuration options to disable system controller shortcuts in volumetric windows? 🎯 Impact This limitation has a significant effect on game design and architecture: Volumetric windows offer a multitasking-friendly, less immersive experience ImmersiveSpace provides full controller support but may be more immersive than some games require Games that only need basic D-Pad and button input can work fine in volumetric windows Games requiring analog sticks or face buttons must currently use ImmersiveSpace It would be very helpful if Apple could clarify or reference existing documentation regarding controller input handling in different visionOS window types. If such documentation doesn't exist yet, it might be valuable to include this information in future developer guides or best-practice documents. 🕹 Current Workaround For now, I'm using: D-Pad for character movement (digital 8-direction) R3 (right stick click) as a substitute for the "X" button This setup allows the game to function within a volumetric window, though full controller support still requires ImmersiveSpace. 📄 Request If this is expected behavior, I may have simply missed the relevant documentation — could you please point me to any existing resources that explain this design? If there isn't one yet, it would be great if future visionOS documentation could: Clearly outline controller input behavior across window types Provide guidance on when to use Volumetric Windows vs. ImmersiveSpace for games Consider adding an API option to request full controller access when appropriate If this is not expected behavior, I'm happy to file a detailed bug report with sample code. 💻 System Information visionOS: Latest Simulator Xcode: Latest version Controller: Sony DualSense Framework: GameController (standard extendedGamepad profile) Test project: Minimal reproducible example available Thank you for any clarification or guidance you can provide. This information would be valuable for many developers working on visionOS games.
1
0
691
Oct ’25
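For reference, a minimal sketch of the handler registration the post describes (the class shape is an assumption); per the report, identical code receives all inputs in an ImmersiveSpace but only a subset in a volumetric window:

import GameController

// Registers handlers for every connected extended gamepad. Per the post,
// only D-Pad/L3/R3/Menu/Options fire inside a volumetric window, while the
// same handlers all fire inside an ImmersiveSpace.
final class ControllerTester {
    func start() {
        GCController.controllers().forEach(register)
        NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { [weak self] note in
            guard let controller = note.object as? GCController else { return }
            self?.register(controller)
        }
    }

    private func register(_ controller: GCController) {
        guard let gamepad = controller.extendedGamepad else { return }
        gamepad.valueChangedHandler = { _, element in
            print("Changed:", element)
        }
        gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
            print("A/Cross pressed:", pressed)
        }
    }
}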
Why VideoMaterial can't show transparency on Apple Vision Pro
https://developer.apple.com/documentation/realitykit/videomaterial The documentation states: "Video materials support transparency if the source video's file format also supports transparency." I have a transparent video (Hand.mov, HEVC with alpha). I can show the video with a transparent background correctly in the Vision Pro Simulator, but on a physical device the video has a black background. I'm sure the video format is fine, because I can get the texture from the video and display it on an UnlitMaterial. How can I show the transparent video correctly with RealityKit/VideoMaterial?
2
0
294
Dec ’25
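For reference, a minimal sketch of the setup being described, assuming a URL to the poster's HEVC-with-alpha asset:

import RealityKit
import AVFoundation

// Plays an HEVC-with-alpha movie on a plane. Per the documentation,
// VideoMaterial honors alpha if the source format carries it; the poster
// sees this work in the simulator but not on device.
func makeVideoPlane(url: URL) -> ModelEntity {
    let player = AVPlayer(url: url)               // e.g. Hand.mov
    let material = VideoMaterial(avPlayer: player)
    let plane = ModelEntity(mesh: .generatePlane(width: 1, height: 1),
                            materials: [material])
    player.play()
    return plane
}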
How to Achieve Volumetric Lighting (Light Shafts) in RealityKit on visionOS?
Hello everyone, I am currently developing an experience for visionOS using RealityKit and I would like to achieve volumetric light effects, such as visible light rays or shafts through fog or dust. I found this GitHub project: https://github.com/robcupisz/LightShafts, which demonstrates the kind of visual style I am aiming for. I would like to know if there is a way to create similar effects using RealityKit on visionOS. So far, I have experimented with DirectionalLight, SpotLight, ImageBasedLight, and custom materials (e.g., additive blending on translucent meshes), but none of these approaches can replicate the volumetric light shaft look shown in the repository above. Questions: Is there a recommended technique or workaround in RealityKit to simulate light shafts or volumetric lighting? Is creating a custom mesh (e.g., cone or volume geometry with gradient alpha and additive blending) the only feasible method? Are there any examples, best practices, or sample projects from Apple or other developers that showcase a similar visual style? Any advice or hints would be greatly appreciated. Thank you in advance!
9
1
1k
Aug ’25
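Short of true volumetrics, the cone-geometry approximation the poster mentions is the usual fallback; a rough sketch (the blend settings are an assumption, and a gradient texture would be needed for convincing falloff):

import RealityKit
import UIKit

// Fake light shaft: a translucent unlit cone, faded via opacity.
// A gradient alpha texture on the cone would soften the edges further.
func makeLightShaft() -> ModelEntity {
    var material = UnlitMaterial(color: UIColor.white.withAlphaComponent(0.15))
    material.blending = .transparent(opacity: .init(floatLiteral: 0.15))
    let cone = MeshResource.generateCone(height: 2.0, radius: 0.4)
    return ModelEntity(mesh: cone, materials: [material])
}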
visualBounds ignores TextComponents set for Entity. Workarounds?
After adding TextComponents to my Entities on visionOS, I have observed that visualBounds ignores the TextComponents. The documentation states that it should render a rounded rectangle mesh. These meshes are visible on the device, but not visible in the debugger ("Capture Entity Hierarchy") and ignored by visualBounds. Am I missing something?

static func makeDirection(_ direction: Direction) -> Entity {
    let text = Entity()
    text.name = direction.rawValue
    text.setScale(SIMD3(repeating: 5), relativeTo: nil)
    text.transform.rotation = direction.rotation
    text.components.set(direction.textComponent)
    return text
}

My workaround is to add a disabled ModelEntity and take its bounds 😬
1
0
354
Dec ’25
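A sketch of the disabled-proxy workaround the poster describes, assuming the proxy is built from the same string and rough styling the TextComponent renders:

import RealityKit
import UIKit

// Approximates a TextComponent's bounds with a disabled ModelEntity built
// from the same string, since visualBounds skips TextComponent meshes.
func approximateTextBounds(_ string: String, parent: Entity) -> BoundingBox {
    let mesh = MeshResource.generateText(string,
                                         extrusionDepth: 0.001,
                                         font: .systemFont(ofSize: 0.1))
    let proxy = ModelEntity(mesh: mesh, materials: [UnlitMaterial()])
    proxy.isEnabled = false                 // never rendered, still measurable
    parent.addChild(proxy)
    let bounds = proxy.visualBounds(relativeTo: parent)
    proxy.removeFromParent()
    return bounds
}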
VisionOS 2 - Screen Capture with passthrough
We're trying to switch from main camera access in ARKit to screen capture with passthrough; however, we're facing some issues and it seems a bit complicated to debug. We have set up a broadcast extension and added some logging to the SampleHandler, but we get nothing in the console, nor does the recording start. We set up the picker as well, and we can see our extension in Control Center as one of the choices, but tapping start results in it stopping less than one second later. The only (rather contradictory) messages we see in Console.app are the following: [INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license and, just after, [INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
2
1
624
Dec ’25
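For comparison, a minimal picker setup (the extension bundle ID is a placeholder); if this matches what is already in place, the contradictory license log lines point at entitlement/provisioning state rather than app code:

import ReplayKit
import UIKit

// Presents the system broadcast picker pre-targeted at our upload extension.
func makeBroadcastPicker() -> RPSystemBroadcastPickerView {
    let picker = RPSystemBroadcastPickerView(frame: CGRect(x: 0, y: 0,
                                                           width: 60, height: 60))
    picker.preferredExtension = "com.example.app.broadcast" // placeholder bundle ID
    picker.showsMicrophoneButton = false
    return picker
}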
Cannot find devices in RemoteImmersiveSpace
Hi, I'm running the Spatial Rendering App sample on a MacBook Pro running macOS 26.4 Beta and a Vision Pro running visionOS 26.3.1. Handoff and SharePlay are on, both devices are on the same Apple ID and network, and SharePlay screen sharing works fine between the two devices. However, when calling openImmersiveSpace, the device picker fails to present and no devices are found. Errors from the console: ((processConfiguration != nil && configuration != nil) || (processConfiguration == nil && configuration == nil)) - .../ExtensionKit/Source/HostViewController/Internal/EXHostSessionDriver.m:80: `processConfiguration` and `configuration` must be both non-nil or both nil Unable to obtain a task name port right for pid 638: (os/kern) failure (0x5) Unable to present an ImmersiveSpace for Scene id 'Compositor Services' Is this a known bug, or am I missing something? Thanks!
2
1
1.5k
Mar ’26
RealityView Camera Target Error when set while Orbiting
When interacting with RealityView’s realityViewCameraControls .orbit and setting a new RealityViewCameraContent .cameraTarget, the resulting camera target and camera orbit are incorrect. This can be demonstrated where one finger is orbiting the RealityView and another pushes a button that changes the camera target. Instead of the camera facing the new target, some other point in the scene becomes the effective camera target and orbit point. This only occurs while an orbit interaction is taking place. If you stop interacting with the orbit, change the target, then start orbiting again, everything works as expected. Though this example uses two touches, any change of the camera target has this conflict with an in-progress orbit interaction. This means interacting with orbit can result in the wrong camera view, which is unexpected for users and difficult for developers to detect or reconcile. Expected: While orbiting the scene and setting a new camera target with the on-screen buttons (at the same time), the new target shows centred in view, and the orbit revolves around the new target and continues to match my gestures. Reality: While orbiting the scene and setting a new camera target with the on-screen buttons (at the same time), the new target is not centred in view, and the camera is now orbiting an unexpected point in the scene that is not my intended target. One imperfect workaround is to force a rebuild of the view after setting a new cameraTarget. This sets all targets correctly but causes a flicker and a loss of orbit control until re-touch; it is ultimately a poor user experience, but better than unexpectedly showing the wrong target. Code Sample:

import SwiftUI
import RealityKit

struct RKOribtTarget: View {
    @State private var target: Int = 0
    @State private var rcContent: RealityViewCameraContent?
    @State private var rkID: UUID = UUID()

    let root = Entity()
    let center = ModelEntity(mesh: .generateSphere(radius: 0.05),
                             materials: [UnlitMaterial(color: UIColor(.gray.opacity(0.5)))])
    let red = ModelEntity(mesh: .generateBox(size: 0.1),
                          materials: [SimpleMaterial(color: .red, isMetallic: false)])
    let blue = ModelEntity(mesh: .generateBox(size: 0.1),
                           materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    let green = ModelEntity(mesh: .generateBox(size: 0.1),
                            materials: [SimpleMaterial(color: .green, isMetallic: false)])

    var body: some View {
        VStack {
            RealityView { content in
                red.position.x = 0.5
                blue.position.z = 0.5
                green.position.y = 0.5
                center.position = .init(repeating: 0.25)
                content.cameraTarget = target == 0 ? root : blue
                root.addChild(red)
                root.addChild(blue)
                root.addChild(green)
                root.addChild(center)
                content.add(root)
            } update: { content in
                switch target {
                case 0: content.cameraTarget = root
                case 1: content.cameraTarget = blue
                case 2: content.cameraTarget = red
                case 3: content.cameraTarget = green
                default: content.cameraTarget = root
                }
            }
            .id(rkID)
            .realityViewCameraControls(.orbit)

            VStack {
                Text("Target")
                Button("Default") {
                    target = 0
                    // Forcing a rebuild of the view resets the orbit target and
                    // rotation, but shows a flicker and requires a re-touch;
                    // not an ideal workaround.
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                Button("Blue") {
                    target = 1
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.blue)
                Button("Red") {
                    target = 2
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.red)
                Button("Green") {
                    target = 3
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.green)
            }
        }
    }
}

Xcode Version: Version 26.0 (17A324) iOS Version: iOS 26.5 (23F75) Tested on devices: iPhone 12 Pro, iPhone 15 Pro
2
0
466
19h
How to fix "Sample 0 missing LiDAR point cloud!" error?
I'm trying to run a PhotogrammetrySession based on photos taken in an AVCaptureSession and stored as .heic files. When I load the files, I always see the error "Sample 0 missing LiDAR point cloud!" for each individual sample. Debugging shows that sample.depthDataMap is populated, and the .heic contains depth data which can be extracted using, e.g., heif-convert on my Mac. Comparing the .heic I created to one from ObjectCaptureSession, which doesn't show the LiDAR warning, I noticed the only difference being the HEIC information here: So my questions are: Is this the information missing from my manual capture that causes the warning? Can I somehow add this information in an AVCaptureSession? Does this information enable better photogrammetry results?
2
0
437
Dec ’25
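For context, a minimal session over a folder of .heic samples (paths and detail level are arbitrary); the LiDAR warning in question is emitted per sample while a session like this ingests the folder:

import RealityKit

// Runs photogrammetry over captured .heic images and writes a USDZ model.
func reconstruct(from inputFolder: URL, to outputURL: URL) async throws {
    let session = try PhotogrammetrySession(input: inputFolder)
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])
    for try await output in session.outputs {
        switch output {
        case .requestComplete(_, let result):
            print("Finished:", result)
        case .requestError(_, let error):
            print("Failed:", error)
        default:
            break                      // progress, per-sample diagnostics, etc.
        }
    }
}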
Real Time Spatial Video Streaming with Vision Pro
Hello, I am trying to build an AVP app for real-time, "zero-latency" spatial video streaming. I am trying to figure out, at a high level, the best way to do this. Currently this is my method: the server sends stereo images via a WebRTC service (i.e., LiveKit); the WebRTC stream is converted to CVPixelBuffers, written to file, played via AVPlayer, and a VideoMaterial is applied to a plane entity. However, this is a bit hacky, and it seems like it won't be compatible with Apple's spatial experiences. To my understanding, Apple supports HLS streaming for spatial experiences and APMP content. However, HLS (and even Low-Latency HLS) introduces a second or more of latency, likely due to the segmented nature of HLS. Thus, HLS will not work for us. An alternative I've thought of is streaming the live video via WebRTC from the server to a local computer on the AVP's network, and then using LL-HLS to stream from the local computer to the Vision Pro. Still, it seems like this would introduce latency on the order of seconds. Is my current approach the best way to implement this? Or could anyone suggest a better way, perhaps something compatible with AVP's spatial experiences?
0
1
144
Dec ’25
LowLevelInstanceData & animation
Apple's OS 26 releases introduce LowLevelInstanceData, which can significantly reduce CPU draw-call overhead through instancing. However, I have run into trouble animating each individual instance. As I wanted low-level control, I'm using a custom System and LowLevelInstanceData.replace(using:) to update the transforms each frame. The update closure itself is extremely efficient (Xcode Instruments reports nearly no cost), but I see extremely high run-loop time, reaching around 20 ms. Time Profiler shows that the CPU is blocked in kernel.release.t6401. I suspect CPU/GPU synchronization; however, as I am already using an MTLCommandBuffer to coordinate it, I don't understand why I am still seeing such large CPU time.
3
0
736
2w
fileImporter issue in visionOS with iPhone app (that can run on visionOS)
Happy New Year to all! I have created an iOS app that also runs on Apple Vision Pro. On iOS, when you present the fileImporter modal, you can swipe it down to dismiss. However, in visionOS, this same modal CANNOT be swiped down to cancel/dismiss. If you have drilled deep into a file hierarchy, you have to navigate back to the top level to tap X to dismiss. Is there a way to add swipe-down to the visionOS presentation of fileImporter, or any other workaround so the user doesn't have to navigate back to the top to dismiss? Again, this is not a visionOS app but an iOS app compatible with Vision Pro. Thanks!
2
0
967
Jan ’26
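A minimal repro of the modal in question (content types arbitrary); on iOS this sheet supports swipe-to-dismiss, while the visionOS rendering of the same modifier reportedly does not:

import SwiftUI
import UniformTypeIdentifiers

struct ImporterDemo: View {
    @State private var showImporter = false

    var body: some View {
        Button("Import a file") { showImporter = true }
            .fileImporter(isPresented: $showImporter,
                          allowedContentTypes: [.image, .pdf]) { result in
                print(result)   // .success(URL) or .failure(Error)
            }
    }
}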
With manipulation component, once you let go, how to prevent the entity from disappearing while animating it back into the volume
So with the new ManipulationComponent, we can choose "stay", and then if you drag the entity out of your volume, it will instantly disappear once you let go. We can animate it back to inside the volume, e.g.:

content.subscribe(to: ManipulationEvents.WillRelease.self) { event in
    Entity.animate(.easeInOut(duration: 1),
                   body: { event.entity.position = [0, 0.2, 0] },
                   completion: {})
}

However, for the duration that it travels outside of the volume, it's invisible the whole time. In this Apple video it seems to be visible both while dragging and after letting go, but perhaps that's not a volume they're dragging it out of? https://youtu.be/VtenPKrvPOU?si=y1zoZOs2IMyDzOm6&t=1748 Does anyone know how to keep the entity visible after letting it go, while you animate it back inside your volume?
1
1
1k
Jan ’26
Reality Kit Scene
Hi, I’m wondering whether RealityKit has its own scene management system, since it uses ARView (backed by ARKit) to present AR content. Does RealityKit manage scenes independently, or does it rely entirely on ARKit’s scene handling? Thank you.
1
0
216
Feb ’26
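On the question above: RealityKit does manage its own scene, an Entity graph reachable via ARView.scene, while ARKit's ARSession supplies tracking, frames, and anchor data underneath. A small iOS-flavored sketch of the two layers:

import RealityKit
import ARKit

// RealityKit keeps its own entity graph under ARView.scene; the underlying
// ARKit session can still be configured and run directly when needed.
func setUpAR() -> ARView {
    let arView = ARView(frame: .zero)

    // RealityKit scene management: add entities via anchors.
    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(ModelEntity(mesh: .generateBox(size: 0.1)))
    arView.scene.addAnchor(anchor)

    // ARKit underneath: the session feeds tracking and anchors to the scene.
    arView.session.run(ARWorldTrackingConfiguration())
    return arView
}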
Spatial Web and Safari
Is there any interest in this forum from those developing for the spatial web and Safari? I can't seem to find any relevant posts here.
0
1
245
Dec ’25
Is it possible to play Fisheye VR180 video directly?
Hi, I know it's possible to play equirectangular VR180 video, either SBS or MV-HEVC. And for fisheye video, the only way I know is to convert it into an AIVU for playback. Is there any way to directly play fisheye video using AVPlayer? Thanks a lot!
2
0
526
Oct ’25
Assigning ManipulationComponent to Entity triggers SceneEvents.WillRemoveEntity
When assigning a ManipulationComponent to an Entity, SceneEvents.WillRemoveEntity is called for that Entity. Expected behavior: the Entity is not (even temporarily) removed from the Scene, and no SceneEvents are triggered as a result of assigning a ManipulationComponent. FB20872220
0
1
256
Oct ’25
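A repro sketch for the report above (the subscription wiring is an assumption):

import RealityKit

// Subscribes to WillRemoveEntity, then assigns a ManipulationComponent;
// per the report, the event unexpectedly fires for that same entity.
func reproduce(entity: Entity, scene: RealityKit.Scene) -> EventSubscription {
    let subscription = scene.subscribe(to: SceneEvents.WillRemoveEntity.self) { event in
        print("WillRemoveEntity fired for:", event.entity.name)
    }
    entity.components.set(ManipulationComponent())
    return subscription
}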
Is it possible to load a WKWebView that has 3D rendering (like three.js) in a volumetric window?
I would like to present content from a three.js-based web app as a 3D model in a volumetric window. Is it possible to do this in a similar manner to loading a web page in a WKWebView?
1
1
732
Jul ’25
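For what it's worth, the standard embedding looks like this (URL hypothetical); note that a WKWebView can be shown inside a volumetric window, but web content, including WebGL/three.js, renders as a flat layer rather than becoming true volumetric geometry:

import SwiftUI
import WebKit

// Wraps WKWebView for SwiftUI so it can be placed in any window scene.
struct WebPane: UIViewRepresentable {
    let url: URL                                  // e.g. the three.js app

    func makeUIView(context: Context) -> WKWebView {
        let view = WKWebView()
        view.load(URLRequest(url: url))
        return view
    }

    func updateUIView(_ view: WKWebView, context: Context) {}
}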
How do you incorporate SharePlay into an Immersive scene in VisionOS?
I've got an Immersive scene that I want to be able to bring additional users into via SharePlay where each user would be able to see (and hopefully interact) with the Immersive scene. How does one implement that?
2
1
912
Jan ’26
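At a high level this is GroupActivities plus an ImmersiveSpace: each participant joins a shared GroupActivity, then opens the immersive space locally while the app syncs entity state itself (e.g. over a GroupSessionMessenger). A hedged sketch, with the activity name and wiring hypothetical:

import GroupActivities

// A shared activity all participants join via SharePlay.
struct ExploreTogether: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Explore Together"
        meta.type = .generic
        return meta
    }
}

// Offer the activity; once the session is active, each device opens its own
// ImmersiveSpace and the app replicates scene changes to the other side.
func startSharedExploration() async throws {
    let activity = ExploreTogether()
    if case .activationPreferred = await activity.prepareForActivation() {
        _ = try await activity.activate()
    }
}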
How to give spatial photo a custom corner radius?
Spatial photos in RealityView have a default corner radius. I made a parallax effect with spatial photos in a ScrollView (like Spatial Gallery), but the corner radius disappeared on the left and right spatial photos. I've tried the .clipShape and .mask modifiers, but they didn't work. How can I clip or mask a spatial photo with a corner-radius effect?
0
1
488
Aug ’25