Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

All subtopics
Posts under Spatial Computing topic

Post

Replies

Boosts

Views

Activity

Vision Pro App Development Outside Supported Countries (Apple ID / Region Restrictions?)
Hello, does anyone have experience using Apple Vision Pro in countries where it has not yet been officially released? I work for a company in Austria, and we are interested in developing internal XR applications for Vision Pro. Since the device is not officially available in Austria, we are considering purchasing it in Germany. My main question is whether it is possible to develop and test Vision Pro apps using an Austrian Apple ID / developer account, or if there are any regional restrictions we should be aware of (e.g., related to App Store access, provisioning, or device functionality). Apple Support was unfortunately unable to provide a definitive answer and recommended asking here. Any insights or experiences would be greatly appreciated. Best regards, Don Appelonie
0
0
1
4m
RealityView Camera Target Error when set while Orbiting
When interacting with RealityView's realityViewCameraControls(.orbit) and setting a new RealityViewCameraContent.cameraTarget, the resulting camera target and orbit are incorrect. This can be demonstrated when one finger is orbiting the RealityView and another pushes a button that changes the camera target. Instead of the camera facing the new target, some other point in the scene becomes the effective camera target and orbit point. This only occurs while an orbit interaction is taking place. If you stop interacting with the orbit, change the target, then start orbiting again, everything works as expected. Though this example uses two touches, any change of the camera target conflicts with an in-progress orbit interaction. This means orbiting can produce the wrong camera view, which is unexpected for users and difficult for developers to reconcile or detect.

Expected: While orbiting the scene and setting a new camera target with the on-screen buttons (at the same time), the camera's new target is centred in view, and the orbit revolves around the new target and continues to match my gestures.

Reality: While orbiting the scene and setting a new camera target with the on-screen buttons (at the same time), the camera's new target is not centred in view, and the camera now orbits an unexpected point in the scene that is not my intended target.

One imperfect workaround is to force a rebuild of the view after setting a new cameraTarget. This sets all targets correctly but causes a flicker and a loss of orbit control until the next touch. It is ultimately a poor user experience, though better than showing the wrong target unexpectedly.

Code Sample:

import SwiftUI
import RealityKit

struct RKOrbitTarget: View {
    @State private var target: Int = 0
    @State private var rcContent: RealityViewCameraContent?
    @State private var rkID: UUID = UUID()

    let root = Entity()
    let center = ModelEntity(mesh: .generateSphere(radius: 0.05),
                             materials: [UnlitMaterial(color: UIColor(.gray.opacity(0.5)))])
    let red = ModelEntity(mesh: .generateBox(size: 0.1),
                          materials: [SimpleMaterial(color: .red, isMetallic: false)])
    let blue = ModelEntity(mesh: .generateBox(size: 0.1),
                           materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    let green = ModelEntity(mesh: .generateBox(size: 0.1),
                            materials: [SimpleMaterial(color: .green, isMetallic: false)])

    var body: some View {
        VStack {
            RealityView { content in
                red.position.x = 0.5
                blue.position.z = 0.5
                green.position.y = 0.5
                center.position = .init(repeating: 0.25)
                content.cameraTarget = target == 0 ? root : blue
                root.addChild(red)
                root.addChild(blue)
                root.addChild(green)
                root.addChild(center)
                content.add(root)
            } update: { content in
                switch target {
                case 0: content.cameraTarget = root
                case 1: content.cameraTarget = blue
                case 2: content.cameraTarget = red
                case 3: content.cameraTarget = green
                default: content.cameraTarget = root
                }
            }
            .id(rkID)
            .realityViewCameraControls(.orbit)

            VStack {
                Text("Target")
                Button("Default") {
                    target = 0
                    // Force rebuilding the view resets orbit target and rotation,
                    // but shows a flicker and interaction requires a touch reset.
                    // Not an ideal workaround.
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                Button("Blue") {
                    target = 1
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.blue)
                Button("Red") {
                    target = 2
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.red)
                Button("Green") {
                    target = 3
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.green)
            }
        }
    }
}

Xcode Version: Version 26.0 (17A324)
iOS Version: iOS 26.5 (23F75)
Tested on devices: iPhone 12 Pro, iPhone 15 Pro
2
0
459
17h
`ARCamera.unprojectPoint` and `ARCamera.TrackingState` behavior changes between iOS 26.3 and 26.4 under AR resource pressure
ARCamera.TrackingState questions: Did the threshold or sensitivity for transitioning ARCamera.TrackingState from .normal to .limited(.excessiveMotion) or .limited(.insufficientFeatures) change between iOS 26.3 and iOS 26.4? What does "ARWorldTrackingTechnique: resource constraints [33]" mean, and is it new in iOS 26.4? Does it correspond to a tracking-state degradation? Is there a way for the client to detect or respond to ARKit entering a resource-constrained mode short of the full tracking-state transition (for example, a lower-level notification or a flag on ARFrame), so that apps can take protective action without interpreting it as a full tracking failure? ARCamera.unprojectPoint questions: Did the behavior of ARCamera.unprojectPoint(_:ontoPlane:orientation:viewportSize:) change between iOS 26.3 and iOS 26.4 for near-parallel geometry? Specifically, on iOS 26.3 this method returns nil when the camera ray is nearly parallel to the target plane (denominator of the ray-plane intersection → 0 at ~90° of camera rotation). On iOS 26.4, with identical code and environment, it returns a large finite value instead; we observed z = −12.27 m. Since the method's optional return type implies nil is the documented signal for no valid intersection, this reads as a behavioral regression rather than an intentional change. If returning the computed value for near-parallel geometry is now the intended behavior, is there a recommended way for the caller to guard against it? For example, should we check abs(dot(rayDirection, planeNormal)) against a threshold before calling, and if so, is there a documented epsilon Apple uses internally? Alternatively, is there a newer API we should prefer over unprojectPoint(_:ontoPlane:) for this use case that handles degenerate geometry more gracefully, such as ARSession.raycast(_:)? Are there any other ARKit API adjustments between iOS 26.3 and 26.4? We are using the same codebase, but it now behaves differently between these two OS versions.
Thanks!
0
0
223
2d
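For the unprojectPoint question above, one caller-side option is to reject near-parallel ray/plane pairs before calling the API at all. A minimal sketch in plain Swift, where the Vec3 helper and the 1e-4 epsilon are illustrative assumptions (Apple's internal threshold, if any, is not documented):

```swift
// Sketch: reject ray-plane intersections when the ray is nearly
// parallel to the plane, before calling unprojectPoint(_:ontoPlane:...).
// Vec3 stands in for simd_double3; the epsilon is an assumed value.
struct Vec3 {
    var x, y, z: Double
    func dot(_ o: Vec3) -> Double { x * o.x + y * o.y + z * o.z }
    var length: Double { (x * x + y * y + z * z).squareRoot() }
    var normalized: Vec3 {
        let l = length
        return Vec3(x: x / l, y: y / l, z: z / l)
    }
}

func isSafeToUnproject(rayDirection: Vec3, planeNormal: Vec3,
                       epsilon: Double = 1e-4) -> Bool {
    // A cosine near 0 means the ray is nearly parallel to the plane,
    // so the ray-plane intersection distance blows up.
    let cosine = rayDirection.normalized.dot(planeNormal.normalized)
    return abs(cosine) > epsilon
}
```

With a guard like this, a caller can treat both a nil return and a failed geometry check as "no valid intersection", regardless of which OS behavior is in effect.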
How do I dismiss a presented sheet?
I'm developing an app requiring data entry across several devices. My SwiftUI app runs on iOS and iPadOS but I also want to run it on visionOS. I'm using the visionOS simulator. When I enter data in one of my views I use a Form within a .sheet and this works perfectly well on iOS and iPadOS and I can dismiss the sheet by simply tapping the view behind the sheet. On visionOS I click my + button, the sheet appears, I enter the data as usual but after that there is no gesture in the app I can perform with keyboard or mouse that will make the sheet disappear! Do I have to add a "Close" button for visionOS or is there a way to enable the same interaction that works on iPadOS?
0
0
351
5d
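On the sheet-dismissal question above: visionOS offers no backdrop to tap, so the usual pattern is an explicit dismiss affordance driven by SwiftUI's dismiss environment action, which also works unchanged on iOS and iPadOS. A sketch, with illustrative view and field names:

```swift
import SwiftUI

// Sketch: a data-entry sheet that dismisses itself via the
// environment dismiss action, so the same code works on
// iOS, iPadOS, and visionOS.
struct EntryFormSheet: View {
    @Environment(\.dismiss) private var dismiss
    @State private var name = ""

    var body: some View {
        NavigationStack {
            Form {
                TextField("Name", text: $name)
            }
            .toolbar {
                ToolbarItem(placement: .confirmationAction) {
                    // Explicit affordance: required on visionOS,
                    // harmless elsewhere.
                    Button("Done") { dismiss() }
                }
            }
        }
    }
}
```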
WWDC25 Houdini VR Optimisation Toolkit Texture Baking
The texture baking section of the WWDC25 session "Optimize your custom environments for visionOS" (https://youtu.be/RELnRZmb02c?t=1485) moves very quickly and leaves a lot unexplained. Has anyone worked through this part of the toolkit in practice and can speak to what's actually going on, particularly around projection baking and how it addresses the reprojection artifacts the presenter briefly mentions? Thank you
0
0
774
1w
RealityView content disappears when selecting Lock In Place on visionOS
I'm experiencing an issue where all RealityView content disappears when the user selects "Lock In Place" from the window management menu (long press on the close button). "Follow Me" works correctly. This happens in TestFlight builds only; it is not reproducible when I run locally. I have reproduced this with a minimal project containing nothing but a simple red cube: no custom anchors, no app state, no dependencies.

Steps to Reproduce:
1. Open an ImmersiveSpace. A red cube is placed 1 m in front of the user via RealityView.
2. Long press the X button on any floating window.
3. Select "Lock In Place".
4. The cube disappears immediately.

Expected: Cube remains visible after the window is locked.
Actual: Cube disappears.
Note: "Follow Me" does NOT reproduce this issue.

Minimal reproducible code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.3),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            cube.setPosition(SIMD3(0, 1.5, -1), relativeTo: nil)
            content.add(cube)
        }
    }
}

Device: Apple Vision Pro
visionOS version: visionOS 26.2 (23N301)
Xcode version: Version 26.3 (17C529)

Is this a known issue? Is there a recommended workaround to preserve RealityView content during Lock In Place transitions? Thank you!
1
0
1.6k
2w
Real world anchors
I’m trying to build a persistent world map of my college campus using ARKit, but it’s not very reliable. Anchors don’t consistently appear in the same place across sessions. I’ve tried using image anchors, but they didn’t improve accuracy much. How can I create a stable world map for a larger area and reliably relocalize anchors? Are there better approaches or recommended resources for this?
1
0
1.1k
2w
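For the relocalization question above, the standard ARKit building block is ARWorldMap persistence: serialize the map from one session and feed it to the next as initialWorldMap. A sketch, with illustrative file handling and minimal error handling:

```swift
import ARKit

// Sketch: save and restore an ARWorldMap so anchors can relocalize
// across sessions. The URL handling here is illustrative.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { return }   // e.g. not enough mapping data yet
        if let data = try? NSKeyedArchiver.archivedData(
            withRootObject: map, requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}

func restoreSession(_ session: ARSession, from url: URL) {
    guard let data = try? Data(contentsOf: url),
          let map = try? NSKeyedUnarchiver.unarchivedObject(
              ofClass: ARWorldMap.self, from: data) else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map   // session relocalizes against the saved map
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```

In practice, relocalization tends to be more reliable when the map is saved only while ARFrame.worldMappingStatus is .mapped, and when a large site is covered by several smaller, well-textured maps rather than one campus-wide map.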
visionOS: AVFoundation cannot deliver simultaneous video from two external (UVC) cameras; no public USB fallback exists
Area: visionOS 26.4 · AVFoundation · AVCapture · External/UVC video
Classification: Suggestion / API Enhancement Request (also: Incorrect/Missing Documentation)
Device / OS: Apple Vision Pro, visionOS 26.x. Xcode 26.4.1, XROS26.4.sdk.

Summary
On visionOS, a third-party app cannot display two UVC USB cameras (connected through a powered USB-C hub) at the same time. Every AVFoundation path that would enable this on iPadOS is either unavailable or fails at runtime on visionOS, and there is no public non-AVFoundation fallback (no IOUSBHost, no DriverKit, no usable CoreMediaIO, no MFi path for generic UVC devices). This is a real capability gap relative to iPadOS and macOS, and Camo Studio on iPadOS (App Store ID 6450313385) demonstrates that the two-camera USB-hub use case is legitimate and valuable for spatial-video/hybrid-capture workflows on Vision Pro.

Steps to reproduce
1. Connect a powered USB-C hub to Apple Vision Pro with two UVC webcams attached.
2. Build a visionOS app that uses AVCaptureDevice.DiscoverySession(deviceTypes: [.external], …).
3. Observe: both cameras are discovered and enumerate as distinct AVCaptureDevices.

Attempt A (two independent sessions): Create two independent AVCaptureSessions, each with one AVCaptureDeviceInput and one AVCaptureVideoDataOutput, and start both. Result: only one session delivers sample buffers. The other stalls silently, with no error and no interruption notification.

Attempt B (AVCaptureMultiCamSession with manual connections, the pattern that works on iPadOS 18+): Result: the code does not compile. In XROS26.4.sdk:
- AVCaptureInputPort is API_UNAVAILABLE(visionos) (AVCaptureInput.h)
- AVCaptureInput.ports is API_UNAVAILABLE(visionos)
- AVCaptureDeviceInput.portsWithMediaType:sourceDeviceType:sourceDevicePosition: is API_UNAVAILABLE(macos, visionos)
Therefore AVCaptureConnection(inputPorts:output:) cannot be constructed. AVCaptureMultiCamSession itself is declared API_AVAILABLE(… visionos(2.1)), which is misleading because without input-port access the manual-connection path the class requires is unreachable.

Expected behavior
Either of the following would resolve this, in order of preference:
1. Expose the missing API surface on visionOS. Make AVCaptureInputPort, AVCaptureInput.ports, and AVCaptureDeviceInput.portsWithMediaType:sourceDeviceType:sourceDevicePosition: available on visionOS so the documented iPadOS multi-cam pattern compiles and runs. AVCaptureMultiCamSession is already declared available; the supporting API surface should match.
2. Allow two concurrent plain AVCaptureSessions to each own a distinct external AVCaptureDevice. Each session binds a different hardware device, and the current serialization appears to be a software policy rather than a hardware constraint (a powered hub has bandwidth for both).
3. Document the limit explicitly and surface a clear error or interruption reason on the stalled session so apps can fail loudly instead of appearing to work.

Actual behavior
- AVCaptureMultiCamSession advertises visionos(2.1) availability, but the APIs required to wire its connections are marked unavailable on visionOS.
- Two concurrent AVCaptureSessions silently deliver frames to only one session; no error is reported on the other.
- There is no public alternative framework on visionOS for raw UVC access to work around this:
  - IOUSBHost.framework: not present in XROS26.4.sdk
  - DriverKit: not present in XROS26.4.sdk
  - IOKit: ships a stub (IOKit.tbd); no public USB device interfaces
  - CoreMediaIO: headers are an apinotes stub on visionOS
  - ExternalAccessory: MFi-only; generic UVC devices don't enumerate
This means there is no public path, AVFoundation or otherwise, for a third-party visionOS app to display two UVC cameras at once.

Impact / use cases
Apple Vision Pro is uniquely suited to multi-camera monitoring and capture workflows: spatial creators, broadcast/AV producers, multi-angle reference during immersive authoring, clinical and field-recording use cases, and apps that combine a primary UVC cinema camera with a secondary UVC reference/overview angle. iPadOS already supports this via AVCaptureMultiCamSession (demonstrated shipping by Camo Studio). The current visionOS limitation pushes these workflows back to iPad or macOS and undermines Vision Pro's positioning as a pro capture/monitor environment.

References
- iPadOS reference implementation: Apple sample "Displaying Video From Connected Devices" + AVCaptureMultiCamSession with manual AVCaptureConnection wiring; works on iPadOS 18+ with two UVC cameras via a powered hub.
- Shipping precedent: Camo Studio (two simultaneous UVC cameras via USB hub on iPad): https://apps.apple.com/us/app/camo-studio-stream-record/id6450313385
- visionOS 26.4 SDK headers cited above (AVCaptureInput.h, AVCaptureSession.h).
1
0
1.2k
2w
RealityView attachment draw order
My visionOS 26.3 app displays a diorama-like scene in a RealityView in a mixed immersive space, about 1 meter square, with view attachments floating above the scene. Each view attachment fades out after user interaction, by animating the view's opacity. What I'm observing is that depending on the position of a view attachment relative to the scene and the camera, an unwanted cutout effect is observed (presumably because of draw order issues), as shown in the right column in the screenshots below. YouTube video link of these sequences: https://youtu.be/oTuo0okKCkc (19 seconds) My question: How does visionOS determine the view attachment draw order relative to the RealityView scene? If I better understood how the draw order is determined, I could modify my scene to ensure that the view attachments were always drawn after the scene, fixing the unwanted cutout effect. I've successfully used ModelSortGroupComponent to control the draw order of entities within the RealityView scene, but my understanding is that this approach cannot be used with view attachments. I've submitted FB22014370 about this issue. Thank you.
4
0
1.6k
2w
LowLevelInstanceData & animation
The Apple OS 26 releases introduce LowLevelInstanceData, which can significantly reduce CPU draw-call cost through instancing. However, I have run into trouble animating each individual instance. As I wanted low-level control, I'm using a custom system and LowLevelInstanceData.replace(using:) to update the transforms each frame. The update closure itself is extremely efficient (Xcode Instruments reports nearly no cost), but I noticed extremely high run-loop time, reaching around 20 ms. Time Profiler shows the CPU is blocked by kernel.release.t6401. I suspect this is caused by synchronization between the CPU and GPU; however, as I am already using an MTLCommandBuffer to coordinate them, I don't understand why I am still seeing such large CPU time.
3
0
734
2w
RealityView content disappears when selecting Lock In Place on visionOS
Hi, I'm experiencing an issue where all RealityView content disappears when the user selects "Lock In Place" from the window management menu (long press on the close button). "Follow Me" works correctly. This happens in TestFlight builds only; it is not reproducible when I run locally. I have reproduced this with a minimal project containing nothing but a simple red cube: no custom anchors, no app state, no dependencies.

Steps to Reproduce:
1. Open an ImmersiveSpace. A red cube is placed 1 m in front of the user via RealityView.
2. Long press the X button on any floating window.
3. Select "Lock In Place".
4. The cube disappears immediately.

Expected: Cube remains visible after the window is locked.
Actual: Cube disappears.

Minimal reproducible code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.3),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            cube.setPosition(SIMD3<Float>(0, 1.5, -1), relativeTo: nil)
            content.add(cube)
        }
    }
}

Device: Apple Vision Pro
visionOS version: visionOS 26.2 (23N301)
Xcode version: Version 26.3 (17C529)

Is this a known issue? Is there a recommended workaround to preserve RealityView content during Lock In Place transitions? Thank you!
5
0
1.4k
3w
RealityView camera feed not shown
I have two RealityViews: ParentView and ChildView. When I click the button in ParentView, ChildView is shown as a full-screen cover, but the camera feed in ChildView is not shown, only a black screen. If I show ChildView directly, it works with the camera feed. Can anyone help me with this issue? Thanks.

import RealityKit
import SwiftUI

struct ParentView: View {
    @State private var showIt = false

    var body: some View {
        ZStack {
            RealityView { content in
                content.camera = .virtual
                let box = ModelEntity(
                    mesh: MeshResource.generateSphere(radius: 0.2),
                    materials: [createSimpleMaterial(color: .red)]
                )
                content.add(box)
            }
            Button("Click here") {
                showIt = true
            }
        }
        .fullScreenCover(isPresented: $showIt) {
            ChildView()
                .overlay(
                    Button("Close") {
                        showIt = false
                    }
                    .padding(20),
                    alignment: .bottomLeading
                )
        }
        .ignoresSafeArea(.all)
    }
}

import ARKit
import RealityKit
import SwiftUI

struct ChildView: View {
    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
        }
    }
}
5
1
2.1k
3w
ManipulationComponent Not Translating using indirect input
When using the new RealityKit ManipulationComponent on entities, indirect input never translates the entity, no matter what settings are applied. Direct manipulation works as expected for both translation and rotation. Is this intended behaviour? This is different from how indirect manipulation works on Model3D. How else can we get translation from this component? visionOS 26 Beta 2, built from macOS 26 Beta 2 and Xcode 26 Beta 2. Attached is replicable sample code; I have tried this in other projects with the same results.

var body: some View {
    RealityView { content in
        // Add the initial RealityKit content
        if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
            ManipulationComponent.configureEntity(
                immersiveContentEntity,
                allowedInputTypes: .all,
                collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)]
            )
            immersiveContentEntity.position.y = 1
            immersiveContentEntity.position.z = -0.5
            var mc = ManipulationComponent()
            mc.releaseBehavior = .stay
            immersiveContentEntity.components.set(mc)
            content.add(immersiveContentEntity)
        }
    }
}
17
5
3.2k
Apr ’26
Tapping once with both hands only works sometimes in visionOS
Hello! I have an iOS app where I am looking into support for visionOS. I have a whole bunch of gestures set up using UIGestureRecognizer and so far most of them work great in visionOS! But I do see something odd that I am not sure can be fixed on my end. I have a UITapGestureRecognizer which is set up with numberOfTouchesRequired = 2 which I am assuming translates in visionOS to when you tap your thumb and index finger on both hands. When I tap with both hands sometimes this tap gesture gets kicked off and other times it doesn't and it says it only received one touch when it should be two. Interestingly, I see this behavior in Apple Maps where tapping once with both hands should zoom out the map, which only works sometimes. Can anyone explain this or am I missing something?
6
0
1.2k
Apr ’26
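For reference, the recognizer setup described in the question above can be sketched like this (the view and selector names are illustrative):

```swift
import UIKit

// Sketch: a single tap that requires two simultaneous touches.
// On visionOS, a two-touch gesture is expected to map to pinching
// with both hands at once, which is where the intermittent
// recognition described above shows up.
func addTwoTouchTap(to view: UIView, target: Any, action: Selector) {
    let tap = UITapGestureRecognizer(target: target, action: action)
    tap.numberOfTapsRequired = 1
    tap.numberOfTouchesRequired = 2   // both touches must be delivered together
    view.addGestureRecognizer(tap)
}
```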
Work with Reality Composer Pro content in Xcode
May I ask whether there is a complete source code project available for the instructional video "Work with Reality Composer Pro content in Xcode"? I would like to use it for learning.
0
0
80
10h
VirtualEnvironmentProbeComponent VS ImageBasedLightComponent
Hi. I want to know the difference between VirtualEnvironmentProbeComponent and ImageBasedLightComponent. It seems both can achieve the same lighting and reflection effects from an environment.
0
0
258
2d
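For comparison with the question above, the classic image-based-lighting path can be sketched as follows. The "Sunlight" resource name is an assumption, and the VirtualEnvironmentProbeComponent setup is omitted here; broadly, the probe component drives lighting and reflections from a probe placed at a location in the scene, while IBL applies an explicit environment texture you supply:

```swift
import RealityKit

// Sketch: lighting a model with ImageBasedLightComponent.
// One entity carries the light; receiving entities opt in explicitly.
func applyIBL(to model: ModelEntity, in root: Entity) async throws {
    // "Sunlight" is an assumed asset name in the app bundle.
    let environment = try await EnvironmentResource(named: "Sunlight")

    let lightEntity = Entity()
    lightEntity.components.set(
        ImageBasedLightComponent(source: .single(environment)))
    root.addChild(lightEntity)

    // The model only picks up the IBL if it has a receiver component
    // pointing at the light-carrying entity.
    model.components.set(
        ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))
}
```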
AVP Developer Strap
I'm trying to find where to buy the Vision Pro Developer Strap Gen 2. I've looked all around this site and cannot find it. Can anyone help?
0
0
374
3d
Spatial Audio: <<<< FigAudioSession(AV) >>>> signalled err=-19224 at <>:612
I'm trying to play Spatial Audio on my Vision Pro (visionOS 26.4), using Xcode 26.4.1. Every attempt gives me the following error: <<<< FigAudioSession(AV) >>>> signalled err=-19224 at <>:612. I have tried the sample code at https://developer.apple.com/documentation/visionos/playing-spatial-audio-in-visionos and it gives the same error.
2
0
767
4d
Object Capture feature in visionOS
The Object Capture feature in the Reality Composer app is only available on iOS and iPadOS at the moment. Will this feature be available on visionOS in the near future? Reality Composer on the App Store: https://apps.apple.com/us/app/reality-composer/id1462358802
2
0
1.4k
1w
RealityView content disappears when selecting Lock In Place on visionOS
I'm experiencing an issue where all RealityView content disappears when the user selects "Lock In Place" from the window management menu (long press on the close button). "Follow Me" works correctly. This happens in TestFlight builds only; it is not reproducible when I run locally. I have reproduced it with a minimal project containing nothing but a simple red cube — no custom anchors, no app state, no dependencies.
Steps to Reproduce:
1. Open an ImmersiveSpace. A red cube is placed 1 m in front of the user via RealityView.
2. Long press the X button on any floating window.
3. Select "Lock In Place".
4. The cube disappears immediately.
Expected: Cube remains visible after the window is locked.
Actual: Cube disappears.
Note: "Follow Me" does NOT reproduce this issue.
Minimal reproducible code:
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.3),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            cube.setPosition(SIMD3<Float>(0, 1.5, -1), relativeTo: nil)
            content.add(cube)
        }
    }
}
Device: Apple Vision Pro
visionOS version: visionOS 26.2 (23N301)
Xcode version: 26.3 (17C529)
Is this a known issue? Is there a recommended workaround to preserve RealityView content during Lock In Place transitions? Thank you!
Replies
1
Boosts
0
Views
1.6k
Activity
2w
Real world anchors
I’m trying to build a persistent world map of my college campus using ARKit, but it’s not very reliable. Anchors don’t consistently appear in the same place across sessions. I’ve tried using image anchors, but they didn’t improve accuracy much. How can I create a stable world map for a larger area and reliably relocalize anchors? Are there better approaches or recommended resources for this?
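Assuming this is iOS ARKit (visionOS handles persistence differently, via WorldAnchor), the usual pattern is to serialize the session's ARWorldMap and seed the next session with it so anchors relocalize. A minimal sketch, with the file URL left to the caller:

```swift
import ARKit

// Save the current world map so anchors can be restored next session.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                           requiringSecureCoding: true)
        else { return }
        try? data.write(to: url, options: [.atomic])
    }
}

// Relocalize by seeding a new session with the saved map.
func restoreWorldMap(into session: ARSession, from url: URL) {
    guard let data = try? Data(contentsOf: url),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                 from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```

Note that a single ARWorldMap rarely covers a whole campus reliably; saving only when `session.currentFrame?.worldMappingStatus == .mapped`, and splitting large areas into several smaller maps, tends to improve relocalization.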
Replies
1
Boosts
0
Views
1.1k
Activity
2w
visionOS: AVFoundation cannot deliver simultaneous video from two external (UVC) cameras; no public USB fallback exists
Area: visionOS 26.4 · AVFoundation · AVCapture · External/UVC video
Classification: Suggestion / API Enhancement Request (also: Incorrect/Missing Documentation)
Device / OS: Apple Vision Pro, visionOS 26.x, Xcode 26.4.1, XROS26.4.sdk
Summary
On visionOS, a third-party app cannot display two UVC USB cameras (connected through a powered USB-C hub) at the same time. Every AVFoundation path that would enable this on iPadOS is either unavailable or fails at runtime on visionOS, and there is no public non-AVFoundation fallback (no IOUSBHost, no DriverKit, no usable CoreMediaIO, no MFi path for generic UVC devices). This is a real capability gap relative to iPadOS and macOS, and Camo Studio on iPadOS (App Store ID 6450313385) demonstrates that the two-camera USB-hub use case is legitimate and valuable for spatial-video/hybrid-capture workflows on Vision Pro.
Steps to reproduce
1. Connect a powered USB-C hub to Apple Vision Pro with two UVC webcams attached.
2. Build a visionOS app that uses AVCaptureDevice.DiscoverySession(deviceTypes: [.external], …).
3. Observe: both cameras are discovered and enumerate as distinct AVCaptureDevices.
Attempt A — two independent sessions: create two independent AVCaptureSessions, each with one AVCaptureDeviceInput and one AVCaptureVideoDataOutput, and start both. Result: only one session delivers sample buffers; the other stalls silently with no error and no interruption notification.
Attempt B — AVCaptureMultiCamSession with manual connections (the pattern that works on iPadOS 18+). Result: the code does not compile. In XROS26.4.sdk:
- AVCaptureInputPort is API_UNAVAILABLE(visionos) (AVCaptureInput.h)
- AVCaptureInput.ports is API_UNAVAILABLE(visionos)
- AVCaptureDeviceInput.portsWithMediaType:sourceDeviceType:sourceDevicePosition: is API_UNAVAILABLE(macos, visionos)
Therefore AVCaptureConnection(inputPorts:output:) cannot be constructed.
AVCaptureMultiCamSession itself is declared API_AVAILABLE(… visionos(2.1)), which is misleading because without input-port access the manual-connection path the class requires is unreachable.
Expected behavior
Either of the following would resolve this, in order of preference:
1. Expose the missing API surface on visionOS. Make AVCaptureInputPort, AVCaptureInput.ports, and AVCaptureDeviceInput.portsWithMediaType:sourceDeviceType:sourceDevicePosition: available on visionOS so the documented iPadOS multi-cam pattern compiles and runs. AVCaptureMultiCamSession is already declared available — the supporting API surface should match.
2. Allow two concurrent plain AVCaptureSessions to each own a distinct external AVCaptureDevice. Each session binds a different hardware device, and the current serialization appears to be a software policy rather than a hardware constraint (a powered hub has bandwidth for both).
3. Document the limit explicitly and surface a clear error or interruption reason on the stalled session so apps can fail loudly instead of appearing to work.
Actual behavior
- AVCaptureMultiCamSession advertises visionos(2.1) availability, but the APIs required to wire its connections are marked unavailable on visionOS.
- Two concurrent AVCaptureSessions silently deliver frames to only one session; no error is reported on the other.
- There is no public alternative framework on visionOS for raw UVC access to work around this:
  - IOUSBHost.framework — not present in XROS26.4.sdk
  - DriverKit — not present in XROS26.4.sdk
  - IOKit — ships a stub (IOKit.tbd); no public USB device interfaces
  - CoreMediaIO — headers are an apinotes stub on visionOS
  - ExternalAccessory — MFi-only; generic UVC devices don't enumerate
This means there is no public path, AVFoundation or otherwise, for a third-party visionOS app to display two UVC cameras at once.
Impact / use cases
Apple Vision Pro is uniquely suited to multi-camera monitoring and capture workflows — spatial creators, broadcast/AV producers, multi-angle reference during immersive authoring, clinical and field-recording use cases, and apps that combine a primary UVC cinema camera with a secondary UVC reference/overview angle. iPadOS already supports this via AVCaptureMultiCamSession (demonstrated shipping by Camo Studio). The current visionOS limitation pushes these workflows back to iPad or macOS and undermines Vision Pro's positioning as a pro capture/monitor environment.
References
- iPadOS reference implementation: Apple sample Displaying Video From Connected Devices + AVCaptureMultiCamSession with manual AVCaptureConnection wiring — works on iPadOS 18+ with two UVC cameras via a powered hub.
- Shipping precedent: Camo Studio — two simultaneous UVC cameras via USB hub on iPad — https://apps.apple.com/us/app/camo-studio-stream-record/id6450313385
- visionOS 26.4 SDK headers cited above (AVCaptureInput.h, AVCaptureSession.h).
Replies
1
Boosts
0
Views
1.2k
Activity
2w
RealityView attachment draw order
My visionOS 26.3 app displays a diorama-like scene in a RealityView in a mixed immersive space, about 1 meter square, with view attachments floating above the scene. Each view attachment fades out after user interaction, by animating the view's opacity. What I'm observing is that depending on the position of a view attachment relative to the scene and the camera, an unwanted cutout effect is observed (presumably because of draw order issues), as shown in the right column in the screenshots below. YouTube video link of these sequences: https://youtu.be/oTuo0okKCkc (19 seconds) My question: How does visionOS determine the view attachment draw order relative to the RealityView scene? If I better understood how the draw order is determined, I could modify my scene to ensure that the view attachments were always drawn after the scene, fixing the unwanted cutout effect. I've successfully used ModelSortGroupComponent to control the draw order of entities within the RealityView scene, but my understanding is that this approach cannot be used with view attachments. I've submitted FB22014370 about this issue. Thank you.
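For entities (not attachments), the sort-group approach mentioned above looks roughly like the sketch below; whether attachment views can participate in such ordering is exactly the open question here. The entity setup is illustrative:

```swift
import RealityKit

// Entities sharing a ModelSortGroup are drawn in ascending `order`.
let sortGroup = ModelSortGroup(depthPass: nil)

let diorama = ModelEntity(mesh: .generateBox(size: 0.5),
                          materials: [SimpleMaterial()])
let overlay = ModelEntity(mesh: .generatePlane(width: 0.2, depth: 0.2),
                          materials: [UnlitMaterial(color: .white)])

diorama.components.set(ModelSortGroupComponent(group: sortGroup, order: 0))
// Higher order draws later, on top of lower-order members of the group.
overlay.components.set(ModelSortGroupComponent(group: sortGroup, order: 1))
```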
Replies
4
Boosts
0
Views
1.6k
Activity
2w
LowLevelInstanceData & animation
The OS 26 releases introduce LowLevelInstanceData, which can reduce CPU draw calls significantly through instancing. However, I have run into trouble animating each individual instance. Since I wanted low-level control, I'm using a custom System and LowLevelInstanceData.replace(using:) to update the transforms each frame. The update closure itself is extremely efficient (Xcode Instruments reports almost no cost), but I'm seeing extremely high run-loop time, reaching around 20 ms. Time Profiler shows the CPU blocked in kernel.release.t6401. I suspect this is synchronization between the CPU and GPU; however, since I am already using an MTLCommandBuffer to coordinate it, I don't understand why I am still seeing such large CPU time.
Replies
3
Boosts
0
Views
734
Activity
2w
RealityView content disappears when selecting Lock In Place on visionOS
Hi, I'm experiencing an issue where all RealityView content disappears when the user selects "Lock In Place" from the window management menu (long press on the close button). "Follow Me" works correctly. This happens in TestFlight builds only; it is not reproducible when I run locally. I have reproduced it with a minimal project containing nothing but a simple red cube — no custom anchors, no app state, no dependencies.
Steps to Reproduce:
1. Open an ImmersiveSpace. A red cube is placed 1 m in front of the user via RealityView.
2. Long press the X button on any floating window.
3. Select "Lock In Place".
4. The cube disappears immediately.
Expected: Cube remains visible after the window is locked.
Actual: Cube disappears.
Minimal reproducible code:
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.3),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            cube.setPosition(SIMD3<Float>(0, 1.5, -1), relativeTo: nil)
            content.add(cube)
        }
    }
}
Device: Apple Vision Pro
visionOS version: visionOS 26.2 (23N301)
Xcode version: 26.3 (17C529)
Is this a known issue? Is there a recommended workaround to preserve RealityView content during Lock In Place transitions? Thank you!
Replies
5
Boosts
0
Views
1.4k
Activity
3w
RealityView camera feed not shown
I have two RealityViews: ParentView and ChildView. When I click the button in ParentView, ChildView is shown as a full-screen cover, but the camera feed in ChildView is not shown, only a black screen. If I show ChildView directly, the camera feed works. Can anyone help with this issue? Thanks.
import RealityKit
import SwiftUI

struct ParentView: View {
    @State private var showIt = false

    var body: some View {
        ZStack {
            RealityView { content in
                content.camera = .virtual
                let box = ModelEntity(
                    mesh: MeshResource.generateSphere(radius: 0.2),
                    materials: [createSimpleMaterial(color: .red)]
                )
                content.add(box)
            }
            Button("Click here") {
                showIt = true
            }
        }
        .fullScreenCover(isPresented: $showIt) {
            ChildView()
                .overlay(
                    Button("Close") {
                        showIt = false
                    }.padding(20),
                    alignment: .bottomLeading
                )
        }
        .ignoresSafeArea(.all)
    }
}

import ARKit
import RealityKit
import SwiftUI

struct ChildView: View {
    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
        }
    }
}
Replies
5
Boosts
1
Views
2.1k
Activity
3w
ManipulationComponent Not Translating using indirect input
When using the new RealityKit ManipulationComponent on entities, indirect input never translates the entity, no matter what settings are applied. Direct manipulation works as expected for both translation and rotation. Is this intended behaviour? This is different from how indirect manipulation works on Model3D. How else can we get translation from this component?
visionOS 26 Beta 2, built from macOS 26 Beta 2 and Xcode 26 Beta 2.
Attached is reproducible sample code; I have tried this in other projects with the same results.
var body: some View {
    RealityView { content in
        // Add the initial RealityKit content
        if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
            ManipulationComponent.configureEntity(
                immersiveContentEntity,
                allowedInputTypes: .all,
                collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)]
            )
            immersiveContentEntity.position.y = 1
            immersiveContentEntity.position.z = -0.5
            var mc = ManipulationComponent()
            mc.releaseBehavior = .stay
            immersiveContentEntity.components.set(mc)
            content.add(immersiveContentEntity)
        }
    }
}
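Not an answer on ManipulationComponent itself, but one common workaround for indirect translation is a targeted SwiftUI drag gesture, which works with gaze-and-pinch input. A sketch under the assumption that a simple box stands in for the loaded entity:

```swift
import RealityKit
import SwiftUI

struct DragToTranslateView: View {
    var body: some View {
        RealityView { content in
            let entity = ModelEntity(mesh: .generateBox(size: 0.2),
                                     materials: [SimpleMaterial()])
            // Gestures require collision shapes plus an input target.
            entity.generateCollisionShapes(recursive: true)
            entity.components.set(InputTargetComponent())
            entity.position = SIMD3<Float>(0, 1.5, -0.5)
            content.add(entity)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the gesture location into the entity's parent space.
                    guard let parent = value.entity.parent else { return }
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: parent)
                }
        )
    }
}
```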
Replies
17
Boosts
5
Views
3.2k
Activity
Apr ’26
Tapping once with both hands only works sometimes in visionOS
Hello! I have an iOS app where I am looking into support for visionOS. I have a whole bunch of gestures set up using UIGestureRecognizer, and so far most of them work great in visionOS! But I do see something odd that I am not sure can be fixed on my end. I have a UITapGestureRecognizer set up with numberOfTouchesRequired = 2, which I assume translates in visionOS to tapping your thumb and index finger together on both hands. When I tap with both hands, sometimes this tap gesture fires and other times it doesn't, reporting that it received only one touch when it should be two. Interestingly, I see the same behavior in Apple Maps, where tapping once with both hands should zoom out the map: it only works sometimes. Can anyone explain this, or am I missing something?
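For reference, a minimal sketch of the recognizer setup described (class and method names are illustrative); on visionOS each hand's pinch counts as one touch, so numberOfTouchesRequired = 2 should map to a simultaneous two-hand pinch:

```swift
import UIKit

final class TwoHandTapController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self,
                                         action: #selector(handleTwoHandTap(_:)))
        // Two simultaneous touches: on visionOS, a pinch on each hand.
        tap.numberOfTouchesRequired = 2
        tap.numberOfTapsRequired = 1
        view.addGestureRecognizer(tap)
    }

    @objc private func handleTwoHandTap(_ recognizer: UITapGestureRecognizer) {
        // numberOfTouches reports how many touches the recognizer saw.
        print("Two-hand tap with \(recognizer.numberOfTouches) touches")
    }
}
```

If the two pinches don't land within the system's simultaneity window, the recognizer sees a single touch and fails, which would match the intermittent behavior observed in Apple Maps.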
Replies
6
Boosts
0
Views
1.2k
Activity
Apr ’26