Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

visionOS widget dimensions?
Is there any size guidance for the new WidgetKit integration on visionOS? The Widget HIG provides dimensions for all the widget size classes on iOS, iPadOS, and watchOS, but has not been updated for visionOS. https://developer.apple.com/design/human-interface-guidelines/widgets My potential widget use case is image-based, so I'm looking to better understand the optimal size, resolution, etc. I would need, particularly for the new visionOS-specific extra-large widget size. (A sketch for reading the delivered size at runtime follows this entry.)
0
0
576
Jul ’25
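Until the HIG lists visionOS dimensions, one low-risk approach is to declare the families you support and read the size the system actually hands your view at runtime. Below is a minimal sketch; it assumes .systemExtraLarge is the family backing the visionOS extra-large widget (the iPadOS family of the same name), and PhotoWidget, PhotoProvider, and PhotoWidgetView are hypothetical names, not from the post.

import WidgetKit
import SwiftUI

struct PhotoEntry: TimelineEntry {
    let date: Date
}

struct PhotoProvider: TimelineProvider {
    func placeholder(in context: Context) -> PhotoEntry { PhotoEntry(date: .now) }
    func getSnapshot(in context: Context, completion: @escaping (PhotoEntry) -> Void) {
        completion(PhotoEntry(date: .now))
    }
    func getTimeline(in context: Context, completion: @escaping (Timeline<PhotoEntry>) -> Void) {
        completion(Timeline(entries: [PhotoEntry(date: .now)], policy: .never))
    }
}

struct PhotoWidgetView: View {
    @Environment(\.widgetFamily) private var family
    let entry: PhotoEntry

    var body: some View {
        // Report the size the system actually gives each family on device,
        // instead of hard-coding dimensions the HIG doesn't list yet.
        GeometryReader { proxy in
            Text("\(String(describing: family)): \(Int(proxy.size.width)) x \(Int(proxy.size.height)) pt")
        }
    }
}

struct PhotoWidget: Widget {
    var body: some WidgetConfiguration {
        StaticConfiguration(kind: "PhotoWidget", provider: PhotoProvider()) { entry in
            PhotoWidgetView(entry: entry)
                .containerBackground(.fill.tertiary, for: .widget)
        }
        .supportedFamilies([.systemSmall, .systemMedium, .systemLarge, .systemExtraLarge])
    }
}

For an image-based widget, rendering with Image(...).resizable().scaledToFill() and supplying roughly 2x the reported point size in pixels is a reasonable starting assumption until official numbers appear.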
Template Project Entity Overlapping and Sticking Issues
Hello, There are three issues I am running into with a default template project plus minimal code changes: the Sphere_Left entity always overlaps the Sphere_Right entity; when I release the Sphere_Left entity, it does not remain stuck to the Sphere_Right entity; and when I release the Sphere_Left entity, it distances itself from the Sphere_Right entity. When I manipulate the Sphere_Right entity, these three issues do not occur: I get the correct, expected behavior. The issues are simple to replicate:
Create a new project in Xcode.
Choose visionOS -> App, then click Next.
Name your project, and leave all other options as defaults: Initial Scene: Window; Immersive Space Renderer: RealityKit; Immersive Space: Mixed. Then click Next.
Save your project anywhere.
Replace the entire ImmersiveView.swift file with the code below. Run.
Try to manipulate the left sphere; you should get the same issues I mentioned above.
If you restart the project and manipulate only the right sphere, you should get the correct, expected behavior and no issues.
I am running this on macOS 26, Xcode 26, and visionOS 26, all recently released.
ImmersiveView code:

//
// ImmersiveView.swift
//

import OSLog
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    private let logger = Logger(subsystem: "com.testentitiessticktogether", category: "ImmersiveView")
    @State var collisionBeganUnfiltered: EventSubscription?

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add manipulation components
                setupManipulationComponents(in: immersiveContentEntity)

                collisionBeganUnfiltered = content.subscribe(to: CollisionEvents.Began.self) { collisionEvent in
                    Task { @MainActor in
                        handleCollision(entityA: collisionEvent.entityA, entityB: collisionEvent.entityB)
                    }
                }
            }
        }
    }

    private func setupManipulationComponents(in rootEntity: Entity) {
        logger.info("\(#function) \(#line) ")
        let sphereNames = ["Sphere_Left", "Sphere_Right"]
        for name in sphereNames {
            guard let sphere = rootEntity.findEntity(named: name) else {
                logger.error("\(#function) \(#line) Failed to find \(name) entity")
                assertionFailure("Failed to find \(name) entity")
                continue
            }
            ManipulationComponent.configureEntity(sphere)
            var manipulationComponent = ManipulationComponent()
            manipulationComponent.releaseBehavior = .stay
            sphere.components.set(manipulationComponent)
        }
        logger.info("\(#function) \(#line) Successfully set up manipulation components")
    }

    private func handleCollision(entityA: Entity, entityB: Entity) {
        logger.info("\(#function) \(#line) Collision between \(entityA.name) and \(entityB.name)")
        guard entityA !== entityB else { return }
        if entityB.isAncestor(of: entityA) {
            logger.debug("\(#function) \(#line) \(entityA.name) already under \(entityB.name); skipping reparent")
            return
        }
        if entityA.isAncestor(of: entityB) {
            logger.info("\(#function) \(#line) Skip reparent: \(entityA.name) is an ancestor of \(entityB.name)")
            return
        }
        reparentEntities(child: entityA, parent: entityB)
        entityA.components[ParticleEmitterComponent.self]?.burst()
    }

    private func reparentEntities(child: Entity, parent: Entity) {
        let childBounds = child.visualBounds(relativeTo: nil)
        let parentBounds = parent.visualBounds(relativeTo: nil)
        let maxEntityWidth = max(childBounds.extents.x, parentBounds.extents.x)
        let childPosition = child.position(relativeTo: nil)
        let parentPosition = parent.position(relativeTo: nil)
        let currentDistance = distance(childPosition, parentPosition)

        child.setParent(parent, preservingWorldTransform: true)
        logger.info("\(#function) \(#line) Set \(child.name) parent to \(parent.name)")
        child.components.remove(ManipulationComponent.self)
        logger.info("\(#function) \(#line) Removed ManipulationComponent from child \(child.name)")

        if currentDistance > maxEntityWidth {
            let direction = normalize(childPosition - parentPosition)
            let newPosition = parentPosition + direction * maxEntityWidth
            child.setPosition(newPosition - parentPosition, relativeTo: parent)
            logger.info("\(#function) \(#line) Adjusted position: distance was \(currentDistance), now \(maxEntityWidth)")
        }
    }
}

fileprivate extension Entity {
    func isAncestor(of other: Entity) -> Bool {
        var current: Entity? = other.parent
        while let node = current {
            if node === self { return true }
            current = node.parent
        }
        return false
    }
}

#Preview(immersionStyle: .mixed) {
    ImmersiveView()
        .environment(AppModel())
}
8
0
421
Sep ’25
Is it possible to live render CMTaggedBuffer / MV-HEVC frames in visionOS?
Hey all, I'm working on a visionOS app that captures live frames from the left and right cameras of Apple Vision Pro using cameraFrame.sample(for: .left/.right). Apple provides documentation on encoding side-by-side frames into MV-HEVC spatial video using CMTaggedBuffer: Converting Side-by-Side 3D Video to MV-HEVC. My question: is there any way to render tagged frames (e.g. CMTaggedBuffer with .stereoView(.leftEye/.rightEye)) live, directly to a surface in RealityKit or Metal, without saving them to a file? I'd like to create a true stereoscopic (spatial) live video preview, not just render two images side by side. Any advice or insights would be greatly appreciated! (A sketch of one building block follows this entry.)
2
0
250
Aug ’25
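Not a full answer, but one building block that avoids the file round-trip: each eye's tagged frame wraps a CVPixelBuffer, and those can be turned into Metal textures without copying via CVMetalTextureCache (stable Core Video API). Presenting the pair stereoscopically would still have to go through CompositorServices (one texture per LayerRenderer view) or a RealityKit DrawableQueue; the sketch below covers only the pixel-buffer-to-texture step and assumes BGRA frames.

import CoreVideo
import Metal

final class EyeTextureCache {
    private var cache: CVMetalTextureCache?

    init?(device: MTLDevice) {
        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache) == kCVReturnSuccess else {
            return nil
        }
    }

    // Wraps one eye's pixel buffer as a Metal texture, zero-copy.
    func texture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
        guard let cache else { return nil }
        var cvTexture: CVMetalTexture?
        let status = CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault,
            cache,
            pixelBuffer,
            nil,
            .bgra8Unorm,
            CVPixelBufferGetWidth(pixelBuffer),
            CVPixelBufferGetHeight(pixelBuffer),
            0, // plane index
            &cvTexture
        )
        guard status == kCVReturnSuccess, let cvTexture else { return nil }
        return CVMetalTextureGetTexture(cvTexture)
    }
}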
Need to rotate child of a 3D mesh
I am creating a Vision Pro app with a 3D model; it has a mesh hierarchy of head, hands, feet, etc. I want the character to look toward the camera, but I am not able to access the head of the character through SceneKit or RealityKit. When I try to print the names of the child meshes, it only prints down to the character node; it does not iterate through all the body parts. Can anyone help? (A traversal sketch follows this entry.)
1
0
203
Sep ’25
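findEntity(named:) only matches nodes whose names survived import, so a full recursive dump usually reveals where the head node actually lives, including unnamed intermediates. A small model-agnostic sketch:

import RealityKit

// Prints the entire entity tree, one line per node, indented by depth.
func dumpHierarchy(_ entity: Entity, depth: Int = 0) {
    let indent = String(repeating: "  ", count: depth)
    let name = entity.name.isEmpty ? "<unnamed>" : entity.name
    print("\(indent)\(name) [\(type(of: entity))]")
    for child in entity.children {
        dumpHierarchy(child, depth: depth + 1)
    }
}

Once the head's place in the tree is known, it can be reached through entity.children even when unnamed, and oriented with look(at:from:relativeTo:).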
ManipulationComponent in both parent and child entities
Hello, In my project I have attached a ManipulationComponent to Entity A and, as expected, I'm able to interact with it using the built-in gestures. I have another Entity B, a child of A, that I would like to interact with as well, so I attempted to add a ManipulationComponent to B. However, no gestures seem to be registered on B; I can still interact with A, but B cannot be interacted with despite both entities having ManipulationComponents. So I'm wondering if I'm doing something wrong, if this is an issue with the ManipulationComponent, or if this is a limitation of the API. Below is the code used to add the ManipulationComponent to an entity; it was run on both A and B:

let mc = ManipulationComponent()
model.components.set(mc)

var boxShape = ShapeResource.generateBox(width: 0.25, height: 0.05, depth: 0.25)
boxShape = boxShape.offsetBy(translation: simd_float3(0, -0.05, -0.25))
ManipulationComponent.configureEntity(model, collisionShapes: [boxShape])

if var mc = model.components[ManipulationComponent.self] {
    mc.releaseBehavior = .stay
    mc.dynamics.inertia = .low
    model.components.set(mc)
}

I am using visionOS 26.0; let me know if there's any additional information needed.
1
0
372
Oct ’25
RoomPlan crash on iOS 16 devices even when using available checks
I've encountered an unexpected crash with RoomPlan on iOS 16 devices. The odd part is that the code is protected by an availability check, since I'm using newer RoomPlan features.

Xcode error:
dyld[40588]: Symbol not found: _$s8RoomPlan08CapturedA0V16USDExportOptionsV5modelAEvgZ

I can repro using the Apple sample code: https://developer.apple.com/documentation/roomplan/create-a-3d-model-of-an-interior-room-by-guiding-the-user-through-an-ar-experience Modify RoomCaptureViewController.swift as follows.

Remove:
try finalResults?.export(to: destinationURL, exportOptions: .parametric)

Add:
if #available(iOS 17.0, *) {
    try finalResults?.export(to: destinationURL, exportOptions: .model)
} else {
    try finalResults?.export(to: destinationURL, exportOptions: .parametric)
}

I would have expected this code to at least compile and run on older devices. When the app was targeting iOS 15, the availability checks worked as expected and the app was able to launch properly.
0
0
166
Apr ’25
How to configure Spatial Audio on a Video Material?, Compile error.
I've tried following Apple's documentation to apply a video material on a ModelEntity, but I encountered a compile error while attempting to specify the spatial audio type. It is a 360 video on a sphere, which plays just fine, but the audio is too quiet compared to the volume I get when I preview the video in Xcode. So I tried to configure the audio playback mode on the material, but it gives me a compile error:
'audioInputMode' is unavailable in visionOS
'audioInputMode' has been explicitly marked unavailable here (RealityFoundation.VideoPlaybackController.audioInputMode)
https://developer.apple.com/documentation/realitykit/videomaterial/

Code:
let player = AVPlayer(url: url)

// Instantiate and configure the video material.
let material = VideoMaterial(avPlayer: player)

// Configure audio playback mode.
material.controller.audioInputMode = .spatial // this line won't compile

visionOS 2.4, Xcode 16.4; also tried Xcode 26 beta 2. The videos are HEVC in MPEG-4 containers. Is there any other way to do this, or is there a workaround available? (A hedged workaround sketch follows this entry.) Thank you.
2
0
247
Jul ’25
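Since VideoPlaybackController.audioInputMode is marked unavailable on visionOS, one avenue worth testing is requesting spatialization at the AVPlayerItem level instead. allowedAudioSpatializationFormats is established AVFoundation API on iOS/tvOS, but whether it changes anything on visionOS is an assumption to verify, so treat this sketch as an experiment rather than a fix:

import AVFoundation
import RealityKit

func makeSphereVideoMaterial(url: URL) -> VideoMaterial {
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)

    // audioInputMode is unavailable on visionOS, so ask AVFoundation
    // for spatialized output instead. (Effect on visionOS unverified.)
    player.currentItem?.allowedAudioSpatializationFormats = .monoStereoAndMultichannel

    player.play()
    return material
}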
Help Configuring Unity for Immersive VR on Vision Pro with Pinch Teleport
How do I configure a Unity project for a fully immersive VR app on Apple Vision Pro using Metal rendering, and add a simple pinch-to-teleport-where-looking feature? I've tried the available samples and docs, but they don't cover this clearly (to me). So far I've reviewed the Unity XR docs, Apple dev guides, and tutorials, but most emphasize spatial apps; Metal examples exist but don't include teleportation. Specifically:
visionOS sample "XRI_SimpleRig" – deploys to device/simulator, but no full immersion or teleport.
XRI Toolkit sample "XR Origin Hands (XR Rig)" – pinch gestures are detected, but not linked to movement.
visionOS "XR Plugin" sample "Metal Sample URP" – Metal setup works, but it is a static scene without locomotion.
I'm new to Unity XR development and would appreciate a simple, standalone scene or document focused only on the essentials for "teleport to gaze on pinch" in VR mode, with no extra features. I do have some experience with Unreal, WorldToolKit, Cosmo, etc. from the '90s, and I'm OK with code. Please include steps for:
Setting up immersive VR (disabling spatial defaults if needed).
Integrating pinch detection with ray-based teleport.
Any config changes or basic scripts.
Project configuration:
Unity Editor Version: 6000.2.5f1.2588.7373 (Revision: 6000.2/staging 43d04cd1df69)
Installed packages:
Apple visionOS XR Plugin: 2.3.1
AR Foundation: 6.2.0
PolySpatial XR: 2.3.1
XR Core Utilities: 2.5.3
XR Hands: 1.6.1
XR Interaction Toolkit: 3.2.1
XR Legacy Input Helpers: 2.1.12
XR Plugin Management: 4.5.1
Imported samples:
Apple visionOS XR Plugin 2.3.1: Metal Sample - URP
XR Hands 1.6.1
XR Interaction Toolkit 3.2.1: Hands Interaction Demo, Starter Assets, visionOS
Build platform settings:
Target: Apple visionOS
App Mode: Metal Rendering with Compositor Services
Selected Validation Profiles: visionOS Metal
Documentation: Enabled
Xcode Version: 26.01
visionOS SDK: 26
Mac Hardware: Apple M1 Max
Target visionOS Version: 20 or 26
Test environment:
Model: Apple Vision Pro, visionOS 26.0.1 (23M341), Apple M1 Max
No errors in builds so far; just missing the desired functionality. Thanks for a complete response with actionable steps.
0
0
246
Oct ’25
AVPlayer stutters when using AVPlayerItemVideoOutput
We're trying to build a custom player for Unity. For this, we're using AVPlayer with AVPlayerItemVideoOutput to get textures. However, we noticed that playback is not smooth and the stream often freezes. For testing, we used this 8K video: https://deovr.com/nwfnq1 The video was played using the following code:

@objc public func playVideo(urlString: String) {
    guard let url = URL(string: urlString) else { return }

    let pItem = AVPlayerItem(url: url)
    playerItem = pItem
    pItem.preferredForwardBufferDuration = 10.0

    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
        kCVPixelBufferMetalCompatibilityKey as String: true,
    ]
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: pixelBufferAttributes)
    pItem.add(output)

    playerItemObserver = pItem.observe(\.status) { [weak self] pItem, _ in
        guard pItem.status == .readyToPlay else { return }
        self?.playerItemObserver = nil
        self?.player.play()
    }

    player = AVPlayer(playerItem: pItem)
    player.currentItem?.preferredPeakBitRate = 35_000_000
}

When AVPlayerItemVideoOutput is attached, the video stutters and the log looks like this:
🟢 Playback likely to keep up
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: -0.07s | buffer: 0.00s
🟡 Buffer ahead: 2.94s | buffer: 3.49s
🟡 Buffer ahead: 2.50s | buffer: 4.06s
🟡 Buffer ahead: 1.74s | buffer: 4.30s
🟡 Buffer ahead: 0.74s | buffer: 4.30s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.09s | buffer: 4.30s
🟠 Playback may stall
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.07s | buffer: 1.43s
🟣 Buffer full
🟡 Buffer ahead: 0.47s | buffer: 1.65s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.10s | buffer: 1.65s
🟠 Playback may stall
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟣 Buffer full
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 2.00s
🟡 Buffer ahead: 0.68s | buffer: 2.27s
🟡 Buffer ahead: 0.09s | buffer: 2.27s
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall

When we remove AVPlayerItemVideoOutput from the player, the video plays smoothly, and the output looks like this:
🟢 Playback likely to keep up
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.22s | buffer: 2.22s
🟡 Buffer ahead: 1.05s | buffer: 3.05s
🟡 Buffer ahead: 1.12s | buffer: 4.12s
🟡 Buffer ahead: 1.18s | buffer: 5.18s
🟡 Buffer ahead: 0.72s | buffer: 5.72s
🟡 Buffer ahead: 1.27s | buffer: 7.28s
🟡 Buffer ahead: 2.09s | buffer: 3.03s
🟡 Buffer ahead: 4.16s | buffer: 6.10s
🟡 Buffer ahead: 6.66s | buffer: 7.09s
🟡 Buffer ahead: 5.66s | buffer: 7.09s
🟡 Buffer ahead: 4.66s | buffer: 7.09s
🟡 Buffer ahead: 4.02s | buffer: 7.45s
🟡 Buffer ahead: 3.62s | buffer: 8.05s
🟡 Buffer ahead: 2.62s | buffer: 8.05s
🟡 Buffer ahead: 2.49s | buffer: 3.53s
🟡 Buffer ahead: 2.43s | buffer: 3.38s
🟡 Buffer ahead: 1.90s | buffer: 3.85s

We've tried different attribute settings for AVPlayerItemVideoOutput. We also removed all logic related to reading frame data, but the choppy playback still remained. Can you advise whether this is a player issue or if we're doing something wrong? (A pull-delegate sketch follows this entry.)
1
0
402
Oct ’25
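One thing worth ruling out on the output path: polling copyPixelBuffer on a fixed cadence can contend with playback. The documented pattern is the pull delegate, where the output signals when media data will change and you copy only when a new buffer exists. A minimal sketch (class and method names are mine, not from the post):

import AVFoundation

final class FramePuller: NSObject, AVPlayerItemOutputPullDelegate {
    private let output: AVPlayerItemVideoOutput
    private let queue = DispatchQueue(label: "video.output.pull")

    init(output: AVPlayerItemVideoOutput) {
        self.output = output
        super.init()
        output.setDelegate(self, queue: queue)
        output.requestNotificationOfMediaDataChange(withAdvanceInterval: 0.05)
    }

    // Called when new media data is about to arrive; resume the
    // per-frame pull loop (e.g. a display link) here.
    func outputMediaDataWillChange(_ sender: AVPlayerItemOutput) {
        // start or resume the render loop
    }

    // Call once per display refresh with the target host time.
    func pullFrame(at hostTime: CFTimeInterval) -> CVPixelBuffer? {
        let itemTime = output.itemTime(forHostTime: hostTime)
        guard output.hasNewPixelBuffer(forItemTime: itemTime) else { return nil }
        return output.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil)
    }
}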
[WWDC25] For GuessTogether, can you initiate a FaceTime call via the custom SharePlay button?
Hello, In the GuessTogether source code, it seems the code assumes that you're already in a FaceTime call before pressing the custom SharePlay button (labeled "Play Guess Together"). If not already on a FaceTime call, my Apple Vision Pro and the visionOS simulator both do nothing after throwing warnings. Is this intended behavior? If so, how do I make it so that pressing the button can also initiate FaceTime calls? Is this allowed? (An activation-flow sketch follows this entry.) Thank you!
3
0
132
Sep ’25
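For context, the standard GroupActivities activation flow looks like the sketch below (the activity type is a stand-in, not GuessTogether's actual type). As far as I know, activate() never dials a FaceTime call itself: outside an existing call the system shows its own sharing UI, or does nothing when no session context is available, which matches the behavior described above.

import GroupActivities

struct GuessingActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Guess Together"
        meta.type = .generic
        return meta
    }
}

func startActivity() async {
    let activity = GuessingActivity()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        // Attaches to the current FaceTime/SharePlay session; it does
        // not initiate a call on its own.
        do { _ = try await activity.activate() }
        catch { print("Activation failed: \(error)") }
    case .activationDisabled:
        print("No session context (e.g. not in a FaceTime call)")
    case .cancelled:
        break
    @unknown default:
        break
    }
}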
Live stream from Vision Pro to Mac has low resolution
After transmission, the live stream's resolution drops noticeably: image detail is lost and sharpness suffers, so key information about 3D furniture products, such as texture and dimensions, cannot be presented accurately, which affects users' judgment of the products. Request: optimize the resolution-compression strategy in the streaming pipeline to reduce quality loss in transit and improve the clarity of the stream received on the Mac, matching the high-precision needs of 3D product display.
0
0
173
Dec ’25
Can't establish spatial connection after visionOS update
After updating to visionOS 26.2 Beta 2 (and Beta 3), I'm unable to establish a spatial connection to Vision Pro. This was working fine before the update. To test, I've created a fresh spatialApp project from the Xcode template with zero modifications, but I'm hitting the same issue - the Vision Pro is discovered but won't connect. Am I forgetting to update the config somewhere? Any ideas what might be causing this and how to fix it? Thanks!

Warning: -[NSWindow makeKeyWindow] called on <NSWindow: 0xa1f811900> windowNumber=1b9 which returned NO from -[NSWindow canBecomeKeyWindow].
((processConfiguration != nil && configuration != nil) || (processConfiguration == nil && configuration == nil)) - /AppleInternal/Library/BuildRoots/4~CBS0ugAIF7BrQZjLe6r0lhPXO4GJmNDTovxYoV0/Library/Caches/com.apple.xbs/Sources/ExtensionKit/ExtensionKit/Source/HostViewController/Internal/EXHostSessionDriver.m:80: `processConfiguration` and `configuration` must be both non-nil or both nil
Unable to obtain a task name port right for pid 415: (os/kern) failure (0x5)
CCContextDeviceGroup.mm(291):+[CCContextDeviceGroup checkBinaryArchivesForDevice:withBundle:]: Failed to find any binary shader archive
0
0
105
Nov ’25
Metal Compositor Service & Persona (VisionOS)
Hello, I'm currently trying to make a collaborative app, but it only works in a RealityView; when I tried to use a CompositorLayer as below, the Personas disappeared.

ImmersiveSpace(id: "ImmersiveSpace-Metal") {
    CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
        SpatialRenderer_InitAndRun(layerRenderer)
    }
}

Is there any potential solution to see Personas in a Metal view? Thanks in advance!
2
0
756
Sep ’25
Header Blur Effect on visionOS SwiftUI
Hi, I'm looking to build something similar to the header blur in the App Store and Apple TV app settings. Does anyone know the best way to achieve this, so that when there is nothing behind the header it looks the same as the rest of the view background, but when content goes underneath it gets a blur effect? I've seen .scrollEdgeEffect on iOS 26; is there something similar for visionOS? (An approximation sketch follows this entry.) Thanks!
0
0
133
Sep ’25
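There is no direct .scrollEdgeEffect equivalent on visionOS that I know of, so one approximation is to overlay a header whose background is a material: over an empty region it reads close to the window's glass background, and it blurs whatever scrolls beneath it. A sketch (it won't match the system edge effect exactly, just a starting point):

import SwiftUI

struct BlurHeaderView: View {
    var body: some View {
        ScrollView {
            LazyVStack(spacing: 12) {
                ForEach(0..<40) { index in
                    Text("Row \(index)")
                        .frame(maxWidth: .infinity, alignment: .leading)
                        .padding(.horizontal)
                }
            }
            .padding(.top, 60) // keep the first rows clear of the header
        }
        .overlay(alignment: .top) {
            Text("Settings")
                .font(.title3.bold())
                .frame(maxWidth: .infinity)
                .frame(height: 60)
                .background(.ultraThinMaterial) // blurs content scrolling underneath
        }
    }
}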
mipmapsMode trade-off?
I am building a 360 photo viewer on visionOS 26 that lets the user choose a 2:1 JPG and renders it on a sphere mesh entity, loading the texture with TextureResource(contentsOf: url, options: options). I noticed two situations here in terms of the mipmaps options.
When setting mipmapsMode: .none:
The graphic quality within the "gaze area" looks sharp and clear
The two poles (top and bottom) are perfectly rendered
Massive shimmer around the "gaze area"
When setting mipmapsMode: .allocateAndGenerateAll:
The graphic looks slightly blurrier than with .none within the "gaze area"
The two poles are very blurry, and the texture is hard to recognize
Much less shimmer around the "gaze area"
My question: is there a way to get the perfect graphic quality of .none without the massive shimmer? (A load-time sketch follows this entry.) Thank you!
Screenshots: mipmapsMode: .none; mipmapsMode: .allocateAndGenerateAll
0
0
230
Oct ’25
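For reference, the choice is made at load time through the same TextureResource(contentsOf:options:) initializer the post uses; a sketch of the two variants is below. The shimmer with .none is classic minification aliasing, and RealityKit's high-level API doesn't expose a per-texture LOD clamp, so a true middle ground (e.g. a custom sampler in a ShaderGraph/Metal material) is an avenue to explore rather than a confirmed fix.

import Foundation
import RealityKit

// Loads a 2:1 equirectangular image as a color texture. .none keeps full
// sharpness but aliases (shimmers) under minification; .allocateAndGenerateAll
// trades sharpness, especially at the poles, for temporal stability.
func loadPanoramaTexture(from url: URL, generateMipmaps: Bool) async throws -> TextureResource {
    var options = TextureResource.CreateOptions(semantic: .color)
    options.mipmapsMode = generateMipmaps ? .allocateAndGenerateAll : .none
    return try await TextureResource(contentsOf: url, options: options)
}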
Summon gesture
Can you help me write code that can pick up an element a bit far from me, bring it near to me, let me flick it a bit, and then send it back to its original position when I release it? (A minimal sketch follows this entry.) Thanks a lot, Christophe
1
0
68
Apr ’25
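A minimal sketch of the summon-and-return part, using RealityKit's built-in move(to:relativeTo:duration:timingFunction:) animation. The summon point is passed in here (a real app would derive it from the device/head transform), the "flick" could be layered on as an extra short move or a physics impulse, and all names are mine:

import RealityKit

final class Summoner {
    private var originalTransform: Transform?

    // Remember where the entity was and animate it to a point near the user.
    func summon(_ entity: Entity, to summonPoint: SIMD3<Float>) {
        originalTransform = entity.transform
        var target = entity.transform
        target.translation = summonPoint
        entity.move(to: target, relativeTo: entity.parent, duration: 0.4, timingFunction: .easeInOut)
    }

    // Animate the entity back to its stored position on release.
    func release(_ entity: Entity) {
        guard let original = originalTransform else { return }
        entity.move(to: original, relativeTo: entity.parent, duration: 0.4, timingFunction: .easeInOut)
        originalTransform = nil
    }
}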
New Spatial Rendering App on macOS doesn't display on visionOS device
I created a new Spatial Rendering App from the template in Xcode 26.0.1. When I run the app, click 'Show Immersive Space' and select my Vision Pro from the pop-up dialog, the content in the dialog flickers (which seems to indicate something crashed) and nothing appears on my Vision Pro. I'm running the released macOS 26.0 (25A354) and visionOS 26.0 (23M336). Filed as FB20397093.
0
0
347
Sep ’25