RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit tag

200 Posts

Post

Replies

Boosts

Views

Activity

RealityView Camera Target Error when set while Orbiting
When interacting with RealityView’s realityViewCameraControls .orbit and setting a new RealityViewCameraContent .cameraTarget, the resulting camera target and orbit are incorrect. This can be demonstrated with one finger orbiting the RealityView while another pushes a button that changes the camera target. Instead of the camera facing the new target, some other point in the scene becomes the effective camera target and orbit point. This only occurs while an orbit interaction is in progress: if you stop interacting with the orbit, change the target, then start orbiting again, everything works as expected. Though this example uses two touches, any change of the camera target conflicts with an in-progress orbit interaction. This means orbiting can leave the camera on the wrong target, which is unexpected for users and difficult for developers to detect or reconcile.

Expected: While orbiting the scene and setting a new camera target with the on-screen buttons (at the same time), the camera’s new target is centred in view, and the orbit revolves around the new target and continues to match my gestures.

Actual: While orbiting the scene and setting a new camera target with the on-screen buttons (at the same time), the camera’s new target is not centred in view, and the camera now orbits an unexpected point in the scene that is not my expected target.

One imperfect workaround is to force a rebuild of the view after setting a new cameraTarget. This sets all targets correctly, but it results in a flicker and a loss of orbit controls until re-touch, and is ultimately a poor user experience; still, it is better than the wrong target being shown unexpectedly.

Code sample:

import SwiftUI
import RealityKit

struct RKOribtTarget: View {
    @State private var target: Int = 0
    @State private var rcContent: RealityViewCameraContent?
    @State private var rkID: UUID = UUID()

    let root = Entity()
    let center = ModelEntity(mesh: .generateSphere(radius: 0.05),
                             materials: [UnlitMaterial(color: UIColor(.gray.opacity(0.5)))])
    let red = ModelEntity(mesh: .generateBox(size: 0.1),
                          materials: [SimpleMaterial(color: .red, isMetallic: false)])
    let blue = ModelEntity(mesh: .generateBox(size: 0.1),
                           materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    let green = ModelEntity(mesh: .generateBox(size: 0.1),
                            materials: [SimpleMaterial(color: .green, isMetallic: false)])

    var body: some View {
        VStack {
            RealityView { content in
                red.position.x = 0.5
                blue.position.z = 0.5
                green.position.y = 0.5
                center.position = .init(repeating: 0.25)
                content.cameraTarget = target == 0 ? root : blue
                root.addChild(red)
                root.addChild(blue)
                root.addChild(green)
                root.addChild(center)
                content.add(root)
            } update: { content in
                switch target {
                case 0: content.cameraTarget = root
                case 1: content.cameraTarget = blue
                case 2: content.cameraTarget = red
                case 3: content.cameraTarget = green
                default: content.cameraTarget = root
                }
            }
            .id(rkID)
            .realityViewCameraControls(.orbit)

            VStack {
                Text("Target")
                Button("Default") {
                    target = 0
                    // Forcing a view rebuild resets the orbit target and rotation,
                    // but shows a flicker, and interaction requires a touch reset.
                    // Not an ideal workaround.
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                Button("Blue") {
                    target = 1
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.blue)
                Button("Red") {
                    target = 2
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.red)
                Button("Green") {
                    target = 3
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.green)
            }
        }
    }
}

Xcode version: Version 26.0 (17A324)
iOS version: iOS 26.5 (23F75)
Tested on devices: iPhone 12 Pro, iPhone 15 Pro
2

0

454

16h

VirtualEnvironmentProbeComponent VS ImageBasedLightComponent

Hi. I want to know what's the difference between VirtualEnvironmentProbeComponent and ImageBasedLightComponent? It seems they can both achieve the same environment lighting and reflection effects.

0

0

254

2d
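For reference, this is how ImageBasedLightComponent is typically wired up: the light-source entity carries the component, and receiving entities opt in explicitly. A minimal sketch in Swift, assuming an environment resource named "Sunlight" is bundled with the app (the name is a placeholder):

import RealityKit

func applyImageBasedLight(to entity: Entity) async throws {
    // Load the IBL environment (a bundled .skybox/.exr resource; name assumed).
    let environment = try await EnvironmentResource(named: "Sunlight")

    // This entity casts the image-based light...
    entity.components.set(
        ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0))

    // ...and entities that should be lit point back at the light-source entity.
    entity.components.set(
        ImageBasedLightReceiverComponent(imageBasedLight: entity))
}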
RealityKit custom component: `has()` returns `true` but typed subscript returns `nil` in SwiftPM test runner
swift test (SwiftPM CLI) fails to decode RealityKit custom components from USD files, even though entity.components.has(MyComponent.self) returns true. Typed access via entity.components[MyComponent.self] returns nil. This forces projects that use RealityKit custom components to use xcodebuild test exclusively.

Minimal repro: github.com/mesqueeb/swiftpm-realitykit-custom-component-repro

Repro steps:

git clone https://github.com/mesqueeb/swiftpm-realitykit-custom-component-repro
cd swiftpm-realitykit-custom-component-repro
swift test --filter componentsPresentButNotDecodableInSwiftTest

Observed:
✅ entity.components.has(ReproComponent.self) returns true
❌ entity.components[ReproComponent.self] returns nil

Expected: If has(...) returns true for a registered custom component, typed lookup should decode and return non-nil.

Notes:
Running the same test via xcodebuild test works correctly.
The component is properly registered and the USDA file correctly references it.
This affects any project that relies on custom RealityKit components in tests — there is no swift test workaround.

Feedback ID: FB22099519
Environment: macOS 15.5, Xcode 16.4, Swift 6.1
1
0
518
5d
RealityView content disappears when selecting Lock In Place on visionOS
I'm experiencing an issue where all RealityView content disappears when the user selects "Lock In Place" from the window management menu (long press on the close button). "Follow Me" works correctly. This happens in TestFlight builds only; it is not reproducible when I run locally. I have reproduced this with a minimal project containing nothing but a simple red cube — no custom anchors, no app state, no dependencies.

Steps to reproduce:
1. Open an ImmersiveSpace. A red cube is placed 1m in front of the user via RealityView.
2. Long press the X button on any floating window.
3. Select "Lock In Place".
4. The cube disappears immediately.

Expected: Cube remains visible after the window is locked.
Actual: Cube disappears.
Note: "Follow Me" does NOT reproduce this issue.

Minimal reproducible code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.3),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            cube.setPosition(SIMD3(0, 1.5, -1), relativeTo: nil)
            content.add(cube)
        }
    }
}

Device: Apple Vision Pro
visionOS version: visionOS 26.2 (23N301)
Xcode version: Version 26.3 (17C529)

Is this a known issue? Is there a recommended workaround to preserve RealityView content during Lock In Place transitions? Thank you!
1
0
1.6k
2w
Real world anchors
I’m trying to build a persistent world map of my college campus using ARKit, but it’s not very reliable. Anchors don’t consistently appear in the same place across sessions. I’ve tried using image anchors, but they didn’t improve accuracy much. How can I create a stable world map for a larger area and reliably relocalize anchors? Are there better approaches or recommended resources for this?
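The standard session-to-session path is ARWorldMap persistence: save the session's map (your anchors ride along inside it) and feed it back as initialWorldMap to relocalize. Campus-scale coverage will still be fragile, since world maps work best for room-sized spaces; where geotracking is supported, ARGeoAnchor is the heavier-duty option for outdoor areas. A sketch of the save/restore flow (the file URL is a placeholder):

import ARKit

func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { print(error ?? "no map"); return }
        // Anchors added to the session are serialized inside the map.
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}

func restoreWorldMap(into session: ARSession, from url: URL) throws {
    let data = try Data(contentsOf: url)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map  // relocalize against the saved map
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}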
1
0
1.1k
2w
RealityKit crashes when rendering SpriteKit scene with SKShapeNode in postProcess callback
I'm converting my game from SceneKit to RealityKit. It has a SpriteKit overlay that, according to "Explore advanced rendering with RealityKit 2", I can add with the code below. The code runs fine if the SKScene only contains an SKSpriteNode (see the commented-out line), but when I add an SKShapeNode with a fillColor instead, the app crashes with this error:

-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5970: failed assertion `Draw Errors Validation
MTLDepthStencilDescriptor uses frontFaceStencil but MTLRenderPassDescriptor has a nil stencilAttachment texture
MTLDepthStencilDescriptor uses backFaceStencil but MTLRenderPassDescriptor has a nil stencilAttachment texture
'

I don't know enough about low-level graphics and stencils yet to figure out a quick solution, so I would appreciate it if anyone could share an easy fix or an explanation of what's wrong. Thanks!

class ViewController: NSViewController {
    var device: MTLDevice!
    var renderer: SKRenderer!

    override func loadView() {
        let arView = ARView(frame: NSScreen.main!.frame)
        view = arView

        arView.renderCallbacks.prepareWithDevice = { [weak self] device in
            guard let self = self else { return }
            self.device = device
            renderer = SKRenderer(device: MTLCreateSystemDefaultDevice()!)
            let scene = SKScene()
            let shape = SKShapeNode(rectOf: CGSize(width: 10, height: 10))
            shape.fillColor = .red
            scene.addChild(shape)
            // scene.addChild(SKSpriteNode(color: .red, size: CGSize(width: 10, height: 10)))
            renderer.scene = scene
        }

        arView.renderCallbacks.postProcess = { [weak self] context in
            guard let self = self else { return }
            let encoder = context.commandBuffer.makeBlitCommandEncoder()
            encoder?.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
            encoder?.endEncoding()

            renderer.update(atTime: context.time)

            let descriptor = MTLRenderPassDescriptor()
            descriptor.colorAttachments[0].loadAction = .load
            descriptor.colorAttachments[0].storeAction = .store
            descriptor.colorAttachments[0].texture = context.targetColorTexture

            renderer.render(withViewport: CGRect(x: 0, y: 0,
                                                 width: context.targetColorTexture.width,
                                                 height: context.targetColorTexture.height),
                            commandBuffer: context.commandBuffer,
                            renderPassDescriptor: descriptor)
        }
    }
}
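A hedged suggestion rather than a confirmed fix: the assertion says SpriteKit's depth-stencil state expects a stencil attachment that the render pass doesn't have (SKShapeNode fills use stenciling), so attaching a combined depth/stencil texture to the descriptor may satisfy it. A sketch to place in the postProcess callback before the render(withViewport:...) call; in practice the texture should be cached rather than recreated every frame:

let dsDesc = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .depth32Float_stencil8,
    width: context.targetColorTexture.width,
    height: context.targetColorTexture.height,
    mipmapped: false)
dsDesc.usage = .renderTarget
dsDesc.storageMode = .private
if let dsTexture = self.device.makeTexture(descriptor: dsDesc) {
    // Same texture serves as both depth and stencil attachment.
    descriptor.depthAttachment.texture = dsTexture
    descriptor.depthAttachment.loadAction = .clear
    descriptor.depthAttachment.storeAction = .dontCare
    descriptor.stencilAttachment.texture = dsTexture
    descriptor.stencilAttachment.loadAction = .clear
    descriptor.stencilAttachment.storeAction = .dontCare
}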
9
0
2.4k
2w
RealityView attachment draw order
My visionOS 26.3 app displays a diorama-like scene in a RealityView in a mixed immersive space, about 1 meter square, with view attachments floating above the scene. Each view attachment fades out after user interaction by animating the view's opacity. What I'm observing is that, depending on the position of a view attachment relative to the scene and the camera, an unwanted cutout effect appears (presumably because of draw-order issues), as shown in the right column of the screenshots. YouTube video of these sequences: https://youtu.be/oTuo0okKCkc (19 seconds)

My question: How does visionOS determine the view attachment draw order relative to the RealityView scene? If I better understood how the draw order is determined, I could modify my scene to ensure that the view attachments are always drawn after the scene, fixing the unwanted cutout effect.

I've successfully used ModelSortGroupComponent to control the draw order of entities within the RealityView scene, but my understanding is that this approach cannot be used with view attachments. I've submitted FB22014370 about this issue. Thank you.
4
0
1.6k
2w
LowLevelInstanceData & animation
Apple's OS 26 releases introduce LowLevelInstanceData, which can significantly reduce CPU draw-call cost through instancing. However, I have noticed trouble with animating each individual instance. As I wanted low-level control, I'm using a custom system and LowLevelInstanceData.replace(using:) to update the transforms each frame. The update closure itself is extremely efficient (Xcode Instruments reports nearly no cost), but I am seeing extremely high run-loop time, reaching around 20 ms. Time Profiler shows that the CPU is blocked in kernel.release.t6401. I think this is caused by synchronization between the CPU and GPU; however, since I am already using an MTLCommandBuffer to coordinate it, I don't understand why I am still seeing such large CPU time.
3
0
733
2w
ParticleEmitterComponent Position Offset Issue After iOS 26.1 Update – Seeking Solutions & Workarounds
Problem Summary

After upgrading to iOS 26.1 and 26.2, I'm experiencing a particle positioning bug in RealityKit where ParticleEmitterComponent particles render at an incorrect offset relative to their parent entity. This behavior does not occur on iOS 18.6.2 or earlier versions, suggesting a regression introduced in the newer OS builds.

Environment Details

Operating System: iOS 26.1 & iOS 26.2
Framework: RealityKit
Xcode Version: 16.2 (16C5032a)

Expected vs. Actual Behavior

Expected: Particles should render at the position of the entity to which the ParticleEmitterComponent is attached, matching the behavior on iOS 18.6.2 and earlier.
Actual: Particles appear away from their parent entity, creating a visual misalignment that breaks the intended AR experience.

Steps to Reproduce

1. Create or open an AR application with RealityKit that uses particle components
2. Attach a ParticleEmitterComponent to an entity via a custom system
3. Run the application on iOS 26.1 or iOS 26.2
4. Observe that particles render at an offset position away from the entity

Minimal Code Example

Here's the setup from my test case.

Custom component & system:

struct SparkleComponent4: Component {}

class SparkleSystem4: System {
    static let query = EntityQuery(where: .has(SparkleComponent4.self))

    required init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            // Only add once
            if entity.components.has(ParticleEmitterComponent.self) { continue }
            var newEmitter = ParticleEmitterComponent()
            newEmitter.mainEmitter.color = .constant(.single(.red))
            entity.components.set(newEmitter)
        }
    }
}

AR setup:

let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
let model = Entity()
model.components.set(ModelComponent(mesh: boxMesh, materials: [material]))
model.components.set(SparkleComponent4())
model.position = [0, 0.05, 0]
model.name = "MyCube"

let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
anchor.addChild(model)
arView.scene.addAnchor(anchor)

Questions for the Community

1. Has anyone else encountered this particle positioning issue after updating to iOS 26.1/26.2?
2. Are there known workarounds or configuration changes to ParticleEmitterComponent that restore correct positioning?
3. Is this a confirmed bug, or could there be a change in coordinate system handling or transform inheritance that I'm missing?

Additional Information

I've already submitted this issue via Feedback Assistant (FB21346746).
3
1
1.1k
3w
RealityView content disappears when selecting Lock In Place on visionOS
Hi, I'm experiencing an issue where all RealityView content disappears when the user selects "Lock In Place" from the window management menu (long press on the close button). "Follow Me" works correctly. This happens in TestFlight builds only; it is not reproducible when I run locally. I have reproduced this with a minimal project containing nothing but a simple red cube — no custom anchors, no app state, no dependencies.

Steps to reproduce:
1. Open an ImmersiveSpace. A red cube is placed 1m in front of the user via RealityView.
2. Long press the X button on any floating window.
3. Select "Lock In Place".
4. The cube disappears immediately.

Expected: Cube remains visible after the window is locked.
Actual: Cube disappears.

Minimal reproducible code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.3),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            cube.setPosition(SIMD3<Float>(0, 1.5, -1), relativeTo: nil)
            content.add(cube)
        }
    }
}

Device: Apple Vision Pro
visionOS version: visionOS 26.2 (23N301)
Xcode version: Version 26.3 (17C529)

Is this a known issue? Is there a recommended workaround to preserve RealityView content during Lock In Place transitions? Thank you!
5
0
1.4k
3w
RealityView camera feed not shown
I have two RealityViews: ParentView and ChildView. When I click the button in ParentView, ChildView is shown as a full-screen cover, but the camera feed in ChildView is not shown, only a black screen. If I show ChildView directly, it works with the camera feed. Please help me with this issue. Thanks.

import RealityKit
import SwiftUI

struct ParentView: View {
    @State private var showIt = false

    var body: some View {
        ZStack {
            RealityView { content in
                content.camera = .virtual
                let box = ModelEntity(
                    mesh: MeshResource.generateSphere(radius: 0.2),
                    materials: [createSimpleMaterial(color: .red)])
                content.add(box)
            }
            Button("Click here") {
                showIt = true
            }
        }
        .fullScreenCover(isPresented: $showIt) {
            ChildView()
                .overlay(
                    Button("Close") {
                        showIt = false
                    }.padding(20),
                    alignment: .bottomLeading
                )
        }
        .ignoresSafeArea(.all)
    }
}

import ARKit
import RealityKit
import SwiftUI

struct ChildView: View {
    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
        }
    }
}
5
1
2.1k
3w
RealityKit fill the background environment
I am new to RealityKit and Metal, and I am building a RealityKit app that renders a procedural LowLevelMesh road. But the left and right sides of the road are a plain green terrain mesh, and it doesn't look great. What I want is to add some rocks, tall trees, and dense bushes (or weeds) to make it look like the player is in the woods. But when I add many of those objects, performance drops. What is the best approach to fill the empty background spaces in the scene?
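One low-effort option before reaching for full GPU instancing: clone a handful of template entities. Clones share their MeshResource and materials, so memory stays flat even with many copies, and truly distant scenery is often cheaper baked into a skybox texture than modeled. A minimal scatter sketch, where rockTemplate and the ranges are placeholders:

import RealityKit

func scatter(template rockTemplate: Entity, count: Int, around root: Entity) {
    for _ in 0..<count {
        // clone(recursive:) reuses the template's mesh and material resources.
        let rock = rockTemplate.clone(recursive: true)
        rock.position = [Float.random(in: -30...30), 0, Float.random(in: -30...30)]
        rock.orientation = simd_quatf(angle: .random(in: 0...(2 * .pi)), axis: [0, 1, 0])
        rock.scale = SIMD3<Float>(repeating: .random(in: 0.6...1.4))
        root.addChild(rock)
    }
}

Cloning still costs one draw call per visible model, so if thousands of objects are needed, instancing (e.g., the OS 26 LowLevelInstanceData path) or aggressive distance culling is the next step.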
3
0
529
3w
RealityView AR - anchored to the screen not the floor
This started out as a plea for help, but in preparing this post I discovered the root cause. I'm posting it as a lesson learned in hopes it will help someone.

I've spent a good chunk of March trying to get AR mode working again in my unreleased game. I had it working with SceneKit and ARView 5 years ago, but since 2024 I've been converting the game to use RealityKit and RealityView on iOS, macOS, visionOS, and tvOS. I've been having no joy getting AR mode to work on iOS: I get the pass-through device video, but the game content isn't anchored to the floor but rather anchored to the screen.

I made a simple project with just a simple shape in the middle of a RealityView and an overlay with a SwiftUI toggle to go in and out of AR mode. At first, my simple project worked, and I couldn't figure out what was different in the logic. Both projects used the same logic:

func transitionToXR(_ content: inout RealityViewCameraContent) {
    content.remove(gameBoard.rootEntity)
    content.add(xrAnchor)
    content.camera = .spatialTracking
    Self.anchorStateChangedSubscription = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
        if event.anchor == xrAnchor, event.isAnchored {
            xrAnchor.addChild(gameBoard.rootEntity)
        }
    }
}

Then I made an alternate version of my view and reproduced the same "anchored to the screen, not the floor" issue. I compared the code side by side and finally saw the difference! The one that didn't work, like my game, had a property 'cameraEntity' which is initialized with PerspectiveCamera(), position and look-at configured, then added as a child of the root entity.

So, the simple fix was to remove 'cameraEntity' from the root entity before adding it to the detected AnchorEntity when going into AR mode. Then, when leaving AR mode, I add 'cameraEntity' back as a child of the root entity and configure it again.

So the lesson learned is: make sure there isn't a PerspectiveCamera in the tree of entities added to an AnchorEntity with a .spatialTracking content camera.

Apple: let me know if you think this is a bug or if I was being dumb. If a bug, I can use Feedback Assistant to report this. If I was being dumb, it wouldn't be the first time. :-)
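For concreteness, here is the fix described above as a sketch, using the post's own names (cameraEntity being the PerspectiveCamera child from the non-AR setup):

func transitionToXR(_ content: inout RealityViewCameraContent) {
    // The crucial addition: no PerspectiveCamera may remain in the tree
    // that ends up under the spatial-tracking anchor.
    cameraEntity.removeFromParent()

    content.remove(gameBoard.rootEntity)
    content.add(xrAnchor)
    content.camera = .spatialTracking
    Self.anchorStateChangedSubscription = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
        if event.anchor == xrAnchor, event.isAnchored {
            xrAnchor.addChild(gameBoard.rootEntity)
        }
    }
    // When leaving AR mode, re-add cameraEntity to the root and reconfigure it.
}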
1
0
191
3w
Blending walk and run animations in RealityKit
Hi everybody, I have two separate animation files, run.usdz and walk.usdz, which load perfectly in Reality Composer Pro and in the RealityKit application. I want to gradually increase the speed of my player by moving the blending weight from 0.0 (walking) to 1.0 (full-speed running).

let rabbit = await RabbitBuilder.loadWalkingRabbit()
let runningRabbit = await RabbitBuilder.loadRunningRabbit()

rabbit.scale = SIMD3(0.05, 0.05, 0.05)
runningRabbit.scale = SIMD3(0.05, 0.05, 0.05)

let walkAnimation = rabbit.availableAnimations
let runAnimation = runningRabbit.availableAnimations
RabbitWalker.walkAnim = walkAnimation.first!
RabbitWalker.runAnim = runAnimation.first!

guard let walk = RabbitWalker.walkAnim, let run = RabbitWalker.runAnim else { return }

let blendTree = BlendTreeAnimation<JointTransforms>(
    BlendTreeBlendNode(sources: [
        BlendTreeSourceNode(source: walk.definition, name: "walk", weight: .value(1 - weight)),
        BlendTreeSourceNode(source: run.definition, name: "run", weight: .value(weight))
    ]),
    name: "rabbitLocomotion",
    repeatMode: .repeat,
    offset: TimeInterval(elapsed)
)

// Runtime error after executing this line: "Cannot add incompatible timeline type to blend tree."
guard let resource = try? AnimationResource.generate(with: blendTree) else { return }
entity.playAnimation(resource)

static func loadWalkingRabbit() async -> Entity? {
    do {
        let scene = try await Entity(named: "Scene", in: realityKitEnvironmentBundle)
        guard let rabbit = await scene.findEntity(named: "RabbitWalk") else { return nil }
        await rabbit.removeFromParent()
        return rabbit
    } catch {
        return nil
    }
}

static func loadRunningRabbit() async -> Entity? {
    do {
        let scene = try await Entity(named: "Scene", in: realityKitEnvironmentBundle)
        guard let rabbit = await scene.findEntity(named: "RabbitRun") else { return nil }
        await rabbit.removeFromParent()
        return rabbit
    } catch {
        return nil
    }
}

But when I run this code I get this error: "Cannot add incompatible timeline type to blend tree." By the way, I have looked through the developer sample code but couldn't find any BlendTreeAnimation sample that blends two animations. I would be very happy if someone could point me to a solution. Regards.
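A hedged diagnostic rather than a confirmed fix: the error message suggests at least one clip's timeline is not a sampled joint-transform animation, and USDZ clips often load wrapped in an AnimationGroup or other container. Inspecting each definition's concrete type may show what needs unwrapping before it can feed a BlendTreeSourceNode:

for (name, clip) in [("walk", walk), ("run", run)] {
    if clip.definition is SampledAnimation<JointTransforms> {
        print("\(name): sampled joint-transform animation, should be blendable")
    } else {
        // e.g. AnimationGroup: the joint animation may be nested inside it.
        print("\(name): \(type(of: clip.definition)), may need unwrapping before blending")
    }
}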
4
0
376
3w
ManipulationComponent Not Translating using indirect input
When using the new RealityKit ManipulationComponent on entities, indirect input will never translate the entity - no matter what settings are applied. Direct manipulation works as expected for both translation and rotation. Is this intended behaviour? This is different from how indirect manipulation works on Model3D. How else can we get translation from this component?

visionOS 26 Beta 2, built from macOS 26 Beta 2 and Xcode 26 Beta 2.

Below is replicable sample code; I have tried this in other projects with the same results.

var body: some View {
    RealityView { content in
        // Add the initial RealityKit content
        if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
            ManipulationComponent.configureEntity(
                immersiveContentEntity,
                allowedInputTypes: .all,
                collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)])
            immersiveContentEntity.position.y = 1
            immersiveContentEntity.position.z = -0.5

            var mc = ManipulationComponent()
            mc.releaseBehavior = .stay
            immersiveContentEntity.components.set(mc)

            content.add(immersiveContentEntity)
        }
    }
}
17
5
3.2k
Apr ’26
Does VisionOS have the equivalent of ARView.physicsOrigin?
I'm trying to scale the physics of a scene without changing its apparent size, to avoid the low-speed zeroing-out of motion that the physics simulation does. I found a technique in the docs for using separate simulation and physics roots, but it relies on ARView, which visionOS doesn't have. This seems more elegant than scaling absolutely everything with a shared root -- any chance I'm just failing in my searches to find the equivalent functionality?
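A possible lead, offered as an assumption to verify rather than a confirmed equivalent: RealityKit's PhysicsSimulationComponent defines a localized simulation root, and scaling that root relative to its parent may play the role that scaling ARView.physicsOrigin plays. A sketch:

import RealityKit

let physicsRoot = Entity()
// Entities beneath this root simulate in its space.
physicsRoot.components.set(PhysicsSimulationComponent())

// Assumption to verify on device: scaling the simulation root (analogous to
// scaling ARView.physicsOrigin) changes the scale the solver sees without
// rescaling every piece of content individually.
physicsRoot.scale = SIMD3<Float>(repeating: 0.1)

// content.add(physicsRoot); simulated entities are added under physicsRoot.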
3
1
788
Mar ’26
RealityKit .kinematic + collisions (visionOS)
Hi everyone, I'm new to visionOS development. I'm trying to create a physics-based scene (with gravity) where users can pick up and move objects on a workbench. I am struggling with physics interactions during the drag gesture:

Kinematic mode: If I switch to .kinematic during the drag, the object moves smoothly but clips through other objects (no collisions).
Dynamic mode: I tried keeping it .dynamic and applying linear velocity toward the hand position, but the movement feels laggy and unresponsive.
Hybrid approach: I tried switching to .kinematic during DragGesture.onChanged and back to .dynamic on collision, but this causes the entity to jitter/shake violently when touching other objects.

Has anyone found a clean way to drag objects while maintaining solid collisions? Thanks for your help!
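One approach that keeps the body .dynamic throughout (so collisions stay solid) while fixing the lag: scale the velocity by the remaining gap over the frame time instead of using a fixed speed. A hedged sketch, assuming that setting PhysicsMotionComponent.linearVelocity on a dynamic body acts as a velocity override; the gain and cap are placeholders to tune:

import RealityKit

func drag(_ entity: ModelEntity, toward target: SIMD3<Float>, deltaTime: Float) {
    let gain: Float = 0.85            // fraction of the remaining gap to close per frame
    let maxSpeed: Float = 3.0         // m/s cap to reduce tunneling through thin colliders
    let current = entity.position(relativeTo: nil)
    var velocity = (target - current) * (gain / max(deltaTime, 1.0 / 240.0))
    let speed = simd_length(velocity)
    if speed > maxSpeed { velocity *= maxSpeed / speed }
    var motion = entity.components[PhysicsMotionComponent.self] ?? PhysicsMotionComponent()
    motion.linearVelocity = velocity
    entity.components.set(motion)
}

Because the velocity is recomputed every frame from the actual position, the solver is still free to resolve contacts, which avoids the kinematic clipping and most of the jitter.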
3
0
888
Mar ’26
RealityKit equivalent of ARGeoAnchor?
In ARKit there is ARGeoAnchor, which lets you anchor content using latitude and longitude so objects stay fixed to a real-world location. Is there an equivalent feature in RealityKit? I want to place points in the world and make sure they don't move or drift after placement. If RealityKit doesn't support this directly, what is the recommended approach?
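RealityKit has no geo anchor of its own; on iOS the usual bridge is to create the ARGeoAnchor in ARKit and hang a RealityKit AnchorEntity off it. A sketch, assuming the ARView's session is already running an ARGeoTrackingConfiguration (supported devices and regions only):

import ARKit
import CoreLocation
import RealityKit

func placeContent(at coordinate: CLLocationCoordinate2D, in arView: ARView) {
    // The ARAnchor does the geo work; altitude is resolved by the system.
    let geoAnchor = ARGeoAnchor(coordinate: coordinate)
    arView.session.add(anchor: geoAnchor)

    // Bridge into RealityKit: entities under this anchor track the geo location.
    let anchorEntity = AnchorEntity(anchor: geoAnchor)
    anchorEntity.addChild(ModelEntity(mesh: .generateSphere(radius: 0.2)))
    arView.scene.addAnchor(anchorEntity)
}

For drift-free placement within a single session without geotracking, ARWorldMap persistence (see the "Real world anchors" thread above) is the more broadly available alternative.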
0
1
548
Mar ’26
Can I use Metal shaders if I use RealityKit to build a visionOS app?
I asked an AI to build a realistic ocean shader for a visionOS project using RealityKit. It gave a bunch of instructions and asked me to connect a large number of nodes in the ShaderGraph in Reality Composer Pro. I am just wondering whether AI can help generate the Metal shader code directly, or build the ShaderGraph nodes for me automatically.
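For orientation: RealityKit's Metal-based CustomMaterial is not available on visionOS, so shader graphs (authored in Reality Composer Pro, or generated as USD by a tool) are the supported route there; AI-generated Metal source can't be dropped into a RealityKit material on that platform. Loading a graph at runtime looks roughly like this, where the material path, file name, and bundle are placeholders for your project:

import RealityKit
import RealityKitContent

// Load a shader-graph material authored in Reality Composer Pro.
let ocean = try await ShaderGraphMaterial(
    named: "/Root/OceanMaterial",
    from: "Scene.usda",
    in: realityKitContentBundle)
modelEntity.model?.materials = [ocean]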
0
0
754
Mar ’26
Best approach for animating a speaking avatar in a macOS/iOS SwiftUI application
I am developing a macOS application using SwiftUI (with an iOS version as well). One feature we are exploring is displaying an avatar that reads or speaks dynamically generated text produced by an AI service. The basic flow would be:

1. Text generated by an AI service
2. Text converted to speech using a TTS engine
3. An avatar (2D or 3D) rendered in the app that animates lip movement synchronized with the speech

Ideally the avatar would render locally on the device.

Questions:

1. What Apple frameworks would be most appropriate for implementing a speaking avatar? SceneKit, RealityKit, or SpriteKit (for 2D avatars)?
2. Is there any recommended way to drive lip-sync animation from speech audio using Apple frameworks?
3. Does AVSpeechSynthesizer expose phoneme or viseme timing information that could be used for avatar animation?
4. If such timing information is not available, what is the recommended approach for synchronizing character mouth animation with speech audio on macOS/iOS?
5. Are there examples of real-time character animation synchronized with speech on macOS/iOS?

Any architectural guidance or references would be greatly appreciated.
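On the AVSpeechSynthesizer question: the simplest built-in timing hook is the delegate's per-word range callback, which is coarse (words, not phonemes or visemes) but enough to drive a basic open/close mouth; newer OS versions also surface marker data via the buffer-writing API, which may offer finer granularity. A sketch, where the avatar calls are hypothetical placeholders:

import AVFoundation

final class SpeechDriver: NSObject, AVSpeechSynthesizerDelegate {
    private let synthesizer = AVSpeechSynthesizer()

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func speak(_ text: String) {
        synthesizer.speak(AVSpeechUtterance(string: text))
    }

    // Called as each word is about to be spoken; drive one mouth pulse per word.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           willSpeakRangeOfSpeechString characterRange: NSRange,
                           utterance: AVSpeechUtterance) {
        // avatar.pulseMouth()   // hypothetical avatar API
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        // avatar.closeMouth()   // hypothetical avatar API
    }
}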
0
0
755
Mar ’26