RealityKit

Simulate and render 3D content for use in your augmented reality apps using RealityKit.

RealityKit Documentation

Posts under RealityKit subtopic

Post

Replies

Boosts

Views

Activity

RealityKit Blend Modes
I have 2 planes with textures on them. I want these planes to intersect [ –|– ], and I want the blend mode to be additive. Currently I get z-fighting on the planes, and I can't see how to set blend modes. I've done this before in Unity and Godot in a fairly straightforward manner. How do I accomplish this with RealityKit, preferably using code only (my scene is quite dynamic)? Do I need to do it manually with a shader? How can I stop the z-fighting?
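A minimal sketch of setting blending from code, assuming a PhysicallyBasedMaterial (the function and parameter names are illustrative). As far as I know, RealityKit's high-level materials only expose opaque and alpha-transparent blending, so a true additive blend most likely needs a ShaderGraphMaterial or CustomMaterial surface shader; transparent blending is a common stopgap for the depth fighting between the two planes.

```swift
import RealityKit
import UIKit

// Hypothetical helper: switches a textured plane to alpha-transparent blending.
// Note: PhysicallyBasedMaterial.Blending has no additive case; additive compositing
// would need a custom shader (ShaderGraphMaterial / CustomMaterial).
func applyTransparentBlend(to entity: ModelEntity, texture: TextureResource) {
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .white, texture: .init(texture))
    // Opacity 1.0 keeps the texture fully visible but routes it through the
    // transparent blend path instead of the opaque one.
    material.blending = .transparent(opacity: .init(floatLiteral: 1.0))
    entity.model?.materials = [material]
}
```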
2
0
748
Aug ’25
Moving from SceneKit - fog missing
I am rewriting an unfinished SceneKit project as RealityKit (non-AR). As far as I can see, RealityKit is missing basic fog functionality? Fog was simple and easy to implement in SceneKit (fogStartDistance / fogEndDistance / fogDensityExponent / fogColor). Are there any plans to implement something like this in RealityKit? Are there any simple workarounds?
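As far as I can tell there is indeed no built-in fog in RealityKit, so one workaround is to port SceneKit's distance-fog math into a ShaderGraphMaterial or an ARView post-process pass yourself. A minimal sketch of that math (plain Swift for clarity; the function name is illustrative, not a RealityKit API):

```swift
import Foundation

/// Blend factor between the scene colour and the fog colour for a fragment
/// `distance` metres from the camera (0 = no fog, 1 = fully fogged),
/// mirroring SceneKit's fogStartDistance / fogEndDistance / fogDensityExponent.
func fogFactor(distance: Float, start: Float, end: Float, densityExponent: Float = 1) -> Float {
    let t = min(max((distance - start) / max(end - start, .ulpOfOne), 0), 1)
    return pow(t, densityExponent)
}

// In the shader or post-process pass, the fragment colour then becomes:
// finalColour = mix(sceneColour, fogColour, fogFactor(distance:start:end:))
```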
3
1
584
Aug ’25
Unable to play .aivu with VideoPlayerComponent
I’m trying to play an Apple Immersive video in the .aivu format using VideoPlayerComponent, following the official documentation here: https://developer.apple.com/documentation/RealityKit/VideoPlayerComponent

Here is a simplified version of the code I'm running in another application:

import SwiftUI
import RealityKit
import AVFoundation

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let player = AVPlayer(url: Bundle.main.url(forResource: "Apple_Immersive_Video_Beach", withExtension: "aivu")!)
            let videoEntity = Entity()
            var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
            videoPlayerComponent.desiredImmersiveViewingMode = .full
            videoPlayerComponent.desiredViewingMode = .stereo
            player.play()
            videoEntity.components.set(videoPlayerComponent)
            content.add(videoEntity)
        }
    }
}

Full code is here: https://github.com/tomkrikorian/AIVU-VideoPlayerComponentIssueSample

But the video does not play in my project even though the file is correct (it can be played in Apple Immersive Video Utility), and I'm getting this error when the app crashes:

App VideoPlayer+Component Caption: onComponentDidUpdate Media Type is invalid Domain=SpatialAudioServicesErrorDomain Code=2020631397 "xpc error" UserInfo={NSLocalizedDescription=xpc error}
CA_UISoundClient.cpp:436 Got error -4 attempting to SetIntendedSpatialAudioExperience
[0x101257490|InputElement #0|Initialize] Number of channels = 0 in AudioChannelLayout does not match number of channels = 2 in stream format.

The video I'm using is the official sample that can be found here, but I've tried several different files shot by my clients and the same errors are displayed, so the issue is definitely not the files but on the RealityKit side of things: https://developer.apple.com/documentation/immersivemediasupport/authoring-apple-immersive-video

Steps to reproduce the issue:
- Open the AIVUPlayerSample project and run. Look at the logs.
- All code can be found in ImmersiveView.swift.
- The sample file is included in the project.

Expected results: if I followed the documentation and samples provided, I should see my video played in full immersive mode inside my ImmersiveSpace.

Am I doing something wrong in the code? I'm basically following the documentation here. Feedback ticket: FB19971306
3
0
810
Aug ’25
How to apply the same SystemImage to both mainEmitter and spawnedEmitter without clipping in ParticleEmitterComponent?
Hi everyone, I’m currently learning about ParticleEmitterComponent and exploring the sample app provided in the Simulating particles in your visionOS app documentation. In the sample app, when I set the EmitterPreset to fireworks from the settings panel on the left side of the window and choose SystemImage, I noticed two issues:
- The image applied to mainEmitter appears clipped or cropped.
- The image on spawnedEmitter does not update to the selected SystemImage.
What I want to achieve:
- Apply the same SystemImage to both mainEmitter and spawnedEmitter so that it displays correctly without clipping.
- Remove the animation that changes the size of spawnedEmitter over time and keep it at a constant size.
Could someone explain which properties should be adjusted to achieve this behavior? Any guidance or examples would be greatly appreciated. Thanks in advance!
0
0
454
Sep ’25
Showing a MTLTexture on an Entity in RealityKit
Is there any standard way of efficiently showing a MTLTexture on a RealityKit Entity? I can't find anything proper on how to, for example, generate a LowLevelTexture out of a MTLTexture. The closest match was this two-year-old thread. In the old SceneKit app, we would just do:

guard let material = someNode.geometry?.materials.first else { return }
material.diffuse.contents = mtlTexture

Our flow is as follows (for visualizing the currently detected object): camera stream -> CoreML segmentation -> send the relevant part of the MLShapedArray tensor to a MTLComputeShader that returns a MTLTexture -> show the resulting texture on a 3D object to the user.
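For reference, a hedged sketch of one possible route, assuming the LowLevelTexture API introduced with visionOS 2 / iOS 18 (the descriptor fields and initialisers here are written from memory and worth checking against the current docs): wrap a LowLevelTexture in a TextureResource once, then blit the compute shader's MTLTexture into it whenever it changes.

```swift
import RealityKit
import Metal

final class DynamicTextureBridge {
    let lowLevel: LowLevelTexture
    let resource: TextureResource        // assign this to a material once
    private let queue: MTLCommandQueue

    init(device: MTLDevice, width: Int, height: Int) throws {
        var descriptor = LowLevelTexture.Descriptor()
        descriptor.pixelFormat = .bgra8Unorm     // must match the source texture's format
        descriptor.width = width                 // and size, so the blit below is valid
        descriptor.height = height
        descriptor.textureUsage = [.shaderRead, .shaderWrite]
        lowLevel = try LowLevelTexture(descriptor: descriptor)
        resource = try TextureResource(from: lowLevel)
        queue = device.makeCommandQueue()!
    }

    /// Copy the compute shader's output into the RealityKit-visible texture.
    func update(from source: MTLTexture) {
        guard let commandBuffer = queue.makeCommandBuffer() else { return }
        let destination = lowLevel.replace(using: commandBuffer)
        guard let blit = commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: source, to: destination)
        blit.endEncoding()
        commandBuffer.commit()
    }
}

// Usage sketch: show the texture on an entity.
// var material = UnlitMaterial()
// material.color = .init(tint: .white, texture: .init(bridge.resource))
// modelEntity.model?.materials = [material]
```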
5
0
1k
Sep ’25
RealityKit - How to change camera target in response of a touch event?
Hello, I’m porting my UIKit/SceneKit app to SwiftUI/RealityKit and I’m wondering how to change the camera target programmatically. I created a simple scene in Reality Composer Pro with two spheres. My goal is straightforward: when the user taps a sphere, the camera should look at it as the main target. Following Apple’s videos, I implemented the .gesture modifier and it is printing the tapped sphere correctly, but updating my targetEntity state doesn’t change anything, so the camera won't update its target. Is there a way to access the scene content at that level? Or what else should I do? Here’s my current code implementation: Thanks!
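Since the code isn't included above, here is a minimal sketch of the pattern I'd expect to work, assuming a non-AR RealityView where an explicit PerspectiveCamera entity drives the view (entity and state names are illustrative): keep the camera in @State, read the tapped entity in the update closure, and re-aim the camera with look(at:from:relativeTo:).

```swift
import SwiftUI
import RealityKit

struct CameraTargetView: View {
    @State private var targetEntity: Entity?
    @State private var camera = PerspectiveCamera()

    var body: some View {
        RealityView { content in
            camera.position = [0, 0.5, 2]
            content.add(camera)
            // ... add the Reality Composer Pro scene with the two spheres here ...
        } update: { _ in
            // Re-runs when @State changes; aim the camera at the tapped sphere.
            if let target = targetEntity {
                camera.look(at: target.position(relativeTo: nil),
                            from: camera.position(relativeTo: nil),
                            relativeTo: nil)
            }
        }
        .gesture(TapGesture().targetedToAnyEntity().onEnded { value in
            targetEntity = value.entity   // spheres need Collision + InputTarget components
        })
    }
}
```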
1
0
224
Sep ’25
Custom EntityAction - different behaviour on visionOS 2.6 vs 26
I implemented an EntityAction to change the baseColor tint, and had it working on visionOS 2.x.

import RealityKit
import UIKit

typealias Float4 = SIMD4<Float>

extension UIColor {
    var float4: Float4 {
        if cgColor.numberOfComponents == 4, let c = cgColor.components {
            Float4(Float(c[0]), Float(c[1]), Float(c[2]), Float(c[3]))
        } else {
            Float4()
        }
    }
}

struct ColourAction: EntityAction {
    // MARK: - PUBLIC PROPERTIES
    let startColour: Float4
    let targetColour: Float4

    // MARK: - PUBLIC COMPUTED PROPERTIES
    var animatedValueType: (any AnimatableData.Type)? { Float4.self }

    // MARK: - INITIATION
    init(startColour: UIColor, targetColour: UIColor) {
        self.startColour = startColour.float4
        self.targetColour = targetColour.float4
    }

    // MARK: - PUBLIC STATIC FUNCTIONS
    @MainActor static func registerEntityAction() {
        ColourAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(interpolatedColour)
        }
    }
}

extension Entity {
    // MARK: - PUBLIC FUNCTIONS
    func changeColourTo(_ targetColour: UIColor, duration: Double) {
        guard let modelComponent = components[ModelComponent.self],
              let material = modelComponent.materials.first as? PhysicallyBasedMaterial else { return }
        let colourAction = ColourAction(startColour: material.baseColor.tint, targetColour: targetColour)
        if let colourAnimation = try? AnimationResource.makeActionAnimation(for: colourAction, duration: duration, bindTarget: .material(0).baseColorTint) {
            playAnimation(colourAnimation)
        }
    }
}

This doesn't work on visionOS 26. My current fix is to directly set the material base colour, but this feels like the wrong approach:

@MainActor static func registerEntityAction() {
    ColourAction.subscribe(to: .updated) { event in
        guard let animationState = event.animationState,
              let entity = event.targetEntity,
              let modelComponent = entity.components[ModelComponent.self],
              var material = modelComponent.materials.first as? PhysicallyBasedMaterial else { return }
        let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
        material.baseColor.tint = UIColor(interpolatedColour)
        entity.components[ModelComponent.self]?.materials[0] = material
        animationState.storeAnimatedValue(interpolatedColour)
    }
}

So before I raise this as a bug: was I doing anything wrong in the former version and just got lucky? Is there a better approach?
0
0
89
Sep ’25
Anchor a Reality scene on an image anchor
I'm developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView, but I have had no luck so far getting it to work and need some guidance to move on. I have the image file stored in the assets, and below is the source code:

import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @State var imageEntity: Entity = {
        let anchorEntity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))
        return anchorEntity
    }()

    var body: some View {
        RealityView { content in
            do {
                // Add the initial RealityKit content
                if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    imageEntity.addChild(scene)
                    content.add(imageEntity)
                }
            } catch {
                print("Error occurs when adding reality view content: \(error)")
            }
        }
    }
}
3
0
1.2k
Sep ’25
RealityKit captureHighResolutionFrame from session is broken on iOS 26?
A bit of background on what our app is doing: we have a RealityKit ARView session running, and during this period we place objects in RealityKit. At some point the user can "take a photo" and we use session.captureHighResolutionFrame to capture a frame. We then use the captured frame and frame.camera.projectPoint to project our objects back to 2D.

The issue we found is that on devices running iOS 26, the first photo the user takes, i.e. the first frame received from session.captureHighResolutionFrame, gives an incorrect CGPoint from frame.camera.projectPoint. If the user takes a second photo from the same camera position, the second frame received from session.captureHighResolutionFrame gives the correct CGPoint from frame.camera.projectPoint.

I noticed a difference between the first and subsequent frames that I believe corresponds with the issue: the yaw value of the camera (frame.camera.eulerAngles.y) on the first frame is not correct (inconsistent with any subsequent frame).

I also created a small example app, following the Building an Immersive Experience with RealityKit example. The issue exists in this app on iOS 26, while iOS 18.* has consistent values between the first and subsequent captured frames. Note: the yaw value seems to differ more if we start the session in portrait but take the photo in landscape.

Example result for 3 captured frames:

Frame captured with yaw: 1.4855177402496338
Frame captured with yaw: -0.08803760260343552
Frame captured with yaw: -0.08179682493209839

Example code:

import SwiftUI
import RealityKit
import ARKit

class CustomARView: ARView, ARSessionDelegate {
    required init(frame: CGRect) {
        super.init(frame: frame)
    }

    required init?(coder decoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func setup() {
        let singleTap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        addGestureRecognizer(singleTap)
    }

    @objc func handleTap(_ gestureRecognizer: UIGestureRecognizer) {
        Task {
            do {
                let frame = try await session.captureHighResolutionFrame()
                print("Frame captured with yaw: \(Double(frame.camera.eulerAngles.y))")
            } catch { }
        }
    }
}

struct CustomARViewUIViewRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> some UIView {
        let arView = CustomARView(frame: .zero)
        arView.setup()
        return arView
    }

    func updateUIView(_ uiView: UIViewType, context: Context) { }
}

struct ContentView: View {
    var body: some View {
        CustomARViewUIViewRepresentable()
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            .ignoresSafeArea()
    }
}
3
1
548
Sep ’25
Bone deformation
I have been tasked with creating content for the Apple Vision Pro: just the 3D content and animation, not the programming end of things. I can't seem to get any kind of mesh deformation animation to import into Reality Composer Pro. By that I mean bones/skinning, or even point cache. I work on PC, and my main software is 3ds Max, but I'm borrowing an iMac for this job and was instructed to use RCP on it for testing before handing things off to the programmer. My files open and play fine in other USD programs, like Omniverse or usdview, just not Reality Composer Pro. I've seen the dinosaur demo in AVP, so I know mesh deformation is possible. If there are other essential tools that might make this possible, I have not been made aware of them. I am experimenting with bouncing things off of Blender, in case that exports better, but not really having luck there either, though my results are different. Thanks.
0
0
107
Sep ’25
Custom Cameras in RealityKit
Hi all, I've developed some code that enables an arcball camera interaction with my scene. I've done this using components and systems. The implementation feels a bit messy: I've got gesture code on my RealityView, and then a bunch of other code that uses those gesture inputs in my component and system. Is there a demo app, or some example code, that shows a nice way to encapsulate these things into one item for custom cameras, something like Apple's .realityViewCameraControls(.orbit)? If not, can anyone recommend an approach to take?
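One possible shape for this, sketched under the assumption that a PerspectiveCamera entity is being driven (all type names here are my own, not an Apple API): a component holds the orbit parameters, a System writes the camera transform each frame, and the RealityView's gestures only mutate the component, so the gesture code stays thin.

```swift
import Foundation
import RealityKit

struct OrbitCameraComponent: Component {
    var target: SIMD3<Float> = .zero
    var radius: Float = 2
    var azimuth: Float = 0       // radians around the Y axis
    var elevation: Float = 0.3   // radians above the XZ plane
}

struct OrbitCameraSystem: System {
    static let query = EntityQuery(where: .has(OrbitCameraComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let orbit = entity.components[OrbitCameraComponent.self] else { continue }
            // Spherical coordinates -> world-space camera position around the target.
            let offset = SIMD3<Float>(cos(orbit.elevation) * sin(orbit.azimuth),
                                      sin(orbit.elevation),
                                      cos(orbit.elevation) * cos(orbit.azimuth))
            entity.look(at: orbit.target,
                        from: orbit.target + orbit.radius * offset,
                        relativeTo: nil)
        }
    }
}

// Register once (e.g. at app launch):
// OrbitCameraComponent.registerComponent()
// OrbitCameraSystem.registerSystem()
// The drag/magnify gesture handlers on the RealityView then only update
// azimuth/elevation/radius on the camera entity's OrbitCameraComponent.
```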
0
0
256
Oct ’25
SceneKit is deprecated but RealityKit is for spatial computing, so what should I use?
Hello, I am new to the Apple ecosystem and to development in general, though I have some coding experience with C#. I'd like to develop my game for iPhone 16 and up (due to the ability to run AI models), but I'm having a hard time figuring out what to use. There are a lot of resources for SceneKit, but its documentation page says it's deprecated, so I looked at the RealityKit docs and tutorials, and they mostly show how to develop for visionOS. I'm really confused by this, since I can't find tutorials that show how to develop a game for iOS with RealityKit without focusing on visionOS. I just want to develop for iPhone 16 and up, but I can't find resources focused on that.
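For what it's worth, RealityKit is not visionOS-only: it has run on iOS since iOS 13 via ARView, and the SwiftUI RealityView container arrived on iOS with iOS 18. A minimal non-AR sketch for an iPhone target, assuming iOS 18's RealityView (the scene content is just a placeholder):

```swift
import SwiftUI
import RealityKit

struct GameView: View {
    var body: some View {
        RealityView { content in
            // Placeholder game content: an orange box at the origin.
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            content.add(box)

            // Non-AR rendering on iOS/macOS uses an explicit camera entity.
            let camera = PerspectiveCamera()
            camera.position = [0, 0.3, 1]
            camera.look(at: .zero, from: camera.position, relativeTo: nil)
            content.add(camera)
        }
    }
}
```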
1
0
200
Oct ’25
RealityKit migration
Hey there, I’m currently planning to use RealityKit in a new multiplatform app I’m building. Unfortunately, I noticed that watchOS is not supported by RealityKit, while SceneKit is getting deprecated. However, I’d like to maintain the same codebase across platforms. What are my options?
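One common option is to keep the shared code behind a small abstraction and conditionally compile the rendering layer, falling back to SceneKit only on watchOS. A rough sketch, assuming you structure it yourself (the protocol and type names here are illustrative, not an Apple API):

```swift
import Foundation

// Shared app code talks to this protocol and never imports a renderer directly.
protocol SceneRenderer {
    func loadModel(named name: String)
}

#if canImport(RealityKit)
import RealityKit

final class RealityKitRenderer: SceneRenderer {
    let root = Entity()
    func loadModel(named name: String) {
        if let entity = try? Entity.load(named: name) {
            root.addChild(entity)
        }
    }
}
#else
import SceneKit   // watchOS: RealityKit is unavailable, SceneKit still works

final class SceneKitRenderer: SceneRenderer {
    let scene = SCNScene()
    func loadModel(named name: String) {
        if let modelScene = SCNScene(named: name) {
            scene.rootNode.addChildNode(modelScene.rootNode)
        }
    }
}
#endif
```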
2
0
418
Oct ’25
RealityKit - Full 3D experience
I have a question, I guess more for the Apple team: why are there no fully 3D experiences for the Vision Pro lineup? I know Apple has given us tools to bring Unity 3D games to iPhone, and I guess you can also build them in RealityKit. But why, at this moment, are 3D games limited to just iPad and iPhone, and why can't you bring that to Vision Pro? Just to explain: when I say a fully 3D game, I mean games like Gorn. The Vision Pro is definitely powerful enough, but it feels limited to tabletop games and AR games. Is this something Apple is thinking about implementing?
0
0
522
Oct ’25
RealityKit Instanced Rendering on visionOS
Hello, I've been trying to leverage instanced rendering in RealityKit on visionOS but have not had success. RealityKit states this is supported:
https://developer.apple.com/documentation/realitykit/validating-usd-files
https://developer.apple.com/videos/play/wwdc2021/10075/?time=1373
https://developer.apple.com/videos/play/wwdc2023/10099/?time=772

Validating that instancing is working with RealityKit Trace metrics: to test, I made a base visionOS app with an immersive space and the entity replaced with my test usdz file. I've been using the RealityKit Trace profiling template in Xcode Instruments, in the immersive space and with the volume closed. This gets consistent draw call results. If I have a single sphere mesh with one material I get one draw call, but the number of draw calls grows linearly with mesh count no matter how my entity is configured.

What I've tried:
- Create a test scene in Blender and export with instancing enabled
- Create a test scene in Reality Composer Pro using references
- Author usda files by hand based on the OpenUSD spec
- Programmatically create a MeshResource with Contents at runtime

References:
https://openusd.org/release/api/_usd__page__scenegraph_instancing.html
https://developer.apple.com/documentation/realitykit/meshresource
https://developer.apple.com/documentation/realitykit/meshresource/instance

Thank you
4
4
1.4k
Oct ’25