I have been focusing on developing a visionOS application. I am currently quite familiar with RealityKit, but CompositorServices has also caught my attention, and I have not learned it yet. Is it essential for me to learn CompositorServices? I would also appreciate some insight into the respective advantages of RealityKit and CompositorServices.
I have a Mac mini M4 with 16GB of memory, running Xcode 16.1. When I test my Vision Pro app in the Simulator, it is very slow and the system shows that memory is under high pressure.
How do I run/test/debug the application on the Vision Pro directly? I tried to add my Vision Pro to my developer account, but it didn't work because I couldn't find the UDID; when I connect USB to the battery, it only shows a Battery device ID.
I want to record an animation with an entity and then export it to .usd without using Reality Composer Pro. How can I achieve that?
Hi, I'm working with CameraFrameProvider from the Enterprise API. Is it always capped at 30 fps, or is there something I can switch to get more?
I assume it is capped at 30, so let me cram in an additional question here :). If I get a developer strap and attach an external camera capable of more than 30 fps, will I get the full stream, or will some other limitation kick in?
Hello Community,
I’m currently working with the sample code “CapturingDepthUsingTheLiDARCamera” and using it to capture the depth map of an image taken with the iPhone 14 Pro.
From this depth map, I generate a point cloud using the intrinsic camera parameters.
I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud.
For example: An object with surfaces that are perpendicular to each other appears with a sharper angle in the point cloud — around 60° instead of 90°.
My question is:
Is this due to the general accuracy limitations of the LiDAR sensor? Or could it be related to the sample code?
To obtain the depth map, I’m using:
AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
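For reference, the unprojection I use to build the point cloud is essentially the standard pinhole model (a sketch; the function name and parameter handling are illustrative):

import simd

// Sketch: unproject pixel (u, v) with its depth value into camera space using
// the pinhole model. fx, fy, cx, cy are the intrinsics scaled to the depth
// map resolution (not the full photo resolution).
func unproject(u: Float, v: Float, depth: Float,
               fx: Float, fy: Float, cx: Float, cy: Float) -> SIMD3<Float> {
    let x = (u - cx) / fx * depth
    let y = (v - cy) / fy * depth
    return SIMD3<Float>(x, y, depth)
}

One thing I am still double-checking is whether the intrinsics are correctly scaled to the depth map resolution, since a mismatch there would also skew angles in the reconstructed geometry.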
Thanks in advance for your help!
Can you help me write code that picks an element a bit far from me, brings it near to me, lets me flick it a bit, and then sends it back to its original position when I release it?
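To illustrate the interaction I have in mind, here is a rough sketch of the kind of setup I imagine (not working code; the entity setup, positions, and durations are placeholders, and it assumes the entity has CollisionComponent and InputTargetComponent):

import SwiftUI
import RealityKit

// Sketch: pull a dragged entity toward the user, then send it back on release.
struct PullTowardUserView: View {
    @State private var originalTransform: Transform?

    var body: some View {
        RealityView { content in
            // Add the scene / target entity here.
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    guard originalTransform == nil else { return }
                    originalTransform = value.entity.transform
                    // Bring the entity close to the user (placeholder position).
                    value.entity.move(to: Transform(translation: [0, 1.2, -0.5]),
                                      relativeTo: nil,
                                      duration: 0.3)
                }
                .onEnded { value in
                    // Send it back to where it started when the drag is released.
                    if let original = originalTransform {
                        value.entity.move(to: original,
                                          relativeTo: value.entity.parent,
                                          duration: 0.3)
                        originalTransform = nil
                    }
                }
        )
    }
}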
Thanks a lot,
Christophe
I've encountered an unexpected crash with RoomPlan on iOS 16 devices. The odd part is that the code is protected by an availability check, since I'm using newer RoomPlan features.
Xcode error
dyld[40588]: Symbol not found: _$s8RoomPlan08CapturedA0V16USDExportOptionsV5modelAEvgZ
I can repro using the Apple sample code.
https://developer.apple.com/documentation/roomplan/create-a-3d-model-of-an-interior-room-by-guiding-the-user-through-an-ar-experience
Modify RoomCaptureViewController.swift as follows.
Remove
try finalResults?.export(to: destinationURL, exportOptions: .parametric)
Add
if #available(iOS 17.0, *) {
    try finalResults?.export(to: destinationURL, exportOptions: .model)
} else {
    try finalResults?.export(to: destinationURL, exportOptions: .parametric)
}
I would have expected this code to at least compile and run on older devices.
When the app targeted iOS 15, the availability checks worked as expected and the app was able to launch properly.
While using Screen Mirroring in developer mode within my immersive space, I noticed an alignment issue with the computer cursor (the transparent circle). When I move it toward an attachment view, the cursor remains horizontal instead of aligning with the surface of the attachment view. It displays correctly on a 2D window; it is only wrong on an attachment view.
Is this behavior a bug, or could it be caused by a missing or incorrect configuration on the attachment view?
Any help would be appreciated, thanks.
Hi, I am a new developer. I want to add articulated objects and deformable objects to my AR game and interact with them, but I haven't found any tutorial on this. Please let me know if this is possible in visionOS.
I work on a game where I use timeline animations in Reality Composer Pro.
The game runs in an immersive space, but it can be paused, at which point I move the whole level root entity from the immersive space to another RealityView in a WindowGroup. When the player continues, I do exactly the opposite and move the level root from the window group back to my immersive space RealityView.
It seems like all animations get automatically stopped and restarted when the scene changes. The problem is that an animation does not resume where it stopped; instead it restarts from the position it stopped at and therefore ends up with a wrong y offset, as visible in the picture.
For example in the picture, the yellow sphere loops the following animation:
0 to 100
100 to -100
-100 to 0
If I now pause the game (and basically switch scenes), the previous animation gets stopped and restarted at position y = 100. So now it loops:
100 to 200
200 to 0
0 to 100
I already tried all kinds of setups, like:
Setting the animations relative to root, parent, local
Using behaviors (on Added to Scene, on Notification)
And finally, even accessing the availableAnimations directly and saving the playback controller of the animation (roughly as sketched below)
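This is roughly how I grab and keep that playback controller (a sketch; the entity and property names are illustrative):

// Sketch: store the controller returned by playAnimation so it can be
// paused/stopped later. Assumes the timeline animation is the first
// available animation on the sphere entity.
if let animation = sphereEntity.availableAnimations.first {
    animationPlaybackController = sphereEntity.playAnimation(
        animation.repeat(),
        transitionDuration: 0,
        startsPaused: false
    )
}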
There I saw, if I manually trigger the following code before switching the scene, everything works as expected:
Button("Reset") {
    animationPlaybackController.time = 0
    animationPlaybackController.pause()
    animationPlaybackController.stop(blendOutDuration: 0.00001)
}
But if I use time = 0 together with .stop() directly, the time = 0 seems to be ignored and I get the same behavior as before, where it stops at a wrong y offset. Hence my assumption that animations get stopped and invalidated once they change scenes.
I tried to call the code manually in ImmersiveSpace.onDisappear, WindowGroup.onAppear and different kinds of SceneEvents subscriptions, but unfortunately nothing worked.
So am I doing something wrong in general or is there a way to fix this?
I am developing a Unity application for the Apple Vision Pro using PolySpatial and RealityKit integration.
The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen.
In the Unity Editor, this setup works correctly. The RenderTexture updates in real time, and the quad displays the camera’s view as expected.
However, when building and running the application on the Vision Pro, the quad only displays the clear background color of the secondary camera. No scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended.
Steps I have taken:
Created a new layer (CameraFeed) and assigned the relevant objects to it.
Set the secondary camera’s culling mask to render only the CameraFeed layer.
Assigned the RenderTexture as the camera’s target texture.
Applied the RenderTexture to an Unlit/Texture material on a quad.
Confirmed the camera is active and correctly positioned relative to the object.
From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects. Secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial.
My questions are:
Is there any official way to enable multiple camera rendering of RealityKit-managed objects through PolySpatial?
Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed?
Has anyone found alternative design patterns or methods for this kind of interaction?
Environment: Unity 6.0, PolySpatial 2.2.4, Apple Vision OS XR 2.2.4
Any insight or suggestions would be greatly appreciated.
Thank you.
When the user taps the button behind the three-horizontal-line (hamburger) menu, I add an analytics event in the view's will-appear method. But when the button is tapped to close it, the will-appear method is called again, so it effectively executes twice and the analytics event is counted incorrectly.
Environment
Xcode: 16.2
VisionOS SDK 2.4
Swift 6.1
Targets: Apple Vision Pro (immersive space)
Frameworks: ARKit, RealityKit, SwiftUI
What I’m Trying to Do
I have a view-model class PlacementManager that holds two AR providers:
private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider
I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit).
What’s Happening
If I declare them as:
private let worldTracking = WorldTrackingProvider()
private let planeDetection = PlaneDetectionProvider()
I get compile errors when I later do:
self.worldTracking = newWorldTracking // Cannot assign to property: 'worldTracking' is a 'let' constant
If I change them to uninitialized vars:
private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider
then in my init() I get:
self used in property access 'worldTracking' before all stored properties are initialized
Code snippet
@Observable
final class PlacementManager: ObservableObject {
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider
    // … other props …

    @MainActor
    init() {
        // error: self.worldTracking used before init…
        planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
        persistenceManager = PersistenceManager(
            worldTracking: worldTracking,
            rootEntity: root
        )
        // …
    }

    @MainActor
    func setEnvironment(env: Environnement) async {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()

        try await appState!.arkitSession.run(
            [ newWorldTracking, newPlaneDetection ]
        )

        self.worldTracking = newWorldTracking
        self.planeDetection = newPlaneDetection
        // …
    }
}
What I’ve Tried
Giving them default values at declaration (= WorldTrackingProvider())
Initializing them at the top of init() before any use (see the sketch after this list)
Passing the new providers into arkitSession.run(...)
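For reference, that second attempt looked roughly like this (a sketch; planeAnchorHandler, persistenceManager, and root are the same properties as in the snippet above):

@MainActor
init() {
    // Initialize the providers first so they exist before any other use of self.
    self.worldTracking = WorldTrackingProvider()
    self.planeDetection = PlaneDetectionProvider()
    planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
    persistenceManager = PersistenceManager(
        worldTracking: worldTracking,
        rootEntity: root
    )
}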
My Question
What is the recommended Swift-style pattern to declare and reassign these ARKit provider properties so that:
They’re fully initialized before use in init(), and
I can swap them out later in setEnvironment(...) without compiler errors?
Any pointers (or links to forum threads / docs) would be greatly appreciated!
Hi,
after upgrading to visionOS 2.4.1 (from 1.0), my Vision Pro is stuck on the "Retrieving configuration" screen. The Apple Store couldn't handle my case since the device was sold in the USA and the product isn't yet available on the Italian market. I don't have a developer strap; how can I resolve this issue?
Thank you
Hi there,
I was looking to add a particle emitter to the augmented reality app I'm developing using RealityKit. I'm targeting iOS. The documentation for ParticleEmitterComponent indicates that iOS 18.0+ is supported, but when I try to use ParticleEmitterComponent in my code in Xcode, I get an error that it isn't found. Furthermore, this StackOverflow post seems to indicate that particle systems are not available for iOS. Would it be possible to get clarification on this?
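For reference, this is roughly the minimal usage that fails to resolve for me when building for iOS (a sketch; the availability check and entity setup are illustrative):

import RealityKit

// Sketch: guard the newer API behind an availability check and attach the
// component to an entity.
if #available(iOS 18.0, *) {
    let emitterEntity = Entity()
    emitterEntity.components.set(ParticleEmitterComponent())
}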
In the DestinationVideo demo, the onAppear in UpNextView is triggered again when it is closed, but I only want it to be triggered once. How can I achieve this?
Alternatively, I would like to capture the button click events in the player menu, as shown in the screenshot below.
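For the onAppear part, one direction I'm considering is a simple run-once guard (a sketch; the property and view names are illustrative and not from the DestinationVideo sample):

import SwiftUI

// Sketch: fire the onAppear work only the first time the view appears.
struct UpNextContainer: View {
    @State private var didAppearOnce = false

    var body: some View {
        Text("Up Next")
            .onAppear {
                guard !didAppearOnce else { return }
                didAppearOnce = true
                // Do the one-time work here.
            }
    }
}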
I saw this magical sand table whose unfolding and folding effects are similar to spreading out cards, which is very interesting, but I don't know how to achieve it. Are there ways to achieve this effect with the existing APIs? Any ideas would be appreciated.
I noticed that when I drag the menu window in an immersive view, the entities behind it become semi-transparent, and the boundary between virtual and real-world objects is very pronounced.
How does visionOS implement this effect? Is there an API or technique I can use in my own code to enable the same semi-transparent overlay, even when I am not dragging the menu window?
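The closest thing I have found so far is fading entities myself with OpacityComponent, which may or may not be what the system uses for this effect (a sketch, based on my own assumption):

import RealityKit

// Sketch: fade an entity (and its descendants) by attaching an OpacityComponent.
func makeSemiTransparent(_ entity: Entity, opacity: Float = 0.5) {
    entity.components.set(OpacityComponent(opacity: opacity))
}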
Using a 72MP 360° image that I took with an Insta360 X3, I would like to load it on my Vision Pro and see it surrounding me completely, as you would expect from a 360° image. I was able to do this by following what was described in a tutorial.
The problem is the quality. In a 2D window the image looks great.
Here is the code:
struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            content.add(createImmersivePicture(imageName: appModel.activeSpace))
        }
    }

    func createImmersivePicture(imageName: String) -> Entity {
        let sphereRadius: Float = 1000
        let modelEntity = Entity()
        let texture = try? TextureResource.load(named: imageName, options: .init(semantic: .raw, compression: .none))
        var material = UnlitMaterial()
        material.color = .init(texture: .init(texture!))
        modelEntity.components.set(
            ModelComponent(
                mesh: .generateSphere(
                    radius: sphereRadius
                ),
                materials: [material]
            )
        )
        modelEntity.scale = .init(x: -1, y: 1, z: 1)
        modelEntity.transform.translation += SIMD3<Float>(0.0, 10.0, 0.0)
        return modelEntity
    }
}
Since the quality is a problem, I thought about reducing the radius of the sphere or decreasing the scale. In both cases, nothing changes.
I have tried: modelEntity.scale = .init(x: -0.5, y: 0.5, z: 0.5)
And also let sphereRadius: Float = 2000 and let sphereRadius: Float = 500, but nothing changed.
I also get the warning:
IOSurface creation failed: e00002c2 parentID: 00000000 properties: {
IOSurfaceAddress = 4651830624;
IOSurfaceAllocSize = 35478941;
IOSurfaceCacheMode = 0;
IOSurfaceMapCacheAttribute = 1;
IOSurfaceName = CMPhoto;
IOSurfacePixelFormat = 1246774599;
}
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceCacheMode
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfacePixelFormat
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceMapCacheAttribute
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceAddress
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceAllocSize
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceName
Is there anything I can do to reduce the radius or just to improve the quality itself?
I am following this example to create a stereoscopic image: https://developer.apple.com/documentation/visionos/creating-stereoscopic-image-in-visionos
I would also like to add corner radius to the stereoscopic RealityView. With ordinary SwiftUI views, we typically just use .clipShape(RoundedRectangle(cornerRadius: 32)):
struct StereoImage: View {
    var body: some View {
        let spacing: CGFloat = 10.0
        let padding: CGFloat = 40.0

        VStack(spacing: spacing) {
            Text("Stereoscopic Image Example")
                .font(.largeTitle)

            RealityView { content in
                let creator = StereoImageCreator()
                guard let entity = await creator.createImageEntity() else {
                    print("Failed to create the stereoscopic image entity.")
                    return
                }
                content.add(entity)
            }
            .frame(depth: .zero)
        }
        .padding(padding)
        .clipShape(RoundedRectangle(cornerRadius: 32)) // <= HERE!
    }
}
This doesn't seem to actually clip the RealityView shown in the sample above. I am guessing this is because the box in the RealityView has a non-zero z scale, which means it isn't on the same "layer" as its SwiftUI containers and thus isn't clipped by the modifiers applied to the containers.
How can I properly apply a clipshape to RealityViews like this? Thanks!