Why is https://developer.apple.com/cn/augmented-reality/tools/ missing? Reality Converter was available there. What should we use now to convert models?
In our visionOS app, we have two types of windows:
Main App Window – This is the default window that launches when the app starts. It displays the video listings and other primary content.
Immersive Space Window – This opens only when a user starts streaming or playing a video.
Issue:
When entering the immersive space, the main app window remains visible in front of it unless manually closed. To avoid this, I currently close the main window when transitioning to immersive space and reopen it when exiting. However, this causes the app to restart instead of resuming from its previous state.
Desired Behavior:
I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.
Attempts & Challenges:
Tried managing opacity, visibility, and state preservation, but none worked as expected.
Couldn’t find a way to push the main window to the background while bringing the immersive space to the foreground.
Looking for a solution to keep the main window’s state intact while transitioning between immersive and normal modes.
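One pattern that might help (a sketch, not a confirmed fix): keep the main window's state in an app-level model that outlives the window itself, so dismissing the window before entering the immersive space and reopening it afterwards restores where the user left off. The names here (AppModel, MainListView, PlayerView, and so on) are illustrative, not from a real project.
import SwiftUI
import RealityKit

// Hypothetical app-level model. Because the App struct owns it, it survives
// dismissWindow/openWindow cycles, unlike @State held inside the window.
@Observable
final class AppModel {
    var selectedVideoID: String?
    var listScrollPosition: Int = 0
}

struct MainListView: View {
    @Environment(AppModel.self) private var appModel
    var body: some View {
        // Reads appModel, so reopening the window shows the previous state.
        Text("Last selection: \(appModel.selectedVideoID ?? "none")")
    }
}

struct PlayerView: View {
    @Environment(AppModel.self) private var appModel
    var body: some View {
        RealityView { _ in /* streaming content goes here */ }
    }
}

@main
struct VideoApp: App {
    @State private var appModel = AppModel()

    var body: some Scene {
        WindowGroup(id: "main") {
            MainListView().environment(appModel)
        }
        ImmersiveSpace(id: "player") {
            PlayerView().environment(appModel)
        }
    }
}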
As in the title: openImmersiveSpace opens the space as expected. The ImmersiveSpace with the given ID appears normally, but the function never resolves to a Result. Here is a workaround I used to make sure the call was on the UI thread and scenePhase was active:
@MainActor func openSpaceWithStateCheck() async {
    if scenePhase == .active {
        Task {
            switch await openImmersiveSpace(id: "RoomCaptureInteraction") {
            case .opened:
                isCapturingImagery = true
            case .error:
                print("!! An error occurred when trying to open the immersive space captureRoomImagery")
            case .userCancelled:
                print("!! The user declined opening immersive space captureRoomImagery")
            @unknown default:
                print("!! unknown default result of opening space")
            }
        }
    } else {
        print("Scene not active, deferring immersive space opening")
    }
}
I'm on visionOS 2.4 and SDK 2.2.
I have tried uninstalling the app and rebuilding. Tried simply opening an empty ImmersiveSpace.
The consistency of the ImmersiveSpace opening at least means I can work around it. Even dismissImmersiveSpace works normally and closes the immersive space. But a workaround seems hamfisted.
Hello, I've pre-ordered the Logitech Muse with hopes of developing with it, but I have yet to find any documentation on the capabilities it will have or the APIs that will be available to take advantage of it. Is anyone aware of what might become available?
Thank you in advance.
We got very excited when we saw support for the PSVR2 at WWDC!
WebXR is particularly interesting to us, so we got the controllers to give it a try.
Unfortunately, they only register as gamepads in the navigator, not as XRInputDevices.
We went through the experimental flags and didn't find anything directly related. Is there a flag we missed? If not, when is PSVR2 support planned for WebXR?
I want to display an image of a model in a WindowGroup window. The model is not a single fixed one, so the image can vary. How should I convert the model into an image for display?
Has anyone had success with MeshInstancesComponent? I tried to follow the sample code from What's New in RealityKit but it wouldn't compile. I was able to use one of the init overloads to get it to compile, but using it crashes both my device and the simulator. Even with one instance.
Platform: visionOS 2.6
Framework: RealityKit, SwiftUI
Component: ImagePresentationComponent
I’m working with the new ImagePresentationComponent from visionOS 26 and hitting a rendering limitation when switching to .spatialStereoImmersive viewing mode within a WindowGroup context.
This is what I’m seeing:
Pure immersive space: ImagePresentationComponent with .spatialStereoImmersive mode works perfectly in a standalone ImmersiveSpace
Mode switching API: All mode transitions work correctly (logs confirm the component updates)
Spatial content: .spatialStereo mode renders correctly in both window and immersive contexts.
This is where it’s breaking for me:
Window context: When the same RealityView + ImagePresentationComponent is placed inside a WindowGroup (even when that window is floating in a mixed immersive space), switching to .spatialStereoImmersive mode shows no visual change
The API calls succeed, state updates correctly, but the immersive content doesn’t render.
Apple’s Spatial Gallery demonstrates exactly what I’m trying to achieve:
Spatial photos displayed in a window with what feels like horizontal scroll view using system window control bar, etc.
Tapping a spatial photo smoothly transitions it to immersive mode in-place.
The immersive content appears to “grow” from the original window position by just changing IPC viewing modes.
This proves the functionality should be possible, but I can’t determine the correct configuration.
So, my questions are:
Is there a specific RealityView or WindowGroup configuration required to enable immersive content rendering from window contexts that you know of?
Are there bounds/clipping settings that need to be configured to allow immersive content to “break out” of window constraints?
Does .spatialStereoImmersive require a specific rendering context that’s not available in windowed RealityView instances?
How do you think Apple's Spatial Gallery app achieves this functionality?
For a little more context:
All viewing modes are available: [.mono, .spatialStereo, .spatialStereoImmersive]
The spatial photos are valid and work correctly in pure immersive space
Mixed immersive space is active when testing window context
No errors or warnings in console beyond the successful mode switching logs I’m getting
Any insights into the proper configuration for window-hosted immersive content would be appreciated.
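For reference, here is a stripped-down sketch of my window-hosted setup. The async ImagePresentationComponent(contentsOf:) initializer and the desiredViewingMode / availableViewingModes property names are based on my reading of the visionOS 26 material and may not match the final API exactly; photoURL is a placeholder for a valid spatial photo.
import SwiftUI
import RealityKit

struct SpatialPhotoWindowView: View {
    let photoURL: URL                          // placeholder spatial photo
    @State private var photoEntity = Entity()

    var body: some View {
        RealityView { content in
            // Assumed initializer from the visionOS 26 session.
            if var component = try? await ImagePresentationComponent(contentsOf: photoURL) {
                component.desiredViewingMode = .spatialStereo  // renders fine in the window
                photoEntity.components.set(component)
            }
            content.add(photoEntity)
        }
        .onTapGesture {
            // The mode switch succeeds (state and logs update), but inside a
            // WindowGroup the immersive content never actually renders.
            guard var component = photoEntity.components[ImagePresentationComponent.self],
                  component.availableViewingModes.contains(.spatialStereoImmersive) else { return }
            component.desiredViewingMode = .spatialStereoImmersive
            photoEntity.components.set(component)
        }
    }
}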
Hello everyone
I would like to create my own spatial video on my Apple Vision Pro. According to all the documentation from Apple, this requires two camera angles that enhance the spatial perception. I have purchased the Enterprise license with main camera access for this purpose. However, this only gives me access to the left main camera of the glasses. Is there a way to access the right camera as well? Or is the one camera image enough to create a spatial video by splitting the image, for example?
I am open to any help and ideas. My goal is to create the video with the cameras on the glasses, not externally.
Apple published a set of examples for using system gestures to interact with RealityKit entities. I've been using DragGesture a lot in my apps and noticed an issue when using it in an immersive space.
When dragging an entity, if I turn my body to face another direction, the dragged entity does not stay relative to my hand. This can lead to situations where the entity is pulled very close to me, pushed far away, or even ends up behind me.
In the examples linked above, there are two versions of how they use drag.
handleFixedDrag: This is similar to what I'm doing now. It uses the value from value.gestureValue.translation3D as the basis for the drag
handlePivotDrag: This version aims to solve the problem I described above by using value.inputDevicePose3D as the basis of the gesture.
I've tried the example from handlePivotDrag, but it has one limitation. Using this version, I can move the entity around me as if it were on the inside of an arc or sphere. However, I can no longer move the entity further or closer. It stays within a similar (though not exact) distance relative to me while I drag.
Is there a way to combine these concepts? Ideally, I would like to use a gesture that behaves the same way that visionOS windows do. When we drag windows, I can move them around relative to myself, pull them closer, push them further, all while avoiding the issues described above.
Example from handleFixedDrag
mutating private func handleFixedDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    if !state.isDragging {
        state.isDragging = true
        state.dragStartPosition = entity.scenePosition
    }

    let translation3D = value.convert(value.gestureValue.translation3D, from: .local, to: .scene)
    let offset = SIMD3<Float>(x: Float(translation3D.x),
                              y: Float(translation3D.y),
                              z: Float(translation3D.z))

    entity.scenePosition = state.dragStartPosition + offset

    if let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
Example from handlePivotDrag
mutating private func handlePivotDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    // The transform that the pivot will be moved to.
    var targetPivotTransform = Transform()

    // Set the target pivot transform depending on the input source.
    if let inputDevicePose = value.inputDevicePose3D {
        // If there is an input device pose, use it for positioning and rotating the pivot.
        targetPivotTransform.scale = .one
        targetPivotTransform.translation = value.convert(inputDevicePose.position, from: .local, to: .scene)
        targetPivotTransform.rotation = value.convert(AffineTransform3D(rotation: inputDevicePose.rotation), from: .local, to: .scene).rotation
    } else {
        // If there is not an input device pose, use the location of the drag for positioning the pivot.
        targetPivotTransform.translation = value.convert(value.location3D, from: .local, to: .scene)
    }

    if !state.isDragging {
        // If this drag just started, create the pivot entity.
        let pivotEntity = Entity()

        guard let parent = entity.parent else { fatalError("Non-root entity is missing a parent.") }

        // Add the pivot entity into the scene.
        parent.addChild(pivotEntity)

        // Move the pivot entity to the target transform.
        pivotEntity.move(to: targetPivotTransform, relativeTo: nil)

        // Add the targeted entity as a child of the pivot without changing the targeted entity's world transform.
        pivotEntity.addChild(entity, preservingWorldTransform: true)

        // Store the pivot entity.
        state.pivotEntity = pivotEntity

        // Indicate that a drag has started.
        state.isDragging = true
    } else {
        // If this drag is ongoing, move the pivot entity to the target transform.
        // The animation duration smooths the noise in the target transform across frames.
        state.pivotEntity?.move(to: targetPivotTransform, relativeTo: nil, duration: 0.2)
    }

    if preserveOrientationOnPivotDrag, let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
Hi guys,
I'm working in the VFX industry and I have a question: is it possible to create immersive video directly from a virtual scene built in DCC software like Maya, render it to footage, encode it as immersive video, and finally play it on Vision Pro?
Thanks.
I'm playing about with the hand tracking systems in RealityKit on Vision Pro.
I thought it would be interesting if I could attach a virtual object to a hand when the hand is gripping (it would be fun to attach a basic cylinder to mimic a wand from Harry Potter).
I'm able to detect when the user is gripping but having trouble placing an object as though it's within the hand.
The simplest version of this is using an AnchorEntity pointing to the user's palm, which kind of works, but quickly breaks the illusion when you rotate the wrist or hand.
It seems as though I will have to roll my own anchor entity using the various points of the user's hand. I thought calculating some median point between the thumb and little finger tips would be a good start, but it's proven a little difficult as we need both rotation and position.
I'm already out of my depth with RealityKit and matrices, and (thanks to ChatGPT) I have some code, but as soon as I apply the position manually (as opposed to a hand anchor entity) it fails to render on the user's hand.
It feels like this should already have been something someone has looked in to, any ideas on what might be the issue here?
Note: HandTrackingSystem.handTracking is a HandTrackingProvider()
guard let anchors = HandTrackingSystem.handTracking.latestAnchors.leftHand else {
    return
}

if let thumb = anchors.handSkeleton?.joint(.thumbTip),
   let little = anchors.handSkeleton?.joint(.littleFingerTip) {
    let thumbPos = simd_make_float3(thumb.anchorFromJointTransform.columns.3)
    let littlePos = simd_make_float3(little.anchorFromJointTransform.columns.3)
    let midPos = (thumbPos + littlePos) / 2

    let direction = normalize(littlePos - thumbPos)
    let rotation = simd_quatf(from: [0, 1, 0], to: direction)

    wandEntity.transform.translation = midPos
    wandEntity.transform.rotation = rotation
    content.add(wandEntity)
}
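One thing that may be worth checking (a guess, not a confirmed diagnosis): the joint transforms above are relative to the hand anchor, not the scene, so if wandEntity is added directly to the scene content the midpoint also needs the anchor's originFromAnchorTransform applied. A minimal sketch, assuming wandEntity lives at the scene root:
// Sketch: convert the hand-anchor-relative wand pose into world space before
// applying it. `anchors` is the HandAnchor from latestAnchors.leftHand above.
let anchorFromWand = Transform(rotation: rotation, translation: midPos).matrix

// originFromAnchorTransform maps hand-anchor space into the app's origin space.
let originFromWand = anchors.originFromAnchorTransform * anchorFromWand

wandEntity.transform = Transform(matrix: originFromWand)
content.add(wandEntity)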
Hello, since updating to beta 3 the sculpting sample app doesn't work; it crashes on launch.
It seems to be something in AnchorEntity or AccessoryAnchoringSource:
Referenced from: <00B81486-1A74-30A0-B75B-4B39E3AF57DF> /private/var/containers/Bundle/Application/3D2EBF59-19F0-4BF4-8567-6962AA36A2C6/delete.app/delete.debug.dylib
Expected in: <BAA9B221-78A1-3B99-AA2F-B8DFCD179FC7> /System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
Hi everyone,
I’m trying to verify something mentioned in the WWDC session “Explore enhancements to your spatial business app.”
At timestamp 3:36, the presenter states:
“You can now access your enterprise license files directly within your Apple Developer account.”
I’ve checked every section of my Developer account, including:
• Membership and Agreements
• Certificates, Identifiers & Profiles
• App Store Connect
• Additional Resources
• Account settings
…but no UI or section exposes these enterprise license files.
Since the Vision Entitlement Services framework actively checks these licenses (for example, mainCameraAccess entitlement approval), I need to confirm the location of the new license file.
Could someone from Apple or anyone who has seen this feature clarify:
1. Where exactly do these enterprise license files appear in the Developer account UI, or
2. Whether this feature has not rolled out yet?
Any guidance or screenshots from those who have access would be invaluable.
Thanks,
Can I combine FromToByAction and BindTarget.MaterialPath to animate my ShaderGraphMaterial? I don't know how to use BindTarget.MaterialPath.
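For reference, a minimal sketch of the from-to animation pattern I do understand, using FromToByAnimation with the standard .transform bind target; how to substitute a material-path bind target for a ShaderGraphMaterial parameter is the part that's unclear.
import RealityKit

// Sketch: animate an entity's transform with FromToByAnimation. The open
// question is binding to a ShaderGraphMaterial parameter instead.
func playLiftAnimation(on entity: Entity) {
    let lift = FromToByAnimation<Transform>(
        from: Transform(translation: [0, 0, 0]),
        to: Transform(translation: [0, 0.3, 0]),
        duration: 1.0,
        bindTarget: .transform
    )
    if let resource = try? AnimationResource.generate(with: lift) {
        entity.playAnimation(resource)
    }
}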
Spatial widgets are a new feature of visionOS 26. I notice the system Photos app can add a spatial image to its widget. Can third-party apps use a spatial image, or any 3D content, in their widgets? I tried to use RealityView in a widget and it crashed at runtime.
So is the spatial image widget only supported by the system Photos app, and not available to developers right now?
So with the new ManipulationComponent, we can choose "stay", and then if you drag the entity out of your volume, once you let go it instantly disappears.
We can "animate" it back inside the volume, e.g.:
content.subscribe(to: ManipulationEvents.WillRelease.self) { event in
    Entity.animate(
        .easeInOut(duration: 1),
        body: { event.entity.position = [0, 0.2, 0] },
        completion: {}
    )
}
However, for the duration that it travels outside of the volume, it's invisible the whole time.
In this Apple video, it seems to be visible while dragging and when letting go, but perhaps that's not a volume they're dragging it out of?
https://youtu.be/VtenPKrvPOU?si=y1zoZOs2IMyDzOm6&t=1748
Does anyone know how to keep the entity visible even after letting it go, while you animate it back toward the inside of your volume?
Apple's new Spatial Personas use Gaussian splatting, but I have not found any visionOS APIs for displaying a Gaussian splat, such as a PLY file.
Am I just missing the Apple documentation? If not, are there common practices developers are using for displaying Gaussian Splats in visionOS?
Hi, we would like to create something where you can open multiple volumetric windows and place them in a room. Our biggest issue is that we want these windows to be persistent, so that when we close and reopen the app, the windows appear in the same positions. We can't use immersive spaces because we also want to keep access to the Shared Space.
Is it possible to do that with the current features and capabilities? If yes, do you have any advice on how we can achieve this?
Alternatively, is it possible to open the virtual display in immersive spaces, or could we implement our own virtual display?
I want to step into the portal world. I know PortalCrossingComponent can let an entity cross a portal, but how can I make the device (the wearer) cross into the portal world?
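For context, here is a minimal sketch of the portal setup that already works for entities; the function name and the scenery are placeholders, and moving the wearer (rather than an entity) through the portal is the open question.
import RealityKit

// Sketch: a hidden world plus a portal surface that reveals it. Entities with
// PortalCrossingComponent can cross this boundary; the wearer cannot.
func makePortalScene() -> Entity {
    // The world that is only visible through the portal.
    let world = Entity()
    world.components.set(WorldComponent())
    // ... add skybox and scenery as children of `world` here ...

    // A flat surface acting as the portal into that world.
    let portal = Entity()
    portal.components.set(ModelComponent(mesh: .generatePlane(width: 1, height: 1),
                                         materials: [PortalMaterial()]))
    portal.components.set(PortalComponent(target: world))

    let root = Entity()
    root.addChild(world)
    root.addChild(portal)
    return root
}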