Following up on my previous question here: https://developer.apple.com/forums/thread/774262
Having solved the clipping problem, I am now trying to overlay some content in front of the RealityView. However, it looks like any content with transparency does not render in front of the RealityView, while opaque views seem to work; placing content with transparency, like glassBackgroundEffect(), behind the RealityView in a ZStack causes the entire window to flicker.
Additionally, a SwiftUI attachment placed in front of the stereoscopic image plane is invisible if the user looks at it straight on at 90 degrees. However, if the user looks at it from increasingly oblique angles from the side, the attachment gradually becomes visible again.
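For reference, the layout I'm describing is roughly this (a minimal sketch; the view name and overlay content are placeholders):

import SwiftUI
import RealityKit

struct StereoOverlayView: View {
    var body: some View {
        ZStack {
            RealityView { content in
                // Stereoscopic image plane and other entities are added here.
            }
            // Overlay intended to render in front of the RealityView.
            // This is the part that disappears or flickers once it has transparency.
            Text("Overlay")
                .padding()
                .glassBackgroundEffect()
        }
    }
}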
Are these behaviors expected? What is a recommended approach to overlay content in front of a RealityView? Thanks!
I'm working on a project that uses ImageTrackingProvider through ARKit on Vision Pro, and I want to detect multiple images (about 5) and show info for each of them at the same time.
However, it seems that only one image can be detected by the device at a time.
The maximumNumberOfTrackedImages API that controls this appears to be available only on iOS, not visionOS.
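For context, the setup looks roughly like this ("AR Resources" stands in for my actual reference-image group):

import ARKit

let imageTracking = ImageTrackingProvider(
    referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "AR Resources")
)
let session = ARKitSession()

func runImageTracking() async throws {
    try await session.run([imageTracking])
    for await update in imageTracking.anchorUpdates {
        // In practice only one image ever seems to be reported as tracked at a time.
        print(update.event, update.anchor.referenceImage.name ?? "unnamed")
    }
}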
Does anyone know a possible way to detect multiple images at the same time on Vision Pro?
Topic: Spatial Computing, SubTopic: ARKit
Hello,
I have downloaded and run the sample object tracking app for visionos.
Now I'm working on my own objects for tracking. I have made a model in Create ML using images of my object.
However, I cannot see how to convert the Create ML output file (xxx.mlmodel) into a reference object like the files in the sample project.
Is there a tool for converting them?
TIA
Topic: Spatial Computing, SubTopic: ARKit
How do I configure a Unity project for a fully immersive VR app on Apple Vision Pro using Metal Rendering, and add a simple pinch-to-teleport-where-looking feature? I've tried the available samples and docs, but they don't cover this clearly (to me).
So far, I've reviewed Unity XR docs, Apple dev guides, and tutorials, but most emphasize spatial apps. Metal examples exist but don't include teleportation. Specifically:
visionOS sample "XRI_SimpleRig" – Deploys to device/simulator, but no full immersion or teleport.
XRI Toolkit sample "XR Origin Hands (XR Rig)" – Pinch gestures are detected, but not linked to movement.
visionOS "XR Plugin" sample "Metal Sample URP" – Metal setup works, but static scene without locomotion.
I'm new to Unity XR development and would appreciate a simple, standalone scene or document focused only on the essentials for "teleport to gaze on pinch" in VR mode, with no extra features. I do have some experience with Unreal, WorldToolKit, Cosmo, etc. from the '90s, and I'm OK with code.
Please include steps for:
Setting up immersive VR (disabling spatial defaults if needed).
Integrating pinch detection with ray-based teleport.
Any config changes or basic scripts.
Project Configuration:
Unity Editor Version: 6000.2.5f1.2588.7373 (Revision: 6000.2/staging 43d04cd1df69)
Installed Packages:
Apple visionOS XR Plugin: 2.3.1
AR Foundation: 6.2.0
PolySpatial XR: 2.3.1
XR Core Utilities: 2.5.3
XR Hands: 1.6.1
XR Interaction Toolkit: 3.2.1
XR Legacy Input Helpers: 2.1.12
XR Plugin Management: 4.5.1
Imported Samples:
Apple visionOS XR Plugin 2.3.1: Metal Sample - URP
XR Hands 1.6.1
XR Interaction Toolkit 3.2.1: Hands Interaction Demo, Starter Assets, visionOS
Build Platform Settings:
Target: Apple visionOS
App Mode: Metal Rendering with Compositor Services
Selected Validation Profiles: visionOS Metal
Documentation: Enabled
Xcode Version: 26.01
visionOS SDK: 26
Mac Hardware: Apple M1 Max
Target visionOS Version: 20 or 26
Test Environment: Model: Apple Vision Pro, visionOS 26.0.1 (23M341), Apple M1 Max
No errors in builds so far; just missing the desired functionality.
Thanks for a complete response with actionable steps.
While using Screen Mirroring in developer mode within my immersive space, I noticed an alignment issue with the computer cursor (the transparent circle). When I move it toward an attachment view, the cursor remains horizontal instead of aligning with the surface of the attachment view. It shows correctly on a 2D window; it is only wrong on an attachment view.
Is this behavior a bug, or could it be caused by a missing or incorrect configuration on the attachment view?
Any help would be appreciated, thanks.
Hi,
I'm looking to build something similar to the header blur in the App Store and Apple TV app settings. Does anyone know the best way to achieve this, so that when there is nothing behind the header it looks the same as the rest of the view background, but when content scrolls underneath it gets a blur effect? I've seen .scrollEdgeEffect on iOS 26; is there something similar for visionOS?
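To illustrate, the effect I'm after is roughly this (a sketch with placeholder content; for now I just overlay a material header on the scroll view):

import SwiftUI

struct HeaderBlurDemo: View {
    var body: some View {
        ScrollView {
            LazyVStack {
                ForEach(0..<50) { index in
                    Text("Row \(index)")
                        .frame(maxWidth: .infinity)
                        .padding()
                }
            }
            .padding(.top, 60) // keep the first rows clear of the header
        }
        .overlay(alignment: .top) {
            Text("Header")
                .frame(maxWidth: .infinity)
                .frame(height: 60)
                .background(.ultraThinMaterial) // blurs content that scrolls underneath
        }
    }
}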
Thanks!
Using 360° images that I have taken at 72 MP with an Insta360 X3, I would like to add them to my Vision Pro and see them surrounding me completely, as you would expect from a 360° image. I was able to do this by following a tutorial.
The problem is the quality. In a 2D window the image looks great.
Here is the code:
struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            content.add(createImmersivePicture(imageName: appModel.activeSpace))
        }
    }

    func createImmersivePicture(imageName: String) -> Entity {
        let sphereRadius: Float = 1000
        let modelEntity = Entity()
        // Load the equirectangular image as a raw, uncompressed texture.
        let texture = try? TextureResource.load(named: imageName, options: .init(semantic: .raw, compression: .none))
        var material = UnlitMaterial()
        material.color = .init(texture: .init(texture!))
        modelEntity.components.set(
            ModelComponent(
                mesh: .generateSphere(radius: sphereRadius),
                materials: [material]
            )
        )
        // Flip the sphere so the texture is visible from the inside.
        modelEntity.scale = .init(x: -1, y: 1, z: 1)
        modelEntity.transform.translation += SIMD3<Float>(0.0, 10.0, 0.0)
        return modelEntity
    }
}
Since the quality is a problem, I thought about reducing the radius of the sphere or decreasing the scale. In both cases, nothing changes.
I have tried modelEntity.scale = .init(x: -0.5, y: 0.5, z: 0.5),
and also let sphereRadius: Float = 2000 and let sphereRadius: Float = 500, but nothing changes.
I also get the warning:
IOSurface creation failed: e00002c2 parentID: 00000000 properties: {
IOSurfaceAddress = 4651830624;
IOSurfaceAllocSize = 35478941;
IOSurfaceCacheMode = 0;
IOSurfaceMapCacheAttribute = 1;
IOSurfaceName = CMPhoto;
IOSurfacePixelFormat = 1246774599;
}
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceCacheMode
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfacePixelFormat
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceMapCacheAttribute
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceAddress
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceAllocSize
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceName
Is there anything I can do to reduce the radius or just to improve the quality itself?
Hello,
In my project, I have attached a ManipulationComponent to Entity A and, as expected, I'm able to interact with it using the built-in gestures. I have another Entity B, which is a child of A, that I would like to interact with as well, so I attempted to add a ManipulationComponent to B. However, no gestures seem to be registered on B; I can still interact with A, but B cannot be interacted with despite both entities having ManipulationComponents.
So I'm wondering if I'm just doing something wrong, if this is an issue with the ManipulationComponent, or if this is a limitation of the API.
Attached is the code used to add the ManipulationComponent to an entity; it was applied to both A and B:
let mc = ManipulationComponent()
model.components.set(mc)

var boxShape = ShapeResource.generateBox(width: 0.25, height: 0.05, depth: 0.25)
boxShape = boxShape.offsetBy(translation: simd_float3(0, -0.05, -0.25))
ManipulationComponent.configureEntity(model, collisionShapes: [boxShape])

if var mc = model.components[ManipulationComponent.self] {
    mc.releaseBehavior = .stay
    mc.dynamics.inertia = .low
    model.components.set(mc)
}
I am using visionOS 26.0; let me know if there's any additional information needed.
It looks like one week after accepting another AVP as a nearby device... the pairing expires.
Since we provide our clients with an app for walking inside architecture that has to keep working indefinitely, it's a shame that non-technical staff have to reconnect five devices every week just to work together.
Is there any workaround for this issue, or does it go straight to the wishlist?
Thanks for the support!!
Basically, take just the Xcode 26 AR App template, with ContentView placed at the detail end of a NavigationStack.
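Roughly how the navigation is set up (a sketch; ContentView is the template's view that hosts the ARView):

import SwiftUI

struct RootView: View {
    var body: some View {
        NavigationStack {
            NavigationLink("Open AR") {
                ContentView() // the AR App template's ARView wrapper
            }
        }
    }
}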
Opening the app, it uses < 20 MB of memory. Tapping Open AR, the memory usage goes up to ~700 MB for the AR scene. Tapping back, the memory stays at ~700 MB.
Checking with the Debug memory graph, I can still see all the RealityKit classes in memory, like ARView, ARRenderView, and ARSessionManager.
Here's the sample app to illustrate the issue.
PS: To keep memory pressure on the system low, there should be a way of freeing all the memory the AR uses for apps that only occasionally show AR scenes.
When I run my app from Xcode on a device running iOS 26, the RoomPlan capture is corrupted and the recording is green and purple. This issue does not occur when I use an older version of iOS or when I run the app via TestFlight or the App Store.
Hi,
I'm trying to configure the camera feed in ARKit to use the Apple Log color space.
I can change the capture device's format to one that supports Apple Log, and I see one frame in the proper log-gray colors, but then all AR tracking stops and the tracking state hangs at "initializing". In other combinations I see the error "sensor failed to initialize" and the session restarts with the default format.
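Roughly what I'm doing to switch the format (a sketch, with error handling trimmed):

import ARKit
import AVFoundation

func enableAppleLog() throws {
    guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else { return }
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Pick the first format whose supported color spaces include Apple Log.
    if let logFormat = device.formats.first(where: { $0.supportedColorSpaces.contains(.appleLog) }) {
        device.activeFormat = logFormat
        device.activeColorSpace = .appleLog
    }
}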
I suspect that this is because normal AR capture formats are 420f, whereas ones that have Apple Log are 422.
Could someone confirm if it’s even possible to run ARKit session with camera feed in a different pixel format?
I'm trying it on an iPhone 15 Pro.
Hello,
We discovered that a bunch of our old animated models were no longer animated on iOS 15 and onwards.
After a few days of playing spot the difference between usda files I noticed that all the broken models had an xform called "Scene". Lo and behold, changing the name of that xform fixed the issue on all the models. Even lowercase "scene" makes the animations work again. Is "Scene" a reserved keyword or something? What other keywords do we need to avoid so we can create more robust USDZ files?
I'm surprised this issue isn't more widespread considering Blender wraps models in a "Scene" node.
At the drive link below you can find two animated cube USDZs. The only difference is the name of one of the xforms. The one with a "Scene" xform is not animated in Quick Look (replicated on an iPhone 13 on iOS 15.2, an iPhone 13 on iOS 18.3, and various devices on BrowserStack, including an iPhone 16 on iOS 18.3).
https://drive.google.com/drive/folders/1dch1WaM9O6mbHy29S6NGWgnSHkZkPiBf?usp=sharing
Is there any way to convert a TextureResource to an Image?
Here is my code in visionOS 2.3
NavigationSplitView {
    List {
    }
    .navigationTitle("Passwords")
} detail: {
    Text("Hello")
        .navigationTitle("All")
}
The font sizes of "Passwords" and "All" are smaller than the ones in the Passwords app.
This is no longer highlighting my entity when looking at it:
RealityView { content in
    let hoverComponent = HoverEffectComponent(.spotlight(
        HoverEffectComponent.SpotlightHoverEffectStyle(
            color: .white, strength: 2.0
        )
    ))
    entity.components.set(hoverComponent)
}
The entity is in a window. The same code works in an immersive view.
Collision Component and Input type are set in RCP.
It has also stopped working in my published app (built under visionOS 2.x) when running on my visionOS 26 device.
If I use a 2.x simulator, it works.
Is this a bug or is there something I'm missing?
Thanks.
I am creating a Vision Pro app with a 3D model; it has a mesh hierarchy of head, hands, feet, etc. I want the character to look towards the camera, but I am not able to access the head of the character through SceneKit or RealityKit. When I try to print the names of the child meshes, it only prints down to the character; it does not iterate through all the body parts. Can anyone help?
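For reference, this is roughly how I try to walk the hierarchy in RealityKit (a minimal sketch):

import RealityKit

func printHierarchy(_ entity: Entity, depth: Int = 0) {
    // Print this entity's name, then recurse into its children.
    let name = entity.name.isEmpty ? "<unnamed>" : entity.name
    print(String(repeating: "  ", count: depth) + name)
    for child in entity.children {
        printHierarchy(child, depth: depth + 1)
    }
}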
I've been struggling with this for far too long so I've decided to finally come here and see if anyone can point me to the documentation that I'm missing. I'm sure it's something so simple but I just can't figure it out.
I can SharePlay our test app with my brother (device to device), but when I open a volumetric window, it says "not shared" under it. I assume fixing this will likely fix the video sharing problem we have as well. Everything else works so smoothly, but SharePlay has just been such a struggle for me. It's the last piece of the puzzle before we can put it on the App Store.
I am using ARKit to detect images on Vision Pro. However, I have run into a problem adding the reference images.
Some of my images sometimes cannot be added correctly. (As you can see in the picture above, the 'orange' cannot be added correctly, but the 'cup' can.) However, sometimes they are all added without any problem. I do not know why this happens, and I want them all to be added reliably.
Hello,
For GuessTogether source code, it seems like the code assumes that you're already in a FaceTime call before pressing the custom SharePlay button (labeled "Play Guess Together"). If not already on a FaceTime call, my Apple Vision Pro and the visionOS simulator both do nothing after throwing warnings. Is this intended behavior?
If so, how do I make it so that pressing the button can also initiate FaceTime calls? Is this allowed?
Thank you!