Hi!
I'm trying to play a video on a 3D monitor model, as a material.
I want to know whether this is possible. I searched for information about it but couldn't find enough. Thank you in advance.
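For reference, RealityKit does support this via VideoMaterial, which plays an AVPlayer on a mesh surface. Below is a minimal sketch; the plane mesh and the bundled video name ("screen.mp4") are placeholders for your own monitor model and asset.

import AVFoundation
import RealityKit

// Minimal sketch: drive a surface with video using VideoMaterial.
// "screen.mp4" is a hypothetical bundled video; swap in your own asset.
func makeVideoScreen() -> ModelEntity? {
    guard let url = Bundle.main.url(forResource: "screen", withExtension: "mp4") else {
        return nil
    }
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)

    // Here the material goes on a simple plane; on a monitor model you would
    // assign it to the ModelComponent of the part that represents the screen.
    let screen = ModelEntity(mesh: .generatePlane(width: 0.6, height: 0.35),
                             materials: [material])
    player.play()
    return screen
}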
Reality Composer Pro
Prototype and produce content for AR experiences using Reality Composer Pro.
What is the recommended best practice for importing a Blender 3D file into RCP? I assume as a .usdz file? Is there a WWDC24 session or other Apple resource that best explains this? I want to make sure I provide the right format/file to RCP from Blender.
"Although Xcode generates loading methods for all Reality Composer files in your Xcode project"
I do not find this to be true, sadly.
Does anyone have any luck or insight on how one can build just a simple macOS app that will import a scene from a .reality file?
The documentation suggests that the simple act of bringing a .reality file in (what about .realitycomposerpro?) will generate code, but that doesn't seem to happen.
The sample code (Spaceship) does not compile for macOS.
I'd really love just the most generic template of an Xcode project that compiles, with a button that pops open a scene, like the default visionOS immersive project.
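For what it's worth, here is a minimal sketch of what such a template could look like on macOS 15 or later (where RealityView is available on the Mac), loading a scene from the RealityKitContent package that Reality Composer Pro generates. It assumes a scene named "Scene" in that package and does not cover the .reality code-generation path the documentation describes.

import SwiftUI
import RealityKit
import RealityKitContent  // the Swift package generated alongside a Reality Composer Pro project

// A sketch of a bare-bones macOS app that loads and displays a scene named
// "Scene" from the RealityKitContent bundle.
@main
struct SceneViewerApp: App {
    var body: some SwiftUI.Scene {
        WindowGroup {
            RealityView { content in
                // "Scene" is the default name Reality Composer Pro gives a new scene;
                // substitute your own scene name if it differs.
                if let loaded = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    content.add(loaded)
                }
            }
        }
    }
}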
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: Swift, AR / VR, RealityKit, Reality Composer Pro
Does anyone have any idea if Apple plans to add in UDIM support for its 3D development? It is a real bummer to not have this feature and makes an otherwise clean USD pipeline kinda suck.
Hi Apple engineers,
I'm currently working on an app that uses the incoming microphone audio and gives visual feedback to the user about the incoming audio.
I would like to use Reality Composer Pro's Developer Capture to get a high quality recording of the app and its use cases for the App Store — but any time I have an in-progress capture, my app stops receiving the incoming audio. It almost seems as if the microphone audio is getting 'hijacked' during the screen capture, which prevents me from demonstrating the app's core features.
Could you please advise on how to proceed?
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Hi,
I'm trying to make a 360° stereo viewer, and I have made a ShaderGraphMaterial in Reality Composer Pro.
I'm trying to use that material on an inverted sphere which is generated in Swift.
When I try to attach the material I get this error: "Type of expression is ambiguous without a type annotation".
Here is the code (sorry, I'm a noob =) ):
import SwiftUI
import RealityKit
import RealityKitContent
import PhotosUI

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            guard let skyBoxEntity = await createSkybox() else {
                return
            }
            content.add(skyBoxEntity)
        }
    }
}

private func createSkybox() async -> Entity? {
    var matX = try? await ShaderGraphMaterial(named: "/Root/Mat_Stereo360",
                                              from: "360Stereo.usda",
                                              in: realityKitContentBundle)
    let sphere = await MeshResource.generateSphere(radius: 1000)
    let entity = await Entity()
    entity.components.set(ModelComponent(mesh: sphere, materials: [matX]))
    // ERROR HERE: "Type of expression is ambiguous without a type annotation"
    //entity.scale *= .init(x: -1, y: 1, z: 1)
    return entity
}
I hope someone can help me =)
Best regards,
Kim
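A likely cause of the error above, for anyone landing here: `try?` without unwrapping leaves `matX` as an optional ShaderGraphMaterial, and the optional inside the `materials` array is what makes the expression ambiguous. A minimal sketch of the unwrapped version, using the same scene names as in the post:

@MainActor
private func createSkybox() async -> Entity? {
    // Unwrap the material before building the ModelComponent.
    guard let material = try? await ShaderGraphMaterial(
        named: "/Root/Mat_Stereo360",
        from: "360Stereo.usda",
        in: realityKitContentBundle
    ) else { return nil }

    let sphere = MeshResource.generateSphere(radius: 1000)
    let entity = Entity()
    entity.components.set(ModelComponent(mesh: sphere, materials: [material]))
    // Flip the sphere inside out so the material faces the viewer.
    entity.scale *= .init(x: -1, y: 1, z: 1)
    return entity
}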
Hey everyone,
I'm working on an object viewer where users can place objects in a real room using AR, and I want both visionOS (Apple Vision Pro) and iOS devices (iPad, iPhone) to participate in the same shared spatial experience. The idea is that a user with a Vision Pro can place an object, and peers using iPhones/iPads can see the same object in the same position in their AR view.
I've looked into ARKit's Shared ARWorldMap and MultipeerConnectivity, but I'm not sure if this extends seamlessly to visionOS or if Apple has an official way to sync spatial data between visionOS and iOS devices.
Has anyone tried sharing a spatial world between visionOS and iOS?
Are there any built-in frameworks that allow for a shared multiuser AR session across these devices?
If not, what would be the best way to sync object positions between them?
Would love to hear if anyone has insights or experience with this! 🚀
Thanks!
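Not a complete answer, but for the "sync object positions" part: MultipeerConnectivity is available on both iOS and visionOS, so one approach is to broadcast placements yourself. A rough sketch with a hypothetical message type and session wrapper; aligning both devices to a shared world origin (e.g. via a common image or object anchor) is a separate problem this sketch does not solve.

import Foundation
import MultipeerConnectivity

// Hypothetical wire format for a placed object, relative to a shared anchor.
struct PlacedObjectMessage: Codable {
    var objectID: UUID
    var position: SIMD3<Float>   // position relative to the shared anchor
    var rotation: SIMD4<Float>   // quaternion stored as (x, y, z, w)
}

final class ObjectSyncSession {
    private let session: MCSession

    init(peerName: String) {
        let peer = MCPeerID(displayName: peerName)
        session = MCSession(peer: peer, securityIdentity: nil, encryptionPreference: .required)
        // Peer discovery (MCNearbyServiceAdvertiser / MCNearbyServiceBrowser) and the
        // MCSessionDelegate for receiving data are omitted here for brevity.
    }

    func broadcast(_ message: PlacedObjectMessage) throws {
        guard !session.connectedPeers.isEmpty else { return }
        let data = try JSONEncoder().encode(message)
        try session.send(data, toPeers: session.connectedPeers, with: .reliable)
    }
}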
We have a project which is currently being built as an XCFramework.
The framework contains a custom component to be used with entities in Reality Composer Pro.
I have tried to set the RCP Package.swift file to reference the framework package in its dependencies.
Nothing that I do with the folder path to reference the code is working.
Do I need to change the project to use Swift source code instead of an XCFramework?
The component needs to be in the framework because there is a class in the framework that works directly with the custom component.
I am able to reference the XCFramework as a Swift package from other projects.
Topic: Spatial Computing
SubTopic: Reality Composer Pro
I want to make a model with added bones move by dragging it with gestures. This is a model exported from Blender.
My understanding is that this uses IKComponent, but I don't know how to use it specifically.
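In case it helps as a starting point, here is a sketch of the gesture side only: it drags the whole model rather than individual bones, and the names here (DraggableModelView, model) are placeholders. Driving specific joints from the drag via IKComponent would build on top of this.

import SwiftUI
import RealityKit

// Sketch: the entity needs InputTargetComponent and collision shapes to receive input.
struct DraggableModelView: View {
    let model: Entity   // assumed to be loaded elsewhere

    var body: some View {
        RealityView { content in
            model.components.set(InputTargetComponent())
            model.generateCollisionShapes(recursive: true)
            content.add(model)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the gesture location into the entity's parent space.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}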
I sketched an idea for a project in Reality Composer on my iPad, thinking that when I had a chance to sit down I would work it up in Xcode.
However, when I got back to my computer, I discovered I cannot open a file created in Reality Composer (or the exported .reality file) in Reality Composer Pro.
Am I missing something obvious here? This seems like a huge oversight.
If anyone can let me know how to open a file created in Reality Composer in Reality Composer Pro, I would greatly appreciate it, partly because there seem to be objects available in Reality Composer that are not in Reality Composer Pro.
Thanks
Stan
I downloaded the file through Scoot, and when I remove the Vision Pro, the app calls the StreamDelegate method and returns ".endEncountered". How can I solve this problem?
Thank you!
Goal: to render, in an Apple Vision Pro app, the solid-mechanics 3D simulation results coming from an FEA code.
Starting point: I have surface VTKs with deformations on each node. Each time step has a mesh with the nodal coordinates. This is straightforwardly translatable to a USD MeshSequence. Unfortunately, the results cannot be simplified to a scaling or linear transformation as you would with other game-oriented animations.
Tools: right now I am using Xcode and Reality Composer Pro (RCP) to build the scenes.
Technical limitations: I am aware that RCP can do animations with BlendMesh and skeletons, and that MeshSequence is not a problem.
Progress:
Converting the sequence of VTK meshes to a USD MeshSequence is straightforward. This animates correctly in Preview and Blender (see screenshot).
I managed to convert from the MeshSequence to multiple keys and a BlendMesh. This also animates correctly in Blender and Preview. Unfortunately, the BlendMesh of multiple blended meshes shows a zero animation time in RCP (see screenshot below).
Also, see below the usda file scheme for the animation. Of course I am not showing the full vectors such as faceVertexCounts, faceVertexIndices, normals.
Question: what is the right setup to create a BlendMesh animation that RCP will correctly import and animate, from a set of meshes or multiple key shapes?
Blender animation
Time zero RCP "animations"
#usda 1.0
(
defaultPrim = "BlendMeshRoot"
doc = "Blender v4.5.3 LTS"
endTimeCode = 48
framesPerSecond = 24
metersPerUnit = 1
startTimeCode = 0
timeCodesPerSecond = 24
upAxis = "Z"
)
def Xform "BlendMeshRoot" (
customData = {
dictionary Blender = {
bool generated = 1
}
}
)
{
def SkelRoot "Mesh"
{
custom string userProperties:blender:object_name = "Mesh"
float3 xformOp:rotateXYZ = (89.99999, -0, 0)
float3 xformOp:scale = (0.009999999, 0.01, 0.01)
double3 xformOp:translate = (0, 0, 0)
uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateXYZ", "xformOp:scale"]
def Mesh "Mesh" (
active = true
prepend apiSchemas = ["MaterialBindingAPI", "SkelBindingAPI"]
)
{
uniform bool doubleSided = 1
float3[] extent = [(25.091871, -34.121277, -13.298501), (299.94482, 245.10088, 202.35126)]
int[] faceVertexCounts = [3, 3, ...
int[] faceVertexIndices = [0, 10293, ...
rel material:binding = </BlendMeshRoot/_materials/MeshSequence_Default>
normal3f[] normals = [(-0.3632836, -0.9102419, -0.19870725), ....
point3f[] points = [(244.41148, 155.42062, 70.454926),.....
float3[] primvars:node_displacement = [(93.54703, 110.9341, 48.37992)....
float3[] primvars:Normals = [(-0.0050530406, -0.9910114, -0.13368203),...
int[] primvars:skel:jointIndices = [0, 0, 0, 0, 0 ...
float[] primvars:skel:jointWeights = [1, 1, 1, 1, 1...
uniform token[] skel:blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"]
rel skel:blendShapeTargets = [
</BlendMeshRoot/Mesh/Mesh/frame_0000>,
.......
</BlendMeshRoot/Mesh/Mesh/frame_0005>,
]
prepend rel skel:skeleton = </BlendMeshRoot/Mesh/Skel>
uniform token subdivisionScheme = "none"
custom string userProperties:blender:data_name = "Mesh"
custom float userProperties:originalTime
float userProperties:originalTime.timeSamples = {
0: 0,
}
def BlendShape "frame_0000"
{
uniform vector3f[] offsets = [(0, 0, 0), (0, 0, 0),.....
uniform int[] pointIndices = [0, 1, 2, .....
}
.....
.....
#### BlendShape frame to 0005
.....
def Skeleton "Skel" (
prepend apiSchemas = ["SkelBindingAPI"]
)
{
uniform matrix4d[] bindTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )]
uniform token[] joints = ["joint1"]
uniform matrix4d[] restTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )]
prepend rel skel:animationSource = </BlendMeshRoot/Mesh/Skel/Anim>
def SkelAnimation "Anim"
{
uniform token[] blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"]
float[] blendShapeWeights.timeSamples = {
0: [1, 0, 0, 0, 0, 0],
1: [0.9697085, 0.03029152, 0, 0, 0, 0],
2: [0.88787615, 0.11212383, 0, 0, 0, 0],
.....
46: [0, 0, 0, 0, 0.11212379, 0.8878762],
47: [0, 0, 0, 0, 0.030291557, 0.96970844],
48: [0, 0, 0, 0, 0, 1],
}
}
}
}
def Scope "_materials"
{
def Material "MeshSequence_Default"
{
token outputs:surface.connect = </BlendMeshRoot/_materials/MeshSequence_Default/Principled_BSDF.outputs:surface>
custom string userProperties:blender:data_name = "MeshSequence_Default"
def Shader "Principled_BSDF"
{
uniform token info:id = "UsdPreviewSurface"
float inputs:clearcoat = 0
float inputs:clearcoatRoughness = 0.03
color3f inputs:diffuseColor = (0.8, 0.4, 0.3)
float inputs:ior = 1.5
float inputs:metallic = 0
float inputs:opacity = 1
float inputs:roughness = 0.5
float inputs:specular = 0.2
token outputs:surface
}
}
}
def Scope "AnimationClips"
{
custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim>
}
def RealityKitComponent "AnimationLibrary"
{
custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim>
custom token info:id = "RealityKit.AnimationLibrary"
custom double realitykit:approximateDuration = 2
custom double[] realitykit:clipDurations = [2]
custom string[] realitykit:clipNames = ["Anim"]
custom rel realitykit:clipTargets = </BlendMeshRoot/Mesh/Skel/Anim>
custom double realitykit:frameRate = 24
custom bool realitykit:isAnimationLibrary = 1
}
}
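Not an answer to the zero-duration preview in RCP, but in case the end goal is playback at runtime: the SkelAnimation authored above should surface through RealityKit's availableAnimations even when RCP's timeline shows nothing. A sketch, assuming the file above ships in the RealityKitContent package under the name "BlendMeshRoot":

import RealityKit
import RealityKitContent

// Sketch: load the scene and play its blend-shape clip from code.
// "BlendMeshRoot" is assumed to be the scene/file name inside the package.
@MainActor
func loadAndPlayBlendMesh() async throws -> Entity {
    let root = try await Entity(named: "BlendMeshRoot", in: realityKitContentBundle)
    // If the clip is attached to a descendant (e.g. the SkelRoot), search the
    // hierarchy for the first entity whose availableAnimations is non-empty.
    if let clip = root.availableAnimations.first {
        root.playAnimation(clip.repeat(), transitionDuration: 0, startsPaused: false)
    }
    return root
}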
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: Swift Packages, Developer Tools, Reality Converter, Reality Composer
I've been using Reality Composer on macOS (bundled with Xcode) to export interactive .reality files that can be hosted on the web and linked to, triggering Quick Look to open the interactive AR experience.
That works really well.
I've just downloaded the Xcode 15 beta, which ships with the new Reality Composer Pro, and I can't see a way to export .reality files anymore. It seems that this is only for building content that ships in native iOS (etc.) apps, rather than content that can be viewed in Quick Look.
Am I missing something, or is it no longer possible to export .reality files?
Thanks.
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: QuickLook, Reality Composer, Reality Composer Pro
Hi, I'm trying to achieve a 3D photo effect using a photo and a depth map, via a GeometryModifier. The offset is applied correctly in Reality Composer Pro and in Xcode, but when I launch the app, the model offset is not applied. Here are screenshots of the shader graphs and how it looks in the app.
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: RealityKit, Reality Composer Pro, Shader Graph Editor
Most models are only available as .glb or .fbx, so I usually re-export them into USDZ using Blender.
When I import them into Reality Composer Pro, the mesh, textures, etc. look great, but in the Animation Library subsection all I can see is one default subtree animation.
In Blender I can see all available animations and play them individually. The default subtree animation just plays the default idle animation.
In fact, when I open the nonlinear animation view in Blender and select a different animation as the default, the exported USDZ shows the newly selected animation as the default subtree animation.
I can see in the Apple sample apps that models can have multiple animations in their Animation Library.
I'm using the latest Blender 4.5; shouldn't the USDZ exporter be working properly?
Reality Composer is no longer available in the Xcode 15 release. Is this intended, or will it be available in later releases? Do I need to revert to Xcode 15 beta 8 to get Reality Composer?
I made an animation in Blender using geometry nodes and exported it to a USDC file (then used Reality Converter to convert it to USDZ). I can see the animation when viewing the file in the Finder, but it does not play after importing into RCP. Any idea how I can play the animation? Or can the animation be accessed through Xcode?
Thanks!
Hi, I'm very new to 3D and am currently porting a SwiftUI iOS app to visionOS 2.0.
I saw WWDC24 feature Blender in multiple spatial videos, and have begun integrating Blender models and animations into my visionOS app (I would also like to integrate skeletons and programmatic rigging, more on that later).
I'm wondering if there are “Best Practices” for this workflow, from Blender to USD to RCP 2.0 to visionOS 2 in Xcode. I've cobbled together the following, which has some obvious holes:
I've been able to find some pre-rigged and pre-animated models online that can serve as a great starting point. As a reference, here is a free model from SketchFab, a simple rigged skeleton with six built-in animations:
https://sketchfab.com/3d-models/skeleton-character-low-poly-8856e0138f424d68a8e0b40e185951f6
When exporting to USD from Blender, I haven't been able to export more than one animation per USD file. Is there a workflow to export multiple animations in a single USDC file, or is this just not possible?
As a temporary workaround, here is a Python script I've been using to loop through all Blender animations and export a model for each animation:
import bpy
import os

# Set the directory where you want to save the USD files
output_directory = "/path/to/export"

# Ensure the directory exists
if not os.path.exists(output_directory):
    os.makedirs(output_directory)

# Function to export current scene as USD
def export_scene_as_usd(output_path, start_frame, end_frame):
    bpy.context.scene.frame_start = start_frame
    bpy.context.scene.frame_end = end_frame

    # Export the scene as a USD file
    bpy.ops.wm.usd_export(
        filepath=output_path,
        export_animation=True
    )

# Save the current scene name
original_scene = bpy.context.scene.name

# Iterate through each action and export it as a USD file
for action in bpy.data.actions:
    # Create a new scene for each action
    bpy.context.window.scene = bpy.data.scenes[original_scene].copy()
    new_scene = bpy.context.scene

    # Link the action to all relevant objects
    for obj in new_scene.objects:
        if obj.animation_data is not None:
            obj.animation_data.action = action

    # Determine the frame range for the action
    start_frame, end_frame = action.frame_range

    # Export the scene as a USD file
    output_path = os.path.join(output_directory, f"{action.name}.usdc")
    export_scene_as_usd(output_path, int(start_frame), int(end_frame))

    # Delete the temporary scene to free memory
    bpy.data.scenes.remove(new_scene)

print("Export completed.")
I have also been able to successfully export rigging armatures as a single skeleton, with each “bone” getting imported into Reality Composer Pro 2.0 when exporting/importing manually.
I would like to have all of these animations available in a single scene to be used in a RealityView in visionOS - so I have placed all animation models in a RCP scene and created named Timeline Action animations for each, showing the correct model and hiding the rest when triggering specific animations.
I apply materials/textures to each so they appear the same, using Shader Graph.
Then in SwiftUI I use notifications (as shown here - https://forums.developer.apple.com/forums/thread/756978) to trigger each RCP Timeline Action animation from code.
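For context, the notification pattern from that thread looks roughly like the sketch below; the identifier string ("WalkAnimation") is a placeholder for whatever name you gave the Notification trigger on the timeline in RCP.

import Foundation
import RealityKit

// Posting this notification starts an RCP timeline that has a Notification trigger
// with a matching identifier.
@MainActor
func triggerTimeline(identifier: String, in scene: RealityKit.Scene) {
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}

// Usage, once the entity is in a RealityView and part of a scene:
// if let scene = myEntity.scene { triggerTimeline(identifier: "WalkAnimation", in: scene) }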
Two questions:
Is there a better way than to have multiple models of the same skeleton - each with a different animation - in a scene to be able to trigger multiple animations? Or would this require recreating Blender animations using skeleton rigging and keyframes from within RCP Timelines?
If I want to programmatically create custom animations and move parts of the skeleton/armatures - do I need to do this by defining custom components in RCP, using IKRig and define movement of each of the “bones” in Xcode?
I’m looking for any tips/tricks/workflow from experienced engineers or 3D artists that can create a more efficient/optimized workflow using Blender, USD, RCP 2 and visionOS 2 with SwiftUI.
Thanks so much, I appreciate any help! I am very excited about all the new tools that keep evolving to make spatial apps really fun to build!
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: RealityKit, Reality Composer Pro, visionOS
Hello,
After watching the "Work with Reality Composer Pro content in Xcode" session, I created the following custom component.

public struct TestComponent: Component, Codable {
    public var text: String = "helloWorld"
    public init() {}
}

I registered the custom component, as suggested, in the App.init function:

init() {
    RealityKitContent.TestComponent.registerComponent()
}
The custom component is decoded and the RealityView shows the sphere when I load the "Scene" from the RealityKitContent bundle.
But if I export the scene to a separate file named "test_scene.usdz" on disk, share it to the simulator, and then try to load it in a RealityView, it causes
EXC_BAD_ACCESS #0 0x0000000194c8d508 in Swift._StringObject.getSharedUTF8Start() -> Swift.UnsafePointer<Swift.UInt8> ()
Printing the loaded entity shows the custom component, but trying to show it in the RealityView crashes the app immediately. Is there a way to fix it?
Hi, I'm developing a prototype visionOS game.
How do I access the bones or joints information when exporting a USD file from Blender to RCP?
The animation works fine in RCP and the joints' information is correctly embedded in the USDA file (verified with usdchecker). However, RCP does not read it from USDA, USDC, or USDZ. It should be possible based on Apple's WWDC24 session "Compose interactive 3D content in Reality Composer Pro".
I want to attach and detach an entity to a particular bone at certain moments, so I need the bones' data. They are standard Mixamo animations. My mesh is a single unified mesh. Using Blender 4.4.
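In case it is useful in the meantime: once the model is loaded at runtime, joint data can be read from the ModelEntity directly via jointNames and jointTransforms. A sketch, with a hypothetical recursive search helper for locating the skinned mesh in the loaded hierarchy:

import RealityKit

// Find the first ModelEntity in the hierarchy that carries a skeleton.
@MainActor
func firstSkinnedModel(in entity: Entity) -> ModelEntity? {
    if let model = entity as? ModelEntity, !model.jointNames.isEmpty {
        return model
    }
    for child in entity.children {
        if let found = firstSkinnedModel(in: child) { return found }
    }
    return nil
}

@MainActor
func inspectJoints(of root: Entity) {
    guard let model = firstSkinnedModel(in: root) else { return }
    // Mixamo joints typically show up with names like "mixamorig:RightHand".
    for (index, name) in model.jointNames.enumerated() {
        print(index, name, model.jointTransforms[index])
    }
    // Note: jointTransforms are relative to each joint's parent joint, so computing a
    // world-space attachment point means walking up the joint path.
}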