Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Posts under Graphics & Games topic
Custom EntityAction - different behaviour on visionOS 2.6 vs 26
I implemented an EntityAction to change the baseColor tint, and had it working on visionOS 2.x:

```swift
import RealityKit
import UIKit

typealias Float4 = SIMD4<Float>

extension UIColor {
    var float4: Float4 {
        if cgColor.numberOfComponents == 4, let c = cgColor.components {
            Float4(Float(c[0]), Float(c[1]), Float(c[2]), Float(c[3]))
        } else {
            Float4()
        }
    }
}

struct ColourAction: EntityAction {
    // MARK: - PUBLIC PROPERTIES
    let startColour: Float4
    let targetColour: Float4

    // MARK: - PUBLIC COMPUTED PROPERTIES
    var animatedValueType: (any AnimatableData.Type)? { Float4.self }

    // MARK: - INITIALISATION
    init(startColour: UIColor, targetColour: UIColor) {
        self.startColour = startColour.float4
        self.targetColour = targetColour.float4
    }

    // MARK: - PUBLIC STATIC FUNCTIONS
    @MainActor static func registerEntityAction() {
        ColourAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(interpolatedColour)
        }
    }
}

extension Entity {
    // MARK: - PUBLIC FUNCTIONS
    func changeColourTo(_ targetColour: UIColor, duration: Double) {
        guard let modelComponent = components[ModelComponent.self],
              let material = modelComponent.materials.first as? PhysicallyBasedMaterial else { return }
        let colourAction = ColourAction(startColour: material.baseColor.tint, targetColour: targetColour)
        if let colourAnimation = try? AnimationResource.makeActionAnimation(for: colourAction, duration: duration, bindTarget: .material(0).baseColorTint) {
            playAnimation(colourAnimation)
        }
    }
}
```

This doesn't work on visionOS 26. My current fix is to set the material's base colour directly, but this feels like the wrong approach:

```swift
@MainActor static func registerEntityAction() {
    ColourAction.subscribe(to: .updated) { event in
        guard let animationState = event.animationState,
              let entity = event.targetEntity,
              let modelComponent = entity.components[ModelComponent.self],
              var material = modelComponent.materials.first as? PhysicallyBasedMaterial else { return }
        let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
        material.baseColor.tint = UIColor(interpolatedColour)
        entity.components[ModelComponent.self]?.materials[0] = material
        animationState.storeAnimatedValue(interpolatedColour)
    }
}
```

So before I raise this as a bug: was I doing something wrong in the former version and just got lucky? Is there a better approach?
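For completeness, a minimal usage sketch of the API defined above, assuming a `modelEntity` (hypothetical name) with a PhysicallyBasedMaterial already in the scene:

```swift
// One-time registration at startup, then animate the tint over two seconds.
ColourAction.registerEntityAction()
modelEntity.changeColourTo(.systemRed, duration: 2.0)
```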
Replies: 0 · Boosts: 0 · Views: 125 · Activity: Sep ’25
Threadgroup configuration for tile shading
Hello! I have a question about how threadgroups work with tile shading. When running "traditional" compute, I get to choose both the threadgroup size and the grid size. However, when using a tile shading kernel I only have the dispatchThreadsPerTile method, which controls how many threads run in each tile. So far so good, but what about threadgroups?

The examples in the video "Tile Shading on A11" seem to suggest that there will be only one threadgroup per tile. In the video, [[thread_index_in_threadgroup]] is called "local_id" and is used to access the image block. I assume this is the default configuration. So when one does the following:

1. Creates an MTLRenderPassDescriptor with tileWidth set to W and tileHeight set to H
2. Fires up the tile shading kernel using dispatchThreadsPerTile with MTLSize size = { W, H, 1 }

I understand that the result is a 1-to-1 mapping between the tile "pixels" and kernel threads.

Now, what I would like to do is have more than one threadgroup there. I want this for performance reasons: I have a certain compute kernel which I know executes very well with a small threadgroup size; in fact, { 32, 1, 1 } seems to be the fastest. My understanding is that even if I set the tile size to 16x16, so that I am executing 256 threads there, there will only be one SIMD group active in the threadgroup, meaning that this SIMD group has to execute 8 times over the tile.

Is this possible somehow? Or do the limitations of the API reflect limitations of the hardware itself, so that if I want to execute with SIMD-group-sized threadgroups I have to use a "traditional" compute encoder?

Will be grateful for help.
Michał
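For comparison, a sketch of the "traditional" compute path mentioned at the end, where the threadgroup shape can be chosen freely; `commandBuffer`, `pipelineState`, `W`, and `H` are hypothetical stand-ins for the poster's setup:

```swift
import Metal

let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipelineState)

// Unlike dispatchThreadsPerTile, a compute encoder lets you pick the
// threadgroup size explicitly - here the { 32, 1, 1 } shape the poster
// found fastest - independently of the grid size.
let threadsPerThreadgroup = MTLSize(width: 32, height: 1, depth: 1)
let gridSize = MTLSize(width: W, height: H, depth: 1)
encoder.dispatchThreads(gridSize, threadsPerThreadgroup: threadsPerThreadgroup)
encoder.endEncoding()
```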
Replies: 0 · Boosts: 0 · Views: 76 · Activity: Mar ’25
SCNTechnique clearColor Always Shows sceneBackground When Passes Share Depth Buffer
Problem Description

I'm encountering an issue with SCNTechnique where the clearColor setting is ignored when multiple passes share the same depth buffer. The clear color always appears as the scene background, regardless of what value I set.

The minimal project for reproducing the issue: https://www.dropbox.com/scl/fi/30mx06xunh75wgl3t4sbd/SCNTechniqueCustomSymbols.zip?rlkey=yuehjtk7xh2pmdbetv2r8t2lx&st=b9uobpkp&dl=0

Problem Details

In my SCNTechnique configuration, I have two passes that need to share the same depth buffer for proper occlusion handling:

```swift
"passes": [
    "box1_pass": [
        "draw": "DRAW_SCENE",
        "includeCategoryMask": 1,
        "colorStates": [
            "clear": true,
            "clearColor": "0 0 0 0" // Expecting transparent black
        ],
        "depthStates": [
            "clear": true,
            "enableWrite": true
        ],
        "outputs": [
            "depth": "box1_depth",
            "color": "box1_color"
        ],
    ],
    "box2_pass": [
        "draw": "DRAW_SCENE",
        "includeCategoryMask": 2,
        "colorStates": [
            "clear": true,
            "clearColor": "0 0 0 0" // Also expecting transparent black
        ],
        "depthStates": [
            "clear": false,
            "enableWrite": false
        ],
        "outputs": [
            "depth": "box1_depth", // Sharing the same depth buffer
            "color": "box2_color",
        ],
    ],
    "final_quad": [
        "draw": "DRAW_QUAD",
        "metalVertexShader": "myVertexShader",
        "metalFragmentShader": "myFragmentShader",
        "inputs": [
            "box1_color": "box1_color",
            "box2_color": "box2_color",
        ],
        "outputs": [
            "color": "COLOR"
        ]
    ]
]
```

And the Metal shader used to display box1_color and box2_color split left/right:

```metal
fragment half4 myFragmentShader(VertexOut in [[stage_in]],
                                texture2d<half, access::sample> box1_color [[texture(0)]],
                                texture2d<half, access::sample> box2_color [[texture(1)]]) {
    constexpr sampler s; // sampler assumed; the snippet used `s` without declaring it
    half4 color1 = box1_color.sample(s, in.texcoord);
    half4 color2 = box2_color.sample(s, in.texcoord);
    if (in.texcoord.x < 0.5) {
        return color1;
    }
    return color2;
}
```

Expected Behavior

- Both passes should clear their color targets to transparent black (0, 0, 0, 0)
- The depth buffer should be shared between passes for proper occlusion

Actual Behavior

- Both box1_color and box2_color targets contain the scene background instead of being cleared to transparent (see attached image)
- This happens even when I explicitly set clearColor: "0 0 0 0" for both passes
- Setting scene.background.contents = UIColor.clear makes the clearColor work as expected, but I need to keep the scene background for other purposes

What I've Tried

- Setting different clearColor values - all are ignored when sharing a depth buffer
- Using DRAW_NODE instead of DRAW_SCENE - didn't solve the issue
- Creating a separate pass to capture the background - the background still appears in the other passes
- Various combinations of clear flags and render orders

Environment

- iOS/macOS, running with "My Mac (Designed for iPad)"
- Xcode 16.2

Question

Is this a known limitation of SceneKit when passes share a depth buffer? Is there a workaround to achieve truly transparent clear colors while maintaining a shared depth buffer for occlusion testing?

The core issue seems to be that SceneKit automatically renders the scene background in every DRAW_SCENE pass when a shared depth buffer is detected, overriding any clearColor settings.

Any insights or workarounds would be greatly appreciated. Thank you!
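For reference, a minimal sketch of how a technique dictionary like the one above is typically loaded and attached to a view; the plist file name is hypothetical:

```swift
import SceneKit

// Hypothetical file name; a technique dictionary like the one above,
// stored as a plist in the app bundle.
if let url = Bundle.main.url(forResource: "SplitTechnique", withExtension: "plist"),
   let dict = NSDictionary(contentsOf: url) as? [String: Any] {
    scnView.technique = SCNTechnique(dictionary: dict)
}
```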
Replies: 0 · Boosts: 0 · Views: 245 · Activity: Jun ’25
Metal recommendedMaxWorkingSetSize vs actual RAM on iPhone (LLM load fails)
Context

I'm deploying large language models on iPhone using llama.cpp. A new iPhone Air (12 GB RAM) reports a Metal MTLDevice.recommendedMaxWorkingSetSize of 8,192 MB, and my attempt to load Llama-2-13B Q4_K (~7.32 GB of weights) fails during model initialization.

Environment

- Device: iPhone Air (12 GB RAM)
- iOS: 26
- Xcode: 26.0.1
- Build: llama.cpp with the Metal backend enabled
- App runs on device (not Simulator)

What I'm seeing

- MTLCreateSystemDefaultDevice().recommendedMaxWorkingSetSize == 8192 MiB
- Loading Llama-2-13B Q4_K (7.32 GB) fails to complete. Logs indicate memory pressure / allocation issues consistent with the 8 GB working-set guidance.
- Smaller models (e.g., 7B/8B with similar quantization) load and run (8B Q4_K gives around 9 tokens/second decoding speed).

Questions

- Is 8,192 MB an expected recommendedMaxWorkingSetSize on a 12 GB iPhone? What values should I expect on other 2025 devices, including iPhone 17 (8 GB RAM) and iPhone 17 Pro (12 GB RAM)?
- Is the limit strictly enforced for Metal allocations (heaps/buffers), or is it advisory for best performance/eviction behavior?
- Can a process practically exceed it for long-lived buffers without immediate Jetsam risk?
- Any guidance for LLM scenarios near the limit?
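A minimal sketch for checking the relevant MTLDevice properties on a given device, which may help compare values across the 2025 lineup:

```swift
import Metal

if let device = MTLCreateSystemDefaultDevice() {
    // Advisory upper bound on the memory footprint Metal recommends
    // keeping resident for good performance.
    let mib = device.recommendedMaxWorkingSetSize / (1024 * 1024)
    print("recommendedMaxWorkingSetSize: \(mib) MiB")
    print("hasUnifiedMemory: \(device.hasUnifiedMemory)")
    // Current size of this device's live Metal allocations.
    print("currentAllocatedSize: \(device.currentAllocatedSize / (1024 * 1024)) MiB")
}
```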
Replies: 0 · Boosts: 0 · Views: 508 · Activity: Oct ’25
How does GameKit offline leaderboard submission work?
I do not understand how offline leaderboard submission is supposed to work in GameKit. The documentation briefly states that offline submission is supported, but how is that even possible when you first have to fetch a leaderboard object in order to call its submitScore function? How can I get the leaderboard object in the first place when offline? Can anyone enlighten me on how this works, or point me to some relevant documentation?
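Possibly relevant to the question: since iOS 14, GameKit also offers a class-level submit that takes leaderboard IDs directly, so no GKLeaderboard object needs to be fetched first. A minimal sketch with a hypothetical leaderboard ID:

```swift
import GameKit

// Hypothetical leaderboard ID. This class method submits by ID,
// without fetching a GKLeaderboard instance first.
GKLeaderboard.submitScore(
    1234,
    context: 0,
    player: GKLocalPlayer.local,
    leaderboardIDs: ["com.example.highscores"]
) { error in
    if let error { print("Score submission failed: \(error)") }
}
```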
Replies: 0 · Boosts: 0 · Views: 345 · Activity: Dec ’25
ARView [.showStatistics] doesn't work on Xcode Canvas
Hi, I can't see RealityKit statistics on the Xcode canvas using:

```swift
arView.debugOptions = [.showStatistics]
```

The statistics only show on a physical device, not on the Xcode live canvas with #Preview. Testing in Xcode 26.0.1 (17A400) on Tahoe 26.0.1 (25A362).

Use case: I'm using RealityKit as a non-AR 3D engine, and the Xcode canvas is useful for fast iteration.

Is this expected behavior? How can I see FPS on the Xcode canvas? SKView, for example, shows all debug options on both the Xcode canvas and physical devices.
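A minimal sketch of the kind of preview setup described, assuming a UIKit-style #Preview with a non-AR ARView; names are illustrative:

```swift
import SwiftUI
import RealityKit

#Preview {
    // Non-AR camera mode, matching the "RealityKit as a 3D engine" use case.
    let arView = ARView(frame: .zero, cameraMode: .nonAR,
                        automaticallyConfigureSession: false)
    arView.debugOptions = [.showStatistics] // shows on device, reportedly not on the canvas
    return arView
}
```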
Replies: 0 · Boosts: 0 · Views: 440 · Activity: Oct ’25
Slow compilation
Hi, I am working on a large project where we compile each material to its own .metallib. They all include many common files full of inline functions, and at the end we link everything together with a single big pathtrace kernel. Everything works as expected, but the compile times have gotten completely out of hand: it takes multiple minutes to compile to native code at runtime. I gather that I can do this offline using metal-tt, but I am wondering whether there is a way to reduce the compile times in such a scenario, and how to investigate the root cause of the problem. I suspect it could be related to the fact that every material's metallib contains duplicates of all the inline functions. Any ideas on how to profile and debug this? Thanks, Rasmus
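If the multi-minute cost is the compile-to-native step at runtime, one possible mitigation (not a root-cause fix) is caching GPU binaries in an MTLBinaryArchive so the cost is paid at most once per device. A sketch under that assumption; `device`, `pathtracePipelineDescriptor`, and `cacheURL` are hypothetical:

```swift
import Foundation
import Metal

do {
    let archiveDescriptor = MTLBinaryArchiveDescriptor()
    if FileManager.default.fileExists(atPath: cacheURL.path) {
        archiveDescriptor.url = cacheURL // reuse binaries compiled on a previous run
    }
    let archive = try device.makeBinaryArchive(descriptor: archiveDescriptor)

    // Compile (or look up) the kernel's functions into the archive, then
    // persist it so later launches skip the expensive native compile.
    try archive.addComputePipelineFunctions(descriptor: pathtracePipelineDescriptor)
    try archive.serialize(to: cacheURL)

    // Attach the archive so pipeline creation can hit the cache.
    pathtracePipelineDescriptor.binaryArchives = [archive]
    let pipeline = try device.makeComputePipelineState(descriptor: pathtracePipelineDescriptor,
                                                       options: [], reflection: nil)
    _ = pipeline
} catch {
    print("Binary archive setup failed: \(error)")
}
```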
Replies: 0 · Boosts: 1 · Views: 124 · Activity: Mar ’25
How can I uninstall game-porting-toolkit completely?
So, I'm done with GPTK and decided to delete it. The only things I installed were

```shell
brew -v install apple/apple/game-porting-toolkit
```

plus the external libraries from the ditto command. Now I've tried to remove it, but even after

```shell
brew remove game-porting-toolkit
brew autoremove
```

all of the dependencies installed with brew are still there. The most obvious was game-porting-toolkit-compiler, but even after removing that there are so many libraries left orphaned that it's just impossible to identify them all manually. Is there a better way, or is the easiest approach to simply uninstall Homebrew completely and reinstall it?
Replies: 0 · Boosts: 0 · Views: 253 · Activity: May ’25
iPad - Can I prevent Multitasking on my app?
I have a game built in Unreal Engine 5.6 which uses tilt motion controls to rotate an object. I've restricted the app to portrait only on iPhone and everything works fine. On iPad, however, I've had a few issues relating to multitasking that I can't seem to solve.

Forcing the app to portrait only still allows it to run in landscape, but shows black bars on either side of the game, and the axes for the motion controls are incorrect: X becomes Y and Y becomes X, and there's no way for my app to know which orientation it is in because the container is still technically portrait.

Allowing my game to run in all orientations makes the whole app more presentable: it doesn't add black bars, the game is still functional, and I'm able to map the controls correctly because the game knows it's in landscape rather than portrait. The problem is that if multitasking is enabled on the iPad, you can resize the app to be portrait, and then I run into the same problem again where the game thinks it's in portrait mode and all of the axes are wrong.

I tried getting the true orientation of the device rather than the scene, but the game is intended to be played flat, so instead of reflecting the orientation of the OS the device orientation is FaceUp, which doesn't help. I need to either disable multitasking or find a way of getting the orientation of the OS (not the scene or the device).

I haven't found how to get the OS orientation, so I've been trying to disable multitasking. I've set Requires Fullscreen to true and UIApplicationSupportsMultipleScenes to false in my Info.plist, but my iPad still seems to allow the window to be resized in landscape. Opening the iOS workspace of my project, Requires Fullscreen is ticked, but under that it says "Supports Multiple Windows", and the arrow button next to it takes me to my Info.plist values with no indication of how I can change it.

I'm using Unreal Engine 5.6 and Xcode 16.0. Xcode is old, I know, but this version of Unreal Engine doesn't seem to support anything newer.
Replies: 0 · Boosts: 0 · Views: 296 · Activity: Nov ’25
EXC_BREAKPOINT, QuartzCore, crash in CA::Render::Image::new_image
We are seeing crashes in the Xcode organizer that so far we have not been able to reproduce locally. They affect multiple app releases (some older ones built with Xcode 15.x, newer ones built with Xcode 16.0), but only iOS 18.5. Did anything change in the latest iOS?

It's hard to tell what exactly is causing this crash, because setting a symbolic breakpoint on

```
CA::Render::Image::new_image(unsigned int, unsigned int, unsigned int, unsigned int, CGColorSpace*, void const*, unsigned long const*, void (*)(void const*, void*), void*)
```

triggers all the time, but not necessarily with the preceding stack frames matching the crash report. Is this a known issue?

crash.crash

Thank you.
Replies: 0 · Boosts: 5 · Views: 399 · Activity: Jul ’25
ParticleEmitterComponent Position Offset Issue After iOS 26.1 Update – Seeking Solutions & Workarounds
Problem Summary

After upgrading to iOS 26.1 and 26.2, I'm experiencing a particle positioning bug in RealityKit where ParticleEmitterComponent particles render at an incorrect offset relative to their parent entity. This behavior does not occur on iOS 18.6.2 or earlier versions, suggesting a regression introduced in the newer OS builds.

Environment Details

- Operating System: iOS 26.1 & iOS 26.2
- Framework: RealityKit
- Xcode Version: 16.2 (16C5032a)

Expected vs. Actual Behavior

- Expected: Particles should render at the position of the entity to which the ParticleEmitterComponent is attached, matching the behavior on iOS 18.6.2 and earlier.
- Actual: Particles appear away from their parent entity, creating a visual misalignment that breaks the intended AR experience.

Steps to Reproduce

1. Create or open an AR application with RealityKit that uses particle components
2. Attach a ParticleEmitterComponent to an entity via a custom system
3. Run the application on iOS 26.1 or iOS 26.2
4. Observe that particles render at an offset position away from the entity

Minimal Code Example

Here's the setup from my test case.

Custom Component & System:

```swift
struct SparkleComponent4: Component {}

class SparkleSystem4: System {
    static let query = EntityQuery(where: .has(SparkleComponent4.self))

    required init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            // Only add once
            if entity.components.has(ParticleEmitterComponent.self) { continue }
            var newEmitter = ParticleEmitterComponent()
            newEmitter.mainEmitter.color = .constant(.single(.red))
            entity.components.set(newEmitter)
        }
    }
}
```

AR Setup:

```swift
let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
let model = Entity()
model.components.set(ModelComponent(mesh: boxMesh, materials: [material]))
model.components.set(SparkleComponent4())
model.position = [0, 0.05, 0]
model.name = "MyCube"

let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
anchor.addChild(model)
arView.scene.addAnchor(anchor)
```

Questions for the Community

1. Has anyone else encountered this particle positioning issue after updating to iOS 26.1/26.2?
2. Are there known workarounds or configuration changes to ParticleEmitterComponent that restore correct positioning?
3. Is this a confirmed bug, or could there be a change in coordinate system handling or transform inheritance that I'm missing?

Additional Information

I've already submitted this issue via Feedback Assistant (FB21346746)
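For completeness, a sketch of the registration step the snippet assumes has already happened (required once, before scene updates run):

```swift
// Without these calls, RealityKit never instantiates SparkleSystem4,
// so no emitter would ever be attached.
SparkleComponent4.registerComponent()
SparkleSystem4.registerSystem()
```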
Replies: 0 · Boosts: 0 · Views: 380 · Activity: Dec ’25
Diagnose data access latency
The code is pretty simple:

```metal
kernel void naive(
    constant RunParams *param  [[ buffer(0) ]],
    const device float *A      [[ buffer(1) ]], // [N, K]
    device float *output       [[ buffer(2) ]],
    uint2 gid                  [[ thread_position_in_grid ]])
{
    float val = 0.0f;               // accumulator (declaration was missing from the snippet)
    uint a_ptr = gid.x * param->K;  // the variant indexed by gid.y is discussed below
    for (uint i = 0; i < param->K; i++, a_ptr++) {
        val += A[a_ptr];            // the snippet read A[b_ptr]; a_ptr is clearly intended
    }
    output[gid.x] = val;            // output index assumed; the snippet wrote output[ptr]
}
```

When uint a_ptr = gid.x * param->K, the code gets 150 GFLOPS; when uint a_ptr = gid.y * param->K, it gets 860 GFLOPS, with param->K = 256 and threads per group [16, 16].

I'd like to understand why the performance is so different, and how I can profile/diagnose this to help with further optimization.
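A hedged guess at the cause, sketched from the host side: in a [16, 16] threadgroup, gid.x varies fastest within a SIMD group, so `gid.x * K` makes neighboring threads read addresses K floats apart, while `gid.y * K` lets threads sharing a y value walk the same row together. The buffer, pipeline, and grid names below are hypothetical:

```swift
import Metal

// Hypothetical objects: commandBuffer, pipeline, the three buffers,
// and grid dimension N (rows of A), with K = 256.
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(paramsBuffer, offset: 0, index: 0)
encoder.setBuffer(aBuffer, offset: 0, index: 1)
encoder.setBuffer(outputBuffer, offset: 0, index: 2)

// gid.x * K -> adjacent threads in a SIMD group read addresses 256 floats
//              apart: roughly one cache line per thread per load, so
//              effective bandwidth collapses.
// gid.y * K -> threads with the same y read the same addresses together,
//              so each load is served by shared cache lines.
encoder.dispatchThreads(MTLSize(width: N, height: N, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 16, height: 16, depth: 1))
encoder.endEncoding()

// To confirm: capture a GPU trace in Xcode's Metal debugger and compare
// memory-bound counters (e.g., last-level cache and bandwidth utilization)
// between the two indexing variants.
```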
Replies: 0 · Boosts: 0 · Views: 91 · Activity: Apr ’25
Game Center leaderboards not posting scores
My app is live, but the leaderboards still aren't updating. The app was built with Unreal Engine 5 using Blueprints. I have the leaderboard stat info entered into the "write integer to leaderboard" node and a "show platform-specific leaderboard" node, and the leaderboards are shown as live in App Store Connect. When I run the app, the Game Center login works and the leaderboard interface launches as expected, but it just lists a group of friends to invite. There are no scores listed, and it says the number of players is 0, even though I have scored on two different devices and accounts. I have the Game Center entitlement added in Xcode. Not sure where else to look.
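One way to check from the native side whether scores are actually landing on the board is to load it by ID and fetch its entries; a hedged sketch, with a hypothetical leaderboard ID:

```swift
import GameKit

// Hypothetical leaderboard ID from App Store Connect.
GKLeaderboard.loadLeaderboards(IDs: ["com.example.highscores"]) { boards, error in
    guard let board = boards?.first else { print(error ?? "no board found"); return }
    board.loadEntries(for: .global, timeScope: .allTime,
                      range: NSRange(location: 1, length: 10)) { local, entries, total, error in
        // `total` is the player count shown in the UI; `entries` are the top scores.
        print("total players: \(total), top entries: \(entries?.count ?? 0)")
    }
}
```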
Replies: 0 · Boosts: 0 · Views: 689 · Activity: Nov ’25