Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

All subtopics

Posts under the Graphics & Games topic. Each post below lists its replies, boosts, views, and latest activity.

GameKit Turn Based Matches Push Notifications
I'm developing a game that supports GameKit turn-based matches. What I don't understand is this: is tapping the Game Center push notification the only way for the GKTurnBasedEventListener to trigger? What if someone misses the push notification (for example by swiping it away by accident) but still wants to join? Is there an inbox somewhere where pending matches can be seen or fetched? Also, a very old WWDC video (from 2013, which I think is the latest one covering turn-based matches) mentioned that the notification also includes a badge for the app icon, but I do not understand how to implement that. Is there any documentation for it?
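One likely angle for the "inbox" part of the question, sketched below as an assumption rather than a confirmed answer: GKTurnBasedMatch.loadMatches can fetch the player's pending matches directly, and a registered GKLocalPlayerListener receives turn events without the notification ever being tapped. The sketch assumes the local player is already authenticated.

import GameKit

// Minimal sketch (assumed setup, not this project's code): fetch pending
// turn-based matches directly and listen for turn events without relying
// on the push notification. Assumes GKLocalPlayer.local is authenticated.
final class TurnInbox: NSObject, GKLocalPlayerListener {

    func start() {
        // Receive turn events even if the push notification was dismissed.
        GKLocalPlayer.local.register(self)

        // "Inbox"-style fetch of every match the local player participates in.
        GKTurnBasedMatch.loadMatches { matches, error in
            let waitingForMe = matches?.filter {
                $0.currentParticipant?.player?.gamePlayerID == GKLocalPlayer.local.gamePlayerID
            } ?? []
            print("Matches waiting for the local player: \(waitingForMe.count)")
        }
    }

    // GKTurnBasedEventListener: called when it becomes the local player's turn.
    func player(_ player: GKPlayer, receivedTurnEventFor match: GKTurnBasedMatch, didBecomeActive: Bool) {
        print("Turn event for match \(match.matchID)")
    }
}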
Replies: 3 · Boosts: 0 · Views: 584 · Activity: 2w
CGSetDisplayTransferByTable no longer working on macOS Tahoe
For an app of mine I use CGSetDisplayTransferByTable to adjust the display's gamma table. Since macOS Tahoe, these modifications are silently ignored: the display's actual gamma curve remains unchanged even though the API reports success. I filed a Feedback for it a few weeks ago (FB18559786) and would love to figure out what could be causing this.
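For reference, a minimal sketch of the kind of call being described (assumed setup, not the author's exact code), which makes the reported behavior easy to reproduce: apply a slightly darkened linear ramp to the main display and check the return value.

import CoreGraphics

// Apply a test gamma ramp to the main display. Before macOS Tahoe this
// visibly dims the display; the report above is that the call now returns
// .success while the actual curve stays unchanged.
func applyTestGammaRamp() {
    let display = CGMainDisplayID()
    let tableSize: UInt32 = 256
    var red = [CGGammaValue](repeating: 0, count: Int(tableSize))
    var green = red
    var blue = red

    for i in 0..<Int(tableSize) {
        // Linear ramp scaled to 80% so the effect is easy to spot.
        let value = CGGammaValue(i) / CGGammaValue(tableSize - 1) * 0.8
        red[i] = value; green[i] = value; blue[i] = value
    }

    let result = CGSetDisplayTransferByTable(display, tableSize, red, green, blue)
    print("CGSetDisplayTransferByTable returned \(result == .success ? "success" : "\(result.rawValue)")")
}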
Replies: 3 · Boosts: 1 · Views: 378 · Activity: Aug ’25
Updated Object Capture -- needs LiDAR?
I have two apps released -- ReefScan and ReefBuild -- that are based on the WWDC21 sample photogrammetry apps for iOS and macOS. Those run fine without LiDAR and are used mostly for underwater models, where LiDAR does not work at all. It now appears that the updated photogrammetry session requires LiDAR data, and building my app with the current Xcode results in a non-working app. Has the "old" version of the photogrammetry session been broken by this update? It worked very well previously, so I would hate to see this regression to needing LiDAR. Most of my users do not have it.
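For context, a minimal sketch of the WWDC21-style reconstruction path the post refers to (assumed file locations, not the author's app code): a folder of plain photos, with no depth or LiDAR data, fed into a PhotogrammetrySession.

import RealityKit

// Reconstruct a model from a folder of ordinary photos (no LiDAR/depth input).
func reconstructModel(from imagesFolder: URL, to outputModel: URL) throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal

    let session = try PhotogrammetrySession(input: imagesFolder,
                                            configuration: configuration)

    Task {
        for try await output in session.outputs {
            switch output {
            case .processingComplete:
                print("Reconstruction finished: \(outputModel.path)")
            case .requestError(_, let error):
                print("Request failed: \(error)")
            default:
                break
            }
        }
    }

    try session.process(requests: [
        .modelFile(url: outputModel, detail: .medium)
    ])
}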
Replies: 3 · Boosts: 0 · Views: 595 · Activity: Mar ’25
CAMetalLayer nextDrawable crash
Hi, my application hits the crash backtrace below at a very low repro rate among public users; I don't see it correlate with a specific iOS version or iPhone model. The last line from my own code is a call to the CAMetalLayer nextDrawable API. From some basic investigation, I suspect it may relate to a wrong CAMetalLayer configuration, for example:

frame width or height <= 0.0
bounds width or height <= 0.0
drawableSize width or height <= 0.0, or greater than the maximum texture size (e.g. 16384)

Is that thinking right? Could the UIView that my CAMetalLayer is attached to cause such a nextDrawable crash? Thanks a lot.

Main Thread - Crashed
libsystem_kernel.dylib __pthread_kill
libsystem_c.dylib abort
libsystem_c.dylib __assert_rtn
Metal MTLReportFailure.cold.1
Metal MTLReportFailure
Metal _MTLMessageContextEnd
Metal -[MTLTextureDescriptorInternal validateWithDevice:]
AGXMetalA13 0x245b1a000 + 4522096
QuartzCore allocate_drawable_texture(id<MTLDevice>, __IOSurface*, unsigned int, unsigned int, MTLPixelFormat, unsigned long long, CAMetalLayerRotation, bool, NSString*, unsigned long)
QuartzCore get_unused_drawable(_CAMetalLayerPrivate*, CAMetalLayerRotation, bool, bool)
QuartzCore CAMetalLayerPrivateNextDrawableLocked(CAMetalLayer*, CAMetalDrawable**, unsigned long*)
QuartzCore -[CAMetalLayer nextDrawable]
SpaceApp -[MetalRender renderFrame:] MetalRenderer.mm:167
SpaceApp -[FrameBuffer acceptFrame:] VideoRender.mm:173
QuartzCore CA::Display::DisplayLinkItem::dispatch_(CA::SignPost::Interval<(CA::SignPost::CAEventCode)835322056>&)
QuartzCore CA::Display::DisplayLink::dispatch_items(unsigned long long, unsigned long long, unsigned long long)
QuartzCore CA::Display::DisplayLink::dispatch_deferred_display_links(unsigned int)
UIKitCore _UIUpdateSequenceRun
UIKitCore schedulerStepScheduledMainSection
UIKitCore runloopSourceCallback
CoreFoundation __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__
CoreFoundation __CFRunLoopDoSource0
CoreFoundation __CFRunLoopDoSources0
CoreFoundation __CFRunLoopRun
CoreFoundation CFRunLoopRunSpecific
GraphicsServices GSEventRunModal
UIKitCore -[UIApplication _run]
UIKitCore UIApplicationMain
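A minimal sketch of the defensive check the post is hinting at (assumed renderer structure, not the author's code): skip the frame when the layer's drawable size is degenerate, which is one situation in which -[CAMetalLayer nextDrawable] can assert.

import Metal
import QuartzCore

// Return a drawable only if the layer is in a sane state for this frame.
func drawableIfValid(from layer: CAMetalLayer) -> CAMetalDrawable? {
    let size = layer.drawableSize
    let maxTextureDimension: CGFloat = 16_384 // typical upper bound on recent Apple GPUs

    guard size.width > 0, size.height > 0,
          size.width <= maxTextureDimension, size.height <= maxTextureDimension,
          layer.bounds.width > 0, layer.bounds.height > 0 else {
        // The layer was resized to zero (e.g. view off screen or mid-layout); skip this frame.
        return nil
    }
    return layer.nextDrawable()
}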
Replies: 3 · Boosts: 0 · Views: 367 · Activity: Jul ’25
GestureComponent does not support DragGesture
The following code using the new GestureComponent demonstrates an inconsistency: the tap gesture prints output, but the drag gesture does not. I already checked this post, which points to this seemingly outdated sample code. I assume that example is deprecated in favour of the now built-in version of GestureComponent. Nonetheless, there are no compiler warnings or errors; it just fails silently. TapGesture, LongPressGesture, MagnifyGesture, and RotateGesture all work, so this feels like an oversight.

RealityView { content in
    let testEntity = ModelEntity(mesh: .generateBox(size: .init(x: 1, y: 1, z: 1)))
    testEntity.position = SIMD3<Float>(0, 0, -1)
    testEntity.components.set(InputTargetComponent())
    testEntity.components.set(CollisionComponent(
        shapes: [.generateBox(size: .init(x: 1, y: 1, z: 1))]
    ))

    let testGesture = TapGesture()
        .onEnded { value in
            print("Tapped")
        }
    testEntity.components.set(GestureComponent(testGesture))

    let dragGesture = DragGesture()
        .onEnded { value in
            print("Dragged")
        }
    testEntity.components.set(GestureComponent(dragGesture))

    content.add(testEntity)
}
Replies: 3 · Boosts: 1 · Views: 412 · Activity: Jul ’25
App Freezes on iPadOS 26.x - GPU Metal Errors
I work on a Qt/QML app that uses the Esri Maps SDK for Qt and is deployed to both Windows and iPads. With the recent iPadOS upgrade to 26.1, many iPad users are reporting the application freezing after panning and/or identifying features in the map. It runs fine for our Windows users. I was able to reproduce this and grabbed the following error messages when the freeze happens:

IOGPUMetalError: Caused GPU Address Fault Error (0000000b:kIOGPUCommandBufferCallbackErrorPageFault)
IOGPUMetalError: Invalid Resource (00000009:kIOGPUCommandBufferCallbackErrorInvalidResource)

Environment:
Qt 6.5.4 (Qt for iOS)
Esri Maps SDK for Qt 200.3
iPadOS 26.1

Because it appears to be a Metal error, I tried using OpenGL (Qt offers a way to easily set the target graphics API):

QQuickWindow::setGraphicsApi(QSGRendererInterface::GraphicsApi::OpenGL)

Which worked! No more freezing. But I'm seeing many posts saying that OpenGL has been deprecated by Apple. I've seen posts that Apple deprecated OpenGL ES, but it seems to still be available on iPadOS 26.1. If so, will this fix just cause problems with a future iPadOS update? Any other suggestions to address this issue? Upgrading our version of Qt + Esri SDK to the latest version is not an option for us. We are in the process of upgrading the full application, but that is a year or two out, so we just need a fix to buy us some time for now. Appreciate any thoughts/insights.
Replies: 3 · Boosts: 0 · Views: 516 · Activity: Dec ’25
Metal useResource vs. MTLFence
Hello, I'm tracking down a bug where useResource doesn't seem to apply proper synchronization when a resource is produced by a render pass and then consumed by a compute pass, but when I use an MTLFence to signal and wait between the render and compute encoders, the artifact goes away. The resource is created with MTLHazardTrackingModeTracked, and useResource is called on the compute encoder after the render pass. Metal API Validation doesn't report any warnings or errors. Am I misunderstanding the difference between the two APIs? I dug through the Metal documentation, and it looks like useResource should handle synchronization given that the resource has MTLHazardTrackingModeTracked; on the other hand, MTLFence should be used to ensure proper synchronization between command encoders. Can someone clarify the difference between the two APIs and when to use each?
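For concreteness, a minimal sketch of the fence-based workaround described above (assumed resource and pipeline names, not the author's renderer): signal the fence after the render pass that writes the texture, and wait on it before the compute pass that reads it.

import Metal

// Render pass produces sharedTexture; compute pass consumes it behind a fence.
func encodeFrame(device: MTLDevice,
                 commandBuffer: MTLCommandBuffer,
                 renderPass: MTLRenderPassDescriptor,
                 renderPipeline: MTLRenderPipelineState,
                 computePipeline: MTLComputePipelineState,
                 sharedTexture: MTLTexture) {
    guard let fence = device.makeFence() else { return }

    if let render = commandBuffer.makeRenderCommandEncoder(descriptor: renderPass) {
        render.setRenderPipelineState(renderPipeline)
        // ... draw calls writing into sharedTexture ...
        render.updateFence(fence, after: .fragment)   // signal once fragment work is done
        render.endEncoding()
    }

    if let compute = commandBuffer.makeComputeCommandEncoder() {
        compute.waitForFence(fence)                   // wait before touching the texture
        compute.setComputePipelineState(computePipeline)
        compute.useResource(sharedTexture, usage: .read)
        compute.setTexture(sharedTexture, index: 0)
        // ... dispatch threadgroups ...
        compute.endEncoding()
    }
}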
Replies: 3 · Boosts: 0 · Views: 158 · Activity: Jul ’25
RealityView content scale factor
Hi, following the recent deprecation of SceneKit, I'm trying to move a couple of my SceneKit projects to RealityKit. One thing I can't seem to find is how to change the content scale factor when using a RealityView in SwiftUI. It was really easy to do in SceneKit with just an SCNView property, and it seems to also be possible when using ARView, but I can't find a way to do it with a RealityView. Maybe it's a SwiftUI limitation?
Replies: 3 · Boosts: 1 · Views: 175 · Activity: Jul ’25
You cannot debug in simulator
I can't create any breakpoint in Xcode after I upgraded to macOS 15.4.

macOS: Version 15.4 (24E248)
visionOS Simulator: 2.3
Xcode: Version 16.2 (16C5032a)

My app works well without any breakpoints, but if I create any breakpoint it shows me this:

Couldn't find the Objective-C runtime library in loaded images.
Message from debugger: The LLDB RPC server has crashed. You may need to manually terminate your process. The crash log is located in ~/Library/Logs/DiagnosticReports and has a prefix 'lldb-rpc-server'. Please file a bug and attach the most recent crash log.
Replies: 3 · Boosts: 5 · Views: 458 · Activity: Apr ’25
SceneKit view and SceneKit editor color difference
Hello there, I'm having trouble matching what I see in the SceneKit editor with the output of the resulting scene in an SCNView. For a glitter effect I set a high value on the diffuse intensity, which looks fine in the editor, but when running the game the colors are much darker. To check whether the intensity value is merely capped, I set the same multiplier on the hat below, but it is blown out, which looks to me like there is some grading going on. I tried switching on HDR rendering, but that didn't make a difference. I tried disabling linear rendering, and that simply made everything darker still, which I expect. Does someone have an idea what else this could be? What rendering is the SceneKit editor using, and how can I match it? Interestingly, when I take a screenshot of the editor window for this post, the image is also blown out... what is going on? :) Thanks so much for any pointers, Seb
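For reference, a minimal sketch of the properties being discussed (assumed node and material names, not the author's project): a boosted diffuse intensity plus the per-camera HDR toggle, which is the usual place to compare the in-app result against the editor.

import SceneKit

// Apply the same intensity multiplier in code and enable HDR on the active camera.
func configureGlitter(in scnView: SCNView) {
    if let glitterNode = scnView.scene?.rootNode.childNode(withName: "glitter", recursively: true),
       let material = glitterNode.geometry?.firstMaterial {
        material.diffuse.intensity = 3.0   // same multiplier that looks fine in the editor
        material.lightingModel = .physicallyBased
    }

    // HDR rendering is a per-camera setting in SceneKit.
    scnView.pointOfView?.camera?.wantsHDR = true
}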
Replies: 3 · Boosts: 0 · Views: 192 · Activity: Apr ’25
iOS Metal delays frame display on screen by one extra Vsync period
View Layout

Add the following views in a view controller:
- Label
- View A, with a subview of the same size: MTKView A
- View B, with a subview of the same size: MTKView B

Refresh Rates of Each View
- The label view refreshes at 60fps (driven by CADisplayLink).
- MTKView A and B refresh at 15fps.

MTKView Implementation Details
- The corresponding CAMetalLayer's maximumDrawableCount is set to 2 (double buffering).
- The scheduling mechanism is modified: drawing is not driven by the internal loop but is done manually. The draw call is triggered immediately upon receiving a frame.

self.metalView.enableSetNeedsDisplay = NO;
self.metalView.paused = YES;

- A new high-priority queue is created for drawing, instead of handling it on the main queue.

MTKView Latency Tracking
- The GPU completion time T1 is observed through the addCompletedHandler callback of the command buffer.
- The presentation time T2 of the frame is observed through the addPresentedHandler callback of the currentDrawable in MTKView.
- Testing shows that T2 - T1 > 16.6ms (the Vsync period at 60Hz). This means that after the GPU rendering in the MTKView is finished, the frame is not actually displayed at the next Vsync but only at the Vsync after that. I believe there is an extra 16.6ms of latency here, which I want to eliminate by adjusting the rendering mechanism.

Observation from Instruments

In Instruments, the surface presentation aligns with the above test results: after the Metal encoder finishes, the surface in Display switches only after the next-next Vsync. See the image in the link for details.

Questions
1. According to a beginner's understanding, after the MTKView's GPU rendering is finished, the next Vsync should officially display it (make it visible). However, this is not what is observed. Does the subview MTKView need to wait for another Vsync cycle to be drawn to the actual display buffer?
2. The label updates its text at 60fps, so the entire interface should be displayed at 60fps. Is the content of the MTKView not synchronized when the display happens?

Explanation of the Reasoning Behind Some MTKView Code Details
- Changing from the default triple buffering to double buffering helps reduce the latency introduced by rendering.
- MTKView's own scheduling mechanism is not used; the draw method is triggered manually, because MTKView's own scheduling is driven by CADisplayLink. If a frame arrives within a Vsync window, it would otherwise have to wait for the next Vsync window to trigger the draw operation, which introduces waiting latency.
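For concreteness, a minimal sketch of the T1/T2 measurement described above (assumed MTKView setup, not the author's renderer): T1 is taken from the command buffer's completion handler, T2 from the drawable's presented handler.

import MetalKit

// Encode one frame and log the gap between GPU completion and on-screen presentation.
func encodeAndMeasure(view: MTKView, commandQueue: MTLCommandQueue) {
    guard let drawable = view.currentDrawable,
          let passDescriptor = view.currentRenderPassDescriptor,
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor) else { return }

    // ... encode draw calls ...
    encoder.endEncoding()

    var gpuEnd: CFTimeInterval = 0
    commandBuffer.addCompletedHandler { buffer in
        gpuEnd = buffer.gpuEndTime                       // T1: GPU finished
    }
    drawable.addPresentedHandler { presented in
        let latency = presented.presentedTime - gpuEnd   // T2 - T1
        print("Presentation latency: \(latency * 1000) ms")
    }

    commandBuffer.present(drawable)
    commandBuffer.commit()
}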
Replies: 3 · Boosts: 0 · Views: 569 · Activity: Dec ’25
MetalFX
Recently, I adopted MetalFX for an upscale feature. However, I have encountered a persistent build failure for the iOS Simulator with the error message 'MetalFX is not available when building for iOS Simulator.' To address this, I set the MetalFX.framework status to 'Optional' within Build Phases > Link Binary With Libraries, adding the linker option (-weak_framework). Despite this adjustment, the build continues to fail. Furthermore, I observed that the MetalFX sample application provided by Apple, specifically the one found at https://developer.apple.com/documentation/metalfx/applying-temporal-antialiasing-and-upscaling-using-metalfx, also fails to build for the iOS Simulator target. Has anyone encountered this issue?
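A minimal sketch of one common workaround (an assumption, not a confirmed fix for this project): compile the MetalFX path out of Simulator builds entirely, so weak linking is never exercised there, and fall back to plain rendering.

// Guard both the import and any MetalFX symbol usage behind the same condition.
#if canImport(MetalFX) && !targetEnvironment(simulator)
import MetalFX
#endif
import Metal

func makeSpatialUpscalerIfAvailable(device: MTLDevice,
                                    inputWidth: Int, inputHeight: Int,
                                    outputWidth: Int, outputHeight: Int) -> AnyObject? {
    #if canImport(MetalFX) && !targetEnvironment(simulator)
    let descriptor = MTLFXSpatialScalerDescriptor()
    descriptor.inputWidth = inputWidth
    descriptor.inputHeight = inputHeight
    descriptor.outputWidth = outputWidth
    descriptor.outputHeight = outputHeight
    descriptor.colorTextureFormat = .bgra8Unorm
    descriptor.outputTextureFormat = .bgra8Unorm
    guard let scaler = descriptor.makeSpatialScaler(device: device) else { return nil }
    return scaler as AnyObject
    #else
    return nil // Simulator (or MetalFX unavailable): render without upscaling.
    #endif
}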
Replies: 3 · Boosts: 0 · Views: 820 · Activity: Mar ’25
RealityKit captureHighResolutionFrame from session is broken on iOS 26?
A bit of background on what our app is doing: we have a RealityKit ARView session running, during which we place objects in RealityKit. At some point the user can "take a photo" and we use session.captureHighResolutionFrame to capture a frame. We then use the captured frame and frame.camera.projectPoint to project our objects back to 2D. The issue we found is that on devices running iOS 26, the first photo the user takes, i.e. the first frame received from session.captureHighResolutionFrame, gives an incorrect CGPoint from frame.camera.projectPoint. If the user takes a second photo with the same camera position, the second frame received from session.captureHighResolutionFrame gives a correct CGPoint from frame.camera.projectPoint. I noticed a difference between the first and subsequent frames that I believe corresponds to the issue: the yaw value of the camera (frame.camera.eulerAngles.y) on the first frame is not correct (inconsistent with any subsequent frame). I also created a small example app, following the Building an Immersive Experience with RealityKit example. The issue exists in this app on iOS 26, while iOS 18.* has consistent values between the first and subsequent captured frames. Note: the yaw value seems to differ more if we start the session in portrait but take the photo in landscape.

Example result for 3 captured frames:
Frame captured with yaw: 1.4855177402496338
Frame captured with yaw: -0.08803760260343552
Frame captured with yaw: -0.08179682493209839

Example code:

class CustomARView: ARView, ARSessionDelegate {
    required init(frame: CGRect) {
        super.init(frame: frame)
    }

    required init?(coder decoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func setup() {
        let singleTap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        addGestureRecognizer(singleTap)
    }

    @objc func handleTap(_ gestureRecognizer: UIGestureRecognizer) {
        Task {
            do {
                let frame = try await session.captureHighResolutionFrame()
                print("Frame captured with yaw: \(Double(frame.camera.eulerAngles.y))")
            } catch {
            }
        }
    }
}

struct CustomARViewUIViewRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> some UIView {
        let arView = CustomARView(frame: .zero)
        arView.setup()
        return arView
    }

    func updateUIView(_ uiView: UIViewType, context: Context) { }
}

struct ContentView: View {
    var body: some View {
        CustomARViewUIViewRepresentable()
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            .ignoresSafeArea()
    }
}
Replies: 3 · Boosts: 1 · Views: 615 · Activity: Sep ’25
How to use MetalPerformancePrimitives
I am trying to learn the new Metal Performance Primitives APIs. I have added the MetalPerformancePrimitives framework and included the header in my shader code as per the documentation:

#include <MetalPerformancePrimitives/MetalPerformancePrimitives.h>

Unfortunately, Xcode complains that the header cannot be found. How do I include it properly? I am using Xcode 26 on Tahoe. The MetalPerformancePrimitives framework is present on my machine and I can inspect its headers in the filesystem.
Replies: 3 · Boosts: 1 · Views: 744 · Activity: Oct ’25
Anchor a Reality scene on an image anchor
I'm developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView, but I've had no luck so far making it work and need some guidance to move on. I have the image file stored in the assets like below. And below is the source code:

import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @State var imageEntity: Entity = {
        let anchorEntity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))
        return anchorEntity
    }()

    var body: some View {
        RealityView { content in
            do {
                // Add the initial RealityKit content
                if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    imageEntity.addChild(scene)
                    content.add(imageEntity)
                }
            } catch {
                print("Error occurs when adding reality view content: \(error)")
            }
        }
    }
}
Replies: 3 · Boosts: 0 · Views: 1.2k · Activity: Sep ’25
Xcode 26 – "Manage Game Progress" not showing achievements/leaderboards on macOS
Hello, when testing GameKit "Manage Game Progress" in Xcode 26:
- On iOS devices, achievements, leaderboards, and party code data display and work correctly.
- On macOS devices, none of these data appear in "Manage Game Progress."

Is this a known issue with macOS GameKit, or is there a limitation compared to iOS? If it is not a bug, is there any additional configuration needed to make achievements and leaderboards visible on macOS? I also included the GameKit bundle in my macOS app and enabled Enable Debug Mode in the GameKit Configuration in the scheme options. Thank you.
Replies: 3 · Boosts: 0 · Views: 666 · Activity: Sep ’25
BlendShapes don’t animate while playing animation in RealityKit
Hi everyone, I'm running into an issue with RealityKit when trying to animate BlendShapes (shape keys) while a skeletal animation is playing. The model is a rigged character in .usdz format with both predefined skeletal animations and BlendShapes (exported from Blender). The problem: when I play any animation using entity.playAnimation(...), the BlendShapes stop responding. Calling setBlendShapes(...) still logs that weights are being updated, but no visual changes are visible. The exact same blend shape animation works perfectly when no animation is playing. In SceneKit the same model works as expected: shape keys animate during animation playback. But not in RealityKit: as soon as an animation starts, the shape keys no longer animate. Here's a test project on GitHub that demonstrates the issue clearly: https://github.com/IAMTHEBURT/RealityKitWitnBlendShapesSample The goal is to play facial expressions (like blinking or talking) while a body animation (like waving) is playing. Is this a known limitation in RealityKit? Or is there a recommended way to combine skeletal animations with real-time BlendShape updates? Thanks in advance for any insights.
Replies: 3 · Boosts: 3 · Views: 324 · Activity: Jul ’25