Hello experts, I'm trying to implement a material with custom shader code, but I saw that visionOS doesn't allow you to inject custom Metal functions or use CustomMaterial like iOS/macOS, nor can you directly write Metal Shading Language (.metal) and use it through ShaderGraphMaterial. So my question is: if I want to implement my own shader code, how should I do it?
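For reference, the supported route on visionOS is a shader graph authored in Reality Composer Pro and loaded as a ShaderGraphMaterial at runtime. A minimal sketch, assuming a material at "/Root/MyMaterial" inside a "Scene.usda" in the app bundle, with an input named "TintAmount" promoted in the graph (all three names, and myModelEntity, are placeholders):

import RealityKit

// Minimal sketch: load a Reality Composer Pro shader-graph material and
// drive one of its promoted inputs from Swift. The material path, file name,
// and parameter name are placeholders, not names from the original post.
var material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                             from: "Scene.usda",
                                             in: Bundle.main)
try material.setParameter(name: "TintAmount", value: .float(0.5))
myModelEntity.model?.materials = [material] // myModelEntity: some ModelEntity in the scene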
Hello, I am trying to capture a screen recording (output.mp4) using ScreenCaptureKit, along with the mouse positions during the recording (mouse.json). The recording and the mouse positions (tracked from mouse-movement events only) need to be perfectly synced in order to add effects in post editing.
I started off by calling await stream?.startCapture() and starting my mouse-tracking function right after it:
try await captureEngine.startCapture(configuration: config, filter: filter, recordingOutput: recordingOutput)
let captureStartTime = Date()
mouseTracker?.startTracking(with: captureStartTime)
But every time I tested, there was a clear inconsistency in sync between the recorded video and the recorded mouse positions.
The only thing I want is to know when exactly the recording "actually" started, so that I can start the mouse capture at that same moment. To that end I tried using the delegates, but I haven't been able to set them up properly.
import Foundation
import AVFAudio
import ScreenCaptureKit
import OSLog
import Combine

class CaptureEngine: NSObject, @unchecked Sendable {
    private let logger = Logger()

    private(set) var stream: SCStream?
    private var streamOutput: CaptureEngineStreamOutput?
    private var recordingOutput: SCRecordingOutput?

    private let videoSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.VideoSampleBufferQueue")
    private let audioSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.AudioSampleBufferQueue")
    private let micSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.MicSampleBufferQueue")

    func startCapture(configuration: SCStreamConfiguration, filter: SCContentFilter, recordingOutput: SCRecordingOutput) async throws {
        // Create the stream output delegate.
        let streamOutput = CaptureEngineStreamOutput()
        self.streamOutput = streamOutput
        do {
            stream = SCStream(filter: filter, configuration: configuration, delegate: streamOutput)
            try stream?.addStreamOutput(streamOutput, type: .screen, sampleHandlerQueue: videoSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .audio, sampleHandlerQueue: audioSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .microphone, sampleHandlerQueue: micSampleBufferQueue)
            self.recordingOutput = recordingOutput
            recordingOutput.delegate = self // This line produces the compile error quoted below.
            try stream?.addRecordingOutput(recordingOutput)
            try await stream?.startCapture()
        } catch {
            logger.error("Failed to start capture: \(error.localizedDescription)")
            throw error
        }
    }

    func stopCapture() async throws {
        do {
            try await stream?.stopCapture()
        } catch {
            logger.error("Failed to stop capture: \(error.localizedDescription)")
            throw error
        }
    }

    func update(configuration: SCStreamConfiguration, filter: SCContentFilter) async {
        do {
            try await stream?.updateConfiguration(configuration)
            try await stream?.updateContentFilter(filter)
        } catch {
            logger.error("Failed to update the stream session: \(String(describing: error))")
        }
    }

    func stopRecordingOutputForStream(_ recordingOutput: SCRecordingOutput) throws {
        try self.stream?.removeRecordingOutput(recordingOutput)
    }
}

// MARK: - SCRecordingOutputDelegate
extension CaptureEngine: SCRecordingOutputDelegate {
    func recordingOutputDidStartRecording(_ recordingOutput: SCRecordingOutput) {
        let startTime = Date()
        logger.info("Recording output did start recording \(startTime)")
    }

    func recordingOutputDidFinishRecording(_ recordingOutput: SCRecordingOutput) {
        logger.info("Recording output did finish recording")
    }

    func recordingOutput(_ recordingOutput: SCRecordingOutput, didFailWithError error: any Error) {
        logger.error("Recording output failed with error: \(error.localizedDescription)")
    }
}

private class CaptureEngineStreamOutput: NSObject, SCStreamOutput, SCStreamDelegate {
    private let logger = Logger()

    override init() {
        super.init()
    }

    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of outputType: SCStreamOutputType) {
        guard sampleBuffer.isValid else { return }
        switch outputType {
        case .screen:
            break
        case .audio:
            break
        case .microphone:
            break
        @unknown default:
            logger.error("Encountered unknown stream output type: \(outputType.rawValue)")
        }
    }

    func stream(_ stream: SCStream, didStopWithError error: Error) {
        logger.error("Stream stopped with error: \(error.localizedDescription)")
    }
}
I am getting the error:
Value of type 'SCRecordingOutput' has no member 'delegate'
even though I am targeting macOS 15+ (macOS 26, actually) and macOS only.
What is the best way to achieve the desired result? Is there another or better way to do it?
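Not an authoritative answer, but two things may help here. First, in the shipping ScreenCaptureKit API the SCRecordingOutput delegate appears to be supplied through the initializer rather than assigned as a property, which would explain the compile error. Second, for sync it is usually more reliable to anchor mouse events to the sample buffers' host-clock presentation timestamps than to Date(). A rough sketch under those assumptions:

import ScreenCaptureKit
import CoreMedia

final class SyncAnchoringOutput: NSObject, SCStreamOutput {
    // Anchor: presentation time of the first screen frame, on the host clock.
    private(set) var firstFramePTS: CMTime?

    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer,
                of outputType: SCStreamOutputType) {
        guard outputType == .screen, firstFramePTS == nil else { return }
        firstFramePTS = sampleBuffer.presentationTimeStamp
        // Timestamp mouse events with CMClockGetTime(CMClockGetHostTimeClock()) so
        // both streams share one timeline, instead of pairing Date() with the
        // moment startCapture() returns.
    }
}

// Sketch only: pass the delegate at construction time instead of setting it later.
func makeRecordingOutput(delegate: SCRecordingOutputDelegate) -> SCRecordingOutput {
    let config = SCRecordingOutputConfiguration()
    config.outputURL = URL(fileURLWithPath: "/tmp/output.mp4") // placeholder path
    return SCRecordingOutput(configuration: config, delegate: delegate)
}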
Hi,
It seems MSL is missing support for a clock() shader instruction, which is available in other graphics APIs such as Vulkan or OpenGL.
It would be useful for counting the cost, in clock cycles, of some code inside a shader, with much finer granularity than launching a micro-kernel with the same instructions and measuring the cycle cost from the CPU.
It would also be useful for MoltenVK, to support those extensions.
Thanks!
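For context, the coarse CPU-side alternative described above looks roughly like this; it times a whole command buffer rather than an instruction span, which is exactly the granularity limitation in question (the encoded micro-kernel is left as a placeholder):

import Metal

// Rough sketch: command-buffer-level GPU timing, the coarse alternative to a
// per-instruction clock() inside the shader itself.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let commandBuffer = queue.makeCommandBuffer()!
// ... encode the micro-kernel under test here ...
commandBuffer.addCompletedHandler { buffer in
    // gpuStartTime/gpuEndTime are in seconds on the GPU's timebase.
    let nanoseconds = (buffer.gpuEndTime - buffer.gpuStartTime) * 1e9
    print("GPU time: \(nanoseconds) ns")
}
commandBuffer.commit()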
Dear Apple Developer Support,
I hope this message finds you well. I am a game developer looking to integrate Game Center into our game. Before proceeding, I would like to understand the analytical capabilities available post-integration. Specifically, I am interested in tracking detailed metrics such as:
The number of new player downloads driven by Game Center features (e.g., friend challenges, leaderboards, or achievements).
Data on user engagement and conversions originating from Game Center interactions.
I have explored App Store Connect’s App Analytics but would appreciate clarification on whether Game Center-specific data (e.g., referrals from challenges or social features) is available to measure growth and optimize our strategies.
If such data is accessible, could you guide me on how to view it? If not, are there alternative methods or plans to incorporate this in the future?
Thank you for your time and assistance. I look forward to your response.
Best regards,
Bella
I used Xcode's GPU capture to profile my game's render pipeline bandwidth. I found that the depth buffer and stencil buffer share the same buffer, whose format is Depth32Float_Stencil8.
But why, within a single pass of the pipeline, was this buffer loaded twice, with the Load Attachment Size in Encoder Statistics doubled?
Is this a bug in Xcode's GPU capture, or did the pass really load the buffer twice?
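One plausible (unconfirmed) explanation: even when depth and stencil share a single Depth32Float_Stencil8 texture, they are bound through two logical attachment points on the render pass, so the statistics may count one load per attachment rather than per buffer. A sketch of that setup, with depthStencilTexture standing in for the real texture:

import Metal

// Sketch: one combined texture, two logical attachments. If both load actions
// are .load, tooling could plausibly report two attachment loads for one buffer.
let passDescriptor = MTLRenderPassDescriptor()
passDescriptor.depthAttachment.texture = depthStencilTexture   // placeholder texture
passDescriptor.depthAttachment.loadAction = .clear             // avoids a real load where possible
passDescriptor.stencilAttachment.texture = depthStencilTexture // same backing store
passDescriptor.stencilAttachment.loadAction = .clear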
I have a game that I uploaded to the App Store. The game's size is 3 gigabytes, but when I download it, the size shown is really only about 100 megabytes. When I upload the game to Google Play, it reports the real size.
So I think the problem happens when the build comes out of Xcode; maybe someone can give me a clue about what is going on.
My game was made with Unity 2020, if that helps.
Hello,
I created a new project with the provided template for Immersive Environments.
Straight out of the box, I built for both the Simulator and Vision Pro, and the provided environment looks like this.
What's interesting is that it looks correct in Reality Composer Pro, so how do I achieve the same look?
Thank you in advance!
Hey, I wanted to make an app that tracks changes in a room and its lighting, and I was wondering if it's possible to use VirtualEnvironmentProbeComponent to obtain the EnvironmentResource image and store it?
If so, are there any examples of a similar operation I could use?
Thank you!
When I take a frame capture of my application in Xcode, it shows a warning that reads "Your application created separate command encoders which can be combined into a single encoder. By combining these encoders you may reduce your application's load/store bandwidth usage."
In the minimal reproduction case I've identified for this warning, I have two render pipeline states: the first writes to the current drawable, the depth buffer, and a secondary color buffer; the second writes only to the current drawable.
Because these write to different sets of outputs, I was initially creating two separate render command encoders to handle the draws under each of these states.
My understanding is that Xcode is telling me I could create only one; however, when I try to do that, I get runtime asserts when applying the second render pipeline state, since it doesn't have a matching attachment configured for the second color buffer or the depth buffer, so I can't simply combine the encoders.
Is the only solution here to detect and propagate forward the color/depth attachments from the first state into the creation of the second state?
Is there any way to suppress this specific warning in Xcode?
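One approach that may apply here (an assumption, not a confirmed answer): declare the full attachment layout on the second pipeline and mask off the writes it doesn't perform, so both pipeline states are compatible with a single encoder. In this sketch, vertexFn, fragmentFn, device, and the pixel formats are placeholders:

import Metal

// Sketch: give the second pipeline the same attachment layout as the pass,
// but disable writes to the attachments it never touches.
let descriptor = MTLRenderPipelineDescriptor()
descriptor.vertexFunction = vertexFn        // placeholder function
descriptor.fragmentFunction = fragmentFn    // placeholder function
descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm   // the drawable
descriptor.colorAttachments[1].pixelFormat = .rgba16Float  // declared to match the pass...
descriptor.colorAttachments[1].writeMask = []              // ...but never written
descriptor.depthAttachmentPixelFormat = .depth32Float
let secondPipeline = try device.makeRenderPipelineState(descriptor: descriptor)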
I am using the latest version of the Game Center plugin for Unity and have noticed that my game will crash on launch when trying to authenticate.
I've tried this in an empty project with just the plugin and it still crashes with this exception.
GfxDevice: creating device client; threaded=1; jobified=0
Initializing Metal device caps: Apple A14 GPU
Initialize engine version: 2022.3.62f2 (7670c08855a9)
GameKitException: Code=-7 Domain=GKErrorDomain Description=The operation couldn’t be completed. (GKErrorDomain error -7.) (UnsupportedOperationForOSVersion)
at Apple.GameKit.DefaultNSErrorHandler.ThrowNSError (System.IntPtr nsErrorPtr) [0x00000] in <00000000000000000000000000000000>:0
Rethrow as TypeInitializationException: The type initializer for 'Apple.GameKit.GKGameActivity' threw an exception.
And the area in the native code that triggers the crash is this line inside the GKLocalPlayer_SetAuthenticateHandler function:
_onAuthenticate!(tid, _mostRecentAuthenticatePlayer!.passRetainedUnsafeMutablePointer());
I am using Unity 2022.3.62f2 and macOS 15.6 with iOS 18.6.2, which, based on the plugin's minimum specs, should be within spec.
I have also included this message because I thought it might help too:
terminating due to uncaught exception of type Il2CppExceptionWrapper
Could not import Swift modules for translation unit: failed to get module "GameKitWrapper" from AST context:
error: 'GKErrorCodeExtension.h' file not found
in file included from :1:
error: could not build Objective-C module 'GameKitWrapper'
warning: Ignoring missing VFS file: /Users/james/Library/Developer/Xcode/DerivedData/GameKitWrapper-dzawbtxqdxdviiakfxmfunexppqv/Build/Intermediates.noindex/GameKitWrapper.build/Release-iphoneos/GameKitWrapper-bc72bd3638f4d2956cac9b00e84c1a7d-VFS-iphoneos/all-product-headers.yaml
This is the likely root cause for any subsequent compiler errors.
warning: Ignoring missing VFS file: /Users/bill/Library/Developer/Xcode/DerivedData/GameKitWrapper-dzawbtxqdxdviiakfxmfunexppqv/Build/Intermediates.noindex/GameKitWrapper.build/Release-iphoneos/GameKitWrapper iOS.build/unextended-module-overlay.yaml
This is the likely root cause for any subsequent compiler errors.
warning: TypeSystemSwiftTypeRef::GetNumChildren: had to engage SwiftASTContext fallback for type $syyXCD
I've also attached the script that I am using for authentication; this script runs in the first scene.
GameCenterManager.cs
I have a scene built in Reality Composer Pro, in which I've added a ParticleEmitter with isEmitting set to False and Loop set to True.
In my app, when I toggle isEmitting to True, there can be a delay of a few seconds before the ParticleEmitter starts.
However, if I programmatically add the emitter in code at that point, it starts immediately.
To be clear, I'm seeing this in the visionOS simulator; I don't have access to a device at this time.
Am I misunderstanding how to control the ParticleEmitter when I need precise control over when it starts?
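For reference, a minimal sketch of the toggle being described, assuming emitterEntity is the entity loaded from the Reality Composer Pro scene (components are value types, so the modified copy must be written back):

import RealityKit

// Sketch: flip isEmitting on an existing ParticleEmitterComponent.
if var emitter = emitterEntity.components[ParticleEmitterComponent.self] {
    emitter.isEmitting = true
    emitterEntity.components.set(emitter) // write the modified copy back
}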
Question:
I'm encountering an issue with in-app purchases (IAP) in Unity, specifically for a non-consumable product in the iOS sandbox environment. Below are the details:
Environment:
Unity Version: 2022.3.55f1
Unity In-App Purchasing Version: v4.12.2
Device: iPhone 15, iOS 18.1.1
Connection: Wi-Fi
iOS Settings: In-App Purchases set to "Allowed" initially
Problem Behavior:
I attempted to purchase a non-consumable item for the first time. The payment is successfully completed by entering the password.
I then background the game app and navigate to the iOS Settings to set In-App Purchases to "Don't Allow."
After returning to the game and either closing or killing the app, I try to purchase the same non-consumable item again.
I checked canMakePayments() through the Apple configuration, and the app correctly detected that I could not make purchases due to the restriction.
I then navigate back to Settings and set In-App Purchases to "Allow."
Upon returning to the game, I try purchasing the non-consumable item again. A pop-up appears, saying, "You’ve already purchased this. Would you like to get it again for free?"
The issue is: will it deduct money the second time, and why does the system allow the user to purchase the same non-consumable item multiple times after purchasing it once?
Is this the expected behavior for Unity In-App Purchasing, or is there something I might be missing in handling non-consumable purchases in this scenario?
Additional Information:
I’ve confirmed that the "In-App Purchases" are set to “Allowed” before attempting the purchase again.
I understand that non-consumable products should not be purchased more than once, so I’m unsure why the system is offering to let the user purchase it again.
I appreciate any insights into whether this is expected behavior or if I need to adjust how I handle the purchase flow.
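I can't speak to the Unity side, but for reference, re-buying an owned non-consumable with the same Apple ID does not charge again; that is what the "get it again for free" prompt means. The usual mitigation is to gate the buy button on an ownership check. A sketch of the underlying check using StoreKit 2 directly (not Unity IAP API; the product identifier is a placeholder):

import StoreKit

// Sketch (StoreKit 2): check current entitlements before offering a purchase.
func ownsProduct(_ productID: String) async -> Bool {
    for await result in Transaction.currentEntitlements {
        if case .verified(let transaction) = result,
           transaction.productID == productID {
            return true // already owned; hide or disable the purchase button
        }
    }
    return false
}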
Hi everyone,
We found a Metal-related crash. The app uses Metal APIs, and during performance testing we enabled the Graphics HUD in Settings > Developer. After launching the app, the HUD displayed normally, but the app crashed shortly afterwards. The main log information is as follows:
Incident Identifier: 1F093635-2DB8-4B29-9DA5-488A6609277B
CrashReporter Key: 233e54398e2a0266d95265cfb96c5a89eb3403fd
Hardware Model: iPhone14,3
Process: waimai [16584]
Path: /private/var/containers/Bundle/Application/CCCFC0AE-EFB8-4BD8-B674-ED089B776221/waimai.app/waimai
Identifier:
Version: 61488 (8.53.0)
Code Type: ARM-64
Parent Process: ? [1]
Date/Time: 2025-06-12 14:41:45.296 +0800
OS Version: iOS 18.0 (22A3354)
Report Version: 104
Monitor Type: Mach Exception
Exception Type: EXC_BAD_ACCESS (SIGBUS)
Exception Codes: KERN_PROTECTION_FAILURE at 0x000000014fffae00
Crashed Thread: 57
Thread 57 Crashed:
0 libMTLHud.dylib esfm_GenerateTriangesForString + 408
1 libMTLHud.dylib esfm_GenerateTriangesForString + 92
2 libMTLHud.dylib Renderer::DrawText(char const*, int, unsigned int) + 204
3 libMTLHud.dylib Overlay::onPresent(id<CAMetalDrawable>) + 1656
4 libMTLHud.dylib CAMetalDrawable_present(void (*)(), objc_object*, objc_selector*) + 72
5 libMTLHud.dylib invocation function for block in void replaceMethod<void>(objc_class*, objc_selector*, void (*)(void (*)(), objc_object*, objc_selector*)) + 56
6 Metal __45-[_MTLCommandBuffer presentDrawable:options:]_block_invoke + 104
7 Metal MTLDispatchListApply + 52
8 Metal -[_MTLCommandBuffer didScheduleWithStartTime:endTime:error:] + 312
9 IOGPU IOGPUNotificationQueueDispatchAvailableCompletionNotifications + 136
10 IOGPU __IOGPUNotificationQueueSetDispatchQueue_block_invoke + 64
11 libdispatch.dylib _dispatch_client_callout4 + 20
12 libdispatch.dylib _dispatch_mach_msg_invoke + 464
13 libdispatch.dylib _dispatch_lane_serial_drain + 368
14 libdispatch.dylib _dispatch_mach_invoke + 456
15 libdispatch.dylib _dispatch_lane_serial_drain + 368
16 libdispatch.dylib _dispatch_lane_invoke + 432
17 libdispatch.dylib _dispatch_lane_serial_drain + 368
18 libdispatch.dylib _dispatch_lane_invoke + 380
19 libdispatch.dylib _dispatch_root_queue_drain_deferred_wlh + 288
20 libdispatch.dylib _dispatch_workloop_worker_thread + 540
21 libsystem_pthread.dylib _pthread_wqthread + 288
We tested several different device models, and only the iPhone 13 Pro Max crashes.
Q1: Why does this crash occur?
Q2: With identical logic, why does the crash appear only on the iPhone 13 Pro Max?
Looking forward to your answers.
Hi, I'm a beginner with Metal 4 and Model I/O 🥺.
I can render simple models with just one mesh, but when I try to render models with submeshes, nothing shows up on screen.
Can anyone help me figure out how to properly render models with multiple submeshes? I think I'm not iterating through them correctly, or maybe I'm missing some buffer setup.
Here's what I have so far:
https://www.icloud.com.cn/iclouddrive/0a6x_NLwlWy-herPocExZ8g3Q#LoadModel
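I can't see the linked project, but for comparison, the usual MetalKit pattern is one indexed draw call per submesh after binding the mesh's shared vertex buffers; mtkMesh and encoder below stand in for whatever the project actually uses:

import MetalKit

// Sketch: bind the vertex buffers once, then issue one indexed draw per
// submesh, each with its own index buffer.
func draw(mtkMesh: MTKMesh, with encoder: MTLRenderCommandEncoder) {
    for (index, vertexBuffer) in mtkMesh.vertexBuffers.enumerated() {
        encoder.setVertexBuffer(vertexBuffer.buffer, offset: vertexBuffer.offset, index: index)
    }
    for submesh in mtkMesh.submeshes {
        encoder.drawIndexedPrimitives(type: submesh.primitiveType,
                                      indexCount: submesh.indexCount,
                                      indexType: submesh.indexType,
                                      indexBuffer: submesh.indexBuffer.buffer,
                                      indexBufferOffset: submesh.indexBuffer.offset)
    }
}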
Hello,
The macOS 26 betas are limiting games (noticeably, games that use Java) to the native refresh rate of the MacBook Pro's display (120 Hz). Even connecting an external display does not change this. I have submitted a bug report, but I have not had any response to it yet. I am looking to see if anyone has an answer or fix for this issue.
Thanks!
I'm running into an issue with collisions between two entities that have a character controller component. In the collision handler for moveCharacter, the collision has both hitEntity and characterEntity set to the same object: the entity that was moved with moveCharacter().
The example below configures three objects:
a stationary sphere with a character controller
a falling sphere with a character controller
a stationary cube with a collision component
If the falling sphere hits the stationary sphere, the collision handler reports both hitEntity and characterEntity as the falling sphere. I would expect hitEntity to be the stationary sphere and characterEntity to be the falling sphere.
If the falling sphere hits the cube with a collision component, the hitEntity is the cube and the characterEntity is the falling sphere, as expected.
Is this the expected behavior? The entities act as expected visually; however, if I want the spheres to react differently depending on which character they collided with, I am not getting the expected results. For example: if a player-controlled character collides with an NPC, exchange resources with the NPC; if the player collides with an enemy, take damage.
import SwiftUI
import RealityKit

struct ContentView: View {
    @State var root: Entity = Entity()
    @State var stationary: Entity = createCharacter(named: "stationary", radius: 0.05, color: .blue)
    @State var falling: Entity = createCharacter(named: "falling", radius: 0.05, color: .red)
    @State var collisionCube: Entity = createCollisionCube(named: "cube", size: 0.1, color: .green)

    // Relative to root.
    @State var fallFrom: SIMD3<Float> = [0, 0.5, 0]

    var body: some View {
        RealityView { content in
            content.add(root)
            root.position = [0, -0.5, 0.0]

            root.addChild(stationary)
            stationary.position = [0, 0.05, 0]

            root.addChild(falling)
            falling.position = fallFrom

            root.addChild(collisionCube)
            collisionCube.position = [0.2, 0, 0]
            collisionCube.components.set(InputTargetComponent())
        }
        .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { tap in
            let tapPosition = tap.entity.position(relativeTo: root)
            falling.components.remove(FallComponent.self)
            falling.teleportCharacter(to: tapPosition + fallFrom, relativeTo: root)
        })
        .toolbar {
            ToolbarItemGroup(placement: .bottomOrnament) {
                HStack {
                    Button("Drop") {
                        falling.components.set(FallComponent(speed: 0.4))
                    }
                    Button("Reset") {
                        falling.components.remove(FallComponent.self)
                        falling.teleportCharacter(to: fallFrom, relativeTo: root)
                    }
                }
            }
        }
    }
}

@MainActor
func createCharacter(named name: String, radius: Float, color: UIColor) -> Entity {
    let character = ModelEntity(mesh: .generateSphere(radius: radius), materials: [SimpleMaterial(color: color, isMetallic: false)])
    character.name = name
    character.components.set(CharacterControllerComponent(radius: radius, height: radius))
    return character
}

@MainActor
func createCollisionCube(named name: String, size: Float, color: UIColor) -> Entity {
    let cube = ModelEntity(mesh: .generateBox(size: size), materials: [SimpleMaterial(color: color, isMetallic: false)])
    cube.name = name
    cube.generateCollisionShapes(recursive: true)
    return cube
}

struct FallComponent: Component {
    let speed: Float
}

struct FallSystem: System {
    static let predicate: QueryPredicate<Entity> = .has(FallComponent.self) && .has(CharacterControllerComponent.self)
    static let query: EntityQuery = .init(where: predicate)
    let down: SIMD3<Float> = [0, -1, 0]

    init(scene: RealityKit.Scene) {
    }

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime)
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            let speed = entity.components[FallComponent.self]?.speed ?? 0.5
            entity.moveCharacter(by: down * speed * deltaTime, deltaTime: deltaTime, relativeTo: nil) { collision in
                if collision.hitEntity == collision.characterEntity {
                    print("hit entity has collided with itself")
                }
                print("\(collision.characterEntity.name) collided with \(collision.hitEntity.name)")
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
Hi,
Apple’s documentation on Order-Independent Transparency (OIT) describes an approach using image blocks, where an array of size 4 is allocated per fragment to store depth and color in a tile shading compute pass.
However, when increasing the scene’s depth complexity by adding more overlapping quads, the OIT implementation fails due to the fixed array size.
Is there a way to dynamically allocate storage for fragments based on actual depth complexity encountered during rasterization, rather than using a fixed-size array? Specifically, can an adaptive array of fragments be maintained and sorted by depth, where the size grows as needed instead of being limited to 4 entries?
Any insights or alternative approaches would be greatly appreciated.
Thank you!
The title is self-explanatory: I wasn't able to find CAMetalDisplayLink in the most recent metal-cpp release (metal-cpp_macOS15_iOS18-beta). Are there any plans to include it in the next release?
Following the documentation at
https://developer.apple.com/documentation/realitykit/custommaterial, it's simple to use shaders for materials and get uniforms and parameters for each vertex. However, CustomMaterial isn't available on visionOS. Is there any alternative to use in this case? I want to write shaders to fill the material myself. (I have shader experience from the web and am familiar with fragment shaders.)