After watching the WWDC 2025 session "Combine Metal 4 machine learning and graphics", I decided to give it a shot and integrate the new MTL4MachineLearningCommandEncoder into my existing render pipeline. After a lot of trial and error, I managed to set up the pipeline and get the app to compile.
However, I am now stuck on creating a MTLLibrary with .mtlpackage.
Here is the code I have to create a MTLLibrary, following the WWDC session https://developer.apple.com/videos/play/wwdc2025/262/?time=550:
let coreMLFilePath = bundle.path(forResource: "my_model", ofType: "mtlpackage")!
let coreMLURL = URL(string: coreMLFilePath)!
do {
    let library = try metalDevice.makeLibrary(URL: coreMLURL)
} catch {
    print("error: \(error)")
}
With the above code, I am getting error:
Error Domain=MTLLibraryErrorDomain Code=1 "Invalid metal package" UserInfo={NSLocalizedDescription=Invalid metal package}
What is the correct way to create a MTLLibrary with .mtlpackage? Do I see this error because the .mtlpackage I am using is incorrect? How should I go with debugging this?
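One thing I'm double-checking as a debugging step: URL(string:) on a plain file path produces a URL without a file:// scheme, so my next attempt is a proper file URL (a minimal sketch reusing the names from above):
let packageURL = URL(fileURLWithPath: coreMLFilePath)
// Or ask the bundle for the URL directly:
// let packageURL = bundle.url(forResource: "my_model", withExtension: "mtlpackage")!
do {
    let library = try metalDevice.makeLibrary(URL: packageURL)
} catch {
    print("error: \(error)")
}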
I'd really appreciate if I could get some help on this as I have been stuck with it for some time now. Thanks in advance!
Question:
I'm encountering an issue with in-app purchases (IAP) in Unity, specifically for a non-consumable product in the iOS sandbox environment. Below are the details:
Environment:
Unity Version: 2022.3.55f1
Unity In-App Purchasing Version: v4.12.2
Device: iPhone 15, iOS 18.1.1
Connection: Wi-Fi
iOS Settings: In-App Purchases set to "Allowed" initially
Problem Behavior:
I attempted to purchase a non-consumable item for the first time. The payment is successfully completed by entering the password.
I then background the game app and navigate to the iOS Settings to set In-App Purchases to "Don't Allow."
After returning to the game and either closing or killing the app, I try to purchase the same non-consumable item again.
I checked canMakePayments() through the Apple configuration, and the app correctly detected that I could not make purchases due to the restriction.
I then navigate back to Settings and set In-App Purchases to "Allow."
Upon returning to the game, I try purchasing the non-consumable item again. A pop-up appears, saying, "You’ve already purchased this. Would you like to get it again for free?"
The issue is: will it deduct money a second time, and why is the system allowing the user to purchase the same non-consumable item multiple times after purchasing it once?
Is this the expected behavior for Unity In-App Purchasing, or is there something I might be missing in handling non-consumable purchases in this scenario?
Additional Information:
I’ve confirmed that the "In-App Purchases" are set to “Allowed” before attempting the purchase again.
I understand that non-consumable products should not be purchased more than once, so I’m unsure why the system is offering to let the user purchase it again.
I appreciate any insights into whether this is expected behavior or if I need to adjust how I handle the purchase flow.
Context
I’m deploying large language models on iPhone using llama.cpp. A new iPhone Air (12 GB RAM) reports a Metal MTLDevice.recommendedMaxWorkingSetSize of 8,192 MB, and my attempt to load Llama-2-13B Q4_K (~7.32 GB weights) fails during model initialization.
Environment
Device: iPhone Air (12 GB RAM)
iOS: 26
Xcode: 26.0.1
Build: llama.cpp with the Metal backend enabled
App runs on device (not Simulator)
What I’m seeing
MTLCreateSystemDefaultDevice().recommendedMaxWorkingSetSize == 8192 MiB
Loading Llama-2-13B Q4_K (7.32 GB) fails to complete. Logs indicate memory pressure / allocation issues consistent with the 8 GB working-set guidance.
Smaller models (e.g., 7B/8B with similar quantization) load and run (8B Q4_K provides around 9 tokens/second decoding speed).
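For reference, here is how I log the limit next to the current allocation at model-load time (a minimal sketch; both properties are standard MTLDevice API, the logging is mine):
import Metal

let device = MTLCreateSystemDefaultDevice()!
// Advisory working-set ceiling vs. what this device has allocated so far.
let limitMiB = Double(device.recommendedMaxWorkingSetSize) / (1024 * 1024)
let usedMiB = Double(device.currentAllocatedSize) / (1024 * 1024)
print("working-set limit: \(limitMiB) MiB, currently allocated: \(usedMiB) MiB")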
Questions
Is 8,192 MiB an expected recommendedMaxWorkingSetSize on a 12 GB iPhone?
What values should I expect on other 2025 devices, including iPhone 17 (8 GB RAM) and iPhone 17 Pro (12 GB RAM)?
Is the limit strictly enforced for Metal allocations (heaps/buffers), or advisory for best performance/eviction behavior?
Can a process practically exceed it for long-lived buffers without immediate Jetsam risk?
Any guidance for LLM scenarios near the limit?
Hi all,
I've developed some code that enables an arcball camera interaction with my scene, done with components and systems. The implementation feels a bit messy: I've got gesture code on my RealityView, and then a bunch of other code that consumes those gesture inputs in my component and system.
Is there a demo app, or some example code, that shows a nice way to encapsulate these things into one unit for custom cameras, something like Apple's .realityViewCameraControls(.orbit)?
If not, can anyone recommend an approach to take?
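For context, this is roughly the shape of what I have today, split between gesture code on the RealityView and the ECS side (a minimal sketch; OrbitCameraComponent and OrbitCameraSystem are my own illustrative names, and registering them at startup is assumed):
import Foundation
import RealityKit

// Gesture-driven orbit state; the RealityView gesture code writes these values.
struct OrbitCameraComponent: Component {
    var target: SIMD3<Float> = .zero
    var radius: Float = 2
    var azimuth: Float = 0    // radians, from horizontal drag
    var elevation: Float = 0  // radians, from vertical drag
}

// System that repositions the camera entity every frame from that state.
struct OrbitCameraSystem: System {
    static let query = EntityQuery(where: .has(OrbitCameraComponent.self))
    init(scene: RealityKit.Scene) {}
    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let orbit = entity.components[OrbitCameraComponent.self] else { continue }
            let offset = SIMD3<Float>(
                orbit.radius * cos(orbit.elevation) * sin(orbit.azimuth),
                orbit.radius * sin(orbit.elevation),
                orbit.radius * cos(orbit.elevation) * cos(orbit.azimuth))
            // Place the camera on the orbit sphere and aim it at the target.
            entity.look(at: orbit.target, from: orbit.target + offset, relativeTo: entity.parent)
        }
    }
}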
I have been tasked with creating content for the Apple Vision Pro. Just the 3D content and animation, not the programming end of things.
I can't seem to get any kind of mesh deformation animation to import into Reality Composer Pro. By that I mean bones/skin, or even point cache.
I work on PC, and my main software is 3ds Max, but I'm borrowing an iMac for this job and was instructed to use RCP on it for testing before handing things off to the programmer. My files open and play fine in other USD programs, like Omniverse or usdview, just not in Reality Composer Pro.
I've seen the dinosaur demo in AVP, so I know mesh deformation is possible. If there are other essential tools that might make this possible, I have not been made aware of them.
I am experimenting with round-tripping assets through Blender, in case that exports better, but I'm not really having luck there either, though my results are different.
Thanks.
Topic: Graphics & Games, SubTopic: RealityKit
I recently needed to develop an application to obtain the window list, which requires Screen Recording permissions. Apple's official documentation mentions using the two functions CGPreflightScreenCaptureAccess and CGRequestScreenCaptureAccess to request permissions. These functions are stated to be available since version 10.15. However, when I used these two functions on a device running macOS 10.15.7, I encountered the errors shown in the attached screenshot. I used the nm tool to inspect the symbols in the CoreGraphics.framework and found that these two functions were not present. Could you help me understand why this is happening?
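For reference, this is the call pattern I'm using, per the documentation (a minimal sketch):
import CoreGraphics

if CGPreflightScreenCaptureAccess() {
    // Already authorized for screen capture.
} else {
    // Prompts the user (or points them to System Settings) and returns the result.
    let granted = CGRequestScreenCaptureAccess()
    print("screen capture access granted: \(granted)")
}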
Topic: Graphics & Games, SubTopic: General
Hi,
When analyzing our game in Instruments, I've always been confused by the two items "Drawable Present" and "Drawable Presented" in the GPU track. The timing of Drawable Present seems to be when the CPU calls present on the command buffer, rather than when the actual encoding completes on the GPU. Also, what does Drawable Presented specifically mean? In our case, when a CPU stall occurs, the vsync interval appears to change in the next frame, and a surface that has already been rendered is not displayed. Why is this happening?
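For what it's worth, we can log the moment a drawable actually reaches the display, as opposed to when present is encoded on the CPU timeline, via the drawable's presented handler (a minimal sketch; metalLayer and commandBuffer are our own objects):
if let drawable = metalLayer.nextDrawable() {
    drawable.addPresentedHandler { presented in
        // Fires once the drawable has actually been shown on screen.
        print("presented at host time \(presented.presentedTime)")
    }
    // ... encode the frame ...
    commandBuffer.present(drawable)
    commandBuffer.commit()
}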
The simplest RealityView { content, attachments in ... }
causes "Contextual closure expects 1 argument but 2 were used in closure body". I have checked every example and I cannot understand why I get this error regardless of the content. Note: I have added Attachment(id: "test") to the attachments closure and get "Attachment not in scope".
I imported both RealityKit and SwiftUI.
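For reference, this is the minimal form I am attempting (a sketch of my test view; "test" is just my placeholder id):
import SwiftUI
import RealityKit

struct TestView: View {
    var body: some View {
        RealityView { content, attachments in
            // Look up the SwiftUI attachment by id and add it to the scene.
            if let panel = attachments.entity(for: "test") {
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "test") {
                Text("test")
            }
        }
    }
}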
We have a macOS app (not yet released, but in use by ourselves), that provides scoreboards for streaming sport events.
Today it is expected that there are nice animations for goals, etc. We are streaming using NDI, which requires a CVPixelBuffer for each frame.
We currently create these animations using CABasicAnimation, CAAnimation and CAKeyframeAnimation. In addition we use ScreenCaptureKit to generate the frames.
This works fine at 25/30 fps as long as the window our animations are performed in is visible. But this is not how it should be: our main app window is a smaller control display that shows the animations at reduced size, while the streamed animations need to be in HD and later maybe 4K.
When using an offscreen window, the animations are not calculated; we get about 1 frame per second. So we currently have to connect an external display to the MacBook and open the large window there. An ugly solution.
Do we use a completely wrong approach? Or is there a way to tell the macOS to perform the animations although it is an offscreen window?
If it cannot work that way, what is an alternative?
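One direction we are considering, but have not verified (a sketch; texture and animationLayer stand in for our own objects): rendering the layer tree off-window with CARenderer into a Metal texture, so Core Animation evaluates the animations without any visible window.
import QuartzCore

let renderer = CARenderer(mtlTexture: texture, options: nil)
renderer.layer = animationLayer
renderer.bounds = animationLayer.bounds

func renderFrame(at time: CFTimeInterval) {
    renderer.beginFrame(atTime: time, timeStamp: nil)
    renderer.addUpdate(renderer.bounds)
    renderer.render()
    renderer.endFrame()
    // The texture now holds the frame; copy it into a CVPixelBuffer for NDI.
}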
I'm running into an issue with collisions between two entities that each have a character controller component. In the collision handler for moveCharacter, the collision has both hitEntity and characterEntity set to the same object: the entity that was moved with moveCharacter().
The example below configures three objects:
a stationary sphere with a character controller
a falling sphere with a character controller
a stationary cube with a collision component
If the falling sphere hits the stationary sphere, the collision handler reports both hitEntity and characterEntity to be the falling sphere. I would expect hitEntity to be the stationary sphere and characterEntity to be the falling sphere.
If the falling sphere hits the cube with a collision component, the hitEntity is the cube and the characterEntity is the falling sphere, as expected.
Is this the expected behavior? The entities act as expected visually; however, if I want the spheres to react differently depending on which character they collided with, I am not getting the expected results. For example: if a player-controlled character collides with an NPC, exchange resources with the NPC; if the player collides with an enemy, take damage.
import SwiftUI
import RealityKit
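// Note: registering the custom component and system is assumed to happen once
// at startup (e.g. in the App initializer) and is omitted from this sample:
// FallComponent.registerComponent()
// FallSystem.registerSystem()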
struct ContentView: View {
@State var root: Entity = Entity()
@State var stationary: Entity = createCharacter(named: "stationary", radius: 0.05, color: .blue)
@State var falling: Entity = createCharacter(named: "falling", radius: 0.05, color: .red)
@State var collisionCube: Entity = createCollisionCube(named: "cube", size: 0.1, color: .green)
//relative to root
@State var fallFrom: SIMD3<Float> = [0,0.5,0]
var body: some View {
RealityView { content in
content.add(root)
root.position = [0,-0.5,0.0]
root.addChild(stationary)
stationary.position = [0,0.05,0]
root.addChild(falling)
falling.position = fallFrom
root.addChild(collisionCube)
collisionCube.position = [0.2,0,0]
collisionCube.components.set(InputTargetComponent())
}
.gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { tap in
let tapPosition = tap.entity.position(relativeTo: root)
falling.components.remove(FallComponent.self)
falling.teleportCharacter(to: tapPosition + fallFrom, relativeTo: root)
})
.toolbar {
ToolbarItemGroup(placement: .bottomOrnament) {
HStack {
Button("Drop") {
falling.components.set(FallComponent(speed: 0.4))
}
Button("Reset") {
falling.components.remove(FallComponent.self)
falling.teleportCharacter(to: fallFrom, relativeTo: root)
}
}
}
}
}
}
@MainActor
func createCharacter(named name: String, radius: Float, color: UIColor) -> Entity {
let character = ModelEntity(mesh: .generateSphere(radius: radius), materials: [SimpleMaterial(color: color, isMetallic: false)])
character.name = name
character.components.set(CharacterControllerComponent(radius: radius, height: radius))
return character
}
@MainActor
func createCollisionCube(named name: String, size: Float, color: UIColor) -> Entity {
let cube = ModelEntity(mesh: .generateBox(size: size), materials: [SimpleMaterial(color: color, isMetallic: false)])
cube.name = name
cube.generateCollisionShapes(recursive: true)
return cube
}
struct FallComponent: Component {
let speed: Float
}
struct FallSystem: System {
static let predicate: QueryPredicate<Entity> = .has(FallComponent.self) && .has(CharacterControllerComponent.self)
static let query: EntityQuery = .init(where: predicate)
let down: SIMD3<Float> = [0,-1,0]
init(scene: RealityKit.Scene) {
}
func update(context: SceneUpdateContext) {
let deltaTime = Float(context.deltaTime)
for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
let speed = entity.components[FallComponent.self]?.speed ?? 0.5
entity.moveCharacter(by: down * speed * deltaTime, deltaTime: deltaTime, relativeTo: nil) { collision in
if collision.hitEntity == collision.characterEntity {
print("hit entity has collided with itself")
}
print("\(collision.characterEntity.name) collided with \(collision.hitEntity.name) ")
}
}
}
}
#Preview(windowStyle: .volumetric) {
ContentView()
}
Anyone else unable to download the "Rendering a Scene with Deferred Lighting in C++" (https://developer.apple.com/documentation/metal/rendering-a-scene-with-deferred-lighting-in-c++?language=objc)?
I just get an error page:
Is there another place to download this sample?
Topic: Graphics & Games, SubTopic: Metal
During normal gameplay, calling the assetBundle.Unload API very frequently causes the game's rendering to freeze, while the background music keeps playing normally. This problem only occurs on iPhone 16 and iPhone 17 devices; older devices have no issue at all. How can we resolve this?
I am developing a macOS terminal app, running on an M4 Pro, and using Metal.
I am not able to use float8 or float16; both report "Variable has incomplete type 'float16' (aka '__Reserved_Name__Do_not_use_float16')".
Based on the system, I should be able to use these. Either it is because it is also compiling for Intel, where they are not allowed, or something else; either way, I have not been able to figure out how to get past this.
Is there a compiler setting I need to set to make this work? If so, which one, and what value does it need? I only want to run this on M-series processors on the latest OS version, so I am not interested in an Intel version or backward compatibility.
I have been trying to run an open-source Windows executable that I would like to help port to macOS using the Game Porting Toolkit, but I stumbled on an issue quite early in the application lifecycle.
It looks like the function GetThreadDpiHostingBehavior is missing from USER32.dll.
Has anyone any idea how to solve that?
During the startup, it fails with the following error:
TiXL crashed. We're really sorry.
The last backup was saved Unknown time to...
C:\users\crossover\AppData\Roaming\TiXL\Backup
Please refer to Help > Using Backups on what to do next.
System.EntryPointNotFoundException: Unable to find an entry point named 'GetThreadDpiHostingBehavior' in DLL 'USER32.dll'.
at System.Windows.Forms.ScaleHelper.DpiAwarenessScope..ctor(DPI_AWARENESS_CONTEXT context, DPI_HOSTING_BEHAVIOR behavior)
at System.Windows.Forms.ScaleHelper.EnterDpiAwarenessScope(DPI_AWARENESS_CONTEXT awareness, DPI_HOSTING_BEHAVIOR dpiHosting)
at System.Windows.Forms.NativeWindow.CreateHandle(CreateParams cp)
at System.Windows.Forms.Control.CreateHandle()
at System.Windows.Forms.Application.ThreadContext.get_MarshallingControl()
at System.Windows.Forms.WindowsFormsSynchronizationContext..ctor()
at System.Windows.Forms.WindowsFormsSynchronizationContext.InstallIfNeeded()
at System.Windows.Forms.Control..ctor(Boolean autoInstallSyncContext)
at System.Windows.Forms.ScrollableControl..ctor()
at System.Windows.Forms.ContainerControl..ctor()
at System.Windows.Forms.Form..ctor()
at T3.Editor.SplashScreen.SplashScreen.SplashForm..ctor()
at T3.Editor.SplashScreen.SplashScreen.Show(String imagePath) in C:\Users\pixtur\dev\tooll\tixl\Editor\SplashScreen\SplashScreen.cs:line 25
at T3.Editor.Program.Main(String[] args) in C:\Users\pixtur\dev\tooll\tixl\Editor\Program.cs:line 111
Hi there,
I'm wondering if it's possible under the iOS 28 developer beta to enable the MetalFX scaling info in the Metal Performance HUD with {"MTL_HUD_ENABLED": "1"} for my app.
This information has been added to Mac, but looks to be absent on iPhone / iPad
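For reference, the programmatic opt-in I plan to test for launches outside Xcode (a sketch, assuming the environment-variable route behaves the same on iOS as it does on macOS):
import Darwin

// Must run before the first Metal device/layer is created.
setenv("MTL_HUD_ENABLED", "1", 1)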
Hi,
How to enable multitouch on ARView?
Touch functions (touchesBegan, touchesMoved, ...) seem to only handle one touch at a time. In order to handle multiple touches at a time with ARView, I have to either:
Use SwiftUI .simultaneousGesture on top of an ARView representable
Position a UIView on top of ARView to capture touches and do hit testing by passing a reference to ARView
Expected behavior:
ARView should capture all touches via touchesBegan/Moved/Ended/Cancelled.
Here is what I tried, on iOS 26.1 and macOS 26.1:
ARView Multitouch
The setup below is a minimal ARView presented by SwiftUI, with touch events handled inside ARView. Multitouch doesn't work with this setup.
Note that multitouch wouldn't work either if the ARView is presented with a UIViewController instead of SwiftUI.
import RealityKit
import SwiftUI
struct ARViewMultiTouchView: View {
var body: some View {
ZStack {
ARViewMultiTouchRepresentable()
.ignoresSafeArea()
}
}
}
#Preview {
ARViewMultiTouchView()
}
// MARK: Representable ARView
struct ARViewMultiTouchRepresentable: UIViewRepresentable {
func makeUIView(context: Context) -> ARView {
let arView = ARViewMultiTouch(frame: .zero)
let anchor = AnchorEntity()
arView.scene.addAnchor(anchor)
let boxWidth: Float = 0.4
let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
box.name = "Box"
box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
anchor.addChild(box)
return arView
}
func updateUIView(_ uiView: ARView, context: Context) { }
}
// MARK: ARView
class ARViewMultiTouch: ARView {
required init(frame: CGRect) {
super.init(frame: frame)
/// Enable multi-touch
isMultipleTouchEnabled = true
cameraMode = .nonAR
automaticallyConfigureSession = false
environment.background = .color(.gray)
/// Disable gesture recognizers to not conflict with touch events
/// But it doesn't fix the issue
gestureRecognizers?.forEach { $0.isEnabled = false }
}
required dynamic init?(coder decoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
for touch in touches {
/// # Problem
/// This should print for every new touch, up to 5 simultaneously on an iPhone (multi-touch)
/// But it only fires for one touch at a time (single-touch)
print("Touch began at: \(touch.location(in: self))")
}
}
}
Multitouch with an Overlay
This setup works, but it doesn't seem right. There must be a way to make ARView handle multitouch directly, right?
import SwiftUI
import RealityKit
struct MultiTouchOverlayView: View {
var body: some View {
ZStack {
MultiTouchOverlayRepresentable()
.ignoresSafeArea()
Text("Multi touch with overlay view")
.font(.system(size: 24, weight: .medium))
.foregroundStyle(.white)
.offset(CGSize(width: 0, height: -150))
}
}
}
#Preview {
MultiTouchOverlayView()
}
// MARK: Representable Container
struct MultiTouchOverlayRepresentable: UIViewRepresentable {
func makeUIView(context: Context) -> UIView {
/// The view that SwiftUI will present
let container = UIView()
/// ARView
let arView = ARView(frame: container.bounds)
arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
arView.cameraMode = .nonAR
arView.automaticallyConfigureSession = false
arView.environment.background = .color(.gray)
let anchor = AnchorEntity()
arView.scene.addAnchor(anchor)
let boxWidth: Float = 0.4
let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
box.name = "Box"
box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
anchor.addChild(box)
/// The view that will capture touches
let touchOverlay = TouchOverlayView(frame: container.bounds)
touchOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
touchOverlay.backgroundColor = .clear
/// Pass an arView reference to the overlay for hit testing
touchOverlay.arView = arView
/// Add views to the container.
/// ARView goes in first, at the bottom.
container.addSubview(arView)
/// TouchOverlay goes in last, on top.
container.addSubview(touchOverlay)
return container
}
func updateUIView(_ uiView: UIView, context: Context) {
}
}
// MARK: Touch Overlay View
/// A UIView to handle multi-touch on top of ARView
class TouchOverlayView: UIView {
weak var arView: ARView?
override init(frame: CGRect) {
super.init(frame: frame)
isMultipleTouchEnabled = true
isUserInteractionEnabled = true
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
let totalTouches = event?.allTouches?.count ?? touches.count
print("--- Touches Began --- (New: \(touches.count), Total: \(totalTouches))")
for touch in touches {
let location = touch.location(in: self)
/// Hit testing.
/// ARView and Touch View must be of the same size
if let arView = arView {
let entity = arView.entity(at: location)
if let entity = entity {
print("Touched entity: \(entity.name)")
} else {
print("Touched: none")
}
}
}
}
override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
let totalTouches = event?.allTouches?.count ?? touches.count
print("--- Touches Cancelled --- (Cancelled: \(touches.count), Total: \(totalTouches))")
}
}
Hi Apple,
In visionOS, for real-time streaming of large 3D scenes, I plan to create Metal buffers and textures on multiple threads and then use a compute shader on the main thread to copy the Metal resources into RealityKit, minimizing main-thread usage. Given that most of RealityKit's default APIs must run on the main actor (main thread), they are not ideal for streaming data. Is this approach the best way to handle streaming data and real-time rendering?
Thank you very much.
Hi, I am using Xcode 26.x, but my Metal 4 code is not compiling. I downloaded the sample code from Apple's website (https://developer.apple.com/documentation/Metal/processing-a-texture-in-a-compute-function). For example, I am getting errors like "Cannot find protocol declaration for 'MTL4CommandQueue'".
I have hit a deadline, so any recommendations are very welcome.
I have downloaded the Metal Toolchain. When I run the following commands in the terminal (xcodebuild -showComponent metalToolchain ; xcrun -f metal ; xcrun metal --version),
I get the following response:
Asset Path: /System/Library/AssetsV2/com_apple_MobileAsset_MetalToolchain/86fbaf7b114a899754307896c0bfd52ffbf4fded.asset/AssetData
Build Version: 17A321
Status: installed
Toolchain Identifier: com.apple.dt.toolchain.Metal.32023
Toolchain Search Path: /Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded
/Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded/Metal.xctoolchain/usr/bin/metal
Apple metal version 32023.830 (metalfe-32023.830.2)
Target: air64-apple-darwin24.6.0
Thread model: posix
InstalledDir: /Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded/Metal.xctoolchain/usr/metal/current/bin
I have implemented Game Center for authentication and for saving the player's game data. Both authentication and saving work correctly all the time, but there is a problem with fetching and loading the data.
The game works like this:
At the startup, I start the authentication
After the player successfully logs in, I start loading the player's data by calling the fetchSavedGames method
If a game data exists for the player, I receive a list of SavedGame object containing the player's data
The problem is that after I uninstall the game and install it again, sometimes the SavedGame list is empty (step 3). But if I don't uninstall the game and just reopen it, this process works fine.
Here's the complete code of Game Center implementation:
import GameKit
import UIKit

class GameCenterHandler {
public func signIn() {
GKLocalPlayer.local.authenticateHandler = { viewController, error in
if let viewController = viewController {
// Present the sign-in UI from the app's root view controller; presenting
// viewController on itself (as originally written) does not work.
let rootVC = UIApplication.shared.connectedScenes.compactMap { ($0 as? UIWindowScene)?.keyWindow }.first?.rootViewController
rootVC?.present(viewController, animated: false)
return
}
if error != nil {
// Player could not be authenticated.
// Disable Game Center in the game.
return
}
// Auth successful
self.load(filename: "TestFileName")
}
}
public func save(filename: String, data: String) {
if GKLocalPlayer.local.isAuthenticated {
GKLocalPlayer.local.saveGameData(Data(data.utf8), withName: filename) { savedGame, error in
if savedGame != nil {
// Data saved successfully
}
if error != nil {
// Error in saving game data!
}
}
} else {
// Error in saving game data! User is not authenticated"
}
}
public func load(filename: String) {
if GKLocalPlayer.local.isAuthenticated {
GKLocalPlayer.local.fetchSavedGames { games, error in
if let game = games?.first(where: {$0.name == filename}){
game.loadData { data, error in
if data != nil {
// Data loaded successfully
}
if error != nil {
// Error in loading game data!
}
}
} else {
// Error in loading game data! Filename not found
}
}
} else {
// Error in loading game data! User is not authenticated
}
}
}
I have also added the Game Center and iCloud capabilities in Xcode. In the iCloud section, I selected iCloud Documents and added a container.
I found a similar question here, but it doesn't make things clearer.
The farther away the center of a large entity is, the less accurate the positioning is?
For example, I am changing only the y-axis position of an entity that is tens of meters long, but I notice x and z drifting slowly, more so the farther away the center of the entity is. I would not expect x and z to move.
It might be compounding rounding errors somewhere, or maybe the RealityKit engine decides not to be super precise about distant objects. Otherwise, I just have a bug somewhere.
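For reference, a quick way to sanity-check the rounding-error hypothesis independent of RealityKit: the representable step (ulp) of a 32-bit float grows with magnitude, so coordinates far from the origin quantize more coarsely.
// Print Float32 granularity at increasing distances from the origin.
let distances: [Float] = [1, 10, 100, 1000]
for d in distances {
    print("ulp at \(d) m: \(d.ulp)")
}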
Topic: Graphics & Games, SubTopic: RealityKit