I'm an iOS developer, and I've been testing our app on the iOS 18.0 beta. I noticed a problem with font rendering, and after troubleshooting I found that it's caused by the removal of the PingFang.ttc font in 18.0.
I would like to ask why this font file was removed, and which font should be used to display Chinese going forward.
My test device is an iPhone 11 Pro running iOS 18.0 (22A5297). I have also tested Beta 1, and it has the same issue.
In previous system versions, the PingFang font was located at /System/Library/Fonts/LanguageSupport/PingFang.ttc. In iOS 18.0, the font file in this directory has become Kohinoor.ttc, and in my testing that font can't display Chinese either.
I traversed the following system font directories and could not find the PingFang.ttc font file (a quick runtime check is sketched after the list).
/System/Library/Fonts/AppFonts
/System/Library/Fonts/Core
/System/Library/Fonts/CoreAddition
/System/Library/Fonts/CoreUI
/System/Library/Fonts/LanguageSupport
/System/Library/Fonts/UnicodeSupport
/System/Library/Fonts/Watch
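For anyone who wants to check programmatically, here is a minimal runtime probe. It assumes the usual PingFang PostScript names such as PingFangSC-Regular (adjust for the TC/HK variants):

import UIKit

// Returns true if the Simplified Chinese PingFang face is registered.
// "PingFangSC-Regular" is an assumed PostScript name; swap in the
// TC/HK variants as needed.
func pingFangIsAvailable() -> Bool {
    UIFont(name: "PingFangSC-Regular", size: 17) != nil
}

// Dump every installed family so a missing face is easy to spot.
for family in UIFont.familyNames.sorted() {
    print(family, UIFont.fontNames(forFamilyName: family))
}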
Looking for answers, thanks for the help!
[CRITICAL] Metal API Memory Leak - Heap Memory Never Released to OS (CWE-400)
Security Classification
This issue constitutes a resource exhaustion vulnerability (CWE-400):
Aspect         Details
Type           Uncontrolled Resource Consumption
CWE            CWE-400
Vector         Local (any Metal application)
Impact         System instability, denial of service
User Control   None - no mitigation available
Recovery       Requires application restart
Summary
Metal heap allocations are never released back to macOS, even when the memory is entirely unused. This causes continuous, unbounded memory growth until system instability or crash. The issue affects any application using Metal API heap allocation.
This was discovered in Unreal Engine 5, but reproduces in a completely blank UE5 project with zero application code - confirming this is Metal framework behavior, not application-level.
Environment
OS: macOS Tahoe 26.2
Hardware: Apple Silicon M4 Max (also reproduced on M1, M2, M3)
API: Metal
Reproduction Steps
Run any Metal application that allocates and deallocates GPU buffers via Metal heaps
Open Activity Monitor and observe the application's memory usage
Let the application run idle (no user interaction required)
Observe memory growing continuously at ~1-2 MB per second
Memory never plateaus or stabilizes
Eventually system becomes unstable
For testing: Any Unreal Engine 5.4+ project on macOS will reproduce this. Even a blank project with no gameplay code exhibits the leak. (Tested on UE 5.7.1)
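For readers without UE5, here is a minimal sketch of the allocation pattern we believe triggers it: plain MTLHeap sub-allocation followed by release (sizes and loop counts are arbitrary illustrations, not the exact UE5 behavior):

import Metal

let device = MTLCreateSystemDefaultDevice()!

// Repeatedly create a heap, sub-allocate buffers from it, then drop
// every reference. Per this report, the heap's backing memory is not
// returned to the OS, so the process footprint keeps growing.
func churnHeap() {
    let desc = MTLHeapDescriptor()
    desc.size = 256 * 1024 * 1024            // 256 MB heap (arbitrary)
    desc.storageMode = .private
    guard let heap = device.makeHeap(descriptor: desc) else { return }

    var buffers: [MTLBuffer] = []
    for _ in 0..<16 {
        if let b = heap.makeBuffer(length: 16 * 1024 * 1024,
                                   options: .storageModePrivate) {
            buffers.append(b)
        }
    }
    buffers.removeAll()   // all sub-allocations released ("unused" heap)
}                         // heap reference dropped here as well

for _ in 0..<64 { churnHeap() }   // watch the footprint in Activity Monitor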
Observed Behavior
Memory Analysis
Using Unreal's memreport -full command, two reports taken 86 seconds apart:
Metric               Report 1 (183s)   Report 2 (269s)   Delta
Process Physical     4373.64 MB        4463.39 MB        +89.75 MB
Metal Heap Buffer    7168 MB           8192 MB           +1024 MB
Unused Heap          3453 MB           4477 MB           +1024 MB
Object Count         73,840            73,840            0 (no change)
Key Finding
Metal Heap grew by exactly 1 GB while "Unused Heap" also grew by 1 GB. This demonstrates:
Metal is allocating new heap blocks in ~1 GB increments
Previously allocated heap memory becomes "unused" but is never released
The unused memory accumulates indefinitely
No application-level objects are leaking (count remains constant)
Memory Growth Pattern
Continuous growth while idle (no user interaction)
Growth rate: approximately 1-2 MB per second
No plateau or stabilization occurs
Metal allocates new 1 GB heap blocks rather than reusing freed space
Eventually leads to system instability and crash
What is NOT Causing This
We verified the following are NOT the source:
Application objects - Object count remains constant
Application code - Blank project with no code reproduces the issue
Texture streaming - Disabling texture streaming had no effect
CPU garbage collection - Running GC has no effect (this is GPU memory)
Mitigations Attempted (None Worked)
setPurgeableState
Setting resources to purgeable state before release:
[buffer setPurgeableState:MTLPurgeableStateEmpty];
Result: Metal ignores this hint and does not reclaim heap memory.
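For reference, a Swift sketch of the same hint (buffer and heap here are hypothetical references to a suballocated MTLBuffer and its parent MTLHeap):

_ = buffer.setPurgeableState(.empty)   // resource-level hint (ignored, per above)
_ = heap.setPurgeableState(.empty)     // heap-level variant of the same hint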
Avoiding Heap Pooling
Forcing individual buffer allocations instead of heap-based pooling.
Result: Leak persists - Metal still manages underlying allocations.
Aggressive Buffer Compaction
Attempting to compact/defragment buffers within heaps every frame.
Result: Only moves data between existing heaps. Does NOT release heaps back to OS.
Reducing Pool Sizes
Minimizing all buffer pool sizes to force more frequent reuse.
Result: Slightly slows the leak rate but does not stop it.
Root Cause Analysis
How Metal Heap Allocation Appears to Work
Metal allocates GPU heap blocks in large chunks (~1 GB observed)
Application requests buffers from these heaps
When application releases buffers, memory becomes "unused" within the heap
Metal does NOT release heap blocks back to macOS, even when entirely unused
When fragmentation prevents reuse, Metal allocates new heap blocks
Result: Continuous memory growth with no upper bound
The Core Problem
There appears to be no Metal API to force heap memory release. The only way to reclaim this memory is to destroy the Metal device entirely, which requires restarting the application.
Expected Behavior
Metal should:
Release unused heaps - When a heap block is entirely unused, release it back to macOS
Respect purgeable hints - Honor setPurgeableState calls from applications
Compact allocations - Defragment heap allocations to reduce fragmentation
Provide control APIs - Allow applications to request heap compaction or release
Enforce limits - Have configurable maximum heap memory consumption
Security Implications
Local Denial of Service - Any Metal application can exhaust system memory, causing instability affecting all running applications
Memory Pressure Attack - Forces other applications to swap to disk, degrading system-wide performance
No Upper Bound - Memory consumption continues until system failure
Unmitigable - End users have no way to prevent or limit the leak
Affects All Metal Apps - Any application using Metal heaps is potentially affected
Impact
Applications become unstable after extended use
System-wide performance degrades as memory pressure increases
Users must periodically restart applications
Developers cannot work around this at the application level
Long-running applications (games, creative tools, servers) are particularly affected
Request
Investigate Metal heap memory management behavior
Implement heap release when blocks become entirely unused
Honor setPurgeableState hints from applications
Consider providing an API for applications to request heap compaction
Document any intended behavior or workarounds
Additional Notes
This issue has been observed across multiple Unreal Engine versions (5.4, 5.7) and multiple Apple Silicon generations (M1 through M4). The behavior is consistent and reproducible.
The Unreal Engine team has implemented various CVars to attempt mitigation (rhi.Metal.HeapBufferBytesToCompact, rhi.Metal.ResourcePurgeInPool, etc.) but none successfully address the issue because the root cause is at the Metal framework level.
Tested: January 2026
Platform: macOS Tahoe 26.2, Apple Silicon (M1/M2/M3/M4)
How, then, are the beautiful system watch faces with smoke effects and other particles made?
I've been thinking of bringing some older games back to the modern Mac.
I'm rewriting old titles in Swift, but using the original data files, which assume a window with square (non-rounded) corners.
Many of these games require the full window area of a 90-degree-cornered window.
Can anyone point me at some useful workarounds, or is Apple simply deaf to the needs of this type of product?
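One workaround that may be worth trying (a sketch, not a guarantee it fits every title): a borderless NSWindow draws with square corners, at the cost of the standard title bar, and borderless windows refuse key status unless you override it:

import AppKit

// A borderless window has square (90-degree) corners but cannot
// become key by default, so override canBecomeKey.
final class SquareCornerWindow: NSWindow {
    override var canBecomeKey: Bool { true }
}

let window = SquareCornerWindow(
    contentRect: NSRect(x: 200, y: 200, width: 800, height: 600),
    styleMask: [.borderless],
    backing: .buffered,
    defer: false
)
window.isMovableByWindowBackground = true   // draggable without a title bar
window.makeKeyAndOrderFront(nil)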
Hello,
I am trying to use the subdivision mesh rendering option.
I can see it working in RealityComposerPro, but not when loading the asset and displaying it in the Simulator.
Using this code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AirspaceView: View {
    // MARK: - VIEW BODY
    var body: some View {
        RealityView { content in
            if let a = try? await Entity(named: "Models/Test/Test.usdc", in: realityKitContentBundle) {
                content.add(a)
            }
        }
    }
}
Any ideas why?
Is there any support, or plans for support, for raytraced reflections in RealityKit on the Vision Pro M5? I cannot find any documentation regarding this topic.
I'm updating our app to support Metal 4, but the Metal 4 types don't seem to be recognized when targeting the Simulator. Is it known whether Metal 4 will be supported there in the near future, or am I setting up the app wrong?
Is it possible to start screen recording (through Control Center) without a user prompt?
I mean: ask for the user's permission the first time, and after that start and stop recording programmatically only.
I need to record the screen only for specific events.
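The closest public API I'm aware of is ReplayKit's in-app recorder; a minimal sketch (note this captures only your own app's UI, not the whole screen, and the system still mediates consent):

import ReplayKit

let recorder = RPScreenRecorder.shared()

// Begin capturing when your specific event starts.
func startEventRecording() {
    recorder.startRecording { error in
        if let error { print("start failed:", error) }
    }
}

// Stop and let the user save or discard the clip.
func stopEventRecording() {
    recorder.stopRecording { previewController, error in
        // Present previewController from your view controller.
    }
}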
I am rewriting an unfinished SceneKit project in RealityKit (non-AR). As far as I can see, RealityKit is missing basic fog functionality?
Fog was simple and easy to implement in SceneKit (fogStartDistance / fogEndDistance / fogDensityExponent / fogColor). Are there any plans to implement something like this in RealityKit?
Are there any simple workarounds?
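For reference, the SceneKit fog model being described is just a few properties on the scene; a RealityKit equivalent of this is what's being asked for:

import SceneKit
import UIKit

// The SceneKit fog setup referred to above.
let scene = SCNScene()
scene.fogStartDistance = 10       // fully clear before this distance
scene.fogEndDistance = 100        // fully fogged past this distance
scene.fogDensityExponent = 2      // quadratic falloff
scene.fogColor = UIColor.gray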
Hi everyone
I'm experiencing an issue with iPadOS 26 regarding multi-touch gesture detection. When performing a quick four-finger gesture (tap and swipe), the system often fails to recognize the input. This especially affects multi-touch-heavy apps, such as rhythm games with difficult levels.
Steps to Reproduce:
Place four fingers on the screen.
Perform a quick tap or a quick horizontal swipe (like the one used to switch apps).
Observe whether the gesture is ignored or detected inconsistently.
Expected Behavior:
4-finger multitouch gestures should be recognized regardless of gesture speed, just like previous iPadOS versions.
Actual Behavior:
Gestures fail to be detected when executed quickly; the same gestures performed more slowly still work. In rhythm games this leads to missed notes.
You can check out my posts on Twitter/X: https://x.com/kokona_fwa/status/1978131164104728949?s=61
and Facebook: https://m.facebook.com/groups/idipad/permalink/24438964899058806/?
I am currently working on a game that involves earning achievements, which I am using the Apple Unity Plug-Ins to display. I have found that, occasionally, when opening the Game Center dashboard, the last achievement earned is not displayed until the game is closed and reopened. I am using GKAccessPoint.Shared.Trigger to display the Achievements screen, which occasionally seems to open a cached version of the dashboard. It seems to happen consistently when earning multiple achievements within one minute, but this is not always the case. Does anybody have any experience with something like this?
I am trying to learn the new Metal Performance Primitives APIs. I have added the MetalPerformancePrimitives framework and included the header in my shader code as per the documentation:
#include <MetalPerformancePrimitives/MetalPerformancePrimitives.h>
Unfortunately, Xcode complains that the header cannot be found. How do I include it properly?
I am using Xcode 26 on Tahoe. The MetalPerformancePrimitives framework is present on my machine, and I can inspect the headers in the filesystem.
I'm trying to use MTLBinaryArchive. I collected a BinaryArchive from one device and used metal-tt to translate it for all supported iPhone devices, ranging from iPhone 7 Plus to iPhone 16.
However, this BinaryArchive is quite large, around 1.5GB uncompressed, and about 500MB compressed in the IPA. I'm wondering how to address the size issue.
I watched the WWDC 2022 video, which mentioned that the operating system or app installation process would handle compatibility. Does this compatibility support different GPU chips? I tried installing an IPA with a BinaryArchive collected only from an iPhone 12 on an iPhone 13, but the BinaryArchive didn't take effect.
I also saw that Apple supports App Thinning. However, it seems that resources in the Asset Catalog cannot be accessed via URL, and creating an MTLBinaryArchive requires a URL. Is it possible for MTLBinaryArchive to be distributed through App Thinning?
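For context, loading an archive does go through a file URL on the descriptor, which is why Asset Catalog entries (not URL-addressable) don't fit directly; a sketch, with a placeholder file name:

import Metal

let device = MTLCreateSystemDefaultDevice()!
let descriptor = MTLBinaryArchiveDescriptor()
// "shaders.metallib-archive" is a hypothetical name for illustration.
descriptor.url = Bundle.main.url(forResource: "shaders",
                                 withExtension: "metallib-archive")
let archive = try device.makeBinaryArchive(descriptor: descriptor)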
The WWDC 2022 video also mentioned using the -Os optimization flag to reduce size. Is there a rough estimate of how much size reduction that achieves? Are there any methods to solve the BinaryArchive size issue without impacting performance?
Hi,
Since iOS 26 introduced the new Games app, I’ve noticed a problem when using a Nintendo Switch Pro Controller in wired USB-C mode, and also with third-party controllers that emulate it (like the GameSir X5 Lite).
In the Games app interface, only the L/R buttons respond, but the D-Pad and analog sticks don’t work at all. Once inside actual games, the controller works fine — the issue only affects the Games app UI.
What I’ve tested so far:
Xbox / PlayStation controllers → work fine in both wired and Bluetooth, including inside the Games app.
Switch Pro Controller (Bluetooth) → works fine, including in the Games app.
Switch Pro Controller (wired) → same issue as the X5 Lite, D-Pad and sticks don’t work in the Games app.
This makes it hard to use the new Games app launcher with these controllers, even though they work perfectly once a game is launched.
My question: is this an iOS bug (Apple needs to add proper support for wired Switch Pro controllers in the Games app), or something that Nintendo / GameSir would need to address?
Thanks in advance to anyone who can confirm this or provide more info.
Hi,
I am working with a large project. We compile each material to its own .metallib; they all include many common files full of inline functions, and at the end we link everything together with a single big pathtrace kernel. Everything works as expected; however, the compile times have gotten completely out of hand, taking multiple minutes to compile to native code at runtime. I gather that I can do this offline using metal-tt; however, I am wondering whether there is a way to reduce the compile times in such a scenario, and how to investigate the root cause of the problem. I suspect it could be related to the fact that every material's metallib contains duplicates of all the inline functions. Any ideas on how to profile and debug this?
Thanks,
Rasmus
Hello
If you add a ModelEntity to a world inside a portal, the drawing of the model is occluded properly at the portal bounds.
However, the invisible shapes of the InputTargetComponent and CollisionComponent are not occluded: they can cross the portal, and if you have gestures on your ModelEntity, you can trigger them in areas outside the portal bounds. This happens even if the ModelEntity has no PortalCrossingComponent.
I want to use SwiftUI and RealityView to get AR scene understanding data (ARMeshAnchor) on iOS devices with LiDAR. The only way we can do that is by using ARSession (unless there is another way).
However in previous iOS 18 builds there was this function:
https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:session:arconfiguration:)
, which worked with SpatialTrackingSession and a custom ARSession together. This function has since been removed from the RealityKit framework in the latest iOS and Xcode, but it is still in the documentation.
I also wanted to get ARFaceAnchor data which I still cannot get without ARSession, the closest I can get is by using:
let target = AnchoringComponent.Target.face
let anchoringComponent = AnchoringComponent(target, trackingMode: .predicted)
entity = Entity()
entity!.components.set(anchoringComponent)
But I still can't find a way to get the current frame (ARFrame) or the anchors ([ARAnchor]) in the view.
Alternatively, if I use this function: https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:) and start the ARSession separately, the session delegate methods (didUpdate and didAdd) only run for a few frames before getting interrupted.
And if I completely remove SpatialTrackingConfiguration and just run the ARSession, there is still a valid tracked entity for the AnchoringComponent.Target.face component, provided the ARSession is configured with ARWorldTrackingConfiguration with face tracking enabled, and I still get updated facial data each frame. But the ARSession didUpdate and didAdd functions stop being called after the first few frames.
Interestingly, if I switch the RealityViewCameraContent.RealityViewCamera to .virtual, I get ARMeshAnchor and ARFaceAnchor data, but no camera feed (as expected). This happens with or without SpatialTrackingConfiguration.
My overarching question is what is the proper way to access ARMeshAnchors and other ARAnchors created by the system and track them live while also using SwiftUI.
GitHub Repo with sample project can be found here: https://github.com/bpate75/RealityViewTesting
I have a visionOS app that I'm adding iOS support to, and I would like to keep using RealityView.
I know there are the following modifiers to add some navigation
.realityViewCameraControls(.orbit)
.realityViewCameraControls(.dolly)
.realityViewCameraControls(.pan)
But how can I add more than one? For example, I would like to orbit with one finger, pan with two fingers, and dolly by pinching. Is this possible, and if so, can someone share some sample code on how to achieve that?
Thanks,
Guillermo
Hi all,
I'm encountering an issue with Metal raytracing on my M5 MacBook Pro regarding Instance Acceleration Structure (IAS).
Intersection tests suddenly stop working after a certain point in the sampling loop.
Situation
I implemented an offline GPU path tracer that runs the same kernel multiple times per pixel (sampleCount) using metal::raytracing.
Intersection tests are performed using an IAS.
Since this is an offline path tracer, the geometry inside the IAS never changes across samples (no transforms or updates).
As sampleCount increases, there comes a point where the number of intersections drops to zero, and remains zero for all subsequent samples.
Here's a code sketch:
let sampleCount: UInt16 = 1024
for sampleIndex: UInt16 in 0..<sampleCount {
    // ...
    do {
        let commandBuffer = commandQueue.makeCommandBuffer()
        // Dispatch the intersection kernel.
        await commandBuffer.completed()
    }
    do {
        let commandBuffer = commandQueue.makeCommandBuffer()
        // Use the intersection test results from the previous command buffer.
        await commandBuffer.completed()
    }
    // ...
}
kernel void intersectAlongRay(
    const metal::uint32_t threadIndex [[thread_position_in_grid]],
    // ...
    const metal::raytracing::instance_acceleration_structure accelerationStructure [[buffer(2)]],
    // ...
)
{
    // ...
    const auto result = intersector.intersect(ray, accelerationStructure);
    switch (result.type) {
        case metal::raytracing::intersection_type::triangle: {
            // Write intersection result to device buffers.
            break;
        }
        default:
            break;
    }
}
Observations
Encoding both the intersection kernel and the subsequent result usage in the same command buffer does not resolve the problem.
Switching from IAS to Primitive Acceleration Structure (PAS) fixes the problem.
Rebuilding the IAS for each sample also resolves the issue.
Intersections produce inconsistent results even though the IAS and rays are identical — Image 1 shows a hit, while Image 2 shows a miss.
Questions
Am I misusing the IAS in some way?
Could this be a Metal bug?
Any guidance or confirmation would be greatly appreciated.
I have tried every combination of suggestions to get a skybox to appear, using SwiftUI, RealityKit, and iOS in a non-immersive environment. Does anyone have code that works to display a skybox?
When I use a do/catch block, I get "environmentResource not found". I have checked the syntax, ensured the folder is referencing the target, used the same name for the folder as the file, and the file is a .hdr (I assume this is supported). I have also moved the file folder to the top level; no change.
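In case it helps narrow things down, here is a minimal sketch of the ARView route to a non-immersive iOS skybox (rather than RealityView), assuming an environment resource named "Skybox" in the bundle, compiled from a .hdr:

import RealityKit
import UIKit

// Sketch: on iOS, ARView exposes a skybox background directly.
// "Skybox" is an assumed resource name.
func makeSkyboxView() throws -> ARView {
    let arView = ARView(frame: .zero,
                        cameraMode: .nonAR,
                        automaticallyConfigureSession: false)
    let environment = try EnvironmentResource.load(named: "Skybox")
    arView.environment.background = .skybox(environment)
    return arView
}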