Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

tensorflow 2.20 broken support
Hi, testing the latest tensorflow-metal plugin with tensorflow 2.20 doesn't work. Using Python 3.12.11 (main, Jun 3 2025, 15:41:47) [Clang 17.0.0 (clang-1700.0.13.3)] on darwin, simple testing shows this error:

import tensorflow as tf
Traceback (most recent call last):
  File "", line 1, in
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in
    _ll.load_library(_plugin_dir)
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
  Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Reason: tried: '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)

tf.config.experimental.list_physical_devices('GPU')
Traceback (most recent call last):
  File "", line 1, in
NameError: name 'tf' is not defined

I fixed this error by copying _pywrap_tensorflow_internal.so to where it's searched:
1) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64
2) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/
3) cp /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/

It then fails with a missing symbol: Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E in libmetal_plugin.dylib. Full log with import tensorflow as tf:

Traceback (most recent call last):
  File "", line 1, in
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in
    _ll.load_library(_plugin_dir)
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E
  Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Expected in: <2FF91C8B-0CB6-3E66-96B7-092FDF36772E> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so
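For anyone hitting the same dlopen failure, a quick sanity check is to print which wheel versions are actually installed and whether the Metal plugin registered a GPU device. This is only a sketch; it does not make 2.20 work, and pinning tensorflow back to the newest release listed on the tensorflow-metal PyPI page is an assumption to verify rather than a confirmed fix:

# check_metal.py -- minimal sanity check for a tensorflow-metal install (sketch)
import importlib.metadata as md

# Report the exact wheel versions in this environment; mismatched
# tensorflow / tensorflow-metal pairs are a common cause of dlopen errors.
for pkg in ("tensorflow", "tensorflow-metal"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")

import tensorflow as tf  # fails here if libmetal_plugin.dylib cannot be loaded

# If the plugin loaded, the Apple GPU shows up as a PhysicalDevice.
print(tf.config.list_physical_devices("GPU"))

If the import still fails or the GPU list is empty, downgrading to the tensorflow release the current plugin was built against is usually a safer interim step than patching dylib search paths by hand.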
Replies: 2 · Boosts: 0 · Views: 976 · Activity: 14h
Problem running NLContextualEmbeddingModel in simulator
Environment: macOS 26, Xcode Version 26.0 beta 7 (17A5305k), simulator: iPhone 16 Pro, iOS 26.

Problem: NLContextualEmbedding.load() fails with the following error in the simulator:

Failed to load embedding from MIL representation: filesystem error: in create_directories: Permission denied ["/var/db/com.apple.naturallanguaged/com.apple.e5rt.e5bundlecache"]
filesystem error: in create_directories: Permission denied ["/var/db/com.apple.naturallanguaged/com.apple.e5rt.e5bundlecache"]
Failed to load embedding model 'mul_Latn' - '5C45D94E-BAB4-4927-94B6-8B5745C46289'

and in #Playground:

assetRequestFailed(Optional(Error Domain=NLNaturalLanguageErrorDomain Code=7 "Embedding model requires compilation" UserInfo={NSLocalizedDescription=Embedding model requires compilation}))

I'm new to this embedding model and not sure whether it's caused by my code or my environment. Code snippet:

import Foundation
import NaturalLanguage
import Playgrounds

#Playground {
    // Prefer initializing by script for broader coverage; returns NLContextualEmbedding?
    guard let embeddingModel = NLContextualEmbedding(script: .latin) else {
        print("Failed to create NLContextualEmbedding")
        return
    }
    print(embeddingModel.hasAvailableAssets)
    do {
        try embeddingModel.load()
        print("Model loaded")
    } catch {
        print("Failed to load model: \(error)")
    }
}
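One way to rule out the environment is to request the embedding assets explicitly before calling load(), and to try the same code on a physical device, since the permission-denied path above is a system cache the simulator may not be allowed to create. A minimal sketch, assuming requestAssets(completionHandler:) is available on the OS you target; it doesn't change the underlying asset-compilation behavior:

import NaturalLanguage

func loadLatinEmbedding() {
    guard let embedding = NLContextualEmbedding(script: .latin) else { return }

    if embedding.hasAvailableAssets {
        try? embedding.load()
        print("Loaded with existing assets")
        return
    }

    // Ask the OS to fetch/compile the embedding assets first, then retry load().
    embedding.requestAssets { result, error in
        print("Asset request result: \(result), error: \(String(describing: error))")
        do {
            try embedding.load()
            print("Model loaded after asset request")
        } catch {
            print("Still failed to load: \(error)")
        }
    }
}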
Replies: 3 · Boosts: 3 · Views: 2.2k · Activity: 1w
Genmoji API — are local edits and non-text usage (e.g., widgets) allowed?
Hi, I'm integrating the Genmoji API (NSAdaptiveImageGlyph) into my app and would like to confirm two things: Local editing of generated Genmoji. After the user creates a Genmoji, can the app apply edits to the resulting image (e.g., pixelation)? The edited image would be stored only on-device within the app and never shared externally. Use outside text contexts. Can a generated Genmoji be used in other parts of the app, such as a home screen widget? Apple's documentation and the WWDC24 session focus on inline text, stickers, and Tapbacks, but I couldn't find explicit guidance on widget or other UI usage. I checked the Human Interface Guidelines, WWDC24 session "Bring expression to your app with Genmoji," and the App Store Review Guidelines, but couldn't find clear answers. Any guidance or pointers would be appreciated. Thanks!
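On the mechanics side (separate from the licensing question above), pulling the bitmap out of an NSAdaptiveImageGlyph and filtering it locally could look roughly like the sketch below. This assumes the glyph's imageContent data is the thing you want to edit and uses a Core Image pixellate filter purely as an example; whether the edited copy may be stored or shown in a widget is exactly the policy question being asked:

import UIKit
import CoreImage.CIFilterBuiltins

/// Returns a pixelated copy of a Genmoji's bitmap, kept on-device.
func pixelatedCopy(of glyph: NSAdaptiveImageGlyph, scale: Float = 12) -> UIImage? {
    // NSAdaptiveImageGlyph exposes its rasterized image data via `imageContent`.
    guard let source = UIImage(data: glyph.imageContent),
          let ciInput = CIImage(image: source) else { return nil }

    let filter = CIFilter.pixellate()
    filter.inputImage = ciInput
    filter.scale = scale

    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: ciInput.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}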
Replies: 0 · Boosts: 0 · Views: 1.5k · Activity: 1w
MPS backend reports ~40 GiB 'other allocations' on 48 GB M5 Pro under macOS 26.4.1, blocking large tensor operations (PyTorch)
Product macOS Version macOS 26.4.1 (public release) Hardware Apple M5 Pro, 48 GB unified memory Summary On macOS 26.4.1, the MPS backend consistently reports approximately 40 GiB of “other allocations” on a 48 GB M5 Pro machine, even on a freshly rebooted system with minimal user applications running. This leaves insufficient memory for large GPU tensor operations that previously succeeded on earlier macOS versions. The failure manifests as: RuntimeError: MPS backend out of memory (MPS allocated: 17.60 GiB, other allocations: 40.17 GiB, max allowed: 63.65 GiB). Tried to allocate 7.63 GiB on private pool. The “other allocations: 40.17 GiB” value is consistent across reboots and does not change materially when user applications are quit. This suggests macOS 26.4.1 has increased its baseline GPU/unified memory consumption compared to prior releases in a way that is visible to the MPS allocator. Steps to Reproduce Fresh reboot of M5 Pro, 48 GB, macOS 26.4.1 Launch a PyTorch 2.11.0 application using MPS as the compute device Load a large model into MPS memory (~17 GiB, e.g. a VAE encoder in bfloat16) Attempt to allocate an additional ~7.6 GiB workspace tensor for a matrix multiplication operation (torch.bmm) Result: RuntimeError: MPS backend out of memory, with “other allocations” reported at ~40 GiB despite no large user processes holding GPU memory. Expected: The operation should succeed. 17.60 + 7.63 = 25.23 GiB, which is well within the 48 GiB physical memory of the machine. Additional Observations • vm_stat on a clean boot shows ~24 GB of free system RAM before the PyTorch application launches, consistent with normal OS usage. The 40 GiB figure reported by the MPS allocator as “other allocations” does not correspond to identifiable user processes. • The max allowed: 63.65 GiB ceiling reported by MPS exceeds the physical 48 GiB of the machine, suggesting MPS is using a memory limit calculation that does not account for actual physical constraints on unified memory architectures. • macOS 26.4 introduced a related regression (deterministic RuntimeError: MPSGraph does not support tensor dims larger than INT_MAX) in the same MPS buffer stride arithmetic path. That specific error was resolved in 26.4.1, but the OOM regression described here persists. • This operation succeeded on the same hardware under earlier macOS releases. The increased “other allocations” baseline appears to be specific to macOS 26.x. Impact Machine learning workloads that previously ran successfully on 48 GB Apple Silicon machines are failing on macOS 26.4.1 due to this increased baseline GPU memory consumption. Applications using PyTorch MPS, Core ML, and potentially Metal Performance Shaders directly may be affected. Workaround None identified. Reducing application model size or splitting operations into smaller chunks does not resolve the issue because the constraint is in the “other allocations” baseline, not in the application’s own allocations.
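For anyone trying to characterize this, it may help to log what the MPS allocator itself reports right before the failing op. A minimal sketch, assuming a recent PyTorch where torch.mps.recommended_max_memory() exists; the commented-out watermark override is an experiment to try, not a recommended fix:

import os
import torch

# Optional experiment: relax/disable the allocator's high-watermark check.
# Must be set before the first MPS allocation; 0.0 disables the upper limit.
# os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"

assert torch.backends.mps.is_available()
device = torch.device("mps")

def report(tag: str) -> None:
    gib = 1024 ** 3
    print(f"[{tag}] torch-allocated: {torch.mps.current_allocated_memory() / gib:.2f} GiB, "
          f"driver-allocated: {torch.mps.driver_allocated_memory() / gib:.2f} GiB, "
          f"recommended max: {torch.mps.recommended_max_memory() / gib:.2f} GiB")

report("before model load")
# ... load the ~17 GiB model here ...
report("before bmm")  # compare driver-allocated against the ~40 GiB 'other allocations' in the error

Logging these numbers across a reboot and across macOS versions would make it easier to show whether the baseline really moved in 26.x or whether something else is holding driver memory.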
Replies: 1 · Boosts: 0 · Views: 1.6k · Activity: 1w
Does the new BNNSGraph API support quantization?
Hello, I spent some time going through the documentation and videos. I did not see how to implement quantized arithmetic for my neural network using BNNSGraph. Could someone please help me.
Replies: 0 · Boosts: 0 · Views: 641 · Activity: 1w
VNRecognizeTextRequest .accurate model failing to load
When I try to use VNRecognizeTextRequest in a simple program on Apple silicon, .accurate works, but when I add the same code to a helper process in a larger project, .accurate doesn’t return any results and only .fast works. This happens on Apple silicon machines but not older Intel ones. When I call VNRecognizeTextRequest I see the error [Espresso::handle_ex_plan] exception= in the logs along with (TextRecognition) Error loading network 0, -1. And when I catch the exception in lldb and print it I see Null bundleID. In the code, [NSBundle mainBundle] returns null even though plutil -p on the helper process binary shows an embedded plist, as well as on the process that spawns the helper.
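For comparison, this is the minimal .accurate request I'd expect to behave identically in the app and in the helper; if this exact snippet returns results in the app target but not in the helper, it points at the process environment (bundle identity) rather than the Vision call itself. A sketch only, with a placeholder image URL:

import Vision

func recognizeText(at imageURL: URL) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate   // the level that fails in the helper
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(url: imageURL, options: [:])
    try handler.perform([request])

    let observations = request.results ?? []
    return observations.compactMap { $0.topCandidates(1).first?.string }
}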
Replies: 0 · Boosts: 0 · Views: 382 · Activity: 2w
Core Spotlight Semantic Search - still non-functional for 1+ year after WWDC24?
After more than a year since the announcement, I'm still unable to get this feature working properly and wondering if there are known issues or missing implementation details. Current Setup: Device: iPhone 16 Pro Max iOS: 26 beta 3 Development: Tested on both Xcode 16 and Xcode 26 Implementation: Following the official documentation examples The Problem: Semantic search simply doesn't work. Lexical search functions normally, but enabling semantic search produces identical results to having it disabled. It's as if the feature isn't actually processing. Error Output (Xcode 26): [QPNLU][qid=5] Error Domain=com.apple.SpotlightEmbedding.EmbeddingModelError Code=-8007 "Text embedding generation timeout (timeout=100ms)" [CSUserQuery][qid=5] got a nil / empty embedding data dictionary [CSUserQuery][qid=5] semanticQuery failed to generate, using "(false)" In Xcode 16, there are no error messages at all - the semantic search just silently fails. Missing Resources: The sample application mentioned during the WWDC24 presentation doesn't appear to have been released, which makes it difficult to verify if my implementation is correct. Would really appreciate any guidance or clarification on the current status of this feature. Has anyone in the community successfully implemented this?
Replies: 6 · Boosts: 5 · Views: 2.9k · Activity: 2w
What Should the iOS Deployment Target Be Set to?
Originally, I set my iOS deployment target to 18.1, but now that I'm integrating Foundation Models, I set it to iOS 26.0. Is this OK?
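If keeping the wider install base matters, one option is to leave the deployment target at 18.1 and gate the Foundation Models code path at runtime, roughly as sketched below (assuming the feature can degrade gracefully when the model isn't available on a device):

import FoundationModels

@available(iOS 26.0, *)
func summarize(_ text: String) async throws -> String {
    let session = LanguageModelSession()
    return try await session.respond(to: "Summarize: \(text)").content
}

func summarizeIfPossible(_ text: String) async -> String? {
    guard #available(iOS 26.0, *) else { return nil }                       // older OS: fall back
    guard case .available = SystemLanguageModel.default.availability else { // model not ready
        return nil
    }
    return try? await summarize(text)
}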
Replies: 1 · Boosts: 0 · Views: 757 · Activity: 2w
Will the upcoming MacBook Pro M6 Max have at least 256GB RAM?
Hi guys, I want to use the newest MacBook Pro M6 (Max or Ultra) with at least 256GB RAM for AI development. Might my wish come true? What do you think? One of Apple's biggest advantages here is unified memory, and with the privacy-first approach I want to run local models and show them to my customers right on the MacBook. That has much more magic than first plugging in the power supply for a SPARC box, connecting a network cable, and fiddling around. The perfect match would be a MacBook Pro with M6 Ultra and 512GB, but I guess that is just a dream :-(. Please let me know what you think about that. Thanks
Replies: 1 · Boosts: 0 · Views: 636 · Activity: 2w
Official One-Click Local LLM Deployment for 2019 Mac Pro (7,1) Dual W6900X
I am a professional user of the 2019 Mac Pro (7,1) with dual AMD Radeon Pro W6900X MPX modules (32GB VRAM each). This hardware is designed for high-performance compute, but it is currently crippled for modern local LLM/AI workloads under Linux due to Apple's EFI/PCIe routing restrictions. Core Issue: rocminfo reports "No HIP GPUs available" when attempting to use ROCm/amdgpu on Linux Apple's custom EFI firmware blocks full initialization of professional GPU compute assets The dual W6900X GPUs have 64GB combined VRAM and high-bandwidth Infinity Fabric Link, but cannot be fully utilized for local AI inference/training My Specific Request: Apple should provide an official, one-click deployable application that enables full utilization of dual W6900X GPUs for local large language model (LLM) inference and training under Linux. This application must: Fully initialize both W6900X GPUs via HIP/ROCm, establishing valid compute contexts Bypass artificial EFI/PCIe routing restrictions that block access to professional GPU resources Provide a stable, user-friendly one-click deployment experience (similar to NVIDIA's AI Enterprise or AMD's ROCm Hub) Why This Matters: The 2019 Mac Pro is Apple's flagship professional workstation, marketed for compute-intensive workloads. Its high-cost W6900X GPUs should not be locked down for modern AI/LLM use cases. An official one-click deployment solution would demonstrate Apple's commitment to professional AI and unlock significant value for professional users. I look forward to Apple's response and a clear roadmap for enabling this critical capability. #MacPro #Linux #ROCm #LocalLLM #W6900X #CoreML
Replies: 3 · Boosts: 0 · Views: 1.2k · Activity: 2w
AI framework usage without user session
We are evaluating various AI frameworks to use within our code, and are hoping to use some of the built-in frameworks in macOS, including CoreML and Vision. However, we need to use these frameworks in a background process (system extension) that has no user session attached to it. (To be pedantic, we'll be using an XPC service that is spawned by the system extension, but neither would have an associated user session.) Saying the daemon-safe frameworks list has not been updated in a while is an understatement, but it's all we have to go on. CoreGraphics isn't even listed--back then it was part of ApplicationServices (I think?), and ApplicationServices is a no-go. Vision does use CoreGraphics symbols and data types, so I have doubts. We do have a POC that uses both frameworks and they seem to function fine, but obviously having something official is better. Any Apple engineers that can comment on this?
Replies: 6 · Boosts: 0 · Views: 1.2k · Activity: 3w
backDeploy SystemLanguageModel.tokenCount
SystemLanguageModel.contextSize is back-deployed, but SystemLanguageModel.tokenCount is not. The custom adapter toolkit ships with a ~2.7MB tokenizer with a ~150,000 vocabulary size, but the LICENSE.rtf exclusively permits its use for training LoRAs. Is it possible to back-deploy tokenCount, or for Apple to permit the use of the tokenizer.model for counting tokens? This is important for avoiding context overflow errors.
Replies: 0 · Boosts: 1 · Views: 570 · Activity: 3w
Python 3.13
Hello, Are there any plans to compile a Python 3.13 version of tensorflow-metal? I just got my new Mac mini, and the version of Python automatically installed by brew is Python 3.13. If I were in a hurry I could manage to get Python 3.12 installed and use the corresponding tensorflow-metal version, but I'm not in a hurry. Many thanks, Alan
Replies: 5 · Boosts: 7 · Views: 2.3k · Activity: 3w
SystemLanguageModel.Adapter leaks ~100MB of irrecoverable APFS disk space per call
FoundationModels framework, macOS Tahoe 26.4.1, MacBook Air M4. Loading a LoRA adapter via SystemLanguageModel.Adapter(fileURL:) leaks ~100MB of APFS disk space per invocation. The space is permanently consumed at the APFS block level with no corresponding file. Calls without an adapter show zero space loss. Running ~300 adapter calls in a benchmark loop leaked ~30GB and nearly filled a 500GB drive. The total unrecoverable phantom space is now ~239GB (461GB allocated on Data volume, 222GB visible to du). Reproduction: Build a CLI tool that loads a .fmadapter and runs one generation Measure before/after with df and du: Before: df free = 9.1 GB, du -smx /System/Volumes/Data = 227,519 MB After: df free = 9.0 GB, du -smx /System/Volumes/Data = 227,529 MB df delta: ~100 MB consumed du delta: +10 MB (background system activity) Phantom: ~90 MB -- no corresponding file anywhere on disk Without --adapter (same code, same model): zero space change du was run with sudo -x. Files modified during the call were checked with sudo find -mmin -10 -- only Spotlight DBs, diagnostics logs, and a 7MB InferenceProviderService vocab cache. Nothing accounts for the ~90MB loss. fs_usage shows TGOnDeviceInferenceProviderService writing hundreds of APFS metadata blocks (RdMeta on /dev/disk3) per adapter call. Recovery Mode diagnostics: fsck_apfs -o -y -s: no overallocations, bitmap consistent (118.6M blocks counted = spaceman allocated) fsck_apfs -o -y -T -s: B-tree repair found nothing fsck_apfs -o -y -T -F -s: "error: container keybag (39003576+1): failed to get keybag data: Inappropriate file type or format. Encryption key structures are invalid." No fsck_apfs flag combination reclaims the space. The leaked blocks are validly allocated in the APFS bitmap and referenced in the extent tree, but not associated with any file visible to du, find, stat, or lsof. Has anyone else observed space loss when using SystemLanguageModel.Adapter? If I am missing something obvious, I would love to know.
Replies: 6 · Boosts: 1 · Views: 577 · Activity: 3w
Apple managed asset pack for FoundationModels adapter on Testflight does not download (statusUpdates silent)
Hi, I'm stuck distributing a custom FoundationModels adapter as an Apple-hosted managed asset pack via TestFlight. Everything looks correctly configured end to end but the download just never starts and the statusUpdates sequence is silent. Here's my configuration: App Info.plist: <key>BAHasManagedAssetPacks</key><true/> <key>BAUsesAppleHosting</key><true/> <key>BAAppGroupID</key><string>group.com.fiuto.shared</string> Entitlement com.apple.developer.foundation-model-adapter on both the app and the asset downloader extension. The asset downloader extension uses StoreDownloaderExtension , returning SystemLanguageModel.Adapter.isCompatible(assetPack) from shouldDownload , and the app group on app and asset download extension is the same. I have exported the adapter with toolkit 26.0.0, obtaining: adapterIdentifier = fmadapter-FiutoAdapter-1234567 I have packaged the asset pack using xcrun ba-package and uploaded it to App Store Connect via Transporter, and I get the "ready for internal and external testing" state on App Store Connect, and I have uploaded my app build on TestFlight after the asset pack was marked as ready. I used this code: let adapter = try SystemLanguageModel.Adapter(name: "FiutoAdapter") let ids = SystemLanguageModel.Adapter.compatibleAdapterIdentifiers(name: "FiutoAdapter") // ids == ["fmadapter-FiutoAdapter-1234567"] for await status in AssetPackManager.shared.statusUpdates(forAssetPackWithID: ids.first!) { } I expect the download to start and the stream to yield first .began, then .downloading(progress) and .finished. Actually, compatibleAdapterIdentifiers returns the correct ID, the stream is correctly acquired but i get zero events, so no .began/.downloading/.failed/.finished. Important things: I don't get any error in Console as well; I tested this as an internal tester on TestFlight Tested on iPhone 16 Pro, running iOS 26.3.1 - more than 50GB of free space Apple Intelligence is enabled and set in Italian Background downloads are enabled. I've already checked if the adapter identifier matches regex fmadapter-\w+-\w+ , i tried to reinstall the build, rebooting the device, reupload the asset pack, and also checked that the foundation models adapter entitlement is present on both targets. Is there a known way to diagnose why statusUpdates is silent (no log subsystem seems to show why) in this exact configuration? Is there maybe any delay between asset pack approval on App Store Connect and availability to TestFlight internal testers that I do not know of? I've checked other threads for applicable solutions and I've found that this is similar to the symptom reported in this thread: https://developer.apple.com/forums/thread/805140 / (FB20865802) and also i'm internal tester and on stable iOS 26.3.1, so the limitations from this thread: https://developer.apple.com/forums/thread/793565 shouldn't apply. Thanks
Replies: 2 · Boosts: 0 · Views: 367 · Activity: 3w
Is anyone working on jax-metal?
Hi, I think many of us would love to be able to use our GPUs for JAX on the new Apple Silicon devices, but currently the jax-metal plugin is, for all intents and purposes, broken. Is it still under active development? Is there a planned release for a new version? Thanks!
Replies: 5 · Boosts: 8 · Views: 1.7k · Activity: Apr ’26
Does using Vision API offline to label a custom dataset for Core ML training violate DPLA?
Hello everyone, I am currently developing a smart camera app for iOS that recommends optimal zoom and exposure values on-device using a custom Core ML model. I am still waiting for an official response from Apple Support, but I wanted to ask the community if anyone has experience with a similar workflow regarding App Review and the DPLA. Here is my training methodology: I gathered my own proprietary dataset of original landscape photos. I generated multiple variants of these photos with different zoom and exposure settings offline on my Mac. I used the CalculateImageAestheticsScoresRequest (Vision framework) via a local macOS command-line tool to evaluate and score each variant. Based on those scores, I labeled the "best" zoom and exposure parameters for each original photo. I used this labeled dataset to train my own independent neural network using PyTorch, and then converted it to a Core ML model to ship inside my app. Since the app uses my own custom model on-device and does not send any user data to a server, the privacy aspect is clear. However, I am curious if using the output of Apple's Vision API strictly offline to label my own dataset could be interpreted as "reverse engineering" or a violation of the Developer Program License Agreement (DPLA). Has anyone successfully shipped an app using a similar knowledge distillation or automated dataset labeling approach with Apple's APIs? Did you face any pushback during App Review? Any insights or shared experiences would be greatly appreciated!
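For context on the labeling step (not the licensing question itself), the offline scoring pass can be as small as the sketch below. The perform(on:) overload taking a URL and the overallScore property name are assumptions based on the current Swift Vision API and worth double-checking against the docs:

import Foundation
import Vision

/// Scores every JPEG in a folder so the highest-scoring variant can be labeled "best".
func scoreVariants(in folder: URL) async throws -> [(URL, Float)] {
    let files = try FileManager.default.contentsOfDirectory(at: folder,
                                                            includingPropertiesForKeys: nil)
    var scores: [(URL, Float)] = []
    for url in files where url.pathExtension.lowercased() == "jpg" {
        let request = CalculateImageAestheticsScoresRequest()
        let observation = try await request.perform(on: url)
        scores.append((url, observation.overallScore))
    }
    // Highest aesthetic score first; the top entry becomes the training label.
    return scores.sorted { $0.1 > $1.1 }
}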
Replies: 1 · Boosts: 0 · Views: 335 · Activity: Apr ’26
How useful is AI?
I want to introduce how useful AI is.
Replies: 0 · Boosts: 0 · Views: 154 · Activity: Apr ’26
After loading my custom model - unsupportedTokenizer error
In Oct '25, using mlx_lm.lora, I created an adapter and a fused model uploaded to Hugging Face. I was able to incorporate this model into my SwiftUI app using the mlx package, MLX-libraries 2.25.8. My base LLM was mlx-community/Mistral-7B-Instruct-v0.3-4bit. Looking at LLMModelFactory.swift in the current version 2.29.1, the only changes are the addition of a few models. The earlier model was called: pharmpk/pk-mistral-7b-v0.3-4bit. The new model is called: pharmpk/pk-mistral-2026-03-29. The base model (mlx-community/Mistral-7B-Instruct-v0.3-4bit) must still be available. Could the error 'unsupportedTokenizer' be related to changes in the mlx package? I noticed mention of splitting the package into two parts but don't see anything on GitHub. Feeling rather lost. Does anyone have any thoughts and/or suggestions? Thanks, David
Replies: 3 · Boosts: 0 · Views: 460 · Activity: Apr ’26
26.4 Foundation Model rejects most topics
I have an iOS app, "Spatial Agents", which ran great in 26.3. It creates dashboards around a topic. It can also decompose a topic into sub-topics and explore those, all based on web articles and web article headlines. In iOS 26.4 almost every topic - even "MIT Innovation" - is rejected with an apology of "I apologize I can not fulfill this request". I've tried softening all my prompts, and I can get only really benign, very simple topics to respond, but not anything of any significance. It ran great on lots of topics in 26.3. My published app is now useless, and all my users are unhappy. HELP!
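In case it helps others triage the same behavior, the refusal can at least be separated from other failures by catching the guardrail error explicitly. A sketch, assuming the guardrailViolation case is what fires for these safety refusals; it doesn't make more topics pass, it only makes the failure mode explicit so a fallback (cached content, a softer reformulation, or a server-side path) can take over:

import FoundationModels

@available(iOS 26.0, *)
func dashboardSummary(for topic: String) async -> String? {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(
            to: "Summarize recent article headlines about \(topic)."
        )
        return response.content
    } catch LanguageModelSession.GenerationError.guardrailViolation {
        // The safety system declined the topic; fall back instead of surfacing the raw apology.
        return nil
    } catch {
        return nil
    }
}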
Replies: 3 · Boosts: 0 · Views: 448 · Activity: Apr ’26