Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and discover what's possible for your app here.

Posts under the Machine Learning & AI topic


What special features does Apple officially have that use ML or AI?
I am an app designer, and I am curious about which specific ML or AI techniques Apple used to build features in the system. As far as I know, Apple's hand-raising detection, destination recommendations in Maps, and exercise-type detection in Fitness all use ML. Are there more specific examples of ML or AI applications? Does Apple have a document that specifically introduces how ML or AI technology is applied in the system?
1 reply · 0 boosts · 605 views · Feb ’25

Unexpectedly slow CreateML text classifier training (limited GPU/CPU usage)
Training a text classifier model on a few thousand samples completes in seconds, but with 100,000 or 1 million samples, CreateML's training time explodes to hours or days. During those hours/days, GPU usage is low and almost every CPU core is idle. Resource utilization does not increase when using the Swift APIs for model training either. I'm using Xcode 16.2 and macOS 15.2 on either an M2 Ultra 64 GB or an M3 Max 48 GB laptop (both using the built-in SSD with ~500 GB free) and running no other applications. Is there a setting I've missed that would let training use more of my computing resources? Or is this expected of CreateML (i.e., when looking to exploit a larger corpus, should I move to other tooling)? I'd love to speed up my iteration cycle.
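
For anyone trying to reproduce this, here is a minimal sketch of the Swift training path; the CSV path and the "text"/"label" column names are placeholder assumptions, not from the post:

    import CreateML
    import Foundation

    // Minimal repro sketch; "train.csv" with "text"/"label" columns
    // is a placeholder dataset.
    let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "train.csv"))
    let start = Date()
    let classifier = try MLTextClassifier(trainingData: data,
                                          textColumn: "text",
                                          labelColumn: "label")
    print("Trained in \(Date().timeIntervalSince(start))s")
    try classifier.write(to: URL(fileURLWithPath: "TextClassifier.mlmodel"))
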
1 reply · 0 boosts · 619 views · Feb ’25

Apple Intelligence stuck on "preparing" for 6 days.
On October 28, the release day of Apple Intelligence, I opted in. My iPhone and iPad immediately went to "waitlist" and within 2 to 3 hours were ready to initialize Apple Intelligence. My 14" MacBook Pro with an M3 Pro processor and 18 GB of RAM has been stuck on "preparing" since release day (6 days now). I've tried numerous workarounds that I found on forums, as well as talking to Apple Support, who basically had me repeat the workarounds I had already found. I've tried changing the region to one that does not have Apple Intelligence and then back to the US; I've changed the Siri language to an unsupported one and back to a supported one; I've disabled background/startup apps; and I've disabled and re-enabled Siri. Oh, and I've restarted a bunch of times and left the Mac alone for hours at a time. I've noticed that my selected Siri voice never seems to finish downloading. Finally, after several chats and calls with Apple Support, I was told that it's beta software, they can't help me, and I should try the developer forums... so here I am. Any advice?
9 replies · 1 boost · 3.0k views · Feb ’25

The YOLO11 object detection model I exported to Core ML stopped working on the macOS 15.2 beta
After updating to the macOS 15.2 beta, the YOLO11 object detection model I exported to Core ML outputs incorrect, abnormal bounding boxes. It also doesn't work in iOS apps built on a 15.2 Mac. The same model worked fine on macOS 14.1. When I train a custom YOLO11 model in Python, export it to Core ML, and test it in the preview tab of the .mlpackage on macOS 15.2 with Xcode 16.0, I get the result described above.
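
One way to take the preview tab out of the equation is to run the exported model through Vision directly and print the raw boxes. A minimal sketch, where the model and image paths are placeholders (an .mlpackage must be compiled to .mlmodelc first, e.g. with xcrun coremlcompiler compile):

    import Vision
    import CoreML

    // Sketch: run the exported detector through Vision outside Xcode's preview.
    // Paths are placeholders; comparing .cpuOnly against .all can help show
    // whether the bad boxes are specific to one compute unit.
    let config = MLModelConfiguration()
    config.computeUnits = .cpuOnly
    let model = try MLModel(contentsOf: URL(fileURLWithPath: "yolo11.mlmodelc"),
                            configuration: config)
    let request = VNCoreMLRequest(model: try VNCoreMLModel(for: model))
    request.imageCropAndScaleOption = .scaleFill // should match the preprocessing used at export time

    let handler = VNImageRequestHandler(url: URL(fileURLWithPath: "test.jpg"))
    try handler.perform([request])
    for case let obs as VNRecognizedObjectObservation in request.results ?? [] {
        print(obs.labels.first?.identifier ?? "?", obs.confidence, obs.boundingBox)
    }
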
6 replies · 1 boost · 1.4k views · Feb ’25

Issues with using ClassifyImageRequest() on an Xcode simulator
Hello, I am developing an app for the Swift Student Challenge; however, I keep encountering an error when using ClassifyImageRequest from the Vision framework in Xcode:

VTEST: error: perform(_:): inside 'for await result in resultStream' error: internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")

It works perfectly when I test it on a physical device, and I saw on another thread that ClassifyImageRequest doesn't work on simulators. Will this cause problems with my submission to the challenge? Thanks
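
Since the error only shows up in the simulator, one pattern (a sketch, not an official workaround) is to guard the Vision call with targetEnvironment(simulator) so the app still runs for a demo:

    import Vision
    import CoreImage

    // Sketch: skip Vision-backed classification in the simulator, where the
    // "espresso context" backing it isn't available, and fall back to a stub.
    func classify(_ image: CIImage) async -> [String] {
        #if targetEnvironment(simulator)
        return ["(classification unavailable in the simulator)"] // hypothetical fallback
        #else
        do {
            let observations = try await ClassifyImageRequest().perform(on: image)
            return observations.filter { $0.confidence > 0.5 }.map { $0.identifier }
        } catch {
            return []
        }
        #endif
    }
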
5 replies · 1 boost · 773 views · Feb ’25

Efficient Clustering of Images Using VNFeaturePrintObservation.computeDistance
Hi everyone, I'm working with VNFeaturePrintObservation in Swift to compute the similarity between images. The computeDistance function lets me calculate the distance between two images, and I want to cluster similar images based on these distances.

Current approach: right now, I'm using a brute-force approach where I compare every image against every other image in the dataset. This results in O(n^2) complexity, which quickly becomes a bottleneck: with 5,000 images it takes around 10 seconds to complete, which is too slow for my use case.

Question: are there any efficient algorithms or data structures I can use to improve performance? If anyone has experience with optimizing feature-vector clustering, or has suggestions on how to scale this efficiently, I'd really appreciate your insights. Thanks!
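
One way to get under O(n^2), sketched below under assumptions (the 0.5 threshold is made up and would need tuning per dataset), is greedy "leader" clustering: compare each image only against one representative per cluster, which is O(n·k) for k clusters:

    import Vision

    // Greedy leader clustering over feature prints: O(n * k) distance calls
    // instead of O(n^2). The threshold is an assumption to tune per dataset.
    func cluster(_ prints: [VNFeaturePrintObservation],
                 threshold: Float = 0.5) throws -> [[Int]] {
        var clusters: [(leader: VNFeaturePrintObservation, members: [Int])] = []
        for (index, fp) in prints.enumerated() {
            var assigned = false
            for i in clusters.indices {
                var distance: Float = 0
                try fp.computeDistance(&distance, to: clusters[i].leader)
                if distance < threshold {
                    clusters[i].members.append(index)
                    assigned = true
                    break
                }
            }
            if !assigned {
                clusters.append((leader: fp, members: [index]))
            }
        }
        return clusters.map(\.members)
    }

For much larger datasets, the raw vector behind each observation (VNFeaturePrintObservation's data/elementCount properties) could be fed into an approximate-nearest-neighbor index instead, but a greedy pass like this is often enough at 5,000 images.
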
0 replies · 0 boosts · 524 views · Feb ’25

Swift playgrounds (.swiftpm) and CoreML
Hey guys, I've been having difficulties transferring my Xcode project to a Swift playground (.swiftpm) for the Swift Student Challenge. None of my views can find the model in scope, and I keep getting these errors:

TrashDetector 1.mlmodel: No predominant language detected. Set COREML_CODEGEN_LANGUAGE to preferred language.

Unexpected duplicate tasks: Target 'TrashQuest' (project 'TrashQuest') has write command with output /Users/kmcph3/Library/Developer/Xcode/DerivedData/TrashQuest-glvzskunedgtakfrdmsxdoplondj/Build/Intermediates.noindex/TrashQuest.build/Debug-iphonesimulator/TrashQuest.build/0a4ef2429d66360920ddb4f16e65e233.sb

I've gone through multiple posts with these exact problems, but they all seem to be talking about .playground files because of the "Resources" folder (mind you, I did try exactly what they said). Is there anyone who can help? (Quick side note: why does it need to be a .swiftpm file for the SSC? Why can't we just send a zip of our Xcode project?)
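
One workaround people use (a sketch, not an official fix) is to sidestep Xcode's code generation entirely: keep the .mlmodel as a plain package resource and compile it at runtime, then call it through MLModel rather than the generated class. The file name below is taken from the error message above; the Bundle.main lookup is an assumption about how the resource is packaged:

    import CoreML
    import Foundation

    // Sketch: compile the bundled .mlmodel at runtime instead of relying on
    // Xcode's generated class (which .swiftpm packaging can trip over).
    func loadTrashDetector() async throws -> MLModel {
        // "TrashDetector 1" is the file name from the error message above.
        guard let url = Bundle.main.url(forResource: "TrashDetector 1",
                                        withExtension: "mlmodel") else {
            throw CocoaError(.fileNoSuchFile)
        }
        let compiledURL = try await MLModel.compileModel(at: url)
        return try MLModel(contentsOf: compiledURL)
    }
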
2 replies · 0 boosts · 780 views · Feb ’25

Metal GPU Work Won't Stop
Is there any way to stop GPU work that was scheduled using Metal? Long shader calculations don't stop when the application is stopped in Xcode; they continue to take up GPU time and affect the display. Why isn't this functionality available when Swift Tasks can be cancelled?
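
There is no supported API to kill an already-committed command buffer, so the usual pattern (a sketch under assumptions; the class and its method names are illustrative, not an existing API) is to split long work into many short dispatches and check a cancellation flag before committing each chunk:

    import Metal

    // Sketch: make long GPU work interruptible by encoding it as many short
    // command buffers and checking a flag between commits. Once a buffer is
    // committed it still runs to completion; only future chunks stop.
    final class ChunkedGPUJob {
        private let queue: MTLCommandQueue
        private var cancelled = false // in real code, protect with a lock or atomic

        init(queue: MTLCommandQueue) { self.queue = queue }

        func cancel() { cancelled = true }

        func run(pipeline: MTLComputePipelineState, chunks: Int, threadsPerChunk: MTLSize) {
            for chunk in 0..<chunks {
                if cancelled { return } // later chunks are never committed
                guard let buffer = queue.makeCommandBuffer(),
                      let encoder = buffer.makeComputeCommandEncoder() else { return }
                encoder.setComputePipelineState(pipeline)
                var chunkIndex = UInt32(chunk) // lets the shader offset its work per chunk
                encoder.setBytes(&chunkIndex, length: MemoryLayout<UInt32>.size, index: 0)
                encoder.dispatchThreads(threadsPerChunk,
                                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
                encoder.endEncoding()
                buffer.commit()
                buffer.waitUntilCompleted() // or pace with addCompletedHandler(_:)
            }
        }
    }
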
2 replies · 0 boosts · 742 views · Feb ’25

Using the Apple Neural Engine for MLTensor operations
Based on the documentation, it appears that MLTensor can be used to perform tensor operations on the ANE (Apple Neural Engine) by wrapping the operations in withMLTensorComputePolicy with an MLComputePolicy initialized with MLComputeUnits.cpuAndNeuralEngine (it can also be initialized with MLComputeUnits.all to let the OS spread the load between the Neural Engine, GPU, and CPU). However, in the Instruments app it appears that the tensor operations never get executed on the Neural Engine. It would be helpful if someone could guide me on the correct way to ensure that the Neural Engine is used to perform the tensor operations (not as part of a Core ML model file). Based on this example, I've written a simple test:

    import Foundation
    import CoreML

    print("Starting...")

    let semaphore = DispatchSemaphore(value: 0)

    Task {
        await withMLTensorComputePolicy(.init(MLComputeUnits.cpuAndNeuralEngine)) {
            let v1 = MLTensor([1.0, 2.0, 3.0, 4.0])
            let v2 = MLTensor([5.0, 6.0, 7.0, 8.0])
            let v3 = v1.matmul(v2)
            await v3.shapedArray(of: Float.self) // is 70.0

            let m1 = MLTensor(shape: [2, 3], scalars: [1, 2, 3, 4, 5, 6], scalarType: Float.self)
            let m2 = MLTensor(shape: [3, 2], scalars: [7, 8, 9, 10, 11, 12], scalarType: Float.self)
            let m3 = m1.matmul(m2)
            let result = await m3.shapedArray(of: Float.self) // is [[58, 64], [139, 154]]

            // Supports broadcasting
            let m4 = MLTensor(randomNormal: [3, 1, 1, 4], scalarType: Float.self)
            let m5 = MLTensor(randomNormal: [4, 2], scalarType: Float.self)
            let m6 = m4.matmul(m5)

            print("Done")
            return result
        }
        semaphore.signal()
    }

    semaphore.wait()

In Instruments, the Neural Engine line shows no usage at all. I've run this test on an M1 Max MacBook Pro.
2 replies · 4 boosts · 762 views · Mar ’25

Creating .mlmodel with Create ML Components
I have rewatched WWDC22 a few times, but I'm still not fully understanding how to get a .mlmodel file out of Components. The banana-ripeness example is cool, but what needs to be added to actually output a .mlmodel? Is there a full sample project somewhere for this kind of modular setup? The code is from https://developer.apple.com/videos/play/wwdc2022/10019

    import CoreImage
    import CreateMLComponents

    struct ImageRegressor {
        static let trainingDataURL = URL(fileURLWithPath: "~/Desktop/bananas")
        static let parametersURL = URL(fileURLWithPath: "~/Desktop/parameters")

        static func train() async throws -> some Transformer<CIImage, Float> {
            let estimator = ImageFeaturePrint()
                .appending(LinearRegressor())

            // File name example: banana-5.jpg
            let data = try AnnotatedFiles(labeledByNamesAt: trainingDataURL, separator: "-", index: 1, type: .image)
                .mapFeatures(ImageReader.read)
                .mapAnnotations({ Float($0)! })

            let (training, validation) = data.randomSplit(by: 0.8)
            let transformer = try await estimator.fitted(to: training, validateOn: validation)
            try estimator.write(transformer, to: parametersURL)
            return transformer
        }
    }

I have tried to run it in a macOS command-line app and in SwiftUI, but the most I got as output was a .pkg containing "pipeline.json, parameters, optimizer.json, optimizer".
3 replies · 0 boosts · 539 views · Mar ’25

missing CreateML frameworks
I have reinstalled everything, including the command line tools, but the CreateML frameworks fail to install. I need the framework to train my auto-categorization model, which predicts a category based on descriptions. I also need it because I want to use revision 4. Please advise on how I should proceed.
4 replies · 0 boosts · 738 views · Mar ’25

Making a model with MLLinearRegressor works on Sonoma, but after upgrading to 15.3.1 it no longer does anything
I was generating models with this code:

    import Foundation
    import CreateML
    import TabularData
    import CoreML

    ....

    func makeTheModel(columntopredict: String, training: DataFrame, colstouse: [String], numberofmodels: Int) -> [MLLinearRegressor] {
        var returnmodels = [MLLinearRegressor]()
        var result = 0.0
        for i in 0...numberofmodels {
            let pms = MLLinearRegressor.ModelParameters(validation: .split(strategy: .automatic))
            do {
                let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict)
                returnmodels.append(tm)
            } catch let error as NSError {
                print("Error: \(error.localizedDescription)")
            }
        }
        return returnmodels
    }

This worked absolutely fine on Sonoma, but after upgrading the OS to 15.3.1 it does absolutely nothing. I get no error messages; the code just pauses. If I watch CPU usage, as soon as it hits the line

    let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict)

the CPU usage drops to 0%. What am I doing wrong? Is there a flag I need to set somewhere in Xcode? This is on an M1 MacBook Pro. Any help would be greatly appreciated.
2 replies · 1 boost · 454 views · Mar ’25

TensorFlow Metal 1.2.0 on M2 fails to converge on common toy models
I've been trying to get some basic models to work on an M2 with tensorflow-metal 1.2 and Keras 2.15 and 2.18, and they all fail to work as expected. I'm running models copy/pasted from common tutorials, like the Jason Brownlee ML Mastery object classification tutorial using CIFAR-10. When run on the GPU I can't get any reasonable results. Under Keras 2.15 the best validation accuracy ends up around 10-15%. Under Keras 2.18, validation goes off the rails around epoch 5, with wildly low accuracy and loss values reported as "nan":

    Epoch 4/25
    782/782: 19s 24ms/step - accuracy: 0.3450 - loss: 2.8925 - val_accuracy: 0.2992 - val_loss: 1.9869
    Epoch 5/25
    782/782: 19s 24ms/step - accuracy: 0.2553 - loss: nan - val_accuracy: 0.0000e+00 - val_loss: nan

Running the same code on the CPU under Keras 2.15, using tf.config.experimental.set_visible_devices([], 'GPU'), yields a reasonable result, with validation accuracy around 75% as expected. Running the same code with Keras 2.15 on a Linux instance with just the CPU gives similar results. The tutorial can be found here: https://machinelearningmastery.com/object-recognition-convolutional-neural-networks-keras-deep-learning-library/

The only place I've deviated from the tutorial is using

    sdg = tf.keras.optimizers.legacy.SGD(learning_rate=lrate, momentum=0.9, nesterov=False)

which I did on the advice of this warning:

    WARNING:absl:At this time, the v2.11+ optimizer `tf.keras.optimizers.SGD` runs slowly on M1/M2 Macs, please use the legacy Keras optimizer instead, located at `tf.keras.optimizers.legacy.SGD`.

Is there something special I need to do to make this work? I've followed the instructions here: https://developer.apple.com/metal/tensorflow-plugin/ and I've purged the venv a few times and started from scratch, but always with similarly terrible results. Platform details:

Chip: Apple M2
Memory: 16 GB
macOS: Sequoia 15.2
Python venv: 3.11
Jupyter Lab version: 4.3.3
TensorFlow versions: 2.15, 2.18
tensorflow-metal: 1.2.0

Thanks for any assistance or advice.
8 replies · 3 boosts · 881 views · Mar ’25