Apple Intelligence


Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.

Posts under Apple Intelligence tag

95 Posts

Post

Replies

Boosts

Views

Activity

Xcode 26.4: Regressions in Intelligence features
Just installed the new Xcode 26.4 RC build (17E192) after happily using 26.3 for a few months. I'm noticing some immediate regressions in the Intelligence features:
1. Frequent loss of the OAuth token (Claude Agent). This had previously been fixed and is now back.
2. Agent "Thinking" is constrained to thought-bubble windows, which are (a) too small to read and (b) not scrollable when thinking runs beyond a few paragraphs. Yes, they are tappable when thinking is finished, but this doesn't help.
3. Due to (2), during deep thinking it's impossible to follow progress beyond the visible window, so it's impossible to know if the agent is going off the rails.
4. More general slowness to complete tasks. Not just complete them -- it seems like it takes longer to START tasks, which is really weird. It sits there thinking for longer. Same project, same model as before.
5. Every time you tap "New Chat" it presents both Claude and Codex choices, even if you're only signed into one. This turns a simple single tap into a required two taps.
Overall this feels like a frustrating setback after a very positive user experience in 26.3.
0
0
37
8h
Building Real-Time Voice Input on macOS 26 with SpeechAnalyzer + ScreenCaptureKit
We built an open-source macOS menu bar app that turns speech into text and pastes it into the active app — using SpeechAnalyzer for on-device transcription, ScreenCaptureKit + Vision for screen-aware context, and FluidAudio for speaker diarization in meeting mode. Here's what we learned shipping it on macOS 26. GitHub: github.com/Marvinngg/ambient-voice

Architecture
The app has two modes: hotkey dictation (press to talk, release to inject) and meeting recording (continuous transcription with a floating panel).

Dictation Mode
Audio capture uses AVCaptureSession (more on why below). The captured audio feeds into SpeechAnalyzer via an AsyncStream:

    let transcriber = SpeechTranscriber(
        locale: locale,
        transcriptionOptions: [],
        reportingOptions: [.volatileResults, .alternativeTranscriptions],
        attributeOptions: [.audioTimeRange, .transcriptionConfidence]
    )
    let analyzer = SpeechAnalyzer(modules: [transcriber])
    let (inputSequence, inputBuilder) = AsyncStream<AnalyzerInput>.makeStream()
    try await analyzer.start(inputSequence: inputSequence)

While recording, we capture a screenshot of the focused window using ScreenCaptureKit, run Vision OCR (VNRecognizeTextRequest), extract keywords, and inject them into SpeechAnalyzer as contextual bias:

    let context = AnalysisContext()
    context.contextualStrings[.general] = ocrKeywords
    try await analyzer.setContext(context)

This improves accuracy for technical terms and proper nouns visible on screen. If your screen shows "SpeechAnalyzer", saying it out loud is more likely to be transcribed correctly. After transcription, an optional L2 step sends the text through a local LLM (Ollama) for spoken-to-written cleanup, then CGEvent simulates Cmd+V to paste into the active app.

Meeting Mode
Meeting mode forks the same audio stream to two consumers:
- SpeechAnalyzer — real-time streaming transcription, displayed in a floating NSPanel
- FluidAudio buffer — accumulates 16kHz Float32 mono samples for batch speaker diarization after recording stops
When the user ends the meeting, FluidAudio's performCompleteDiarization() runs on the accumulated audio. We align transcription segments with speaker segments using audioTimeRange overlap matching — each transcription segment gets assigned the speaker ID with the most time overlap. Results export to Markdown.

Pitfalls We Hit on macOS 26
1. AVAudioEngine installTap doesn't fire with Bluetooth devices
We started with AVAudioEngine.inputNode.installTap() for audio capture. It worked fine with built-in mics, but the tap callback never fired with Bluetooth devices (tested with vivo TWS 4 Hi-Fi). Fix: switched to AVCaptureSession. The delegate callback captureOutput(_:didOutput:from:) fires reliably regardless of audio device. The tradeoff is you get CMSampleBuffer instead of AVAudioPCMBuffer, so you need a conversion step (see the conversion sketch after this post).
2. NSEvent addGlobalMonitorForEvents crashes
Our global hotkey listener used NSEvent.addGlobalMonitorForEvents. On macOS 26, this crashes with a bus error inside GlobalObserverHandler — it appears to be a Swift actor runtime issue. Fix: switched to CGEventTap. Works reliably, but the callback runs on a CFRunLoop context, which Swift doesn't recognize as MainActor.
3. CGEventTap callbacks aren't on MainActor
If your CGEventTap callback touches any @MainActor state, you'll get concurrency violations. The callback runs on whatever thread owns the CFRunLoop. Fix: bridge with DispatchQueue.main.async {} inside the tap callback before touching any MainActor state.
4. CGPreflightScreenCaptureAccess doesn't request permission
We used CGPreflightScreenCaptureAccess() as a guard before calling ScreenCaptureKit. If it returned false, we'd bail out. The problem: this function only checks — it never triggers macOS to add your app to the Screen Recording permission list. Chicken-and-egg: you can't get permission because you never ask for it. Fix: call CGRequestScreenCaptureAccess() at app startup. This adds your app to System Settings → Screen Recording. Then let ScreenCaptureKit calls proceed without the preflight guard — SCShareableContent will also trigger the permission prompt on first use.
5. Ad-hoc signing breaks TCC permissions on every rebuild
During development, codesign --sign - (ad-hoc) generates a different code directory hash on every build. macOS TCC tracks permissions by this hash, so every rebuild = new app identity = all permissions reset. Fix: sign with a stable certificate. If you have an Apple Development certificate, use that. The TeamIdentifier stays constant across rebuilds, so TCC permissions persist. We also discovered that launching via open WE.app (LaunchServices) instead of directly executing the binary is required — otherwise macOS attributes TCC permissions to Terminal, not your app.

Benchmarks
We ran end-to-end benchmarks on public datasets (Mac Mini M4 16GB, macOS 26):
Transcription (SpeechAnalyzer, AliMeeting Chinese):
• Near-field CER 34% (excluding outliers ~25%)
• Far-field CER 40% (single channel, no beamforming, >30% overlap)
• Processing speed 74-89x real-time
Speaker diarization (FluidAudio offline):
• AMI English, 16 meetings: avg DER 23.2% (collar=0.25s, ignoreOverlap=True)
• AliMeeting Chinese, 8 meetings: DER 48.5% (including overlap regions)
• Memory: RSS ~500MB, peak 730-930MB
Full evaluation methodology, scripts, and raw results are in the repo.

Open Source
The project is MIT licensed: github.com/Marvinngg/ambient-voice
It includes the macOS client (Swift 6.2, SPM), server-side distillation/training scripts (Python), and a complete evaluation framework with reproducible benchmarks. Feedback and contributions welcome.
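Pitfall 1 above mentions a required CMSampleBuffer → AVAudioPCMBuffer conversion without showing it. A minimal sketch of such a converter is below; the helper name is ours and the repo's actual implementation may differ, but the Core Media and AVFoundation calls are standard:

    import AVFoundation
    import CoreMedia

    // Hypothetical helper (not taken from the repo): convert the CMSampleBuffer
    // delivered by AVCaptureAudioDataOutput into an AVAudioPCMBuffer that can be
    // appended to SpeechAnalyzer's input sequence.
    func makePCMBuffer(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
        // Recover the stream format (sample rate, channel count, sample type).
        guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
              let asbdPointer = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription) else {
            return nil
        }
        var asbd = asbdPointer.pointee
        guard let format = AVAudioFormat(streamDescription: &asbd) else { return nil }

        // Allocate a PCM buffer large enough for every frame in the sample buffer.
        let frameCount = AVAudioFrameCount(CMSampleBufferGetNumSamples(sampleBuffer))
        guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else {
            return nil
        }
        pcmBuffer.frameLength = frameCount

        // Copy the audio data straight into the PCM buffer's AudioBufferList.
        let status = CMSampleBufferCopyPCMDataIntoAudioBufferList(
            sampleBuffer,
            at: 0,
            frameCount: CMItemCount(frameCount),
            into: pcmBuffer.mutableAudioBufferList
        )
        return status == noErr ? pcmBuffer : nil
    }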
0
0
160
1d
Best approach for animating a speaking avatar in a macOS/iOS SwiftUI application
I am developing a macOS application using SwiftUI (with an iOS version as well). One feature we are exploring is displaying an avatar that reads or speaks dynamically generated text produced by an AI service. The basic flow would be:
- Text generated by an AI service
- Text converted to speech using a TTS engine
- An avatar (2D or 3D) rendered in the app that animates lip movement synchronized with the speech
Ideally the avatar would render locally on the device.
Questions:
- What Apple frameworks would be most appropriate for implementing a speaking avatar? SceneKit, RealityKit, or SpriteKit (for 2D avatars)?
- Is there any recommended way to drive lip-sync animation from speech audio using Apple frameworks?
- Does AVSpeechSynthesizer expose phoneme or viseme timing information that could be used for avatar animation?
- If such timing information is not available, what is the recommended approach for synchronizing character mouth animation with speech audio on macOS/iOS?
- Are there examples of real-time character animation synchronized with speech on macOS/iOS?
Any architectural guidance or references would be greatly appreciated.
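On the AVSpeechSynthesizer question: as far as I know it does not expose phoneme or viseme timing through public API, but its delegate does report the character range about to be spoken, which is enough to drive coarse mouth animation. A minimal sketch, with the class and the mouthOpenness property being illustrative rather than any established API:

    import AVFoundation

    // Coarse lip-sync driver: nudges a (hypothetical) mouth-openness value as each
    // word range is about to be spoken, and closes the mouth when speech finishes.
    // A SceneKit/RealityKit/SpriteKit avatar would bind its jaw or blend shape to it.
    final class SpeechDrivenMouth: NSObject, AVSpeechSynthesizerDelegate {
        private let synthesizer = AVSpeechSynthesizer()
        var mouthOpenness: Float = 0   // 0 = closed, 1 = fully open

        override init() {
            super.init()
            synthesizer.delegate = self
        }

        func speak(_ text: String) {
            synthesizer.speak(AVSpeechUtterance(string: text))
        }

        // Called just before each range of the utterance is spoken.
        func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                               willSpeakRangeOfSpeechString characterRange: NSRange,
                               utterance: AVSpeechUtterance) {
            // Crude heuristic: longer words open the mouth wider.
            mouthOpenness = min(1, Float(characterRange.length) / 8)
        }

        func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                               didFinish utterance: AVSpeechUtterance) {
            mouthOpenness = 0
        }
    }

For smoother results you would interpolate between values rather than snapping, and for true viseme-level sync a TTS engine that emits phoneme events may be a better fit.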
0
0
510
1w
How can I use private AI agents in Xcode 26.3?
I work on some proprietary codebases and can only use private AI services with them (currently MiniMax M2.1 and GLM 4.7). It all works great with both Claude Code and OpenCode agents, and I'd like to leverage the new agentic capabilities that are now in Xcode 26.3. I'm not seeing any option to connect to OpenCode, and both the Anthropic and OpenAI providers require an enterprise account (which I don't have access to). Are there any options that I'm missing here?
10
5
1.2k
1w
Foundation models not detectable in Xcode simulator
I'm building an app that is built around the Foundation Models framework. My expected output is generated when testing on a real device or in an Xcode preview, but it throws a Foundation Models error when I try running it on the simulator. I'm using a MacBook Air M1 with Apple Intelligence turned on, and my simulator run destination is an iPad Pro M5 (26.0). Any solution for this? This is my submission for the SSC, so I need to make it work on the simulator iPad. Thank you👾
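A minimal availability check with the Foundation Models framework can at least surface why the model is unavailable in a given simulator or device configuration (the printed messages here are illustrative):

    import FoundationModels

    // Ask the system model why it is (or isn't) usable before creating a session.
    let model = SystemLanguageModel.default
    switch model.availability {
    case .available:
        print("Foundation model is ready")
    case .unavailable(.deviceNotEligible):
        print("This device or simulator can't run Apple Intelligence")
    case .unavailable(.appleIntelligenceNotEnabled):
        print("Apple Intelligence is turned off in Settings")
    case .unavailable(.modelNotReady):
        print("Model assets are still downloading or preparing")
    case .unavailable(let reason):
        print("Unavailable for another reason: \(reason)")
    }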
6
0
373
2w
Xcode 26.3 MCP server output missing `structuredContent`
I am using the official MCP SDK. According to the official guide, servers MUST provide structured results that conform to this schema: https://modelcontextprotocol.io/specification/draft/server/tools#output-schema
I can see the output schema defined, but the result has no structuredContent.
Current output schema:
    {
      name: "XcodeListWindows",
      title: "List Windows",
      description: "Lists the current Xcode windows and their workspace information",
      inputSchema: { type: "object", properties: {}, required: [] },
      outputSchema: {
        type: "object",
        properties: {
          message: { description: "Description of all open Xcode windows", type: "string" },
        },
        required: ["message"],
      },
    }
Current response:
    {
      "result": {
        "content": [
          { type: "text", text: "{\"message\":\"* tabIdentifier: windowtab1, workspacePath: \\xxx\\n* tabIdentifier: windowtab2, workspacePath: \\xxx\\n\"}" },
        ]
      }
    }
Expected:
    {
      "result": {
        "content": [
          { type: "text", text: "{\"message\":\"* tabIdentifier: windowtab1, workspacePath: \\xxx\\n* tabIdentifier: windowtab2, workspacePath: \\xxx\\n\"}" },
        ],
        "structuredContent": {
          "message": "* tabIdentifier: windowtab1, workspacePath: \\xxx\\n* tabIdentifier: windowtab2, workspacePath: \\xxx\\n",
        }
      }
    }
1
0
125
2w
Foundation Models support in Swift Playgrounds | SSC26
Hello. I'm building my Swift Student Challenge project, but I noticed that the FoundationModels framework isn't supported in Swift Playgrounds. My app uses ARKit, so it definitely should be tested on a real device. So, is it possible to somehow use Foundation Models in my Swift Playgrounds app? Or is it possible to ask the judges to test the app on a real device, launching it from Xcode? Your answer is really important because the functionality of my app directly depends on Foundation Models availability. Thank you!
1
0
150
3w
Xcode Simulator error
The following is an error I encounter when trying to run a Foundation Models function in the simulator:
Error Domain=FoundationModels.LanguageModelSession.GenerationError Code=-1 "(null)" UserInfo={NSMultipleUnderlyingErrorsKey=( "Error Domain=ModelManagerServices.ModelManagerError Code=1026 "(null)" UserInfo={NSMultipleUnderlyingErrorsKey=(\n)}" )}
It's a Swift playground I'm building in Xcode, and it works fine in the preview and on a real device, but since it's my submission for the Swift Student Challenge I need it to run on the simulator. I have updated my macOS to the latest version of Tahoe (26.3) and Xcode is also on the latest version. The simulator I run the playground on is also on iOS/iPadOS 26. I have set my region to US and have turned on Apple Intelligence on my Mac too. Any suggestions on how to fix the issue?👾
1
0
232
Feb ’26
Big suggestion for improvements to “Filter unknown calls”
Hi, I have a big suggestion for the next update! My suggestion is to upgrade the call filtering system:
- “Block suspicious calls”: automatically block numbers that are likely to be unwanted or spam. These calls should not ring, should not appear in the call history, and should be silently discarded.
- “Block unknown callers/hidden numbers”: instead of receiving a call from a hidden or unknown caller and seeing it appear in the missed calls list, the call should be completely blocked and not recorded at all: no notification, no trace of the call.
- “Filter numbers that are not from contacts”: if the same unknown number calls too frequently within a week and the user never answers (for example 10 times per day, or 5 times per day if considered suspicious or excessive spam in a short time), the iPhone should automatically block this number. The device would display a message such as: “This number has called too many times and has been automatically blocked for 3 days (or 1 week).” The block could also apply to the entire IP range or source associated with the unknown caller.
This would be very helpful against most unwanted calls.
0
0
71
Feb ’26
Apple Intelligence can't change Xcode build target...?!
It seems reasonable to assume that Xcode AI integration means the AI can change the build target...
- Why can Xcode Intelligence see source files but not .xcodeproj/project.pbxproj for direct edits?
- Is this a known limitation/bug in the Xcode 26.3 RC Project Context or MCP Xcode Tools?
- Is any entitlement/setting required beyond Intelligence > Xcode Tools for build-setting edits?
2
0
79
Feb ’26
Error with guardrailViolation and underlyingErrors
Hi, I am a new iOS developer trying to learn to integrate the Apple Foundation Models framework. My setup is:
- Mac M1 Pro
- macOS 26 Beta, Version 26.0 beta 3
- Apple Intelligence & Siri: On
Here is the code:

    func generate() {
        Task {
            isGenerating = true
            output = "⏳ Thinking..."
            do {
                let session = LanguageModelSession(
                    instructions: """
                    Extract time from a message.
                    Example
                    Q: Golfing at 6PM
                    A: 6PM
                    """)
                let response = try await session.respond(to: "Go to gym at 7PM")
                output = response.content
            } catch {
                output = "❌ Error: \(error)"
                print(output)
            }
            isGenerating = false
        }
    }

and I get this error:
guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Prompt may contain sensitive or unsafe content", underlyingErrors: [Asset com.apple.gm.safety_embedding_deny.all not found in Model Catalog]))
Can you help me get through this?
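Separately from the missing safety asset, it can help to catch the guardrailViolation case explicitly so it can be distinguished from other generation failures. A minimal sketch based on the error shown above (the wrapper function and the messages are illustrative):

    import FoundationModels

    // Distinguish a guardrail refusal from other FoundationModels failures.
    func generateSafely(with session: LanguageModelSession, prompt: String) async -> String {
        do {
            let response = try await session.respond(to: prompt)
            return response.content
        } catch let error as LanguageModelSession.GenerationError {
            switch error {
            case .guardrailViolation(let context):
                // The prompt (or response) tripped the safety guardrails.
                return "Refused by guardrails: \(context.debugDescription)"
            default:
                return "Generation failed: \(error)"
            }
        } catch {
            return "Unexpected error: \(error)"
        }
    }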
5
0
765
Feb ’26
Some questions about Swift Student Challenge 26
My playground may require that the device has downloaded some resources in advance, such as Apple's advanced voices or translation languages. This is not essential: it is just an incidental feature, and if the resources are not downloaded, the app will indicate that the feature is unavailable while most of the other functionality continues to work. But I want to know whether the judges' devices will have downloaded these things in advance, and if not, whether the judges will think there is a problem with my app that prevents it from working normally, which could cause my work to be rejected outright. Also, because my app uses iOS 26 APIs, it needs to run in Xcode, and the competition allows Apple Intelligence features, but it is stipulated that an app run with Xcode will be tested on the simulator. However, my app involves Image Playground and cannot run on the simulator. Does anyone have a good solution? Thank you!
1
1
270
Feb ’26
Apple Intelligence crashed/stopped working
Hi everyone, I’m currently using macOS Version 15.3 Beta (24D5034f), and I’m encountering an issue with Apple Intelligence. The image generation tools seem to work fine, but everything else shows a message saying that it’s “not available at this time.” I’ve tried restarting my Mac and double-checked my settings, but the problem persists. Is anyone else experiencing this issue on the beta version? Are there any fixes or settings I might be overlooking? Any help or insights would be greatly appreciated! Thanks in advance!
3
1
1.6k
Jan ’26
Sharing: How I Built an IPv4/IPv6 Dual-Stack Network Diagnostic Tool for iOS
Hi everyone 👋 As a network engineer and indie iOS developer, I couldn’t find a lightweight mobile tool that fully supports IPv4/IPv6 dual-stack diagnostics — so I built NetToolbox, an all-in-one utility for engineers, DevOps, and developers. Here are its core features that solve real mobile networking pain points:
- One-Click Full Diagnostics: integrates ping, traceroute, and multi-type DNS queries (A/AAAA/CNAME) — no need to switch between apps
- IPv4/IPv6 Dual-Stack Support: works seamlessly in IPv6-only networks, with the ability to test connectivity differences between dual-stack environments
- LAN Device Scanning: quickly identifies all devices on the same network segment and checks port availability
- Offline Functionality: diagnostic logic is stored locally, enabling LAN troubleshooting without an internet connection
- Lightweight Design: 5MB install size, no storage bloat, and low power consumption during operation
- Dark Mode Support: tailored for developers who work late at night
During development, I leveraged Apple Intelligence alongside Claude Code and Gemini 3 to accelerate the process, optimize iOS native networking stack adaptation and local storage logic, and significantly boost development efficiency. I’d love to hear from the community: What must-have features are missing from mobile network diagnostic tools? Do you have experience optimizing iOS workflows with Apple Intelligence? 👉 You can try the app here: https://apps.apple.com/us/app/nettoolbox-all-in-one-utility/id6757392404 Feedback is highly appreciated — I’ll keep iterating to make it better! 🚀
1
0
146
Jan ’26
Foundation Model Framework
Greetings! I was trying to get a response from LanguageModelSession, but I just keep getting the following:
Error getting response: Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides}
This occurs both on macOS 15.5 running the new Xcode beta with an iOS 26 simulator, and on macOS 26 with the Xcode beta. The simulators are both iPhone 16 Pro. I was wondering if anyone had any advice?
19
3
3.1k
Jan ’26
App architectures with SwiftUI and Apple Intelligence
I read an article that said that, for using Apple Intelligence as an integrated part of an app instead of tacked on, MVVM and VIPER are not good fits. Would that be accurate? What other architectures could work better? (The author didn’t suggest any replacements.)
0
0
54
4w
Warming Up Apple Intelligence
What's the code to warm it up once? I saw this in a developer video but cannot find it. The goal is to prevent a cold start within an application. Thank you in advance!
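If the video was about the Foundation Models framework, the call being remembered is most likely LanguageModelSession's prewarm(), which asks the system to load model resources before the first request. A minimal sketch (the instructions string is illustrative):

    import FoundationModels

    // Create the session early and ask the system to load model resources now,
    // so the first respond(to:) call doesn't pay the cold-start cost.
    let session = LanguageModelSession(instructions: "You extract short summaries from notes.")
    session.prewarm()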
1
0
154
Feb ’26
Xcode 26.2 Not Remembering Coding Intelligence Model Provider
After updating to Tahoe 26.2 and Xcode 26.2, Xcode seems to have forgotten the Model Provider I had configured. I create a new Model Provider and it works fine, until I exit Xcode. When I open Xcode again, my Model Provider is gone. It all worked fine before I updated macOS and Xcode.
20
15
1.8k
Feb ’26
Xcode Intelligence model provider cannot be saved
I set up the model provider in Xcode settings and then tried a request successfully, but the model provider settings are not saved if I close and reopen Xcode. The Xcode version is 26.2.
0
0
59
Jan ’26