Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

All subtopics
Posts under Media Technologies topic

Post

Replies

Boosts

Views

Created

Camera launched via Camera Control is terminated with “AVCaptureEventInteraction not installed” when viewing/editing photos
I’m seeing a reproducible system-level Camera crash/termination on iPhone Air running iOS 26.4.2. Steps to reproduce: Press Camera Control to launch the Camera app. Tap the lower-left thumbnail to enter the recent photo view. Browse photos, or tap Edit and start cropping a photo. The Camera/Photos flow unexpectedly exits and returns to the Home Screen or widget view. Additional detail: The issue can happen whether or not a new photo is taken after launching Camera with Camera Control. In other words, using Camera Control as a shortcut into Camera, then tapping the lower-left thumbnail to browse photos, can trigger the issue. Sometimes it happens while only browsing photos, without entering Edit. Expected result: The photo viewer/editor should stay open and allow normal browsing or cropping. Actual result: The flow exits unexpectedly. Mac Console evidence: Around 2026-05-12 21:53:59-21:54:00, Console showed SpringBoard/RunningBoard terminating com.apple.camera. Relevant log excerpt: Capture Application Requirements Unmet: "AVCaptureEventInteraction not installed" reportType: CrashLog ReportCrash Parsing corpse data for pid 94087 com.apple.camera: Foreground: false Storage is sufficient. Restart/reset-style support steps have already been tried and did not resolve the issue. This appears specific to the Camera Control launch path, not normal Photos app browsing. Has anyone else seen this on iOS 26.x, or is this a known Camera Control / AVCaptureEventInteraction regression? Already Filed as FB22766094.
0
0
93
20h
AVCaptureSession runtime error -11800 / 'what' on startRunning() with audio input — what's holding the HAL?
AVCaptureSession.startRunning() triggers AVCaptureSessionRuntimeErrorNotification with AVError.unknown (-11800), underlying OSStatus 2003329396 → fourCC 'what', every cold launch, but only when an audio AVCaptureDeviceInput is attached. Removing only the audio input makes the error disappear. Same code in a fresh project records audio fine — bug only appears in this app's binary. AVAudioApplication.shared.recordPermission == .granted. Info.plist has NSMicrophoneUsageDescription. No interruption notifications fire. Test device: iPhone 16 Pro, iOS 26.4.2. iOS deployment target 17.1. Minimal reproducer import AVFoundation let session = AVCaptureSession() session.beginConfiguration() let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)! session.addInput(try AVCaptureDeviceInput(device: camera)) // Removing ONLY this line makes the error disappear: let mic = AVCaptureDevice.default(for: .audio)! session.addInput(try AVCaptureDeviceInput(device: mic)) session.addOutput(AVCaptureMovieFileOutput()) session.addOutput(AVCapturePhotoOutput()) session.commitConfiguration() NotificationCenter.default.addObserver( forName: .AVCaptureSessionRuntimeError, object: session, queue: nil ) { print($0.userInfo ?? [:]) } session.startRunning() // -11800 / 'what' fires within ~2 sec Observed state at error time AVError.unknown (-11800) underlyingError = NSError(NSOSStatusErrorDomain, 2003329396) userInfo[AVErrorFourCharCode] = 'what' captureSession.isRunning = false ← never came up captureSession.isInterrupted = false captureSession.preset = .high captureSession.inputs = [Back Triple Camera, iPhone Microphone] AVAudioSession.sharedInstance(): category = .playAndRecord mode = .videoRecording sampleRate = 48000.0 isInputAvailable = true isOtherAudioPlaying = false availableInputs = [MicrophoneBuiltIn] (no BT/Continuity/AirPods) currentRoute.inputs = [] ← EMPTY currentRoute.outputs = [Speaker|Speaker] 2003329396 = 0x77686174 = 'what'. From a few SO threads this maps to AURemoteIO::StartIO returning a HAL-bring-up failure. The smoking gun: currentRoute.inputs is empty even though availableInputs contains the built-in mic, isInputAvailable is true, the category is .playAndRecord, and isOtherAudioPlaying is false. The HAL never routes the mic into the session, then 'what' follows. Nothing observable from AVAudioSession indicates a competing client. Environment / SDKs linked Firebase (SPM: Crashlytics, Performance, Messaging, Analytics, AppCheck, RemoteConfig, DynamicLinks), FBSDK, Kingfisher, MetalPetal. Multiple Google ad mediation pods present, but their audio session takeover is already disabled (audioVideoManager.isAudioSessionApplicationManaged = true, IMSdk.shouldAutoManageAVAudioSession(false)). What I've ruled out (all still produce 'what') Audio session config: .playAndRecord/.videoRecording, .playAndRecord/.default, .record/.measurement, .record/.default. With/without .defaultToSpeaker, .allowBluetooth, .allowBluetoothA2DP, .mixWithOthers. setActive(true) before vs. after attaching audio input. setPreferredInput(builtInMic) (verified accepted). 200ms Thread.sleep between setActive(true) and startRunning(). Setting usesApplicationAudioSession = false swaps the fourCC to '!rec' but produces the same outcome. Topology: sessionPreset = .high / .hd1920x1080 / .hd1280x720 / .medium. Camera = .builtInTripleCamera / .builtInDualWideCamera / .builtInWideAngleCamera. AVCam-style always-attached graph. Setting sessionPreset before vs. after adding inputs. Threading: All session mutations on a single dedicated DispatchQueue (vs. Swift actor). 1× and 2× full stopRunning()+startRunning() recovery cycles ("do it twice" pattern) — both re-fail with 'what'. SDK takeover prevention: GoogleMobileAdsMediation pods (Vungle, Mintegral, Pangle, Unity, InMobi), Google-Mobile-Ads-SDK, MediaPipeTasksVision removed via full pod uninstall + clean build — 'what' persists. Notifications during the failure window: 3 × AVAudioSession.routeChangeNotification reason categoryChange before the error fires, even though category stays .playAndRecord/.videoRecording. Disabling automaticallyConfiguresApplicationAudioSession drops this to 1, but the runtime error still fires. No AVAudioSession.interruptionNotification. No AVCaptureSessionWasInterruptedNotification. Symbol audit otool -L and nm of the bundle confirm none of the linked frameworks reference AVAudioRecorder, AudioComponentInstanceNew, AURemoteIO, or AudioUnitInitialize in their symbol tables. Only the app's own files reference any audio API. Yet adding AVCaptureDeviceInput(.audio) reproduces 100% in this binary and 0% in a fresh project. My questions Who is most likely holding the audio HAL in a process where no linked framework references the AudioUnit / HAL APIs directly? Are there framework load-time audio initializations that don't show up in symbol tables (e.g., dynamic dlopen, CFBundleLoadExecutable) that could grab the HAL? Is there an os_log subsystem / category that surfaces the underlying AURemoteIO::StartIO failure reason at runtime? com.apple.coreaudio shows 'what' but not the originating cause. currentRoute.inputs is empty at error time even though availableInputs = [MicrophoneBuiltIn], isInputAvailable = true, and the category is .playAndRecord. What does an empty input route under those conditions imply, and what other system-level holders could be preventing the HAL from routing the mic in? Has anyone seen 'what' resolve with a device reboot, an iOS update, or by removing a specific framework? Happy to share a sysdiagnose. Thanks!
1
0
263
3d
RotationCoordinator returns angles 90 degrees lower on iPhone 17 Pro front camera — clarification on contract with AVSampleBufferDisplayLayer
Hi AVFoundation team, I'm seeing a uniform 90° offset in AVCaptureDevice.RotationCoordinator's reported angles between iPhone 17 Pro and iPhone 14 Pro using the front-facing .builtInWideAngleCamera (Center Stage on 17 Pro), and I'd like to confirm whether this is by design and what the recommended consumption pattern is when the rendering surface is an AVSampleBufferDisplayLayer rather than an AVCaptureVideoPreviewLayer. Here is the github repo of sample project. Setup Devices: iPhone 14 Pro (iOS 26.5) and iPhone 17 Pro (iOS 26.4.2) Camera: front, AVCaptureDeviceTypeBuiltInWideAngleCamera Active format: 1920×1080 Three RotationCoordinator instances are created on the same AVCaptureDevice, varying only the previewLayer: argument: - previewLayer: nil - previewLayer: AVSampleBufferDisplayLayer (the surface receiving frames from AVCaptureVideoDataOutput) - previewLayer: AVCaptureVideoPreviewLayer (with .session = captureSession, not displayed) Each instance is KVO-observed for videoRotationAngleForHorizonLevelPreview and videoRotationAngleForHorizonLevelCapture. Observed angles Device / Orientation: 14 Pro · Portrait (interface=1) RC[nil] prev / cap: 0° / 90° RC[AVSampleBufferLayer] prev / cap: 90° / 90° RC[AVCaptureVideoPreviewLayer] prev / cap: 0° / 90° ──────────────────────────────────────── Device / Orientation: 14 Pro · LandscapeRight (interface=3) RC[nil] prev / cap: 0° / 180° RC[AVSampleBufferLayer] prev / cap: 180° / 180° RC[AVCaptureVideoPreviewLayer] prev / cap: 0° / 180° ──────────────────────────────────────── Device / Orientation: 17 Pro · Portrait (interface=1) RC[nil] prev / cap: 0° / 0° RC[AVSampleBufferLayer] prev / cap: 0° / 0° RC[AVCaptureVideoPreviewLayer] prev / cap: 0° / 0° ──────────────────────────────────────── Device / Orientation: 17 Pro · LandscapeRight (interface=3) RC[nil] prev / cap: 0° / 90° RC[AVSampleBufferLayer] prev / cap: 90° / 90° RC[AVCaptureVideoPreviewLayer] prev / cap: 0° / 90° The −90° offset on 17 Pro is uniform: it appears in every RC variant, in both the preview-angle and the capture-angle properties, at every orientation tested. It is not specific to the previewLayer: argument.
1
0
244
3d
Electron app + Apple Music playback: queue works, playback does not start. Looking for guidance.
Hi everyone. I’m building a macOS-first desktop app where music drives the app's behavior loop. The app is currently an Electron prototype. The blocker: we’re testing Apple Music inside an Electron app. MusicKit JS authorization works, catalog search works, and setting the queue works, but playback does not actually start in Electron. What we tried: Created Apple Developer / MusicKit credentials. Generated Apple Music developer tokens successfully. Retrieved a Music User Token through MusicKit JS. Confirmed Apple Music API calls work. Confirmed /v1/test and /me/storefront return 200 OK. Built a local HTTP auth/playback window inside Electron instead of using file://. Tested music.setQueue() with both: { song: songId } { url: catalogUrl } In Electron, the queue loads correctly: queueEmpty=false queueLength=1 volume=1 playbackRate=1 But after music.play(), playbackTime stays at 0 and no audio plays. Then we ran the same MusicKit playback test in normal Chrome using the same token, same local origin, same catalog track, and same queue descriptor. Chrome played successfully and playbackTime advanced. We also checked Electron directly and found navigator.requestMediaKeySystemAccess is missing, so our current theory is that stock Electron lacks the protected media / EME support Apple Music web playback needs. Important: we are not trying to bypass DRM or extract audio. We just want a legitimate way for a user-authorized macOS app to control Apple Music playback or observe playback state. What we’re considering next: Use the native macOS Music app as the playback engine and control it from our app. Test AppleScript / Automation permissions for play, pause, next, current track, player state, etc. Later, possibly build a native Swift helper using Apple Music / MediaPlayer APIs and communicate with Electron over IPC. Avoid relying on Electron MusicKit JS playback if this is a known dead end. Questions: Has anyone successfully made Apple Music / MusicKit JS playback work inside Electron? Is the missing EME/protected-media layer the expected blocker here? Is controlling the native macOS Music app the more realistic path? Any gotchas with AppleScript, MusicKit native APIs, or Electron + native helper architecture for this use case? Any pointers from people who have dealt with Electron + Apple Music / protected media would be appreciated.
0
0
39
3d
Radiometric interpretation of Apple ProRAW and Bayer RAW access via AVFoundation
I am working on a computational photography research project involving multi-exposure HDR reconstruction using Bayer RAW and Apple ProRAW captures. I would like to clarify the radiometric interpretation of Apple ProRAW and the availability of Bayer RAW capture through AVFoundation. My questions are: 1.On current iPhone Pro devices, is it possible for third-party apps to capture and export true Bayer-pattern RAW DNG files through AVFoundation, rather than Apple ProRAW linear DNG files? If so, which availableRawPhotoPixelFormatTypes correspond to Bayer RAW, and what device or format restrictions apply? 2.Apple ProRAW appears to be demosaiced and computationally processed, and may include multi-frame fusion. Is the decoded ProRAW image intended to be radiometrically linear and scene-referred? 3.For a bracketed ProRAW sequence captured with fixed ISO, white balance, lens, and focus, but different exposure times, can one assume that the decoded linear pixel values Y_i(p) satisfy an exposure-proportional model in non-saturated regions, such as Y_i(p) ≈ t_i R(p), across brackets? This question is about radiometric consistency for algorithmic use, not about visual editing or tone mapping. Thank you for your help.
0
0
144
4d
Radiometric interpretation of Apple ProRAW and Bayer RAW access via AVFoundation
I am working on a computational photography research project involving multi-exposure HDR reconstruction using Bayer RAW and Apple ProRAW captures. I would like to clarify the radiometric interpretation of Apple ProRAW and the availability of Bayer RAW capture through AVFoundation. My questions are: On current iPhone Pro devices, is it possible for third-party apps to capture and export true Bayer-pattern RAW DNG files through AVFoundation, rather than Apple ProRAW linear DNG files? If so, which availableRawPhotoPixelFormatTypes correspond to Bayer RAW, and what device or format restrictions apply? Apple ProRAW appears to be demosaiced and computationally processed, and may include multi-frame fusion. Is the decoded ProRAW image intended to be radiometrically linear and scene-referred? For a bracketed ProRAW sequence captured with fixed ISO, white balance, lens, and focus, but different exposure times, can one assume that the decoded linear pixel values Y_i(p) satisfy an exposure-proportional model in non-saturated regions, such as Y_i(p) ≈ t_i R(p), across brackets? This question is about radiometric consistency for algorithmic use, not about visual editing or tone mapping. Thank you for your help.
0
0
145
4d
How to Monitor Any USB Audio or Video Device on macOS
USB cameras, microphones, HDMI capture cards, and audio interfaces are supposed to "just work" on macOS. In reality, it's often difficult to quickly access or monitor them without opening large and complicated software. Sometimes you simply want to see whether a USB camera is active. Sometimes you want to check an HDMI source connected through a capture card. And in other cases, you may want to use a Mac mini without a dedicated monitor by viewing its HDMI output through a USB capture device directly on another Mac. macOS supports many modern USB AV devices out of the box, but it surprisingly lacks a simple built-in utility for live monitoring and recording. Most users end up using oversized streaming or editing applications just to preview a video signal or monitor audio input. That becomes especially noticeable with: USB webcams HDMI capture adapters USB microphones audio interfaces secondary computers headless Mac mini setups A lightweight monitor utility is often much more practical when you only need real-time access to a device, want to record a stream, or quickly switch between multiple AV inputs. That's one of the reasons I built AV Monitor Pro  -  a native macOS app designed for monitoring and recording connected audio/video devices in real time. It can preview USB cameras, capture cards, microphones, and HDMI sources with minimal setup, and it's especially useful for workflows like running a Mac mini without a monitor, monitoring external devices, or recording live AV input directly on macOS.
0
0
166
5d
AudioHardwareCreateProcessTap delivers all-zero buffers while system audio is audible
Summary Using AudioHardwareCreateProcessTap + AudioHardwareCreateAggregateDevice for system audio capture. During long sessions, the AudioDeviceIOProc callback continues firing normally but every PCM sample is exactly 0.0f — while the system is producing audible output. Environment Field Value macOS 26.5 Beta Hardware MacBook Air (M2) API AudioHardwareCreateProcessTap + AudioHardwareCreateAggregateDevice Tap CATapDescription, processes = [], .unmuted, private Format 48,000 Hz, Float32, interleaved stereo Aggregate anchor kAudioAggregateDeviceMainSubDeviceKey = current default output UID Observed behavior After running normally for several minutes, the stream transitions into an all-zero state: AudioDeviceIOProc continues to fire at expected cadence Frame count, timestamps (mHostTime, mSampleTime), and mDataByteSize all look normal AudioBufferList pointers are valid Every sample in every buffer is exactly 0.0f Other apps are still producing audible output through the same output device The condition may self-recover or persist until the session is stopped Confirmed via RMS logging both inside the IOProc and after the ring buffer consumer — data is zero on delivery, not introduced downstream. Example: 51-minute session on MacBook Air M2 Segment 1 (~7 min): Three all-zero periods: 60 s, 53 s, 141 s. Real PCM briefly returned between them. Segment 2 (~44 min): Two all-zero periods: 16 min 3 s, 3 min 8 s. IOProc cadence, timestamp deltas, default output UID, and kAudioDevicePropertyDeviceIsRunningSomewhere all remained normal throughout. What I have ruled out Actual silence: User was in an active video call and could hear participants through the output device. Default output device change: Monitored kAudioHardwarePropertyDefaultOutputDevice — no change during affected periods. IOProc stall: Heartbeat and kAudioDevicePropertyDeviceIsRunningSomewhere remained normal. Aggregate device destroyed: AudioObjectGetPropertyData on the aggregate UID continued returning the expected device. Tap descriptor misconfiguration: The same tap produces valid PCM earlier in the same session, so this is not a startup-time issue. Why detection is hard All-zero buffers from a broken tap are indistinguishable from legitimate silence (muted participant, waiting room, paused media). kAudioProcessPropertyIsRunningOutput reports whether a process has active output IO, not whether it is contributing non-zero samples — a muted Zoom call still reports true. Possible correlations Sample-rate renegotiation on the output device (44.1 kHz ↔ 48 kHz) when another app changes output Bluetooth device state changes (AirPods sleep/wake) where UID stays the same MacBook Air more frequently affected than MacBook Pro Always occurs after extended uptime — first few minutes are consistently clean Current workaround Full teardown and rebuild restores real PCM. Restarting the IOProc alone or recreating only the aggregate device is not reliable — both the Process Tap and Aggregate Device must be destroyed and recreated. 1. AudioDeviceStop 2. AudioDeviceDestroyIOProcID 3. AudioHardwareDestroyAggregateDevice 4. AudioHardwareDestroyProcessTap 5. AudioHardwareCreateProcessTap 6. AudioHardwareCreateAggregateDevice 7. Create + start new IOProc Applying this automatically is risky because it cannot be reliably distinguished from legitimate silence. Questions Expected failure mode? Can a Process Tap continue delivering zero-filled buffers while the system output is audible? Is this expected under certain device or routing conditions? Detection signal? Is there any HAL property, notification, or diagnostic counter that distinguishes "sources are genuinely silent" from "the tap data path has stopped receiving the real mix"? Targeted recovery? Is there a supported way to re-anchor or reset the tap data path without destroying and recreating both objects? Full rebuild as intended workaround? If so, it would help to confirm this so developers can converge on a consistent approach. Mixer activity signal? kAudioProcessPropertyIsRunningOutput reflects IO registration, not sample contribution. Is there any AudioProcess property that indicates a process is currently delivering non-zero audio to the system mixer?
0
0
223
5d
CarPlay HID transport buttons remap to call-control during continuous mic capture (no opt-out API)
Hello, I am developing Uniq Intercom, a voice-only group communication app for motorcyclists (always-on intercom over WebRTC, used continuously for multi-hour rides). I am seeking guidance on an iOS audio session and CarPlay HID interaction I have not been able to resolve through documented APIs. Problem: As soon as my app activates the microphone (yellow recording indicator visible), iOS appears to classify the app as a real-time communication participant and CarPlay re-routes the steering-wheel / handlebar HID transport buttons (left / right / ok) from the media-control role to the call-control role (answer/decline). Because I do not register a CallKit / LiveCommunicationKit call (the session is a continuous group voice channel, not a discrete telephony call), there is no call object for those buttons to act upon — they effectively become inert. Why this matters: Motorcyclists rely on the intercom for 4–6 hour rides. CarPlay is now built into a growing number of modern motorcycles and with aftermarket display units virtually any bike, and any rider who uses any voice communication platform alongside it — Uniq Intercom, WhatsApp Call currently runs into this same handlebar button remap. With the buttons inert, the rider's only remaining option is to reach for the motorcycle's touchscreen to skip a track or change navigation — this is unsafe. The exact same remap behavior occurs during a real WhatsApp or Phone call — but for those the remap is correct (answer/decline maps to a real call). For continuous voice apps without a CallKit-style discrete call, no equivalent path exists today. As both an iOS developer and a motorcyclist, I would very much like to see this resolved — solving it would meaningfully improve safety for every rider using an iPhone with CarPlay. Configurations I have tested (all reproduce the symptom on iOS 18+ / 26 with wireless CarPlay): AVAudioSession.Category.playAndRecord + .voiceChat mode + various option combinations (duckOthers, mixWithOthers, allowBluetoothHFP, allowBluetoothA2DP, defaultToSpeaker) Same category with .videoChat mode (which @livekit/react-native defaults to) Same category with .default mode (re-applied after setAudioModeAsync to defeat any framework override) — confirmed Mode = Default for ~2 s window in audiomxd log before WebRTC's setActive cycle returned mode to .voiceChat. Buttons remained remapped during the .default window. Disabling MPRemoteCommandCenter and clearing MPNowPlayingInfoCenter.default().nowPlayingInfo JS-side override of WebRTC's global RTCAudioSessionConfiguration via @livekit/react-native's AudioSession.setAppleAudioConfiguration({audioMode: 'default'}) bridge, applied both before connect and after setAudioModeAsync to defeat library overrides In every case the audiomxd system log confirms our session goes active (Mode = VoiceChat or Default, Recording = YES), and CarPlay HID buttons are immediately remapped to call-control. The middle "OK" button remains functional because it is not part of the call-control mapping — confirming the buttons are not blocked, only re-purposed. The remap occurs the instant the iOS recording indicator appears, regardless of audio session mode. This led me to conclude the trigger is not audio session mode but the combination of microphone permission + active session + (likely) the AUVoiceIO unit instantiated by WebRTC. I cannot find a public API path to suppress this classification while maintaining the always-on continuous voice channel. My questions: Is there an entitlement or API that allows an app with active microphone capture to declare itself as a non-call media participant, keeping CarPlay HID transport buttons in the media role? Is AVAudioSession.setPrefersEchoCancelledInput(_:) (iOS 18+) the intended path for retaining platform AEC under .default mode without the focus-engine "communication priority" marking? Documentation is sparse on its CarPlay arbitration implications. Does the PushToTalk framework affect HID arbitration differently from playAndRecord + voiceChat? Our continuous-channel UX does not fit the PTT transmit-on-press model, but understanding the contrast would help. If no current API exists, is this something the iOS Audio team would consider for future SDKs? Solving this would meaningfully improve safety for motorcycle and adventure-sport users on iOS. Thank you for your time and any guidance you can offer. — Emre Erkaya / Uniq Intercom
1
0
126
6d
PHPickerConfiguration.preselectedAssetIdentifiers not work
let authStatus = PHPhotoLibrary.authorizationStatus(for: .readWrite) let fetchResult = PHAsset.fetchAssets(withLocalIdentifiers: selectedAssetIDs, options: nil) print("[AlbumCreation] authStatus=\(authStatus.rawValue) IDs=\(selectedAssetIDs.count) PHAsset匹配=\(fetchResult.count)") // result is: [AlbumCreation] authStatus=3 IDs=3 PHAsset匹配=3 var config = PHPickerConfiguration(photoLibrary: .shared()) config.preselectedAssetIdentifiers = selectedAssetIDs config.selectionLimit = 0 let picker = PHPickerViewController(configuration: config) picker.delegate = self present(picker, animated: true)
1
0
170
1w
Mac (Designed for iPad) cannot access microphone
I have an application that is a VOIP application of sorts that needs access to the microphone. I am using the Mac (Designed for iPad) support to not have to do huge amounts of conditional building and support for all the many iOS specific things my app includes. I never get prompted to allow microphone permissions and I never see my app name appear in Privacy & Security -> Microphone permissions setup. So is it that Mac is just a dead end for any form of an application that needs a microphone and is running under Mac (Designed for iPad) compatibility mode? Why doesn't TCC have some mechanism to notice and grant access to mic use?
3
0
432
1w
MusicKit playback completely broken after Apple Music “What’s New?” update screen until native app is opened
I’m developing a third-party Apple Music streaming app using MusicKit (ApplicationMusicPlayer + catalog requests). Issue: Whenever Apple releases an Apple Music update that shows the “What’s New?” onboarding/modal screen in the native Apple Music app, MusicKit in our app completely breaks for all users. Attempts to play anything (queue, prepareToPlay, etc.) fail silently or with service-related errors. Playback and most MusicKit operations remain broken until the user opens the native Apple Music app, dismisses the “What’s New?” screen, and returns to our app. After that single native interaction (we deliberately stopped users from going any further within Apple Music to verify this), everything works perfectly again. Reproduction Steps: Apple Music receives an update with “What’s New?” screen. User launches our third-party app and attempts playback. MusicKit fails. User opens Apple Music → dismisses modal → returns to our app. MusicKit works again. Expected Behavior: Third-party MusicKit apps should not become non-functional because the native Apple Music app has a pending onboarding screen. Shared backend services (account readiness, tokens, subscription state, etc.) should initialize independently. Environment: iOS 26.4.2 Devices verified to be affected: iPhone 13 Pro iPhone XR iPhone 15 Workarounds attempted: Re-requesting MusicAuthorization Recreating ApplicationMusicPlayer Stopping/re-queuing Background/foreground app None resolve it without the native Apple Music interaction. This appears to be a recurring integration fragility with shared Apple Music services. Has anyone else seen this? Any recommended recovery path or API to force service initialization? Thanks!
1
2
215
1w
AVMutableComposition audio silently drops on iOS 26 when streaming over HTTP/2 (FB22696516)
We've discovered a regression in iOS 26 where AVMutableComposition silently drops audio when the source asset is streamed over HTTP/2. The same file served over HTTP/1.1 plays audio correctly through the same composition code. Direct AVPlayer playback (without composition) works fine on HTTP/2. This did not occur on iOS 18.x. It happens on physical devices only. It does not reproduce on a simulator or on macOS. Tested conditions (same MP4 file, different CDNs): CloudFront (HTTP/2) + Composition → ❌ Audio silent Cloudflare (HTTP/2) + Composition → ❌ Audio silent Akamai (HTTP/1.1) + Composition → ✅ Audio works Apple TS (HTTP/1.1) + Composition → ✅ Audio works Downloaded locally, then composed → ✅ Audio works Direct playback, no composition (HTTP/2) → ✅ Audio works The CloudFront and Akamai URLs serve the identical file — same S3 object, different CDN edge. CDN vendor doesn't matter; any HTTP/2 source triggers it. Minimal reproduction: let asset = AVURLAsset(url: http2URL) let videoTrack = try await asset.loadTracks(withMediaType: .video).first! let audioTrack = try await asset.loadTracks(withMediaType: .audio).first! let duration = try await asset.load(.duration) let composition = AVMutableComposition() let fullRange = CMTimeRange(start: .zero, end: duration) let compVideo = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)! try compVideo.insertTimeRange(fullRange, of: videoTrack, at: .zero) let compAudio = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)! try compAudio.insertTimeRange(fullRange, of: audioTrack, at: .zero) let item = AVPlayerItem(asset: composition.copy() as! AVComposition) player.replaceCurrentItem(with: item) player.play() // Video plays, audio goes silent after a while Playing the same asset directly works fine: player.replaceCurrentItem(with: AVPlayerItem(asset: asset)) player.play() // Both video and audio work Filed as FB22696516 Sample project: https://github.com/karlingen/AVCompositionBug
2
9
223
1w
AVAudioEngineConfigurationChangeNotification received while engine is running
The documentation for AVAudioEngineConfigurationChangeNotification states When the audio engine’s I/O unit observes a change to the audio input or output hardware’s channel count or sample rate, the audio engine stops, uninitializes itself, and issues this notification. A user of my framework has reported a crash during notification processing on iOS 26.4 when the main mixer node is disconnected from the output node in order to reestablish the connection with a different format. The failing precondition is com.apple.coreaudio.avfaudio: required condition is false: !IsRunning(). The report was observed on iPhone 16 / iOS 26.4.2, ARM64, TestFlight build. The backtrace contains: [Last Exception Backtrace] 3 AVFAudio AVAudioEngineGraph::_DisconnectInput AVAudioEngineGraph.mm:2728 4 AVFAudio -[AVAudioEngine disconnectNodeInput:bus:] AVAudioEngine.mm:155 5 SFB sfb::AudioPlayer::handleAudioEngineConfigurationChange AudioPlayer.mm:2247 [Thread 18 Crashed] 9 SFB sfb::AudioPlayer::handleAudioEngineConfigurationChange AudioPlayer.mm:2212 … 14 AVFAudio IOUnitConfigurationChanged Has the behavior for AVAudioEngineConfigurationChangeNotification changed in iOS 26.4? It's simple enough to call [engine_ stop] in the notification handler but the documentation states this shouldn't be necessary. I've not observed a similar crash on previous iOS versions.
0
1
143
1w
VNDetectTrajectoriesRequest not "seeing" ball in video
I am attempting to write an app which captures the flight of a ball from the iPhone's video preview, but I need some help. I am using the following code: request = VNDetectTrajectoriesRequest(frameAnalysisSpacing: frameCnt, trajectoryLength: trajLength, completionHandler: completionHandler)` to initiate a request to capture a "ball" from a videoPreview. In the "completionHandler" I use: guard let observations = request.results as? [VNTrajectoryObservation] else { //print("observations not set up#######") return } to capture observations. In the video capture setup I am using captureSession!.sessionPreset = .hd1920x1080 In the AVCaptureVideoDataOutputSampleBufferDelegate, I am using trajectoryQueue.async { [self] in do { try sequenceHandler.perform([request], on: sampleBuffer, orientation: .right) } catch { print("VNSequenceRequestHandler perform error: \(error)") } } I have also tried using VNImageRequestHandler to "capture" observations in the Delegate. A ball is "seen" only if the "ball" is rolling on the ground. If the ball in "flying" or "bouncing" no "observations" are provided. I have tried different FrameCounts & trajectory lengths with no effect. I am now developing the app primarily using an iPhone 14Pro running iOS 26.3.1. It should be noted that I started development using an old iPhone 6plus running iOS 15.7 with captureSession!.sessionPreset = .vga640x480. and I did get some good results. If I try the VGA resolution on the iPhone 14pro, I still see no ball flight. The basis for my app is software from 5 years ago, so I'm hoping that there has been some development on ball tracking since then. Thanks in advance for any help/suggestions.
0
0
284
1w
HLS Tools - hlsreport critical error cause
Hi, I'm currently experiencing issues with HLS streams created by FFmpeg running on Safari. When I pass the stream to the mediastreamvalidator tool and then run hlsreport on the output, I get a critical error reported: Media Entry discontinuity value does not match previous playlist for MEDIA-SEQUENCE 1 If I let the stream finish (it's a live stream from an IoT device) and then perform the stream validation again I no longer receive the critical error. My assumption is that this critical error is contributing to the HLS stall on iOS. I have also noticed that if I let the stream continue and then re-load the video control in Safari the stream starts Is there a resource with explanations or remediation paths relevant to the possible output of the hlsreport? My m3u8 output looks like this (I have redacted the server host) #EXTM3U #EXT-X-VERSION:6 #EXT-X-TARGETDURATION:2 #EXT-X-MEDIA-SEQUENCE:1 #EXT-X-PLAYLIST-TYPE:EVENT #EXT-X-INDEPENDENT-SEGMENTS #EXT-X-DISCONTINUITY #EXTINF:2.000000, https://redacted.com/segment-00001.ts #EXTINF:2.000011, https://redacted.com/segment-00002.ts #EXTINF:2.000011, https://redacted.com/segment-00003.ts #EXTINF:2.000011, https://redacted.com/segment-00004.ts #EXTINF:2.000011, #EXT-X-ENDLIST Thanks for any advice or guidance possible - if I can provide isolated code snippets I will do. Andy
1
0
607
1w
Working with kCVPixelFormatType_96VersatileBayerPacked12
Whilst AVCaptureSession is setup to capture ProRes RAW video, is it possible to get video pixel data which can read and processed, such as using CIImage(cvPixelBuffer: ) AVCaptureVideoDataOutput outputs ProRes RAW in kCVPixelFormatType_96VersatileBayerPacked12 pixel format. Is there a provided way to debayer this pixel format into something more usable?
0
0
102
1w
External UVC controls beyond resolution and frame rate
I want to clarify iPadOS AVFoundation behavior for external UVC cameras. Specifically: • Does iPadOS support external UVC controls beyond resolution and frame rate? • Or is support effectively limited to those two in practice (for non-Apple external UVC cameras)? • If other controls are supported (exposure, focus, white balance, zoom, etc.), what is the expected criteria for them to appear via AVCapture​Device? Runtime capability output from our external UVC camera: === Capabilities for VCI-AR0822-C === DeviceType: AVCaptureDeviceTypeExternal, position: 0, external: true UniqueID: 00000000-0020-0000-3407-000008220000 Active format media subtype: 420v Exposure modes supported: none Current exposure mode: locked Manual exposure (.custom) available: false Current exposure duration: nan s Current ISO: 0.0 Exposure target bias supported range: -8.0 ... 8.0 Current exposure target bias: 0.0 Focus modes supported: none Current focus mode: locked White balance modes supported: none Current white balance mode: locked Torch available: false, torch mode: 0, torch active: false Format[0]: 640x480, PF: 420v FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[1]: 640x480, PF: 420f FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[2]: 1280x720, PF: 420v FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[3]: 1280x720, PF: 420f FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[4]: 1920x1080, PF: 420v <-- ACTIVE FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[5]: 1920x1080, PF: 420f FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[6]: 2560x1440, PF: 420v FPS Range: 15.0 - 30.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[7]: 2560x1440, PF: 420f FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[8]: 3840x2160, PF: 420v FPS Range: 8.0 - 15.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[9]: 3840x2160, PF: 420f FPS Range: 15.0 - 30.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s =============================== We are looking for general platform guidance (not only device-specific debugging), especially whether external UVC control support on iPadOS is expected beyond resolution/fps and how to interpret “none/0.0” capability outputs. Thank you.
0
0
218
2w
Camera launched via Camera Control is terminated with “AVCaptureEventInteraction not installed” when viewing/editing photos
I’m seeing a reproducible system-level Camera crash/termination on iPhone Air running iOS 26.4.2. Steps to reproduce: Press Camera Control to launch the Camera app. Tap the lower-left thumbnail to enter the recent photo view. Browse photos, or tap Edit and start cropping a photo. The Camera/Photos flow unexpectedly exits and returns to the Home Screen or widget view. Additional detail: The issue can happen whether or not a new photo is taken after launching Camera with Camera Control. In other words, using Camera Control as a shortcut into Camera, then tapping the lower-left thumbnail to browse photos, can trigger the issue. Sometimes it happens while only browsing photos, without entering Edit. Expected result: The photo viewer/editor should stay open and allow normal browsing or cropping. Actual result: The flow exits unexpectedly. Mac Console evidence: Around 2026-05-12 21:53:59-21:54:00, Console showed SpringBoard/RunningBoard terminating com.apple.camera. Relevant log excerpt: Capture Application Requirements Unmet: "AVCaptureEventInteraction not installed" reportType: CrashLog ReportCrash Parsing corpse data for pid 94087 com.apple.camera: Foreground: false Storage is sufficient. Restart/reset-style support steps have already been tried and did not resolve the issue. This appears specific to the Camera Control launch path, not normal Photos app browsing. Has anyone else seen this on iOS 26.x, or is this a known Camera Control / AVCaptureEventInteraction regression? Already Filed as FB22766094.
Replies
0
Boosts
0
Views
93
Activity
20h
AVCaptureSession runtime error -11800 / 'what' on startRunning() with audio input — what's holding the HAL?
AVCaptureSession.startRunning() triggers AVCaptureSessionRuntimeErrorNotification with AVError.unknown (-11800), underlying OSStatus 2003329396 → fourCC 'what', every cold launch, but only when an audio AVCaptureDeviceInput is attached. Removing only the audio input makes the error disappear. Same code in a fresh project records audio fine — bug only appears in this app's binary. AVAudioApplication.shared.recordPermission == .granted. Info.plist has NSMicrophoneUsageDescription. No interruption notifications fire. Test device: iPhone 16 Pro, iOS 26.4.2. iOS deployment target 17.1. Minimal reproducer import AVFoundation let session = AVCaptureSession() session.beginConfiguration() let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)! session.addInput(try AVCaptureDeviceInput(device: camera)) // Removing ONLY this line makes the error disappear: let mic = AVCaptureDevice.default(for: .audio)! session.addInput(try AVCaptureDeviceInput(device: mic)) session.addOutput(AVCaptureMovieFileOutput()) session.addOutput(AVCapturePhotoOutput()) session.commitConfiguration() NotificationCenter.default.addObserver( forName: .AVCaptureSessionRuntimeError, object: session, queue: nil ) { print($0.userInfo ?? [:]) } session.startRunning() // -11800 / 'what' fires within ~2 sec Observed state at error time AVError.unknown (-11800) underlyingError = NSError(NSOSStatusErrorDomain, 2003329396) userInfo[AVErrorFourCharCode] = 'what' captureSession.isRunning = false ← never came up captureSession.isInterrupted = false captureSession.preset = .high captureSession.inputs = [Back Triple Camera, iPhone Microphone] AVAudioSession.sharedInstance(): category = .playAndRecord mode = .videoRecording sampleRate = 48000.0 isInputAvailable = true isOtherAudioPlaying = false availableInputs = [MicrophoneBuiltIn] (no BT/Continuity/AirPods) currentRoute.inputs = [] ← EMPTY currentRoute.outputs = [Speaker|Speaker] 2003329396 = 0x77686174 = 'what'. From a few SO threads this maps to AURemoteIO::StartIO returning a HAL-bring-up failure. The smoking gun: currentRoute.inputs is empty even though availableInputs contains the built-in mic, isInputAvailable is true, the category is .playAndRecord, and isOtherAudioPlaying is false. The HAL never routes the mic into the session, then 'what' follows. Nothing observable from AVAudioSession indicates a competing client. Environment / SDKs linked Firebase (SPM: Crashlytics, Performance, Messaging, Analytics, AppCheck, RemoteConfig, DynamicLinks), FBSDK, Kingfisher, MetalPetal. Multiple Google ad mediation pods present, but their audio session takeover is already disabled (audioVideoManager.isAudioSessionApplicationManaged = true, IMSdk.shouldAutoManageAVAudioSession(false)). What I've ruled out (all still produce 'what') Audio session config: .playAndRecord/.videoRecording, .playAndRecord/.default, .record/.measurement, .record/.default. With/without .defaultToSpeaker, .allowBluetooth, .allowBluetoothA2DP, .mixWithOthers. setActive(true) before vs. after attaching audio input. setPreferredInput(builtInMic) (verified accepted). 200ms Thread.sleep between setActive(true) and startRunning(). Setting usesApplicationAudioSession = false swaps the fourCC to '!rec' but produces the same outcome. Topology: sessionPreset = .high / .hd1920x1080 / .hd1280x720 / .medium. Camera = .builtInTripleCamera / .builtInDualWideCamera / .builtInWideAngleCamera. AVCam-style always-attached graph. Setting sessionPreset before vs. after adding inputs. Threading: All session mutations on a single dedicated DispatchQueue (vs. Swift actor). 1× and 2× full stopRunning()+startRunning() recovery cycles ("do it twice" pattern) — both re-fail with 'what'. SDK takeover prevention: GoogleMobileAdsMediation pods (Vungle, Mintegral, Pangle, Unity, InMobi), Google-Mobile-Ads-SDK, MediaPipeTasksVision removed via full pod uninstall + clean build — 'what' persists. Notifications during the failure window: 3 × AVAudioSession.routeChangeNotification reason categoryChange before the error fires, even though category stays .playAndRecord/.videoRecording. Disabling automaticallyConfiguresApplicationAudioSession drops this to 1, but the runtime error still fires. No AVAudioSession.interruptionNotification. No AVCaptureSessionWasInterruptedNotification. Symbol audit otool -L and nm of the bundle confirm none of the linked frameworks reference AVAudioRecorder, AudioComponentInstanceNew, AURemoteIO, or AudioUnitInitialize in their symbol tables. Only the app's own files reference any audio API. Yet adding AVCaptureDeviceInput(.audio) reproduces 100% in this binary and 0% in a fresh project. My questions Who is most likely holding the audio HAL in a process where no linked framework references the AudioUnit / HAL APIs directly? Are there framework load-time audio initializations that don't show up in symbol tables (e.g., dynamic dlopen, CFBundleLoadExecutable) that could grab the HAL? Is there an os_log subsystem / category that surfaces the underlying AURemoteIO::StartIO failure reason at runtime? com.apple.coreaudio shows 'what' but not the originating cause. currentRoute.inputs is empty at error time even though availableInputs = [MicrophoneBuiltIn], isInputAvailable = true, and the category is .playAndRecord. What does an empty input route under those conditions imply, and what other system-level holders could be preventing the HAL from routing the mic in? Has anyone seen 'what' resolve with a device reboot, an iOS update, or by removing a specific framework? Happy to share a sysdiagnose. Thanks!
Replies
1
Boosts
0
Views
263
Activity
3d
RotationCoordinator returns angles 90 degrees lower on iPhone 17 Pro front camera — clarification on contract with AVSampleBufferDisplayLayer
Hi AVFoundation team, I'm seeing a uniform 90° offset in AVCaptureDevice.RotationCoordinator's reported angles between iPhone 17 Pro and iPhone 14 Pro using the front-facing .builtInWideAngleCamera (Center Stage on 17 Pro), and I'd like to confirm whether this is by design and what the recommended consumption pattern is when the rendering surface is an AVSampleBufferDisplayLayer rather than an AVCaptureVideoPreviewLayer. Here is the github repo of sample project. Setup Devices: iPhone 14 Pro (iOS 26.5) and iPhone 17 Pro (iOS 26.4.2) Camera: front, AVCaptureDeviceTypeBuiltInWideAngleCamera Active format: 1920×1080 Three RotationCoordinator instances are created on the same AVCaptureDevice, varying only the previewLayer: argument: - previewLayer: nil - previewLayer: AVSampleBufferDisplayLayer (the surface receiving frames from AVCaptureVideoDataOutput) - previewLayer: AVCaptureVideoPreviewLayer (with .session = captureSession, not displayed) Each instance is KVO-observed for videoRotationAngleForHorizonLevelPreview and videoRotationAngleForHorizonLevelCapture. Observed angles Device / Orientation: 14 Pro · Portrait (interface=1) RC[nil] prev / cap: 0° / 90° RC[AVSampleBufferLayer] prev / cap: 90° / 90° RC[AVCaptureVideoPreviewLayer] prev / cap: 0° / 90° ──────────────────────────────────────── Device / Orientation: 14 Pro · LandscapeRight (interface=3) RC[nil] prev / cap: 0° / 180° RC[AVSampleBufferLayer] prev / cap: 180° / 180° RC[AVCaptureVideoPreviewLayer] prev / cap: 0° / 180° ──────────────────────────────────────── Device / Orientation: 17 Pro · Portrait (interface=1) RC[nil] prev / cap: 0° / 0° RC[AVSampleBufferLayer] prev / cap: 0° / 0° RC[AVCaptureVideoPreviewLayer] prev / cap: 0° / 0° ──────────────────────────────────────── Device / Orientation: 17 Pro · LandscapeRight (interface=3) RC[nil] prev / cap: 0° / 90° RC[AVSampleBufferLayer] prev / cap: 90° / 90° RC[AVCaptureVideoPreviewLayer] prev / cap: 0° / 90° The −90° offset on 17 Pro is uniform: it appears in every RC variant, in both the preview-angle and the capture-angle properties, at every orientation tested. It is not specific to the previewLayer: argument.
Replies
1
Boosts
0
Views
244
Activity
3d
Electron app + Apple Music playback: queue works, playback does not start. Looking for guidance.
Hi everyone. I’m building a macOS-first desktop app where music drives the app's behavior loop. The app is currently an Electron prototype. The blocker: we’re testing Apple Music inside an Electron app. MusicKit JS authorization works, catalog search works, and setting the queue works, but playback does not actually start in Electron. What we tried: Created Apple Developer / MusicKit credentials. Generated Apple Music developer tokens successfully. Retrieved a Music User Token through MusicKit JS. Confirmed Apple Music API calls work. Confirmed /v1/test and /me/storefront return 200 OK. Built a local HTTP auth/playback window inside Electron instead of using file://. Tested music.setQueue() with both: { song: songId } { url: catalogUrl } In Electron, the queue loads correctly: queueEmpty=false queueLength=1 volume=1 playbackRate=1 But after music.play(), playbackTime stays at 0 and no audio plays. Then we ran the same MusicKit playback test in normal Chrome using the same token, same local origin, same catalog track, and same queue descriptor. Chrome played successfully and playbackTime advanced. We also checked Electron directly and found navigator.requestMediaKeySystemAccess is missing, so our current theory is that stock Electron lacks the protected media / EME support Apple Music web playback needs. Important: we are not trying to bypass DRM or extract audio. We just want a legitimate way for a user-authorized macOS app to control Apple Music playback or observe playback state. What we’re considering next: Use the native macOS Music app as the playback engine and control it from our app. Test AppleScript / Automation permissions for play, pause, next, current track, player state, etc. Later, possibly build a native Swift helper using Apple Music / MediaPlayer APIs and communicate with Electron over IPC. Avoid relying on Electron MusicKit JS playback if this is a known dead end. Questions: Has anyone successfully made Apple Music / MusicKit JS playback work inside Electron? Is the missing EME/protected-media layer the expected blocker here? Is controlling the native macOS Music app the more realistic path? Any gotchas with AppleScript, MusicKit native APIs, or Electron + native helper architecture for this use case? Any pointers from people who have dealt with Electron + Apple Music / protected media would be appreciated.
Replies
0
Boosts
0
Views
39
Activity
3d
Radiometric interpretation of Apple ProRAW and Bayer RAW access via AVFoundation
I am working on a computational photography research project involving multi-exposure HDR reconstruction using Bayer RAW and Apple ProRAW captures. I would like to clarify the radiometric interpretation of Apple ProRAW and the availability of Bayer RAW capture through AVFoundation. My questions are: 1.On current iPhone Pro devices, is it possible for third-party apps to capture and export true Bayer-pattern RAW DNG files through AVFoundation, rather than Apple ProRAW linear DNG files? If so, which availableRawPhotoPixelFormatTypes correspond to Bayer RAW, and what device or format restrictions apply? 2.Apple ProRAW appears to be demosaiced and computationally processed, and may include multi-frame fusion. Is the decoded ProRAW image intended to be radiometrically linear and scene-referred? 3.For a bracketed ProRAW sequence captured with fixed ISO, white balance, lens, and focus, but different exposure times, can one assume that the decoded linear pixel values Y_i(p) satisfy an exposure-proportional model in non-saturated regions, such as Y_i(p) ≈ t_i R(p), across brackets? This question is about radiometric consistency for algorithmic use, not about visual editing or tone mapping. Thank you for your help.
Replies
0
Boosts
0
Views
144
Activity
4d
Radiometric interpretation of Apple ProRAW and Bayer RAW access via AVFoundation
I am working on a computational photography research project involving multi-exposure HDR reconstruction using Bayer RAW and Apple ProRAW captures. I would like to clarify the radiometric interpretation of Apple ProRAW and the availability of Bayer RAW capture through AVFoundation. My questions are: On current iPhone Pro devices, is it possible for third-party apps to capture and export true Bayer-pattern RAW DNG files through AVFoundation, rather than Apple ProRAW linear DNG files? If so, which availableRawPhotoPixelFormatTypes correspond to Bayer RAW, and what device or format restrictions apply? Apple ProRAW appears to be demosaiced and computationally processed, and may include multi-frame fusion. Is the decoded ProRAW image intended to be radiometrically linear and scene-referred? For a bracketed ProRAW sequence captured with fixed ISO, white balance, lens, and focus, but different exposure times, can one assume that the decoded linear pixel values Y_i(p) satisfy an exposure-proportional model in non-saturated regions, such as Y_i(p) ≈ t_i R(p), across brackets? This question is about radiometric consistency for algorithmic use, not about visual editing or tone mapping. Thank you for your help.
Replies
0
Boosts
0
Views
145
Activity
4d
How to Monitor Any USB Audio or Video Device on macOS
USB cameras, microphones, HDMI capture cards, and audio interfaces are supposed to "just work" on macOS. In reality, it's often difficult to quickly access or monitor them without opening large and complicated software. Sometimes you simply want to see whether a USB camera is active. Sometimes you want to check an HDMI source connected through a capture card. And in other cases, you may want to use a Mac mini without a dedicated monitor by viewing its HDMI output through a USB capture device directly on another Mac. macOS supports many modern USB AV devices out of the box, but it surprisingly lacks a simple built-in utility for live monitoring and recording. Most users end up using oversized streaming or editing applications just to preview a video signal or monitor audio input. That becomes especially noticeable with: USB webcams HDMI capture adapters USB microphones audio interfaces secondary computers headless Mac mini setups A lightweight monitor utility is often much more practical when you only need real-time access to a device, want to record a stream, or quickly switch between multiple AV inputs. That's one of the reasons I built AV Monitor Pro  -  a native macOS app designed for monitoring and recording connected audio/video devices in real time. It can preview USB cameras, capture cards, microphones, and HDMI sources with minimal setup, and it's especially useful for workflows like running a Mac mini without a monitor, monitoring external devices, or recording live AV input directly on macOS.
Replies
0
Boosts
0
Views
166
Activity
5d
AudioHardwareCreateProcessTap delivers all-zero buffers while system audio is audible
Summary Using AudioHardwareCreateProcessTap + AudioHardwareCreateAggregateDevice for system audio capture. During long sessions, the AudioDeviceIOProc callback continues firing normally but every PCM sample is exactly 0.0f — while the system is producing audible output. Environment Field Value macOS 26.5 Beta Hardware MacBook Air (M2) API AudioHardwareCreateProcessTap + AudioHardwareCreateAggregateDevice Tap CATapDescription, processes = [], .unmuted, private Format 48,000 Hz, Float32, interleaved stereo Aggregate anchor kAudioAggregateDeviceMainSubDeviceKey = current default output UID Observed behavior After running normally for several minutes, the stream transitions into an all-zero state: AudioDeviceIOProc continues to fire at expected cadence Frame count, timestamps (mHostTime, mSampleTime), and mDataByteSize all look normal AudioBufferList pointers are valid Every sample in every buffer is exactly 0.0f Other apps are still producing audible output through the same output device The condition may self-recover or persist until the session is stopped Confirmed via RMS logging both inside the IOProc and after the ring buffer consumer — data is zero on delivery, not introduced downstream. Example: 51-minute session on MacBook Air M2 Segment 1 (~7 min): Three all-zero periods: 60 s, 53 s, 141 s. Real PCM briefly returned between them. Segment 2 (~44 min): Two all-zero periods: 16 min 3 s, 3 min 8 s. IOProc cadence, timestamp deltas, default output UID, and kAudioDevicePropertyDeviceIsRunningSomewhere all remained normal throughout. What I have ruled out Actual silence: User was in an active video call and could hear participants through the output device. Default output device change: Monitored kAudioHardwarePropertyDefaultOutputDevice — no change during affected periods. IOProc stall: Heartbeat and kAudioDevicePropertyDeviceIsRunningSomewhere remained normal. Aggregate device destroyed: AudioObjectGetPropertyData on the aggregate UID continued returning the expected device. Tap descriptor misconfiguration: The same tap produces valid PCM earlier in the same session, so this is not a startup-time issue. Why detection is hard All-zero buffers from a broken tap are indistinguishable from legitimate silence (muted participant, waiting room, paused media). kAudioProcessPropertyIsRunningOutput reports whether a process has active output IO, not whether it is contributing non-zero samples — a muted Zoom call still reports true. Possible correlations Sample-rate renegotiation on the output device (44.1 kHz ↔ 48 kHz) when another app changes output Bluetooth device state changes (AirPods sleep/wake) where UID stays the same MacBook Air more frequently affected than MacBook Pro Always occurs after extended uptime — first few minutes are consistently clean Current workaround Full teardown and rebuild restores real PCM. Restarting the IOProc alone or recreating only the aggregate device is not reliable — both the Process Tap and Aggregate Device must be destroyed and recreated. 1. AudioDeviceStop 2. AudioDeviceDestroyIOProcID 3. AudioHardwareDestroyAggregateDevice 4. AudioHardwareDestroyProcessTap 5. AudioHardwareCreateProcessTap 6. AudioHardwareCreateAggregateDevice 7. Create + start new IOProc Applying this automatically is risky because it cannot be reliably distinguished from legitimate silence. Questions Expected failure mode? Can a Process Tap continue delivering zero-filled buffers while the system output is audible? Is this expected under certain device or routing conditions? Detection signal? Is there any HAL property, notification, or diagnostic counter that distinguishes "sources are genuinely silent" from "the tap data path has stopped receiving the real mix"? Targeted recovery? Is there a supported way to re-anchor or reset the tap data path without destroying and recreating both objects? Full rebuild as intended workaround? If so, it would help to confirm this so developers can converge on a consistent approach. Mixer activity signal? kAudioProcessPropertyIsRunningOutput reflects IO registration, not sample contribution. Is there any AudioProcess property that indicates a process is currently delivering non-zero audio to the system mixer?
Replies
0
Boosts
0
Views
223
Activity
5d
CarPlay HID transport buttons remap to call-control during continuous mic capture (no opt-out API)
Hello, I am developing Uniq Intercom, a voice-only group communication app for motorcyclists (always-on intercom over WebRTC, used continuously for multi-hour rides). I am seeking guidance on an iOS audio session and CarPlay HID interaction I have not been able to resolve through documented APIs. Problem: As soon as my app activates the microphone (yellow recording indicator visible), iOS appears to classify the app as a real-time communication participant and CarPlay re-routes the steering-wheel / handlebar HID transport buttons (left / right / ok) from the media-control role to the call-control role (answer/decline). Because I do not register a CallKit / LiveCommunicationKit call (the session is a continuous group voice channel, not a discrete telephony call), there is no call object for those buttons to act upon — they effectively become inert. Why this matters: Motorcyclists rely on the intercom for 4–6 hour rides. CarPlay is now built into a growing number of modern motorcycles and with aftermarket display units virtually any bike, and any rider who uses any voice communication platform alongside it — Uniq Intercom, WhatsApp Call currently runs into this same handlebar button remap. With the buttons inert, the rider's only remaining option is to reach for the motorcycle's touchscreen to skip a track or change navigation — this is unsafe. The exact same remap behavior occurs during a real WhatsApp or Phone call — but for those the remap is correct (answer/decline maps to a real call). For continuous voice apps without a CallKit-style discrete call, no equivalent path exists today. As both an iOS developer and a motorcyclist, I would very much like to see this resolved — solving it would meaningfully improve safety for every rider using an iPhone with CarPlay. Configurations I have tested (all reproduce the symptom on iOS 18+ / 26 with wireless CarPlay): AVAudioSession.Category.playAndRecord + .voiceChat mode + various option combinations (duckOthers, mixWithOthers, allowBluetoothHFP, allowBluetoothA2DP, defaultToSpeaker) Same category with .videoChat mode (which @livekit/react-native defaults to) Same category with .default mode (re-applied after setAudioModeAsync to defeat any framework override) — confirmed Mode = Default for ~2 s window in audiomxd log before WebRTC's setActive cycle returned mode to .voiceChat. Buttons remained remapped during the .default window. Disabling MPRemoteCommandCenter and clearing MPNowPlayingInfoCenter.default().nowPlayingInfo JS-side override of WebRTC's global RTCAudioSessionConfiguration via @livekit/react-native's AudioSession.setAppleAudioConfiguration({audioMode: 'default'}) bridge, applied both before connect and after setAudioModeAsync to defeat library overrides In every case the audiomxd system log confirms our session goes active (Mode = VoiceChat or Default, Recording = YES), and CarPlay HID buttons are immediately remapped to call-control. The middle "OK" button remains functional because it is not part of the call-control mapping — confirming the buttons are not blocked, only re-purposed. The remap occurs the instant the iOS recording indicator appears, regardless of audio session mode. This led me to conclude the trigger is not audio session mode but the combination of microphone permission + active session + (likely) the AUVoiceIO unit instantiated by WebRTC. I cannot find a public API path to suppress this classification while maintaining the always-on continuous voice channel. My questions: Is there an entitlement or API that allows an app with active microphone capture to declare itself as a non-call media participant, keeping CarPlay HID transport buttons in the media role? Is AVAudioSession.setPrefersEchoCancelledInput(_:) (iOS 18+) the intended path for retaining platform AEC under .default mode without the focus-engine "communication priority" marking? Documentation is sparse on its CarPlay arbitration implications. Does the PushToTalk framework affect HID arbitration differently from playAndRecord + voiceChat? Our continuous-channel UX does not fit the PTT transmit-on-press model, but understanding the contrast would help. If no current API exists, is this something the iOS Audio team would consider for future SDKs? Solving this would meaningfully improve safety for motorcycle and adventure-sport users on iOS. Thank you for your time and any guidance you can offer. — Emre Erkaya / Uniq Intercom
Replies
1
Boosts
0
Views
126
Activity
6d
PHPickerConfiguration.preselectedAssetIdentifiers not work
let authStatus = PHPhotoLibrary.authorizationStatus(for: .readWrite) let fetchResult = PHAsset.fetchAssets(withLocalIdentifiers: selectedAssetIDs, options: nil) print("[AlbumCreation] authStatus=\(authStatus.rawValue) IDs=\(selectedAssetIDs.count) PHAsset匹配=\(fetchResult.count)") // result is: [AlbumCreation] authStatus=3 IDs=3 PHAsset匹配=3 var config = PHPickerConfiguration(photoLibrary: .shared()) config.preselectedAssetIdentifiers = selectedAssetIDs config.selectionLimit = 0 let picker = PHPickerViewController(configuration: config) picker.delegate = self present(picker, animated: true)
Replies
1
Boosts
0
Views
170
Activity
1w
I have the same, iOS 26.3.0
open FB22712056
Replies
1
Boosts
0
Views
145
Activity
1w
Mac (Designed for iPad) cannot access microphone
I have an application that is a VOIP application of sorts that needs access to the microphone. I am using the Mac (Designed for iPad) support to not have to do huge amounts of conditional building and support for all the many iOS specific things my app includes. I never get prompted to allow microphone permissions and I never see my app name appear in Privacy & Security -> Microphone permissions setup. So is it that Mac is just a dead end for any form of an application that needs a microphone and is running under Mac (Designed for iPad) compatibility mode? Why doesn't TCC have some mechanism to notice and grant access to mic use?
Replies
3
Boosts
0
Views
432
Activity
1w
MusicKit playback completely broken after Apple Music “What’s New?” update screen until native app is opened
I’m developing a third-party Apple Music streaming app using MusicKit (ApplicationMusicPlayer + catalog requests). Issue: Whenever Apple releases an Apple Music update that shows the “What’s New?” onboarding/modal screen in the native Apple Music app, MusicKit in our app completely breaks for all users. Attempts to play anything (queue, prepareToPlay, etc.) fail silently or with service-related errors. Playback and most MusicKit operations remain broken until the user opens the native Apple Music app, dismisses the “What’s New?” screen, and returns to our app. After that single native interaction (we deliberately stopped users from going any further within Apple Music to verify this), everything works perfectly again. Reproduction Steps: Apple Music receives an update with “What’s New?” screen. User launches our third-party app and attempts playback. MusicKit fails. User opens Apple Music → dismisses modal → returns to our app. MusicKit works again. Expected Behavior: Third-party MusicKit apps should not become non-functional because the native Apple Music app has a pending onboarding screen. Shared backend services (account readiness, tokens, subscription state, etc.) should initialize independently. Environment: iOS 26.4.2 Devices verified to be affected: iPhone 13 Pro iPhone XR iPhone 15 Workarounds attempted: Re-requesting MusicAuthorization Recreating ApplicationMusicPlayer Stopping/re-queuing Background/foreground app None resolve it without the native Apple Music interaction. This appears to be a recurring integration fragility with shared Apple Music services. Has anyone else seen this? Any recommended recovery path or API to force service initialization? Thanks!
Replies
1
Boosts
2
Views
215
Activity
1w
AVMutableComposition audio silently drops on iOS 26 when streaming over HTTP/2 (FB22696516)
We've discovered a regression in iOS 26 where AVMutableComposition silently drops audio when the source asset is streamed over HTTP/2. The same file served over HTTP/1.1 plays audio correctly through the same composition code. Direct AVPlayer playback (without composition) works fine on HTTP/2. This did not occur on iOS 18.x. It happens on physical devices only. It does not reproduce on a simulator or on macOS. Tested conditions (same MP4 file, different CDNs): CloudFront (HTTP/2) + Composition → ❌ Audio silent Cloudflare (HTTP/2) + Composition → ❌ Audio silent Akamai (HTTP/1.1) + Composition → ✅ Audio works Apple TS (HTTP/1.1) + Composition → ✅ Audio works Downloaded locally, then composed → ✅ Audio works Direct playback, no composition (HTTP/2) → ✅ Audio works The CloudFront and Akamai URLs serve the identical file — same S3 object, different CDN edge. CDN vendor doesn't matter; any HTTP/2 source triggers it. Minimal reproduction: let asset = AVURLAsset(url: http2URL) let videoTrack = try await asset.loadTracks(withMediaType: .video).first! let audioTrack = try await asset.loadTracks(withMediaType: .audio).first! let duration = try await asset.load(.duration) let composition = AVMutableComposition() let fullRange = CMTimeRange(start: .zero, end: duration) let compVideo = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)! try compVideo.insertTimeRange(fullRange, of: videoTrack, at: .zero) let compAudio = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)! try compAudio.insertTimeRange(fullRange, of: audioTrack, at: .zero) let item = AVPlayerItem(asset: composition.copy() as! AVComposition) player.replaceCurrentItem(with: item) player.play() // Video plays, audio goes silent after a while Playing the same asset directly works fine: player.replaceCurrentItem(with: AVPlayerItem(asset: asset)) player.play() // Both video and audio work Filed as FB22696516 Sample project: https://github.com/karlingen/AVCompositionBug
Replies
2
Boosts
9
Views
223
Activity
1w
AVAudioEngineConfigurationChangeNotification received while engine is running
The documentation for AVAudioEngineConfigurationChangeNotification states When the audio engine’s I/O unit observes a change to the audio input or output hardware’s channel count or sample rate, the audio engine stops, uninitializes itself, and issues this notification. A user of my framework has reported a crash during notification processing on iOS 26.4 when the main mixer node is disconnected from the output node in order to reestablish the connection with a different format. The failing precondition is com.apple.coreaudio.avfaudio: required condition is false: !IsRunning(). The report was observed on iPhone 16 / iOS 26.4.2, ARM64, TestFlight build. The backtrace contains: [Last Exception Backtrace] 3 AVFAudio AVAudioEngineGraph::_DisconnectInput AVAudioEngineGraph.mm:2728 4 AVFAudio -[AVAudioEngine disconnectNodeInput:bus:] AVAudioEngine.mm:155 5 SFB sfb::AudioPlayer::handleAudioEngineConfigurationChange AudioPlayer.mm:2247 [Thread 18 Crashed] 9 SFB sfb::AudioPlayer::handleAudioEngineConfigurationChange AudioPlayer.mm:2212 … 14 AVFAudio IOUnitConfigurationChanged Has the behavior for AVAudioEngineConfigurationChangeNotification changed in iOS 26.4? It's simple enough to call [engine_ stop] in the notification handler but the documentation states this shouldn't be necessary. I've not observed a similar crash on previous iOS versions.
Replies
0
Boosts
1
Views
143
Activity
1w
MacOS system audio capture low volume with multichannel soundcards
I am building an app that uses system audio capture. This works well for 2-channel sound cards, but as soon as the interface has more than 2 outputs, the capture volume is very low. Does anyone have tips on where to look? Capturing neither before nor after the mix doesn't solve it.
Replies
0
Boosts
0
Views
131
Activity
1w
VNDetectTrajectoriesRequest not "seeing" ball in video
I am attempting to write an app which captures the flight of a ball from the iPhone's video preview, but I need some help. I am using the following code: request = VNDetectTrajectoriesRequest(frameAnalysisSpacing: frameCnt, trajectoryLength: trajLength, completionHandler: completionHandler)` to initiate a request to capture a "ball" from a videoPreview. In the "completionHandler" I use: guard let observations = request.results as? [VNTrajectoryObservation] else { //print("observations not set up#######") return } to capture observations. In the video capture setup I am using captureSession!.sessionPreset = .hd1920x1080 In the AVCaptureVideoDataOutputSampleBufferDelegate, I am using trajectoryQueue.async { [self] in do { try sequenceHandler.perform([request], on: sampleBuffer, orientation: .right) } catch { print("VNSequenceRequestHandler perform error: \(error)") } } I have also tried using VNImageRequestHandler to "capture" observations in the Delegate. A ball is "seen" only if the "ball" is rolling on the ground. If the ball in "flying" or "bouncing" no "observations" are provided. I have tried different FrameCounts & trajectory lengths with no effect. I am now developing the app primarily using an iPhone 14Pro running iOS 26.3.1. It should be noted that I started development using an old iPhone 6plus running iOS 15.7 with captureSession!.sessionPreset = .vga640x480. and I did get some good results. If I try the VGA resolution on the iPhone 14pro, I still see no ball flight. The basis for my app is software from 5 years ago, so I'm hoping that there has been some development on ball tracking since then. Thanks in advance for any help/suggestions.
Replies
0
Boosts
0
Views
284
Activity
1w
HLS Tools - hlsreport critical error cause
Hi, I'm currently experiencing issues with HLS streams created by FFmpeg running on Safari. When I pass the stream to the mediastreamvalidator tool and then run hlsreport on the output, I get a critical error reported: Media Entry discontinuity value does not match previous playlist for MEDIA-SEQUENCE 1 If I let the stream finish (it's a live stream from an IoT device) and then perform the stream validation again I no longer receive the critical error. My assumption is that this critical error is contributing to the HLS stall on iOS. I have also noticed that if I let the stream continue and then re-load the video control in Safari the stream starts Is there a resource with explanations or remediation paths relevant to the possible output of the hlsreport? My m3u8 output looks like this (I have redacted the server host) #EXTM3U #EXT-X-VERSION:6 #EXT-X-TARGETDURATION:2 #EXT-X-MEDIA-SEQUENCE:1 #EXT-X-PLAYLIST-TYPE:EVENT #EXT-X-INDEPENDENT-SEGMENTS #EXT-X-DISCONTINUITY #EXTINF:2.000000, https://redacted.com/segment-00001.ts #EXTINF:2.000011, https://redacted.com/segment-00002.ts #EXTINF:2.000011, https://redacted.com/segment-00003.ts #EXTINF:2.000011, https://redacted.com/segment-00004.ts #EXTINF:2.000011, #EXT-X-ENDLIST Thanks for any advice or guidance possible - if I can provide isolated code snippets I will do. Andy
Replies
1
Boosts
0
Views
607
Activity
1w
Working with kCVPixelFormatType_96VersatileBayerPacked12
Whilst AVCaptureSession is setup to capture ProRes RAW video, is it possible to get video pixel data which can read and processed, such as using CIImage(cvPixelBuffer: ) AVCaptureVideoDataOutput outputs ProRes RAW in kCVPixelFormatType_96VersatileBayerPacked12 pixel format. Is there a provided way to debayer this pixel format into something more usable?
Replies
0
Boosts
0
Views
102
Activity
1w
External UVC controls beyond resolution and frame rate
I want to clarify iPadOS AVFoundation behavior for external UVC cameras. Specifically: • Does iPadOS support external UVC controls beyond resolution and frame rate? • Or is support effectively limited to those two in practice (for non-Apple external UVC cameras)? • If other controls are supported (exposure, focus, white balance, zoom, etc.), what is the expected criteria for them to appear via AVCapture​Device? Runtime capability output from our external UVC camera: === Capabilities for VCI-AR0822-C === DeviceType: AVCaptureDeviceTypeExternal, position: 0, external: true UniqueID: 00000000-0020-0000-3407-000008220000 Active format media subtype: 420v Exposure modes supported: none Current exposure mode: locked Manual exposure (.custom) available: false Current exposure duration: nan s Current ISO: 0.0 Exposure target bias supported range: -8.0 ... 8.0 Current exposure target bias: 0.0 Focus modes supported: none Current focus mode: locked White balance modes supported: none Current white balance mode: locked Torch available: false, torch mode: 0, torch active: false Format[0]: 640x480, PF: 420v FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[1]: 640x480, PF: 420f FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[2]: 1280x720, PF: 420v FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[3]: 1280x720, PF: 420f FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[4]: 1920x1080, PF: 420v <-- ACTIVE FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[5]: 1920x1080, PF: 420f FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[6]: 2560x1440, PF: 420v FPS Range: 15.0 - 30.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[7]: 2560x1440, PF: 420f FPS Range: 30.0 - 60.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[8]: 3840x2160, PF: 420v FPS Range: 8.0 - 15.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s Format[9]: 3840x2160, PF: 420f FPS Range: 15.0 - 30.0 ISO Range: 0.0 - 0.0 Exposure duration range: 0.0 - 0.0 s =============================== We are looking for general platform guidance (not only device-specific debugging), especially whether external UVC control support on iPadOS is expected beyond resolution/fps and how to interpret “none/0.0” capability outputs. Thank you.
Replies
0
Boosts
0
Views
218
Activity
2w