The device is connected to two Bluetooth audio devices, A and B, and audio is currently playing through device A. When the user taps a button in the UI, how can I switch playback to Bluetooth device B in code?
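As far as I know, iOS does not provide a public API for an app to force output to a specific Bluetooth device; the supported pattern is to present the system route picker and let the user choose device B. A minimal sketch (assuming `containerView` is a view in your own interface):

import AVKit

// Present the system output-route picker; the user selects Bluetooth B there.
let routePicker = AVRoutePickerView(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
containerView.addSubview(routePicker)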
Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.
I use startCaptureWithHandler to record the screen and AVAssetWriter's appendSampleBuffer: to save the audio and video, but when the saved file is played back, the audio and video are out of sync.
I don't know if it's an AVAssetWriterInput setup problem; here is my code:
NSDictionary *audioCompressionSettings = @{
    AVEncoderBitRatePerChannelKey : @(64000),
    AVFormatIDKey : @(kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey : @(2),
    AVSampleRateKey : @(44100)
};
AVAssetWriterInput *audioAssetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioCompressionSettings];
audioAssetWriterInput.expectsMediaDataInRealTime = YES;
[_assetWriter addInput:audioAssetWriterInput];

NSDictionary *videoCompressSetting = @{
    AVVideoAverageBitRateKey : @(screenWidth * screenHeight * 5),
    AVVideoMaxKeyFrameIntervalKey : @(30),
    AVVideoProfileLevelKey : AVVideoProfileLevelH264MainAutoLevel
};
NSDictionary *codecSetting = @{
    AVVideoCodecKey : AVVideoCodecTypeH264,
    AVVideoScalingModeKey : AVVideoScalingModeResize,
    AVVideoWidthKey : @(screenWidth * 2),
    AVVideoHeightKey : @(screenHeight * 2),
    AVVideoCompressionPropertiesKey : videoCompressSetting
};
AVAssetWriterInput *videoAssetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:codecSetting];
videoAssetWriterInput.expectsMediaDataInRealTime = YES;
[_assetWriter addInput:videoAssetWriterInput];
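For what it's worth, a common cause of this kind of drift is starting the writer session at time zero instead of at the first captured buffer's presentation timestamp. A Swift sketch of the capture side under that assumption (the writer and inputs correspond to the ones configured above):

import AVFoundation
import ReplayKit

RPScreenRecorder.shared().startCapture(handler: { sampleBuffer, bufferType, error in
    guard error == nil, CMSampleBufferDataIsReady(sampleBuffer) else { return }

    // Anchor the session to the first video buffer's PTS so that audio and
    // video keep their original relative timing.
    if assetWriter.status == .unknown, bufferType == .video {
        assetWriter.startWriting()
        assetWriter.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
    }
    guard assetWriter.status == .writing else { return }

    switch bufferType {
    case .video where videoAssetWriterInput.isReadyForMoreMediaData:
        videoAssetWriterInput.append(sampleBuffer)
    case .audioApp where audioAssetWriterInput.isReadyForMoreMediaData:
        audioAssetWriterInput.append(sampleBuffer)
    default:
        break
    }
}, completionHandler: nil)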
I’m currently working on a project where I capture both depth frames and RGB frames using AVCaptureDataOutputSynchronizer. Depth frames are stored as raw binary data and RGB frames are saved with AVAssetWriter.
The issue I’m facing is that AVAssetWriter enforces a fixed framerate, meaning it adds or discards frames to maintain that rate (as I understand it). This causes a desynchronization between the depth and RGB frames, which is a problem because I need each depth frame to be exactly matched with the corresponding RGB frame as they were captured.
How can I ensure that the RGB frames are saved without AVAssetWriter modifying the frame count?
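One approach that may sidestep the frame-count problem (a sketch, assuming matching by timestamp after the fact is acceptable; `videoDataOutput`, `depthDataOutput`, `videoWriterInput`, and `appendDepthFrame` stand in for your own objects) is to tag every depth frame with the synchronized presentation timestamp and re-align the two streams by timestamp when reading:

import AVFoundation

func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput collection: AVCaptureSynchronizedDataCollection) {
    guard
        let videoData = collection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData,
        let depthData = collection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
        !videoData.sampleBufferWasDropped,
        !depthData.depthDataWasDropped
    else { return }

    // Append the RGB frame with its original presentation timestamp...
    if videoWriterInput.isReadyForMoreMediaData {
        videoWriterInput.append(videoData.sampleBuffer)
        // ...and store the same timestamp next to the raw depth data so the
        // two streams can be matched exactly during post-processing.
        appendDepthFrame(depthData.depthData, timestamp: depthData.timestamp)
    }
}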
Hi everyone,
I’m trying to use AVAssetResourceLoaderDelegate to handle a live radio stream (e.g. Icecast/HTTP stream). My goal is to have access to the last 30 seconds of audio data during playback, so I can analyze it for specific audio patterns in near-real-time.
I’ve implemented a custom resource loader that works fine for podcasts and static files, where the file size and content length are known. However, for infinite live streams, my current implementation stops receiving new loading requests after the first one is served. As a result, the playback either stalls or fails to continue.
Has anyone successfully used AVAssetResourceLoaderDelegate with a continuous radio stream? Or can you suggest a better approach for buffering and analyzing live audio?
Any tips, examples, or advice would be appreciated. Thanks!
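For example, one alternative I'm considering (a sketch that bypasses AVAssetResourceLoaderDelegate entirely): let AVPlayer play the stream URL directly, while a separate URLSession connection keeps a rolling window of the most recent encoded audio for analysis. `bytesPerSecond` is a rough bitrate estimate (e.g. 128 kbps is about 16 kB/s):

import Foundation

// Keeps roughly the last `seconds` of stream bytes for analysis.
// Delegate callbacks arrive on the session's serial delegate queue.
final class RollingStreamBuffer: NSObject, URLSessionDataDelegate {
    private let maxBytes: Int
    private var buffer = Data()
    private lazy var session = URLSession(configuration: .default, delegate: self, delegateQueue: nil)

    init(bytesPerSecond: Int, seconds: Int = 30) {
        self.maxBytes = bytesPerSecond * seconds
    }

    func start(url: URL) {
        session.dataTask(with: url).resume()
    }

    // Called repeatedly as chunks of the live stream arrive.
    func urlSession(_ session: URLSession, dataTask: URLSessionDataTask, didReceive data: Data) {
        buffer.append(data)
        if buffer.count > maxBytes {
            buffer.removeFirst(buffer.count - maxBytes) // drop the oldest bytes
        }
        // Hand `buffer` (the last ~30 s of encoded audio) to the analysis code here.
    }
}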
No video is playing and the same error keeps appearing. I have tried everything, including resetting. Please provide an update as soon as possible.
Error: The operation couldn't be completed. (CoreMediaErrorDomain error -42709.)
Hello,
We're seeing an intermittent issue when playing back FairPlay-protected HLS downloads while the device is offline.
Assets are downloaded using AVAggregateAssetDownloadTask with FairPlay protection.
After download, asset.assetCache.isPlayableOffline == true.
On first playback attempt (offline), ~8% of downloads fail.
Retrying playback always works. We recreate the asset and player on each attempt.
During the playback setup, we try to load variants via:
try await asset.load(.variants)
This call sometimes fails with:
Error Domain=NSURLErrorDomain Code=-1009 “The Internet connection appears to be offline.” UserInfo={NSUnderlyingError=0x105654a00 {Error Domain=NSURLErrorDomain Code=-1009 “The Internet connection appears to be offline.” UserInfo={NSDescription=The Internet connection appears to be offline.}}, NSErrorFailingURLStringKey=file:///private/var/mobile/Containers/Data/Application/2DDF9D7C-9197-46BE-8690-C23EE75C9E90/Library/com.apple.UserManagedAssets.XVvqfh/Baggage_9DD4E2D3F9C0E68F.movpkg/, NSErrorFailingURLKey=file:///private/var/mobile/Containers/Data/Application/2DDF9D7C-9197-46BE-8690-C23EE75C9E90/Library/com.apple.UserManagedAssets.XVvqfh/Baggage_9DD4E2D3F9C0E68F.movpkg/, NSURL=file:///private/var/mobile/Containers/Data/Application/2DDF9D7C-9197-46BE-8690-C23EE75C9E90/Library/com.apple.UserManagedAssets.XVvqfh/Baggage_9DD4E2D3F9C0E68F.movpkg/, AVErrorFailedDependenciesKey=(
“assetProperty_HLSAlternates”
), NSLocalizedDescription=The Internet connection appears to be offline.}
This variant load is used to determine available audio tracks, check for Dolby support, and apply user language preferences.
After this step, the AVPlayerItem also fails via Combine’s publisher for .status.
However, retrying the entire process immediately after (same offline conditions, same asset path, new AVURLAsset) results in successful playback.
Assets are represented using the following class:
public class DownloadedAsset: AVURLAsset {
    public let id: String
    public let localFileUrl: URL
    public let fairplayLicenseUrlString: String?
    public let drmToken: String?

    var isProtected: Bool {
        return fairplayLicenseUrlString != nil
    }

    public init(id: String,
                localFileUrl: URL,
                fairplayLicenseUrlString: String?,
                drmToken: String?) {
        self.id = id
        self.localFileUrl = localFileUrl
        self.fairplayLicenseUrlString = fairplayLicenseUrlString
        self.drmToken = drmToken
        super.init(url: localFileUrl, options: nil)
    }
}
We use user-selected quality levels to control bitrate and multichannel (e.g. Dolby 5.1) downloads:
let downloadQuality = UserDefaults.standard.downloadVideoQuality
let bitrate: Int
let shouldDownloadMultichannelTracks: Bool

switch downloadQuality {
case .dataSaver:
    shouldDownloadMultichannelTracks = false
    bitrate = 596564
case .standard:
    shouldDownloadMultichannelTracks = false
    bitrate = 1503844
case .best:
    shouldDownloadMultichannelTracks = true
    bitrate = 7038970
}

var selections = multichannelIdentifiedMediaSelections
if !shouldDownloadMultichannelTracks {
    selections = selections.filter { !$0.isMultichannel }
}

let task = session.aggregateAssetDownloadTask(
    with: asset,
    mediaSelections: selections.map { $0.mediaSelection },
    assetTitle: title,
    assetArtworkData: nil,
    options: [AVAssetDownloadTaskMinimumRequiredMediaBitrateKey: bitrate]
)
Seen on devices running iOS 16, iOS 17, and iOS 18.
What could cause the initial failure of an otherwise valid, offline-ready FairPlay HLS asset?
Could .load(.variants) internally trigger a failed network resolution, even when offline?
Is there an internal caching or initialization behavior in AVFoundation that might explain why the second attempt works?
Any guidance would be appreciated.
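As a temporary mitigation, we're experimenting with recreating the asset and retrying the variants load once before surfacing the failure (a sketch; the retry count and delay are arbitrary):

import AVFoundation

// Sketch of the workaround under test: a fresh AVURLAsset per attempt,
// mirroring the manual retry that already succeeds for affected users.
func loadVariantsWithRetry(makeAsset: () -> AVURLAsset, attempts: Int = 2) async throws -> [AVAssetVariant] {
    var lastError: Error?
    for attempt in 1...attempts {
        let asset = makeAsset()
        do {
            return try await asset.load(.variants)
        } catch {
            lastError = error
            if attempt < attempts {
                try? await Task.sleep(nanoseconds: 500_000_000) // brief pause before retrying
            }
        }
    }
    throw lastError!
}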
Topic: Media Technologies
SubTopic: Streaming
Tags: FairPlay Streaming, iOS, HTTP Live Streaming, AVFoundation
Dear Apple Developer Community,
I'm encountering a critical issue with the MusicLibrary.shared.createPlaylist() method in MusicKit that's affecting our app's core functionality. Despite implementing all recommended authorization checks, the app consistently freezes for some users when this method is called.
What we've already verified before calling createPlaylist():
Network connectivity is properly checked and confirmed
Apple Music authorization is explicitly requested via MusicAuthorization.request()
User subscription status is verified using MusicSubscription.current.canPlayCatalogContent
Despite these precautions, many users report that their app completely freezes when attempting to create a playlist. This is particularly concerning as playlist creation is a core feature of our application.
User-reported workarounds (with mixed success):
Some users have resolved the issue by restarting their devices or reinstalling our app
Others report success after enabling "Sync Library" in Settings → Music. Unfortunately, a significant number of users are still experiencing the issue even after trying both solutions above.
We've reviewed the MusicKit documentation thoroughly and ensured our implementation follows all best practices. Our app correctly handles permissions and uses the async/await pattern as required by the API.
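For reference, the call path is roughly the following (a simplified sketch, not our production code; we're also experimenting with a timeout so a hung call cannot block the UI indefinitely):

import MusicKit

enum PlaylistError: Error { case notAuthorized, noSubscription, timedOut }

func createPlaylistSafely(named name: String) async throws -> Playlist {
    // 1. Explicit authorization request.
    guard await MusicAuthorization.request() == .authorized else {
        throw PlaylistError.notAuthorized
    }
    // 2. Subscription check.
    guard try await MusicSubscription.current.canPlayCatalogContent else {
        throw PlaylistError.noSubscription
    }
    // 3. Create the playlist, racing it against a 15-second timeout so the UI
    //    can recover even if the call never returns.
    return try await withThrowingTaskGroup(of: Playlist.self) { group in
        group.addTask {
            try await MusicLibrary.shared.createPlaylist(name: name)
        }
        group.addTask {
            try await Task.sleep(nanoseconds: 15_000_000_000)
            throw PlaylistError.timedOut
        }
        let playlist = try await group.next()!
        group.cancelAll()
        return playlist
    }
}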
Is there a known issue with the createPlaylist() method that might cause it to block indefinitely? Are there additional authorization steps or settings we should be checking before calling this method? Could this be related to how MusicKit communicates with Apple Music servers?
Any insights from the developer community or official guidance would be greatly appreciated as this issue is severely impacting our user experience.
Thank you for your assistance
In Final Cut Pro, keyframes for transform parameters (such as Position, Scale, and Rotation) are automatically set to “Smooth” interpolation. This often results in undesired easing between keyframes, especially when linear motion is required.
Currently, we have to manually adjust each keyframe to "Linear" using the Video Animation Editor, which can be time-consuming when working with many keyframes.
Would it be possible to add an option to set the default keyframe interpolation to "Linear"—either globally in Preferences or per parameter in the Inspector?
This would greatly streamline the animation workflow for many editors.
Thank you for considering this request!
Short summary
When setting exposureMode to .locked or .custom, the brightness of a video stream still changes depending on the composition and contrast of the visible scene. These changes seem to come from contrast enhancements or dynamic range optimizations and completely break any analysis of the image that requires assessing absolute luminance. While exposure lock does seem to lock the physical exposure parameters of the camera (shutter speed and ISO), I cannot find any way to control these "soft" modifiers.
Details
Background
I am the developer of the app "phyphox", an educational app that makes the phone's sensors accessible to students as measurement tools in science experiments. Currently I am working on implementing photometric measurements through the camera and one very important aspect of it is luminance measurements.
This is particularly relevant since the phone's light sensor has no publicly accessible API, so the camera could, to some extent, make experiments available to Apple users that are otherwise only possible on Android devices.
Implementation
The app uses AVFoundation and explicitly picks individual cameras since camera groups do not support custom exposure settings. This means that it handles camera switching during zoom by itself and even implements its own auto exposure routines to optimize for the use in experiments. Therefore it always stays in custom exposure mode. The app uses YUV420 color space and the individual frames are analyzed in Metal using compute shaders.
However, the effects discussed here still occur if I remove all code that controls the camera and replace it with a simple sequence: set the exposure mode to custom, set custom exposure values, set a fixed white balance, and then set the exposure mode to locked, as suggested on Stack Overflow. This helps neither on an iPhone 14 Pro nor on an iPhone 8, despite a report on the developer forums that it resolves the issue on older devices.
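For reference, the stripped-down sequence looks roughly like this (a Swift sketch; the duration and ISO values are placeholders and `device` is the active AVCaptureDevice):

import AVFoundation

do {
    try device.lockForConfiguration()

    // 1. Fixed physical exposure (placeholder values).
    let duration = CMTime(value: 1, timescale: 100) // 1/100 s
    device.setExposureModeCustom(duration: duration, iso: device.activeFormat.minISO, completionHandler: nil)

    // 2. Freeze white balance at its current gains.
    device.setWhiteBalanceModeLocked(with: device.deviceWhiteBalanceGains, completionHandler: nil)

    // 3. Finally lock exposure, as suggested on Stack Overflow.
    device.exposureMode = .locked

    device.unlockForConfiguration()
} catch {
    print("Could not lock camera configuration: \(error)")
}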
The app is open source, so the code can be seen in our current development branch (without the changes for the tests here, though) on github.
The videos below use the implementation with the suggestion from stackoverflow, but they can be reproduced in the same way with "professional" camera apps that promise manual control over the camera (like the Blackmagic cam to quote a reputable company) as well as the stock camera app after pressing and holding on the preview to enable AE/AF lock.
Demonstration
These examples were captured on an iPhone 14 Pro. The central part of the image (highlighted by the app using Metal shaders after capture) should not change with fixed exposure settings, but significant changes are noticeable when something changes at the edge of the frame, i.e. when I move a black piece of cardboard in from above:
https://share.icloud.com/photos/0b1f_3IB6yAQG-qSH27pm6oDQ
The graph above the camera preview is the average luminance (gamma corrected and weighted based on sRGB) across the highlighted central area and as mentioned before it should not change because of something happening at the side of the frame (worst case it should get a bit darker because of the cardboard's shadow).
In my opinion, the iPhone changes its mind on the ideal contrast as soon as it has a different exposure histogram because of the dark image part from the cardboard, but that's just me guessing.
For completeness here is the same effect in the stock camera app with AE/AF lock enabled:
https://share.icloud.com/photos/0cd7QM8ucBZKwPwE9mybnEowg
Here you can also see that the iPhone "ramps" the changes. The brightness of the gray area does not change immediately but transitions smoothly, so this is clearly deliberate postprocessing.
So...
Any suggestion on how to prevent this behavior would be highly appreciated.
I have a music app I'm developing, and I'm having a weird issue where I can see Now Playing info on every platform other than tvOS. As far as I can tell, I have correctly configured the MPNowPlayingInfoCenter:
MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
MPNowPlayingInfoCenter.default().playbackState = .playing
Are there any extra requirements to get my app's now-playing info showing in control center on tvOS? Another strange issue that might be related is I can use the apple TV remote to pause audio but not resume playback, so I feel like there's something I'm missing about registering audio playback on tvOS specifically.
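One thing I'm testing (a sketch, not confirmed to be the missing piece) is registering MPRemoteCommandCenter handlers, since the system may not surface Now Playing for apps that don't handle the play/pause commands; `player` stands in for my own audio player:

import MediaPlayer

let commandCenter = MPRemoteCommandCenter.shared()

// Handle play so the Siri Remote can resume playback.
commandCenter.playCommand.addTarget { _ in
    player.play()
    return .success
}

// Handle pause as well.
commandCenter.pauseCommand.addTarget { _ in
    player.pause()
    return .success
}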
Hi,
Not sure if this is the right forum to ask this question in, but could you please advise whether I can use the Apple Digital Masters logo (badge) in my iOS app that plays music from the Apple Music service?
Topic: Media Technologies
SubTopic: Audio
When I create an SFSpeechRecognizer object, I find that SFLocalSpeechRecognitionClient remains in memory and is never released.
You can create a demo with a single UIButton whose touch action is
SFSpeechRecognizer(locale: Locale(identifier: "zh_CN"))
Is there a way to permanently disable PHASE SDK logging? It seems to be a lot chattier than Apple's other SDKs.
While developing a RealityKit app that uses AudioPlaybackController, I must manually hide the PHASE SDK log output several times each day so I can see my app's log messages.
Thank you.
I'm developing an iOS app that requires continuous audio recording.
Currently, when a phone call comes in, the AVAudioSession is interrupted and recording stops completely during the ringing phase.
While I understand recording should stop if the call is answered, my app needs to continue recording while the phone is merely ringing.
I've observed that Apple's Voice Memos app maintains recording during incoming call rings. This indicates the hardware and iOS are capable of supporting this functionality.
Request
Please advise on any available AVAudioSession configurations or APIs that would allow my app to:
Continue recording during an incoming call ring
Only stop recording if/when the call is actually answered
Impact
This interruption significantly impacts the user experience and core functionality of my app. Workarounds like asking users to enable airplane mode are impractical and create a poor user experience.
Questions
Is there an approved way to maintain microphone access during call rings?
If not currently possible, could this capability be considered for addition to a future iOS SDK?
Are there any interim solutions or best practices Apple recommends for this use case?
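For context, the interruption handling we already have looks roughly like this (a sketch; `recorder` is our AVAudioRecorder). It resumes after the interruption ends, but nothing here keeps the recorder alive while the phone is still ringing:

import AVFoundation

NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { notification in
    guard
        let info = notification.userInfo,
        let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
        let type = AVAudioSession.InterruptionType(rawValue: typeValue)
    else { return }

    switch type {
    case .began:
        // Recording is forcibly stopped here as soon as the call starts ringing.
        recorder.pause()
    case .ended:
        let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        if AVAudioSession.InterruptionOptions(rawValue: optionsValue).contains(.shouldResume) {
            try? AVAudioSession.sharedInstance().setActive(true)
            recorder.record()
        }
    @unknown default:
        break
    }
}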
Thank you for your help.
SUPPORT INFORMATION
Did someone from Apple ask you to submit a code-level support request?
No
Do you have a focused test project that demonstrates your issue?
Yes, I have a focused test project to submit with my request
What code level support issue are you having?
Problems with an Apple framework API in my app
I found that an aggregate device correctly reports its input channels in standard microphone mode. However, in Voice Isolation mode, it only reports the channels of the first sub-device in the aggregate device's list. How can I obtain the correct channel information in Voice Isolation mode?
In iOS 18, CarPlay shows an error: “There was a problem loading this content” after playback starts. Audio works fine, but the Now Playing screen doesn’t load. I’m using MPPlayableContentManager. This worked fine in iOS 17. Anyone else seeing this error in iOS 18?
Hello, I'm trying to create a web browser, but when I'm signed in to the Apple Music web player, I get the following message when I attempt to play content in any version of my browser:
Not available on the web
You can listen to this in the Apple Music app.
Is there a way to set up DRM with Apple (assuming this is the issue) so that my web browser can play this content?
I believe Apple TV is also affected.
Thank you ahead of time.
My app is a camera app that supports Picture-in-Picture (PiP) mode.
Normally, when the device rotates, I get the device orientation from iOS and use it to rotate the camera feed so that the preview stays correctly aligned.
However, when the app enters PiP mode, it is considered to be in the background, and I can no longer receive orientation updates from the system.
As a result, I can’t apply rotation corrections to the camera video in PiP mode.
Is there any way to retrieve device orientation while the app is in the background (specifically during PiP mode)?
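One workaround I'm evaluating (a sketch; I haven't confirmed that Core Motion keeps delivering updates for the whole PiP session) is deriving a coarse orientation from gravity via Core Motion instead of relying on UIDevice orientation notifications:

import CoreMotion
import UIKit

let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 0.2

motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let gravity = motion?.gravity else { return }
    // Map the gravity vector to the nearest device orientation.
    let orientation: UIDeviceOrientation
    if abs(gravity.x) > abs(gravity.y) {
        orientation = gravity.x > 0 ? .landscapeRight : .landscapeLeft
    } else {
        orientation = gravity.y > 0 ? .portraitUpsideDown : .portrait
    }
    // Apply the corresponding rotation to the camera feed here.
    updateVideoRotation(for: orientation) // `updateVideoRotation` is a placeholder for my own method
}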
Any guidance would be greatly appreciated.
Thank you!
According to the header file, the outputVolume property's supported range is 0.0-1.0:
/*! @property outputVolume
    @abstract The mixer's output volume.
    @discussion
        This accesses the mixer's output volume (0.0-1.0, inclusive).
*/
@property (nonatomic) float outputVolume;
However, when setting the volume to 2.0, the audio does indeed play louder. Is the header file out of date, and if so, what is the actual supported range for outputVolume?
Thanks
Let's consider the following code.
I've created an actor that loads a list of .mp3 files from a bundle and then makes them available for audio playback.
Unfortunately, I'm experiencing a memory leak at the play method:
player.play()
From Instruments I get
_malloc_type_malloc_outlined libsystem_malloc.dylib
start_wqthread libsystem_pthread.dylib
import AVFoundation

private actor AudioActor {
    enum Failure: Error {
        case soundsNotLoaded([AudioPlayerClient.Sound: Error])
    }

    enum Player {
        case music(AVAudioPlayer)
    }

    var players: [Sound: Player] = [:]
    let bundles: [Bundle]

    init(bundles: UncheckedSendable<[Bundle]>) {
        self.bundles = bundles.wrappedValue
    }

    func load(sounds: [Sound]) throws {
        try AVAudioSession.sharedInstance().setActive(true, options: [])
        var errors: [Sound: Error] = [:]
        for sound in sounds {
            // Search every bundle for the sound file (the original snippet used an
            // undefined `bundle`; iterating `bundles` matches the stored property).
            guard let url = bundles
                .compactMap({ $0.url(forResource: sound.name, withExtension: "mp3") })
                .first
            else { continue }
            do {
                self.players[sound] = try .music(AVAudioPlayer(contentsOf: url))
            } catch {
                errors[sound] = error
            }
        }
        guard errors.isEmpty
        else { throw Failure.soundsNotLoaded(errors) }
    }

    func play(sound: Sound, loops: Int?) throws {
        guard let player = self.players[sound]
        else { return }
        switch player {
        case let .music(player):
            player.numberOfLoops = loops ?? -1
            player.play()
        }
    }

    func stop(sound: Sound) throws {
        guard let player = self.players[sound]
        else { throw Failure.soundsNotLoaded([:]) }
        switch player {
        case let .music(player):
            player.stop()
        }
    }
}
}