Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

How to disable the built-in speakers and microphone on a Mac
I need to implement a solution, through an API or custom driver, that completely blocks the built-in speakers and microphone of a Mac, because I need other apps to use specified external devices as audio input and output. Is there a way to achieve this? What I mean is that it should not be possible to choose the built-in microphone and speakers even in System Preferences; only my external device can be used.
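For context, a Core Audio sketch of the closest thing available without a custom driver: programmatically switching the system default output device. Setting a default does not remove or block the built-in devices, which is why the "cannot even be chosen in System Preferences" requirement usually points at a HAL plug-in/driver-level solution. This is an illustration only; `externalDeviceID` is an assumed AudioObjectID found via kAudioHardwarePropertyDevices.

import CoreAudio

// Sketch: make `externalDeviceID` the system default output device.
func setDefaultOutputDevice(_ externalDeviceID: AudioObjectID) -> OSStatus {
    var deviceID = externalDeviceID
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultOutputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)   // kAudioObjectPropertyElementMaster on older SDKs
    return AudioObjectSetPropertyData(
        AudioObjectID(kAudioObjectSystemObject),
        &address,
        0, nil,
        UInt32(MemoryLayout<AudioObjectID>.size),
        &deviceID)
}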
Replies: 0 · Boosts: 0 · Views: 116 · Activity: Apr ’25
Playing periodic audio in background using AVFoundation - facing audio session startup failure
Hello everyone, I’m new to Swift development and have been working on an audio module that plays a specific sound at regular intervals - similar to a workout timer that signals switching exercises every few minutes. Following the AVFoundation documentation, I’m configuring my audio session like this:

let session = AVAudioSession.sharedInstance()
try session.setCategory(
    .playback,
    mode: .default,
    options: [.interruptSpokenAudioAndMixWithOthers, .duckOthers]
)
self.engine.attach(self.player)
self.engine.connect(self.player, to: self.engine.outputNode, format: self.audioFormat)
try? session.setActive(true)

When it’s time to play cues, I schedule playback on a DispatchQueue:

// scheduleAudio uses DispatchQueue
self.scheduleAudio(at: interval.start) {
    do {
        try audio.engine.start()
        audio.node.play()
        for sample in interval.samples {
            audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime))
        }
    } catch {
        print("Audio activation failed: \(error)")
    }
}

This works perfectly in the foreground. But once the app goes into the background, the scheduled callback runs, yet the audio engine fails to start, resulting in an error with code 561015905. Interestingly, if the app is already playing audio before going to the background, the scheduled sounds continue to play as expected. I have added the required background audio mode to my Info.plist by including the UIBackgroundModes key with the value audio.

Is there anything else I should configure? What is the best practice for playing periodic audio while the app runs in the background? How do apps like turn-by-turn navigation handle continuous audio playback in the background? Any advice or pointers would be greatly appreciated!
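A hedged sketch of one thing to try inside the scheduled callback: reactivating the session immediately before starting the engine. This only illustrates the reactivation pattern and is not a confirmed fix for error 561015905; iOS generally refuses to start a session from the background unless the app was already playing audio. The names `audio.engine` and `audio.node` follow the post.

// Sketch only: reactivate the shared session before starting the engine
// in the background callback; setActive(true) may itself throw here.
self.scheduleAudio(at: interval.start) {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setActive(true)
        try audio.engine.start()
        audio.node.play()
    } catch {
        print("Background start failed: \(error)")
    }
}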
Replies: 0 · Boosts: 0 · Views: 208 · Activity: Jul ’25
Issue with Audio Sample Rate Conversion in Video Calls
Hey everyone, I'm encountering an issue with audio sample rate conversion that I'm hoping someone can help with. Here's the breakdown:

Issue description: I've installed a tap on an input device to convert audio to an optimal sample rate, with a converter node added on top of this setup. The problem arises when joining Zoom or FaceTime calls: the converter gets deallocated from memory, causing the program to crash.

Symptoms:
- The converter node is being deallocated during video calls.
- The program crashes entirely when this happens.
- Traditional methods of monitoring sample rate changes (tracking nominal or actual sample rates) aren't working as expected.

The big challenge: I can't figure out how to properly monitor sample rate changes. Listeners set up to track these changes don't trigger when the device joins a Zoom or FaceTime call.

Please, if anyone has experience with this or knows a solution, I'd really appreciate your help. Thanks in advance!
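For reference, a minimal sketch of registering a Core Audio property listener for nominal sample rate changes on a device. `deviceID` is an assumed AudioObjectID for the input device in question, and whether this listener actually fires during Zoom/FaceTime route changes is exactly what the poster is asking about.

import CoreAudio

var address = AudioObjectPropertyAddress(
    mSelector: kAudioDevicePropertyNominalSampleRate,
    mScope: kAudioObjectPropertyScopeGlobal,
    mElement: kAudioObjectPropertyElementMain)

// Sketch: log whenever the device's nominal sample rate property changes.
let status = AudioObjectAddPropertyListenerBlock(deviceID, &address, DispatchQueue.main) { _, _ in
    var rate = Float64(0)
    var size = UInt32(MemoryLayout<Float64>.size)
    var rateAddress = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    if AudioObjectGetPropertyData(deviceID, &rateAddress, 0, nil, &size, &rate) == noErr {
        print("Nominal sample rate is now \(rate) Hz")
    }
}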
Replies: 0 · Boosts: 0 · Views: 86 · Activity: Apr ’25
ScaleTimeRange will cause noise in sound
I'm using AVFoundation to build a multi-track editor app that can insert multiple tracks and clips, including scaling some clips to change their playback speed (I'm also not sure whether AVFoundation is the best choice for this). After scaling a clip with the scaleTimeRange API, there is some short noise in playback. Sometimes playback of the AVMutableComposition through AVPlayer with an AVPlayerItem is fine, but after exporting with AVAssetReader there are short bursts of noise in the resulting file. I'm not sure why. Here is an example project, which can be built and run directly: https://github.com/luckysmg/daily_images/raw/refs/heads/main/TestDemo.zip
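Not a confirmed fix, but one knob that often comes up with scaleTimeRange artifacts is the time-pitch algorithm applied to rate-changed audio. A sketch of setting it on both the player item and an AVAssetReaderAudioMixOutput; `composition` and `audioTracks` are assumed names from the post's setup, and which algorithm value helps (if any) is untested here.

// Sketch: request a specific time-pitch algorithm for scaled audio.
let playerItem = AVPlayerItem(asset: composition)
playerItem.audioTimePitchAlgorithm = .spectral      // alternatives: .timeDomain, .varispeed

// When exporting with AVAssetReader:
let output = AVAssetReaderAudioMixOutput(audioTracks: audioTracks, audioSettings: nil)
output.audioTimePitchAlgorithm = .spectral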
Replies: 0 · Boosts: 0 · Views: 133 · Activity: Jul ’25
Error -50 writing to AVAudioFile
I'm trying to write 16-bit interleaved 2-channel data captured from a LiveSwitch audio source to an AVAudioFile. The buffer and file formats match, but I get a bad parameter error from the API. Does this API not support the specified format, or is there some other issue? Here is the debugger output.

(lldb) po audioFile.url
▿ file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
  - _url : file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
  - _parseInfo : nil
  - _baseParseInfo : nil
(lldb) po error
Error Domain=com.apple.coreaudio.avfaudio Code=-50 "(null)" UserInfo={failed call=ExtAudioFileWrite(_impl->_extAudioFile, buffer.frameLength, buffer.audioBufferList)}
(lldb) po buffer.format
<AVAudioFormat 0x302a12b20: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po audioFile.fileFormat
<AVAudioFormat 0x302a515e0: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po buffer.frameLength
882
(lldb) po buffer.audioBufferList
▿ 0x0000000300941e60
  - pointerValue : 12894608992

This code handles the details of converting the LiveSwitch frame into an AVAudioPCMBuffer.

extension FMLiveSwitchAudioFrame {
    func convertedToPCMBuffer() -> AVAudioPCMBuffer {
        Self.convertToAVAudioPCMBuffer(from: self)!
    }

    static func convertToAVAudioPCMBuffer(from frame: FMLiveSwitchAudioFrame) -> AVAudioPCMBuffer? {
        // Retrieve the audio buffer and format details from the FMLiveSwitchAudioFrame
        guard let buffer = frame.buffer(), let format = buffer.format() as? FMLiveSwitchAudioFormat else {
            return nil
        }

        // Extract PCM format details from FMLiveSwitchAudioFormat
        let sampleRate = Double(format.clockRate())
        let channelCount = AVAudioChannelCount(format.channelCount())

        // Determine bytes per sample based on bit depth
        let bitsPerSample = 16
        let bytesPerSample = bitsPerSample / 8
        let bytesPerFrame = bytesPerSample * Int(channelCount)
        let frameLength = AVAudioFrameCount(Int(buffer.dataBuffer().length()) / bytesPerFrame)

        // Create an AVAudioFormat from the FMLiveSwitchAudioFormat
        guard let avAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: sampleRate, channels: channelCount, interleaved: true) else {
            return nil
        }

        // Create an AudioBufferList to wrap the existing buffer
        let audioBufferList = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: 1)
        audioBufferList.pointee.mNumberBuffers = 1
        audioBufferList.pointee.mBuffers.mNumberChannels = channelCount
        audioBufferList.pointee.mBuffers.mDataByteSize = UInt32(buffer.dataBuffer().length())
        audioBufferList.pointee.mBuffers.mData = buffer.dataBuffer().data().mutableBytes // Directly use LiveSwitch buffer

        // Transfer ownership of the buffer to AVAudioPCMBuffer
        let pcmBuffer = AVAudioPCMBuffer(pcmFormat: avAudioFormat, bufferListNoCopy: audioBufferList) /* { buffer in
            // Ensure the buffer is freed when AVAudioPCMBuffer is deallocated
            buffer.deallocate() // Only call this if LiveSwitch allows manual deallocation
        } */
        pcmBuffer?.frameLength = frameLength
        return pcmBuffer
    }
}

This is the handler that is invoked with every frame in order to convert it for use with AVAudioFile and optionally update a scrolling signal display on the screen.

private func onRaisedFrame(obj: Any!) -> Void {
    // Bail out early if no one is interested in the data.
    guard isMonitoring else { return }

    // Convert LS frame to AVAudioPCMBuffer (no-copy)
    let frame = obj as! FMLiveSwitchAudioFrame
    let buffer = frame.convertedToPCMBuffer()

    // Hand subscribers a reference to the buffer for rendering to display.
    bufferPublisher?.send(buffer)

    // If we have an output file, store the data there, as well.
    guard let audioFile = self.audioFile else { return }
    do {
        try audioFile.write(from: buffer) // FIXME: This call is throwing error -50
    } catch {
        FMLiveSwitchLog.error(withMessage: "Failed to write buffer to audio file at \(audioFile.url): \(error)")
        self.audioFile = nil
    }
}

This is how the audio file is being set up.

static var recordingFormat: AVAudioFormat = {
    AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44_100, channels: 2, interleaved: true)!
}()

let audioFile = try AVAudioFile(forWriting: outputURL, settings: Self.recordingFormat.settings)
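A hedged observation on the error above: AVAudioFile.write(from:) must match the file's processing format (which defaults to deinterleaved Float32 when opened with forWriting:settings:), not just its on-disk fileFormat. One thing worth trying is the initializer that pins the processing format to Int16 interleaved. This is a sketch only, reusing the post's own outputURL and recordingFormat.

// Sketch: open the file so its processing format matches the Int16 interleaved buffers.
let audioFile = try AVAudioFile(forWriting: outputURL,
                                settings: Self.recordingFormat.settings,
                                commonFormat: .pcmFormatInt16,
                                interleaved: true)
// audioFile.processingFormat should now report Int16 interleaved,
// which is what write(from:) compares the incoming buffer against.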
Replies: 0 · Boosts: 0 · Views: 430 · Activity: Mar ’25
How to inform Logic Pro that AU view does not have a fixed aspect ratio?
I have an AUv3 that passes all validation and can be loaded into Logic Pro without issue. The UI for the plug-in can be any aspect ratio, but Logic insists on presenting it in a view with a fixed aspect ratio: when resizing, both the height and width change together. I have never managed to work out what I need to specify to let the user resize width and height independently of each other. Can anyone tell me what I need to specify in the AU code to inform Logic that the view can be resized from any side of the window/panel?
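Whether this is what Logic keys off for its aspect-ratio behaviour is not confirmed, but the host-facing API for advertising flexible view sizes is AUAudioUnit's view-configuration support. A minimal sketch of an override that accepts every configuration the host offers; `viewController` is a hypothetical reference to the AU's view controller.

// Sketch: accept any view configuration (width/height combination) the host proposes.
override public func supportedViewConfigurations(
    _ availableViewConfigurations: [AUAudioUnitViewConfiguration]) -> IndexSet {
    // Returning every index tells the host that all of its proposed sizes are acceptable.
    return IndexSet(integersIn: 0..<availableViewConfigurations.count)
}

override public func select(_ viewConfiguration: AUAudioUnitViewConfiguration) {
    // Resize the plug-in's view controller to the host-selected size.
    viewController?.preferredContentSize = NSSize(width: viewConfiguration.width,
                                                  height: viewConfiguration.height)
}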
Replies: 0 · Boosts: 0 · Views: 134 · Activity: Apr ’25
How can I find the user's "Favorite Songs" playlist?
It sounds simple, but searching for the name "Favorite Songs" is a non-starter because the playlist is called different names in different countries, even if I specify "&l=en_us" on the query. So is there another property, relationship, or combination thereof that I can use to tell when I've found the right playlist?

Properties I've looked at so far:
- canEdit: always false, so it narrows things down a little
- inFavorites: not helpful, since it depends on whether the user has favourited the Favourites playlist, so not relevant
- hasCatalog: seems to always be true, so again it may narrow things down a bit
- isPublic: doesn't help

Adding the catalog relationship doesn't seem to show anything immediately useful either. Can anyone help? Ideally I'd like to see this as a "kind" or "type", since it has different properties to other playlists, but frankly I'll take anything at this point.
Replies: 0 · Boosts: 0 · Views: 286 · Activity: Jul ’25
Unstable Playlist.Entry.id causes crashes when removing duplicates
When multiple identical songs are added to a playlist, Playlist.Entry.id uses a suffix-based identifier (e.g. songID_0, songID_1, etc.). Removing one entry causes the others to shift, changing their .id values. This leads to diffing errors and collection view crashes in SwiftUI or UIKit when entries are updated.

Steps to reproduce:
1. Add the same song to a playlist multiple times.
2. Observe .id.rawValue of the entries (e.g. i.SONGID_0, i.SONGID_1).
3. Remove one entry.
4. Fetch the playlist again — note the other IDs have shifted.

FB18879062
Replies: 0 · Boosts: 0 · Views: 542 · Activity: Jul ’25
Intermittent Memory Leak Indicated in Simulator When Using AVAudioEngine with mainMixerNode Only
Hello, I'm observing an intermittent memory leak being reported in the iOS Simulator when initializing and starting an AVAudioEngine. Even with minimal setup—just attaching a single AVAudioPlayerNode and connecting it to the mainMixerNode—Xcode's memory diagnostics and Instruments sometimes flag a leak. Here is a simplified version of the code I'm using:

// This function is called when the user taps a button in the view controller:
#import "ViewController.h"

@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
}

- (IBAction)myButtonAction:(id)sender {
    NSLog(@"Test");
    soundCreate();
}

@end

// media.m
static AVAudioEngine *audioEngine = nil;

void soundCreate(void) {
    if (audioEngine != nil) return;

    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    audioEngine = [[AVAudioEngine alloc] init];
    AVAudioPlayerNode* playerNode = [[AVAudioPlayerNode alloc] init];
    [audioEngine attachNode:playerNode];
    [audioEngine connect:playerNode to:(AVAudioNode *)[audioEngine mainMixerNode] format:nil];
    [audioEngine startAndReturnError:nil];
}

In the memory leak report, the following call stack is repeated, seemingly in a loop:

ListenerMap::InsertEvent(XAudioUnitEvent const&, ListenerBinding*)   AudioToolboxCore
ListenerMap::AddParameter(AUListener*, void*, XAudioUnitEvent const&)   AudioToolboxCore
AUListenerAddParameter   AudioToolboxCore
addOrRemoveParameterListeners(OpaqueAudioComponentInstance*, AUListenerBase*, AUParameterTree*, bool)   AudioToolboxCore
0x180178ddf
Replies: 0 · Boosts: 0 · Views: 119 · Activity: Apr ’25
occasional glitches and empty buffers when using AudioFileStream + AVAudioConverter
I'm streaming mp3 audio data using URLSession/AudioFileStream/AVAudioConverter and getting occasional silent buffers and glitches (little bleeps and whoops as opposed to clicks). The issues are present in an offline test, so this isn't an issue of underruns. Doing some buffering on the input coming from the URLSession (URLSessionDataTask) reduces the glitches/silent buffers to rather infrequent, but they do still happen occasionally.

var bufferedData = Data()

func parseBytes(data: Data) {
    bufferedData.append(data)

    // XXX: this buffering reduces glitching
    // to rather infrequent. But why?
    if bufferedData.count > 32768 {
        bufferedData.withUnsafeBytes { (bytes: UnsafeRawBufferPointer) in
            guard let baseAddress = bytes.baseAddress else { return }
            let result = AudioFileStreamParseBytes(audioStream!, UInt32(bufferedData.count), baseAddress, [])
            if result != noErr {
                print("❌ error parsing stream: \(result)")
            }
        }
        bufferedData = Data()
    }
}

No errors are returned by AudioFileStream or AVAudioConverter.

func handlePackets(data: Data, packetDescriptions: [AudioStreamPacketDescription]) {
    guard let audioConverter else { return }

    var maxPacketSize: UInt32 = 0
    for packetDescription in packetDescriptions {
        maxPacketSize = max(maxPacketSize, packetDescription.mDataByteSize)
        if packetDescription.mDataByteSize == 0 {
            print("EMPTY PACKET")
        }
        if Int(packetDescription.mStartOffset) + Int(packetDescription.mDataByteSize) > data.count {
            print("❌ Invalid packet: offset \(packetDescription.mStartOffset) + size \(packetDescription.mDataByteSize) > data.count \(data.count)")
        }
    }

    let bufferIn = AVAudioCompressedBuffer(format: inFormat!, packetCapacity: AVAudioPacketCount(packetDescriptions.count), maximumPacketSize: Int(maxPacketSize))
    bufferIn.byteLength = UInt32(data.count)
    for i in 0 ..< Int(packetDescriptions.count) {
        bufferIn.packetDescriptions![i] = packetDescriptions[i]
    }
    bufferIn.packetCount = AVAudioPacketCount(packetDescriptions.count)
    _ = data.withUnsafeBytes { ptr in
        memcpy(bufferIn.data, ptr.baseAddress, data.count)
    }

    if verbose {
        print("handlePackets: \(data.count) bytes")
    }

    // Setup input provider closure
    var inputProvided = false
    let inputBlock: AVAudioConverterInputBlock = { packetCount, statusPtr in
        if !inputProvided {
            inputProvided = true
            statusPtr.pointee = .haveData
            return bufferIn
        } else {
            statusPtr.pointee = .noDataNow
            return nil
        }
    }

    // Loop until converter runs dry or is done
    while true {
        let bufferOut = AVAudioPCMBuffer(pcmFormat: outFormat, frameCapacity: 4096)!
        bufferOut.frameLength = 0
        var error: NSError?
        let status = audioConverter.convert(to: bufferOut, error: &error, withInputFrom: inputBlock)
        switch status {
        case .haveData:
            if verbose {
                print("✅ convert returned haveData: \(bufferOut.frameLength) frames")
            }
            if bufferOut.frameLength > 0 {
                if bufferOut.isSilent {
                    print("(haveData) SILENT BUFFER at frame \(totalFrames), pending: \(pendingFrames), inputPackets=\(bufferIn.packetCount), outputFrames=\(bufferOut.frameLength)")
                }
                outBuffers.append(bufferOut)
                totalFrames += Int(bufferOut.frameLength)
            }
        case .inputRanDry:
            if verbose {
                print("🔁 convert returned inputRanDry: \(bufferOut.frameLength) frames")
            }
            if bufferOut.frameLength > 0 {
                if bufferOut.isSilent {
                    print("(inputRanDry) SILENT BUFFER at frame \(totalFrames), pending: \(pendingFrames), inputPackets=\(bufferIn.packetCount), outputFrames=\(bufferOut.frameLength)")
                }
                outBuffers.append(bufferOut)
                totalFrames += Int(bufferOut.frameLength)
            }
            return // wait for next handlePackets
        case .endOfStream:
            if verbose {
                print("✅ convert returned endOfStream")
            }
            return
        case .error:
            if verbose {
                print("❌ convert returned error")
            }
            if let error = error {
                print("error converting: \(error.localizedDescription)")
            }
            return
        @unknown default:
            fatalError()
        }
    }
}
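Not presented as the cause here, but for MP3 streams the converter's priming behaviour is one commonly discussed source of silent or partial leading buffers. A small sketch of disabling priming on the post's `audioConverter`; whether this interacts with the observed glitches is unverified.

// Sketch: tell the converter not to apply its own priming/trim handling.
// Assumption: leading/trailing decoder frames are dealt with elsewhere.
audioConverter.primeMethod = .none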
Replies: 0 · Boosts: 0 · Views: 555 · Activity: Jul ’25
Different behaviors of USB-C to Headphone Jack Adapters
I bought two "Apple USB-C to Headphone Jack Adapters". Upon closer inspection, they seem to be of different generations. The one with product ID 0x110a is working fine. The one with product ID 0x110b has two issues:
- There is a short but loud click noise on the headphones when I connect it to the iPad.
- When I play audio using AVAudioPlayer, the first half second or so is cut off.

Here's how I'm playing the audio:

audioPlayer = try AVAudioPlayer(contentsOf: url)
audioPlayer?.delegate = self
audioPlayer?.prepareToPlay()
audioPlayer?.play()

Is this a known issue? Am I doing something wrong?
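One thing that may be worth ruling out (an assumption, not a known fix for the 0x110b adapter): activating the audio session well before play() is called, so the first moments of playback aren't lost while the output route spins up.

// Sketch: activate the session ahead of time, then play as in the post.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback)
try session.setActive(true)   // do this early, e.g. when the screen appears

audioPlayer = try AVAudioPlayer(contentsOf: url)
audioPlayer?.prepareToPlay()
audioPlayer?.play()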
Replies: 0 · Boosts: 0 · Views: 320 · Activity: Jul ’25
What is the best approach to multi-channel, per-channel volume control?
I've got a setup using AVAudioEngine with several tone generator nodes, each with a chain of processing nodes, the chains then mixed into the main output:

Generator ➡️ Effect ➡️ ... ➡️ .mainMixerNode ➡️ .outputNode
Generator ➡️ Effect ➡️ ... ⤴️
Generator ➡️ Effect ➡️ ... ⤴️

The user should be able to mute any chain individually. I've found several potential approaches to muting, but I'm not terribly happy with any of them.

1. Adjust the amplitudes directly in my tone generators. Issue: consumes CPU even when completely muted; 4 generators add ~15% CPU even when all chains are muted.
2. Detach/attach chains that are muted/unmuted. Issue: causes loud clicking/popping sounds whenever muting/unmuting.
3. Fade the mixer output volume while detaching/attaching a chain (just cutting the volume immediately to 0 doesn't get rid of the clicking/popping). Issue: causes all channels to fade during the transition, so not ideal.

The rest of these ideas are variations on making volume control plus detach/attach work for individual chains, since approach #3 worked well.

4. Add an AVAudioMixerNode to the end of each chain (just for volume control). Issue: only the mixer on the final chain functions; the others block all output. Not sure what's going on there.
5. Use a matrix mixer (for multi-input volume control), plus detach/attach to reduce CPU if necessary. Not yet attempted, due to perceived complexity and reports of fragility in the order of wiring in. A bunch of effort before I even know if it's going to work.
6. Develop my own fader node to put on the end of each channel. Unlike the tone generator (a simple AVAudioSourceNode), developing an effect node seems complex and time consuming. Might not even fix CPU use.

I'm not completely averse to the learning curve of either 5 or 6, but would rather get some guidance on the best approach before diving in. They both seem likely to take more effort than I'd like for the simple behavior I'm trying to achieve.
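As a reference point for approach 4, a minimal sketch of how a per-chain AVAudioMixerNode fader is usually wired; `generators` stands in for the poster's tone-generator source nodes and the effect chains are omitted. If this matches what was tried, the "only the last mixer works" symptom is worth its own follow-up.

import AVFoundation

let engine = AVAudioEngine()
let format = engine.outputNode.inputFormat(forBus: 0)

// One mixer per chain, used purely as a fader.
var chainMixers: [AVAudioMixerNode] = []

for generator in generators {
    let chainMixer = AVAudioMixerNode()
    engine.attach(generator)
    engine.attach(chainMixer)

    engine.connect(generator, to: chainMixer, format: format)
    engine.connect(chainMixer, to: engine.mainMixerNode, format: format)
    chainMixers.append(chainMixer)
}

// Muting a chain is then just a volume change on its mixer.
chainMixers[0].outputVolume = 0.0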
Replies: 0 · Boosts: 0 · Views: 294 · Activity: Jul ’25
TTS Audio Unit Extension: File Write Access in App Group Container Denied Despite Proper Entitlements
I'm developing a TTS Audio Unit Extension that needs to write trace/log files to a shared App Group container. While the main app can successfully create and write files to the container, the extension gets sandbox denied errors despite having proper App Group entitlements configured.

Setup:
- Main App (Flutter) and TTS Audio Unit Extension share the same App Group
- App Group is properly configured in the developer portal and entitlements
- Main app successfully creates and uses files in the container
- Container structure shows existing directories (config/, dictionary/) with populated files
- Both targets have the App Group capability enabled and entitlements set

Current behavior:
- Extension can access/read the App Group container
- Extension can see existing directories and files
- All write attempts are blocked with "sandbox deny(1) file-write-create" errors

Code example:

const char* createSharedGroupPathWithComponent(const char* groupId, const char* component) {
    NSString* groupIdStr = [NSString stringWithUTF8String:groupId];
    NSString* componentStr = [NSString stringWithUTF8String:component];

    NSURL* url = [[NSFileManager defaultManager] containerURLForSecurityApplicationGroupIdentifier:groupIdStr];
    NSURL* fullPath = [url URLByAppendingPathComponent:componentStr];

    NSError *error = nil;
    if (![[NSFileManager defaultManager] createDirectoryAtPath:fullPath.path
                                   withIntermediateDirectories:YES
                                                    attributes:nil
                                                         error:&error]) {
        NSLog(@"Unable to create directory %@", error.localizedDescription);
    }
    return [[fullPath path] UTF8String];
}

Error output:

Sandbox: simaromur-extension(996) deny(1) file-write-create /private/var/mobile/Containers/Shared/AppGroup/36CAFE9C-BD82-43DD-A962-2B4424E60043/trace

Key questions:
- Are there additional entitlements required for TTS Audio Unit Extensions to write to App Group containers?
- Is this a known limitation of TTS Audio Unit Extensions?
- What is the recommended way to handle logging/tracing in TTS Audio Unit Extensions?
- If writing to App Group containers is not supported, what alternatives are available?

Current entitlements:

<dict>
    <key>com.apple.security.application-groups</key>
    <array>
        <string>group.com.<company>.<appname></string>
    </array>
</dict>
Replies: 0 · Boosts: 0 · Views: 110 · Activity: Apr ’25
Why Does WebView Audio Get Quiet During RTC Calls? (AVAudioSession Analysis)
I developed an educational app that implements audio-video communication through RTC, while using WebView to display course materials during classes. However, some users are experiencing an issue where the audio playback from WebView is very quiet. I've checked that the AVAudioSessionCategory is set by RTC to AVAudioSessionCategoryPlayAndRecord, and the AVAudioSessionCategoryOption also includes AVAudioSessionCategoryOptionMixWithOthers. What could be causing the WebView audio to be suppressed, and how can this be resolved?
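A small diagnostic sketch that may help narrow this down: log the session configuration and route at the moment WebView audio sounds quiet, to see whether the RTC engine has also changed the mode or output route (e.g. to the receiver) rather than only the category. The speaker override at the end is an assumption worth testing, not a confirmed fix.

let session = AVAudioSession.sharedInstance()
print("category: \(session.category), mode: \(session.mode), options: \(session.categoryOptions)")
print("route outputs: \(session.currentRoute.outputs.map { $0.portType.rawValue })")

// One thing sometimes worth testing: force PlayAndRecord output to the speaker.
try? session.overrideOutputAudioPort(.speaker)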
Replies: 0 · Boosts: 0 · Views: 555 · Activity: Jul ’25
AVAudioEngine failing with -10877 on macOS 26 beta, no devices detected via AVFoundation but HAL works
I’m developing a macOS audio monitoring app using AVAudioEngine, and I’ve run into a critical issue on the macOS 26 beta where AVFoundation fails to detect any input devices, and AVAudioEngine.start() throws the familiar error -10877. FB: FB19024508

Strange behavior:
- AVAudioEngine.inputNode shows no channels or input format on bus 0.
- AVAudioEngine.start() fails with -10877 (AudioUnit connection error).
- AVCaptureDevice.DiscoverySession returns zero audio devices.
- Microphone permission is granted (authorized), and the app is properly signed and sandboxed with com.apple.security.device.audio-input.

However, the CoreAudio HAL does detect all input/output devices: using AudioObjectGetPropertyDataSize and AudioObjectGetPropertyData with kAudioHardwarePropertyDevices, I can enumerate 14+ devices, including AirPods, USB DACs, and BlackHole. This suggests the lower-level audio stack is functional.

I have tried:
- Resetting CoreAudio with sudo killall coreaudiod
- Rebuilding and re-signing the app
- Clearing TCC with tccutil reset Microphone
- Running on Apple Silicon and testing Rosetta/native detection via sysctl.proc_translated
- Using a fallback mechanism that logs device info from the HAL and rotates logs for submission via Feedback Assistant

I have submitted logs and a reproducible test case via Feedback Assistant: FB19024508
Replies: 0 · Boosts: 0 · Views: 354 · Activity: Jul ’25
MusicKit - Skipping Forwards or Backwards does not update
Hello everyone, I am working on an app that lets you review your own music using Apple Music. Currently I am running into an issue with skipping forwards and backwards outside of the app.

How it should work: When skipping forward or backwards on the lock or home screen of an iPhone, the next or previous song on an album should play, and the information should change to reflect that in the app. If you play a song in Apple Music, you can see a Now Playing view on the lock screen; when you skip forward or backwards, it performs the action and shows a little frequency icon on the artwork of the now-playing song.

What it's doing: When skipping forward or backwards on the lock or home screen, the next or previous song is reflected outside of the app, but not in the app. Skipping a song outside of the app correctly moves to the next song, but when I return to the app, the change is not reflected.

NOTE: I am not using MusicKit types such as Track or Album to display the songs. Since I want to grab the songs and review them, I need a rating, so I created my own model that grabs the MusicItemID, name, artist(s), etc.
NOTE: I am using ApplicationMusicPlayer.shared

Is there a way to get the song change to reflect in my app? (If it's easier, a simple example would be nice; no need to create an entire Xcode project.)
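A hedged sketch of one way in-app state is often kept in sync, assuming MusicPlayer.Queue's ObservableObject conformance publishes currentEntry changes when the system remote controls skip tracks (worth verifying against the current MusicKit documentation before relying on it):

import SwiftUI
import MusicKit

struct NowPlayingMirror: View {
    // Observing the shared player's queue; currentEntry is expected to change
    // when tracks are skipped from the lock screen.
    @ObservedObject private var queue = ApplicationMusicPlayer.shared.queue

    var body: some View {
        Text(queue.currentEntry?.title ?? "Nothing playing")
    }
}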
Replies: 0 · Boosts: 0 · Views: 97 · Activity: Apr ’25
Best Approach for Reliable Background Audio Playback with Audio Ducking on Command from Server
I am developing an iOS app that needs to play spoken audio on demand from a server, while ducking the audio of background music from another app (e.g., SoundtrackYourBrand or Apple Music). This must work even when the app is in the background, and the server dictates when and what audio is played. Ideally, the message should be played within a minute of the server requesting it.

Current attempt & observations:
- I initially tried using Firebase Cloud Messaging (FCM) silent notifications to send a URL to an audio file, which the app would then play using AVPlayer.
- This works consistently when the app is active, but in the background it only works about 60% of the time. In cases where it fails, iOS ducks the background music (e.g., from SoundtrackYourBrand) but never plays the spoken audio.
- Interestingly, when I play the audio without enabling audio ducking, it seems to work 100% of the time in my limited testing, even in the background.
- The app has background modes enabled for Audio, Background Fetch, and Remote Notifications.

Best approach to achieve this? I’d like guidance on the best Apple-compliant approach to reliably play audio on command from the server, even when the app is in the background. Some possible paths:
1. Ensuring the app remains active in the background – are there recommended ways to prevent the app from getting suspended, such as background tasks, a special background mode, or a persistent connection to the server?
2. Alternative triggering mechanisms – would something like VoIP, Push-to-Talk, or another background service be better suited for this use case?
3. Built-in iOS speech synthesis (AVSpeechSynthesizer) – if playing external audio is unreliable, would generating speech dynamically from text be a more robust approach?
4. Streaming audio instead of sending a URL – could continuous streaming from the server keep the app active and allow playback at the right moment?

I want to ensure the solution is reliable and works 100% of the time when needed. Any recommendations on the best approach would be greatly appreciated. Thank you for your time and guidance.
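For the ducking part specifically, a minimal sketch of the activate-play-deactivate pattern that pairs .duckOthers with .notifyOthersOnDeactivation so the other app's music comes back up after the message. `messageURL` is an assumed name for the URL from the server payload, and this does not address the background-delivery reliability question.

let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, options: [.duckOthers])
try session.setActive(true)                      // other audio ducks here

let player = AVPlayer(url: messageURL)
player.play()

// Later, when playback of the spoken message finishes:
try session.setActive(false, options: [.notifyOthersOnDeactivation])   // other audio un-ducks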
Replies: 0 · Boosts: 1 · Views: 401 · Activity: Feb ’25
Access transport state in Logic Pro
Hi, I need to read whether the transport is playing or stopped, but my current method, which works for VST, does not work for AU. Is there a Logic Pro resource available for developers anywhere?

if (auto* playHead = processor->getPlayHead()) {
    juce::AudioPlayHead::CurrentPositionInfo posInfo;
    if (playHead->getCurrentPosition(posInfo)) {
        bool isCurrentlyPlaying = posInfo.isPlaying;
        if (isCurrentlyPlaying != wasTransportPlaying) {
            if (isCurrentlyPlaying) {
                wasTransportPlaying = isCurrentlyPlaying;
                startAllTimers();
            } else {
                wasTransportPlaying = isCurrentlyPlaying;
                stopAllTimers();
            }
        }
    }
}

Thanks :)
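For reference, at the Audio Unit level the host's transport state is exposed to the plug-in through AUAudioUnit's transportStateBlock. A hedged Swift sketch of polling it from inside an AUAudioUnit subclass; how JUCE surfaces this for AU builds is not covered here.

// Sketch: inside an AUAudioUnit subclass, query the host transport state.
if let transportState = self.transportStateBlock {
    var flags = AUHostTransportStateFlags()
    var currentSampleTime: Double = 0
    // Returns false if the host does not provide transport information.
    if transportState(&flags, &currentSampleTime, nil, nil) {
        let isPlaying = flags.contains(.moving)
        print("Host transport playing: \(isPlaying)")
    }
}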
Replies: 0 · Boosts: 0 · Views: 283 · Activity: Mar ’25