Streaming


Deep dive into the technical specifications that influence seamless playback for streaming services, including bitrates, codecs, and caching mechanisms.


Posts under Streaming subtopic


How to Monitor Any USB Audio or Video Device on macOS
USB cameras, microphones, HDMI capture cards, and audio interfaces are supposed to "just work" on macOS. In reality, it's often difficult to quickly access or monitor them without opening large and complicated software. Sometimes you simply want to see whether a USB camera is active. Sometimes you want to check an HDMI source connected through a capture card. And in other cases, you may want to use a Mac mini without a dedicated monitor by viewing its HDMI output through a USB capture device directly on another Mac.

macOS supports many modern USB AV devices out of the box, but it surprisingly lacks a simple built-in utility for live monitoring and recording. Most users end up using oversized streaming or editing applications just to preview a video signal or monitor audio input. That becomes especially noticeable with:

- USB webcams
- HDMI capture adapters
- USB microphones
- audio interfaces
- secondary computers
- headless Mac mini setups

A lightweight monitor utility is often much more practical when you only need real-time access to a device, want to record a stream, or quickly switch between multiple AV inputs. That's one of the reasons I built AV Monitor Pro, a native macOS app designed for monitoring and recording connected audio/video devices in real time. It can preview USB cameras, capture cards, microphones, and HDMI sources with minimal setup, and it's especially useful for workflows like running a Mac mini without a monitor, monitoring external devices, or recording live AV input directly on macOS.

Replies: 0 · Boosts: 0 · Views: 166 · Created: 5d
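Editor's note: for readers who want to script this kind of check themselves, here is a minimal sketch of the device enumeration such a utility performs, using AVFoundation's discovery session. listCaptureDevices is a hypothetical helper name (the app's actual code isn't shown in the post), and the .external and .microphone device types assume macOS 14 or later.

    import AVFoundation

    // Minimal sketch: enumerate external cameras, capture cards, and
    // microphones the way a monitor utility would. Assumes macOS 14+,
    // where the .external and .microphone device types are available.
    func listCaptureDevices() {
        let discovery = AVCaptureDevice.DiscoverySession(
            deviceTypes: [.external, .builtInWideAngleCamera, .microphone],
            mediaType: nil, // nil matches both audio and video devices
            position: .unspecified
        )
        for device in discovery.devices {
            print("\(device.localizedName) [\(device.deviceType.rawValue)]")
        }
    }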
AVMutableComposition audio silently drops on iOS 26 when streaming over HTTP/2 (FB22696516)
We've discovered a regression in iOS 26 where AVMutableComposition silently drops audio when the source asset is streamed over HTTP/2. The same file served over HTTP/1.1 plays audio correctly through the same composition code. Direct AVPlayer playback (without composition) works fine on HTTP/2. This did not occur on iOS 18.x. It happens on physical devices only; it does not reproduce on a simulator or on macOS.

Tested conditions (same MP4 file, different CDNs):

- CloudFront (HTTP/2) + Composition → ❌ Audio silent
- Cloudflare (HTTP/2) + Composition → ❌ Audio silent
- Akamai (HTTP/1.1) + Composition → ✅ Audio works
- Apple TS (HTTP/1.1) + Composition → ✅ Audio works
- Downloaded locally, then composed → ✅ Audio works
- Direct playback, no composition (HTTP/2) → ✅ Audio works

The CloudFront and Akamai URLs serve the identical file — same S3 object, different CDN edge. The CDN vendor doesn't matter; any HTTP/2 source triggers it.

Minimal reproduction:

    let asset = AVURLAsset(url: http2URL)
    let videoTrack = try await asset.loadTracks(withMediaType: .video).first!
    let audioTrack = try await asset.loadTracks(withMediaType: .audio).first!
    let duration = try await asset.load(.duration)

    let composition = AVMutableComposition()
    let fullRange = CMTimeRange(start: .zero, end: duration)

    let compVideo = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)!
    try compVideo.insertTimeRange(fullRange, of: videoTrack, at: .zero)

    let compAudio = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)!
    try compAudio.insertTimeRange(fullRange, of: audioTrack, at: .zero)

    let item = AVPlayerItem(asset: composition.copy() as! AVComposition)
    player.replaceCurrentItem(with: item)
    player.play() // Video plays, audio goes silent after a while

Playing the same asset directly works fine:

    player.replaceCurrentItem(with: AVPlayerItem(asset: asset))
    player.play() // Both video and audio work

Filed as FB22696516. Sample project: https://github.com/karlingen/AVCompositionBug

Replies: 2 · Boosts: 9 · Views: 219 · Created: 1w
HLS Tools - hlsreport critical error cause
Hi, I'm currently experiencing issues with HLS streams created by FFmpeg running on Safari. When I pass the stream to the mediastreamvalidator tool and then run hlsreport on the output, I get a critical error reported:

    Media Entry discontinuity value does not match previous playlist for MEDIA-SEQUENCE 1

If I let the stream finish (it's a live stream from an IoT device) and then perform the stream validation again, I no longer receive the critical error. My assumption is that this critical error is contributing to the HLS stall on iOS. I have also noticed that if I let the stream continue and then re-load the video control in Safari, the stream starts.

Is there a resource with explanations or remediation paths relevant to the possible output of hlsreport? My m3u8 output looks like this (I have redacted the server host):

    #EXTM3U
    #EXT-X-VERSION:6
    #EXT-X-TARGETDURATION:2
    #EXT-X-MEDIA-SEQUENCE:1
    #EXT-X-PLAYLIST-TYPE:EVENT
    #EXT-X-INDEPENDENT-SEGMENTS
    #EXT-X-DISCONTINUITY
    #EXTINF:2.000000,
    https://redacted.com/segment-00001.ts
    #EXTINF:2.000011,
    https://redacted.com/segment-00002.ts
    #EXTINF:2.000011,
    https://redacted.com/segment-00003.ts
    #EXTINF:2.000011,
    https://redacted.com/segment-00004.ts
    #EXTINF:2.000011,
    #EXT-X-ENDLIST

Thanks for any advice or guidance possible - if I can provide isolated code snippets I will do so. Andy

Replies: 1 · Boosts: 0 · Views: 607 · Created: 1w
FairPlay SPC v3 documentation mismatch: payload length field size vs sample code
Hi, I’ve identified a discrepancy between the FairPlay Streaming SPC v3 documentation and the provided Swift reference implementation regarding the SPC payload length field.

The documentation states that the SPC v3 structure defines:

    SPC payload length: 4 bytes

However, Apple’s Swift sample implementation reads:

    // Move local offset by 12 to adjust for padding
    localOffset += 12
    spcContainer.spcDataSize = Int(try readBigEndianU32(spc, localOffset))

This indicates a 16-byte field (12 bytes of padding plus a 4-byte length). This behavior also matches the SPC sample provided in the FairPlay Streaming SDK (sample_spc_v3.b64):

    00000003                         // spc version
    00000000                         // reserved
    ....
    00000000000000000000000000000f40 // spc payload length (16 bytes)

Could you please confirm the correct implementation? Thanks

Replies: 1 · Boosts: 0 · Views: 129 · Created: 2w
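Editor's note: to make the offset arithmetic concrete, here is a plausible reconstruction of the readBigEndianU32 helper the snippet calls. This is hypothetical (the SDK's actual implementation may differ); it shows that after advancing localOffset by 12, the length value is read from the last 4 bytes of the 16-byte field.

    import Foundation

    // Hypothetical stand-in for the SDK's readBigEndianU32: reads a 4-byte
    // big-endian integer at `offset`. With localOffset advanced by 12 first,
    // the length comes from bytes 12..<16 of the 16-byte field.
    func readBigEndianU32(_ data: Data, _ offset: Int) throws -> UInt32 {
        guard offset >= 0, offset + 4 <= data.count else {
            throw CocoaError(.fileReadCorruptFile)
        }
        return data.subdata(in: offset ..< offset + 4)
            .reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }
    }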
FairPlay SPC with an invalid device type
Hi, I received an SPC without a Device Identity TLLV and with an invalid value of device type (i.e., a value that is not specified in the FairPlay programming guide) in the Device Info TLLV. The info I got is the following: Apple Device Type: Type:0x555ea482e2ef0a7c, OS version:189.121.178. Does anyone know what device type it is, and why it does not conform to the Apple spec? Also, should I accept such an SPC, or is it not valid? Thanks.

Replies: 1 · Boosts: 0 · Views: 235 · Created: 2w
AVMetricMediaResourceRequestEvent returns error but no URLSession metrics for failed HLS playlist/segment requests
Hello, I am using AVMetrics to monitor HLS playback requests from AVPlayer, specifically AVMetricHLSPlaylistRequestEvent and AVMetricHLSMediaSegmentRequestEvent. These events provide an AVMetricMediaResourceRequestEvent. For successful requests, I can read URLSession metrics. However, when a request fails, the event contains an error but no URLSession metrics. I reproduced this by intercepting HLS playlist and segment requests with Charles Proxy and forcing failures on both the simulator and a physical device. Is this expected behavior? If so, is there any supported way to get timing details for failed HLS requests? I am using code like this:

    for try await event in playerItem.metrics(forType: AVMetricHLSPlaylistRequestEvent.self) {
        // ...
    }

    for try await event in playerItem.metrics(forType: AVMetricHLSMediaSegmentRequestEvent.self) {
        // ...
    }

Also, the example shown in the WWDC session does not compile for me (Xcode 26.2). I get the following error:

    Pack expansion requires that '' and 'AVMetricEvent' have the same shape

    let playerItem: AVPlayerItem = ...
    let ltkuMetrics = item.metrics(forType: AVMetricPlayerItemLikelyToKeepUpEvent.self)
    let summaryMetrics = item.metrics(forType: AVMetricPlayerItemPlaybackSummaryEvent.self)

    for await (metricEvent, publisher) in ltkuMetrics.chronologicalMerge(with: summaryMetrics) {
        // send metricEvent to server
    }

Replies: 2 · Boosts: 1 · Views: 249 · Created: 4w
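Editor's note: while the merged form fails to compile, one workaround sketch (an assumption on my part, not the WWDC-recommended pattern) is to consume each metric stream in its own child task. This loses the chronological interleaving that chronologicalMerge provides.

    import AVFoundation

    // Workaround sketch: read each metric stream concurrently instead of
    // merging them. Assumes iteration over AVMetrics is non-throwing, as in
    // the WWDC sample's `for await` form.
    func consumeMetrics(for item: AVPlayerItem) async {
        await withTaskGroup(of: Void.self) { group in
            group.addTask {
                for await event in item.metrics(forType: AVMetricPlayerItemLikelyToKeepUpEvent.self) {
                    print("likely-to-keep-up event: \(event)")
                }
            }
            group.addTask {
                for await event in item.metrics(forType: AVMetricPlayerItemPlaybackSummaryEvent.self) {
                    print("playback summary: \(event)")
                }
            }
        }
    }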
ScreenCaptureKit stops capturing after ~10–15 minutes unexpectedly
When using the built-in macOS screen recording feature, the recording stops automatically after approximately 10–15 minutes without any warning or error message. No manual stop action is performed. The recording simply ends silently. The same issue also occurs when using ScreenCaptureKit in a custom application, which suggests this may be a system-level issue related to screen capture rather than an app-specific problem. This issue is reproducible and happens consistently after running for a period of time.
Replies: 0 · Boosts: 0 · Views: 177 · Created: Apr ’26
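Editor's note: since the post reports no error surfacing, a reasonable first diagnostic step (a sketch, not a fix) is to log the stop reason ScreenCaptureKit delivers to its delegate. CaptureStopLogger is a hypothetical name.

    import ScreenCaptureKit

    // Diagnostic sketch: SCStream reports why capture ended through its
    // delegate, so a silent stop can at least be turned into a logged error.
    final class CaptureStopLogger: NSObject, SCStreamDelegate {
        func stream(_ stream: SCStream, didStopWithError error: Error) {
            let nsError = error as NSError
            print("Capture stopped: \(nsError.domain) code \(nsError.code): \(nsError.localizedDescription)")
        }
    }

    // Usage: keep a strong reference to the delegate and pass it when
    // building the stream:
    // let stopLogger = CaptureStopLogger()
    // let stream = SCStream(filter: filter, configuration: config, delegate: stopLogger)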
ScreenCaptureKit stops capturing after ~10–15 minutes unexpectedly
When using the built-in macOS screen recording feature, the recording stops automatically after approximately 10–15 minutes without any warning or error message. No manual stop action is performed. The recording simply ends silently. The same issue also occurs when using ScreenCaptureKit in a custom application, which suggests this may be a system-level issue related to screen capture rather than an app-specific problem. This issue is reproducible and happens consistently after running for a period of time.
Replies: 0 · Boosts: 0 · Views: 225 · Created: Apr ’26
AVContentKeySession: Cannot re-fetch content key once obtained — expected behavior?
We are developing a video streaming app that uses AVContentKeySession with FairPlay Streaming. Our implementation supports both online playback (non-persistable keys) and offline playback (persistable keys). We have observed the following behavior: once a content key has been obtained for a given Content Key ID, AVContentKeySession does not trigger contentKeySession(_:didProvide:) again for that same Key ID. We also attempted to explicitly call processContentKeyRequest(withIdentifier:initializationData:options:) on the session to force a new key request for the same identifier, but this did not result in the delegate callback being fired again. The session appears to consider the key already resolved and silently ignores the request.

This means that if a user first plays content online (receiving a non-persistable key), and later wants to download the same content for offline use (requiring a persistable key), the delegate callback is not fired again, and we have no opportunity to request a persistable key.

Questions:

1. Is this the expected behavior? Specifically, is it by design that AVContentKeySession caches the key for a given Key ID and does not re-request it — even when processContentKeyRequest(withIdentifier:) is explicitly called?
2. Should we use distinct Content Key IDs for persistable vs. non-persistable keys? For example, if the same piece of content can be played both online and offline, is the recommended approach to have the server provide different EXT-X-KEY URIs (and thus different key identifiers) for the streaming and download variants?
3. Is there a supported way to force a fresh key request for a Key ID that has already been resolved — for example, to upgrade from a non-persistable to a persistable key?

Environment: iOS 18+, AVContentKeySession(keySystem: .fairPlayStreaming)

Any guidance on the recommended approach for supporting both streaming and offline playback for the same content would be greatly appreciated.

Replies: 1 · Boosts: 0 · Views: 389 · Created: Apr ’26
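Editor's note: for context on question 3, here is a minimal sketch of the persistable-key path the post describes, assuming the app knows at request time whether it is downloading (isDownloadRequest is a hypothetical flag). This illustrates the API shape only; it does not resolve the caching behavior the post reports for already-resolved Key IDs.

    import AVFoundation

    // Sketch of routing one delegate between streaming and offline keys.
    final class KeyDelegate: NSObject, AVContentKeySessionDelegate {
        var isDownloadRequest = false // hypothetical app-level flag

        func contentKeySession(_ session: AVContentKeySession,
                               didProvide keyRequest: AVContentKeyRequest) {
            if isDownloadRequest {
                do {
                    // Re-enters the delegate via the AVPersistableContentKeyRequest overload.
                    try keyRequest.respondByRequestingPersistableContentKeyRequest()
                } catch {
                    keyRequest.processContentKeyResponseError(error)
                }
            } else {
                // Handle the streaming (non-persistable) key exchange here.
            }
        }

        func contentKeySession(_ session: AVContentKeySession,
                               didProvide keyRequest: AVPersistableContentKeyRequest) {
            // Fetch the CKC, then call
            // keyRequest.persistableContentKey(fromKeyVendorResponse:options:)
            // and store the returned key data for offline playback.
        }
    }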
Technical guidance request: native screen capture protection on macOS with Flutter while allowing AirPlay
Hello Apple Developer Support, I am reaching out for technical guidance regarding screen capture protection behavior on macOS. We are building a desktop application using Flutter running on macOS, and we have implemented native Swift code inside the macOS Runner in order to protect sensitive content from screen recording and screen sharing. Our current implementation relies on native window-level protection and display state handling from Swift, while the main UI remains rendered by Flutter.

The main challenge we are facing is the following:

- we need to keep strong native anti-recording protection on macOS
- the application is heavily used with AirPlay and screen mirroring
- currently, AirPlay / mirroring is often interpreted by the system similarly to screen capture or screen recording
- this causes our protected content to be replaced by a gray or blank area even during legitimate AirPlay usage

In practice, we would like to allow:

- AirPlay
- legitimate external display / mirroring usage

while still preventing:

- screen recording
- screen sharing
- unauthorized screen capture

We would like to know whether Apple recommends an officially supported approach for this use case, preferably using public APIs. More specifically:

- Is there an officially supported way on macOS to distinguish AirPlay mirroring from screen recording / screen sharing?
- Is NSWindow.sharingType the recommended public API for this scenario?
- Is there a recommended approach when the UI surface is rendered through Flutter / Metal?
- Are there any best practices with ScreenCaptureKit for protecting content without affecting AirPlay?

We understand that some lower-level APIs may not be officially supported, so we would greatly appreciate guidance toward a public and future-proof implementation path. Thank you very much for your time and support. Best regards, Tony

Replies: 0 · Boosts: 0 · Views: 122 · Created: Apr ’26
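Editor's note: for reference, the one-line form of the public API the post asks about, with the caveat that it excludes the window from every capture path the system treats as sharing; it does not, by itself, distinguish AirPlay mirroring from recording.

    import AppKit

    // Exclude a window's content from screen capture and sharing.
    // The system applies this to all capture paths, so it does not
    // differentiate AirPlay mirroring from recording on its own.
    func protectWindow(_ window: NSWindow) {
        window.sharingType = .none
    }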
Generating a new FPS certificate (SDK 26) alongside an existing SDK 4 certificate
Hi, Our client currently has an FPS deployment certificate generated with SDK version 4 that is still actively used in production. They would like to generate an additional certificate using SDK version 26. Before doing so, they just want to confirm: Will the existing SDK 4 certificate remain unaffected and still visible in the Apple Developer portal? Any considerations they should keep in mind? Thanks!
Replies: 0 · Boosts: 0 · Views: 255 · Created: Mar ’26
Generating a new FPS certificate (SDK 26) alongside an existing SDK 4 certificate
Hi, Our client currently has an FPS deployment certificate generated with SDK version 4 that is still actively used in production. They would like to generate an additional certificate using SDK version 26. Before doing so, they just want to confirm: Will the existing SDK 4 certificate remain unaffected and still visible in the Apple Developer portal? Any considerations they should keep in mind? Thanks! :)
Replies: 1 · Boosts: 0 · Views: 391 · Created: Mar ’26
On iOS 26, HLS alternate audio track selection behaves inconsistently
Summary: On iOS 26, HLS alternate audio track selection behaves inconsistently on both VOD and live streams: the French track falls back to the DEFAULT=YES (English) track after manual selection, and in some cases switching to a non-default track appears to work but it is then impossible to switch back to English.

Environment:

- iOS version: 26
- Players affected: native Safari on iOS 26 and THEOplayer (issue also reproducible on THEOplayer's own demo page)
- Stream type: HLS/CMAF with demuxed alternate audio renditions (CMFC container)
- Affected stream types: both VOD and live streaming
- Issue NOT present on iOS 17/18

Manifest:

    #EXTM3U
    #EXT-X-VERSION:4
    #EXT-X-INDEPENDENT-SEGMENTS
    #EXT-X-STREAM-INF:BANDWIDTH=8987973,AVERAGE-BANDWIDTH=8987973,VIDEO-RANGE=SDR,CODECS="avc1.640028",RESOLUTION=1920x1080,FRAME-RATE=29.970,AUDIO="program_audio"
    video_1080p.m3u8
    #EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="program_audio",LANGUAGE="en",ASSOC-LANGUAGE="en",NAME="English",AUTOSELECT=YES,DEFAULT=YES,CHANNELS="2",URI="audio_ENG.m3u8"
    #EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="program_audio",LANGUAGE="de",ASSOC-LANGUAGE="de",NAME="Deutsch",AUTOSELECT=YES,DEFAULT=NO,CHANNELS="2",URI="audio_DEU.m3u8"
    #EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="program_audio",LANGUAGE="fr",ASSOC-LANGUAGE="fr",NAME="Francais",AUTOSELECT=YES,DEFAULT=NO,CHANNELS="2",URI="audio_FRA.m3u8"

Steps to reproduce:

1. Load the HLS manifest (VOD or live) in Safari on iOS 26, or in any AVFoundation-backed player
2. Start playback — English plays correctly as DEFAULT
3. Manually select "Francais" from the audio track selector
4. Observe that English audio continues playing (French does not play)
5. In a separate scenario: manually select "Deutsch" — German plays correctly
6. Attempt to switch back to English — English does not resume; audio remains on the previously selected track

Expected behavior:

- Selecting any track should immediately switch to that language
- Switching back to English (DEFAULT=YES) should work at any time
- Behavior should be consistent across VOD and live streams

Actual behavior — two distinct anomalies observed, reproducible on both VOD and live streams, in both native Safari and THEOplayer:

- French-specific fallback: selecting the French track causes playback to fall back to English. This does not happen with German.
- Cannot return to English: in cases where a non-default track plays correctly, attempting to switch back to the DEFAULT=YES track (English) fails — the previous non-default track continues playing.

The fact that the issue reproduces in native Safari confirms this is an AVFoundation/WebKit-level regression, not a third-party player bug.

What we have already verified and ruled out:

- LANGUAGE codes are BCP-47 compliant (en, fr, de) ✓
- EXT-X-VERSION:4 is present ✓
- Audio codec removed from STREAM-INF CODECS (video-only) ✓
- ASSOC-LANGUAGE attribute added matching LANGUAGE value ✓
- Container metadata verified via ffprobe: mdhd box correctly contains language tags (e.g. "fra") ✓
- Audio segment content verified via ffplay: correct audio in each language file ✓
- French audio source file contains correct French audio content ✓
- Issue reproduces in native Safari on iOS 26, confirming it is not a THEOplayer-specific bug
- Issue does NOT reproduce on iOS 17/18 with the same manifest and segments

Additional notes: The VOD stream is packaged with AWS MediaConvert, CMAF output group, SEGMENTED_FILES, AAC-LC codec (mp4a.40.2), 128kbps, 48kHz stereo. English uses AudioTrackType ALTERNATE_AUDIO_AUTO_SELECT_DEFAULT; French and German use ALTERNATE_AUDIO_AUTO_SELECT. The live stream uses AWS MediaPackage with a similar CMAF/HLS output configuration. Please advise whether this is a known regression in AVFoundation on iOS 26 and whether a fix is planned.

Replies: 1 · Boosts: 0 · Views: 356 · Created: Mar ’26
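Editor's note: a sketch for reproducing the switch programmatically in an AVFoundation test harness, outside Safari's built-in selector. selectAudio is a hypothetical helper; it assumes iOS 15+ for the async media selection APIs.

    import AVFoundation

    // Programmatic equivalent of the manual track switch in the repro steps.
    func selectAudio(language: String, on item: AVPlayerItem) async throws {
        guard let group = try await item.asset
            .loadMediaSelectionGroup(for: .audible) else { return }
        // Filter the group's options down to the requested locale.
        let options = AVMediaSelectionGroup.mediaSelectionOptions(
            from: group.options,
            with: Locale(identifier: language))
        if let option = options.first {
            item.select(option, in: group)
        }
    }

    // Usage: try await selectAudio(language: "fr", on: playerItem)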
Unity iOS (Metal) → WebRTC (Unity WebRTC) video stream to remote Unity client: PeerConnection connects but receiver renders black frames
Hello! My name is Mason Prather. I'm a graduate student at Kennesaw State University and a Research Engineer working in XR environments through my Graduate Research Assistant role. I’m currently building a research prototype that connects a mobile companion application to a VR headset. The mobile application is built in Unity and deployed on iOS, and it streams video frames to a remote Unity client using WebRTC.

Environment:

- Device: iPhone 15
- OS: iOS 26.3 (tested on physical device, not Simulator)
- Engine: Unity 2022.3.57f1
- Graphics API: Metal
- Streaming Technology: WebRTC (Unity WebRTC package)
- Architecture: Mobile Unity app streaming video frames to a remote Unity client
- Receiver Device: Meta Quest Pro headset (Unity application)
- Networking: LAN (UDP discovery + TCP signaling)
- Video Source: Unity RenderTexture

Goal: The system should allow a VR user to view media stored on their phone inside a VR environment. The iOS app renders or captures media content, converts frames into a WebRTC video track, and streams the video to the headset.

Current status: Connection setup works correctly. Signaling succeeds, ICE candidate exchange succeeds, the PeerConnection state becomes Connected, and the video track is created successfully. However, the receiving application displays black frames.

iOS app details: The video source originates from a Unity RenderTexture. Inside the phone application, the RenderTexture displays correctly and frames appear correct locally, but the receiving peer does not display them. Relevant components: Unity WebRTC package, iOS Metal rendering pipeline, custom TCP signaling, LAN discovery via UDP.

Expected behavior: Rendered frames should transmit via WebRTC and appear on the remote device.

Actual behavior: The remote video track is active, but the rendered frames appear black on the receiving client.

Questions:

- Are there known issues involving Unity WebRTC + iOS Metal texture capture?
- Are there specific pixel format requirements when streaming textures from Unity on iOS?
- Could the issue relate to texture readback limitations or GPU synchronization?

I am more than happy to provide screenshots and console logs upon request. If anyone has experience streaming Unity video frames via WebRTC on iOS, I would greatly appreciate any guidance.

Replies: 0 · Boosts: 0 · Views: 373 · Created: Mar ’26
Swift Array Out of Bounds Crash in VTFrameProcessor when using VTLowLatencyFrameInterpolationParameters
Hi everyone, our team is encountering a reproducible crash when using VTLowLatencyFrameInterpolation on iOS 26.3 while processing a live LL-HLS input stream.

Environment:

- Device: iPhone 16
- OS: iOS 26.3
- Xcode: Xcode 26.3
- Framework: VideoToolbox

Crash details: The application crashes with the following fatal error:

    Fatal error: Swift/ContiguousArrayBuffer.swift:184: Array index out of range

The stack trace highlights VTLowLatencyFrameInterpolationImplementation processWithParameters:frameOutputHandler:, called from VTFrameProcessor.process(parameters:).

Here is the simplified implementation block where the crash occurs. (Note: PrismSampleBuffer and PrismLLFIError are our internal custom wrapper types.)

    // Create `VTFrameProcessorFrame` for the source (previous) frame.
    let sourcePTS = sourceSampleBuffer.presentationTimeStamp
    var sourceFrame: VTFrameProcessorFrame?
    if let pixelBuffer = sourceSampleBuffer.imageBuffer {
        sourceFrame = VTFrameProcessorFrame(buffer: pixelBuffer, presentationTimeStamp: sourcePTS)
    }

    // Validate the source VTFrameProcessorFrame.
    guard let sourceFrame else { throw PrismLLFIError.missingImageBuffer }

    // Create `VTFrameProcessorFrame` for the next frame.
    let nextPTS = nextSampleBuffer.presentationTimeStamp
    var nextFrame: VTFrameProcessorFrame?
    if let pixelBuffer = nextSampleBuffer.imageBuffer {
        nextFrame = VTFrameProcessorFrame(buffer: pixelBuffer, presentationTimeStamp: nextPTS)
    }

    // Validate the next VTFrameProcessorFrame.
    guard let nextFrame else { throw PrismLLFIError.missingImageBuffer }

    // Calculate interpolation intervals and allocate destination frame buffers.
    let intervals = interpolationIntervals()
    let destinationFrames = try framesBetween(firstPTS: sourcePTS, lastPTS: nextPTS, interpolationIntervals: intervals)
    let interpolationPhase: [Float] = intervals.map { Float($0) }

    // Create VTLowLatencyFrameInterpolationParameters.
    // This sets up the configuration required for temporal frame interpolation
    // between the previous and current source frames.
    guard let parameters = VTLowLatencyFrameInterpolationParameters(
        sourceFrame: nextFrame,
        previousFrame: sourceFrame,
        interpolationPhase: interpolationPhase,
        destinationFrames: destinationFrames
    ) else {
        throw PrismLLFIError.failedToCreateParameters
    }

    try await send(sourceSampleBuffer)

    // Process the frames.
    // Using progressive callback here to get the next processed frame as soon as it's ready,
    // preventing the system from waiting for the entire batch to finish.
    for try await readOnlyFrame in self.frameProcessor.process(parameters: parameters) {
        // Create an interpolated sample buffer based on the output frame.
        let newSampleBuffer: PrismSampleBuffer = try readOnlyFrame.frame.withUnsafeBuffer { pixelBuffer in
            try PrismLowLatencyFrameInterpolation.createSampleBuffer(from: pixelBuffer, readOnlyFrame.timeStamp)
        }
        // Pass the newly generated frame to the output stream.
        try await send(newSampleBuffer)
    }

Questions:

- Are there any known limitations or bugs regarding VTLowLatencyFrameInterpolation when handling live 60fps streams?
- Are there any undocumented constraints we should be aware of regarding source/previous frame timing, pixel buffer attributes, or how destinationFrames and interpolationPhase arrays must be allocated?
- Is a "warm-up" sequence recommended after startSession() before making the first process(parameters:) call?

Replies: 1 · Boosts: 0 · Views: 724 · Created: Mar ’26
Inquiry regarding CoreMediaErrorDomain Code=-15517 during LL-HLS Live Playback
Hello, I am currently developing a live streaming application using AVPlayer to play LL-HLS (Low-Latency HLS) content. During our testing phase, we consistently encountered the following error in the logs:

    CoreMediaErrorDomain Code=-15517

The challenge we are facing is that the error description is quite vague. It only provides cryptic messages such as "Key not found" or "No value information," which makes it extremely difficult to identify the root cause or perform a deep-dive analysis. I have searched through the official Apple Developer documentation and technical notes, but I couldn’t find any specific reference to what Code -15517 signifies in the context of LL-HLS or CoreMedia.

Regarding this issue, I have the following questions:

- What is the specific meaning of this error code (-15517)? Does it relate to missing tags in the HLS manifest, or is it an internal state issue within the AVPlayer stack? Specifically, I would like to know if this is a critical error that disrupts playback, or if it is just a warning that can be safely ignored.
- Is there any additional logging or debugging tool you would recommend to further investigate "Key not found" issues in LL-HLS?

Any insights or guidance from the community or Apple engineers would be greatly appreciated. Thank you in advance for your help.

Replies: 1 · Boosts: 0 · Views: 301 · Created: Feb ’26
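Editor's note: one concrete avenue for the logging question, sketched under the assumption that the failing requests surface in the player item's error log (observeErrorLog is a hypothetical helper). Each log entry carries the error domain, status code, and the URI of the request that failed.

    import AVFoundation

    // Log every new entry in the item's error log as it arrives.
    // Keep the returned token alive for as long as you want to observe.
    func observeErrorLog(for item: AVPlayerItem) -> NSObjectProtocol {
        NotificationCenter.default.addObserver(
            forName: AVPlayerItem.newErrorLogEntryNotification,
            object: item, queue: .main) { _ in
            if let event = item.errorLog()?.events.last {
                print("error \(event.errorDomain) \(event.errorStatusCode): \(event.uri ?? "-")")
            }
        }
    }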
The audio of FairPlay protected content can be captured - Safari on iOS
Hi, Has anyone been able to protect the audio part of FairPlay protected content from being captured as part of screen recording on Safari/iOS (PWA and/or online web app)? We have tried many things but could not prevent the audio from being recorded. Same app and content on Safari/Mac does not allow audio to be recorded. Any tips?
Replies: 1 · Boosts: 0 · Views: 425 · Created: Feb ’26
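Editor's note: for comparison, native iOS apps can at least detect (though not prevent) capture and react, for example by pausing protected playback; web content in Safari has no equivalent hook, which is part of why the audio path is hard to protect there. A small sketch of the native detection API (startCaptureMonitoring is a hypothetical helper):

    import UIKit

    // Observe changes to the screen-capture state; the caller keeps the
    // returned token alive and reacts in the closure (e.g. pause playback).
    func startCaptureMonitoring(onChange: @escaping (Bool) -> Void) -> NSObjectProtocol {
        NotificationCenter.default.addObserver(
            forName: UIScreen.capturedDidChangeNotification,
            object: nil, queue: .main) { _ in
            onChange(UIScreen.main.isCaptured)
        }
    }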
A mistake in FairPlay Streaming SDK 26 sample code on comparing ProtocolVersionUsed value?
Hello, I am reviewing the sample code of FairPlay Streaming SDK 26 and there is a place where I think there is a mistake. The code in question is server-side, in both the Swift and Rust versions. There is an if statement which compares "ProtocolVersionUsed" (spcData.versionUsed) against the SPCVersion1 constant, but "ProtocolVersionUsed" and the SPC version are different things, so shouldn't it be using a different constant value?

[createContentKeyPayload.swift]

    // Fallback to version 1 if content can have encrypted slice headers, which need to be decrypted separately. Slice headers are not encrypted when using CBCS.
    if serverCtx.spcContainer.spcData.versionUsed == base_constants.SPCVersion.v1.rawValue &&

[createContentKeyPayload.rs]

    // Fallback to version 1 if content can have encrypted slice headers, which need to be decrypted separately. Slice headers are not encrypted when using CBCS.
    if (serverCtx.spcContainer.spcData.versionUsed == SPCVersion::v1 as u32) &&

Thank you.

Replies: 0 · Boosts: 0 · Views: 148 · Created: Feb ’26
Clarification on SPC Version 3 Availability and Requirements (SDK 26 Certificate Bundle)
Hello, I’m using a valid certificate bundle generated with SDK 26 (combined RSA‑1024 + RSA‑2048). However, all my devices currently still generate SPC v2 during playback, including my iPhone 16 under iOS 26.2. Apple staff mentioned that future iOS versions will send SPC v3 when using an SDK 26 certificate bundle. Could you please clarify:

- Which iOS/macOS versions will first support SPC v3?
- Are there any additional client-side requirements (Safari version, playback APIs, headers, etc.) to trigger SPC v3?
- Is there any way to test SPC v3 today, e.g., using beta builds?

Thank you!

Replies: 1 · Boosts: 1 · Views: 612 · Created: Feb ’26
offline with FairPlay
I am working on offline license support for FairPlay. I am trying to identify offline vs. live streaming requests. How do I do that? Is there any way to identify the request type (offline or streaming) from an SPC?
Replies: 0 · Boosts: 1 · Views: 156 · Created: Mar ’26