Device: iPhone 17 Pro
iOS Version: iOS 26.1
Camera: Ultra-wide (0.5x) using AVCaptureSession
Our camera app freezes on iPhone 17 when switching frame rates (30fps ↔ 60fps). This works fine on iPhone 16 Pro and earlier.
What We've Observed:
Freeze happens on frame rate change - particularly when stabilization was enabled
Thread.sleep is used - to allow camera hardware to settle before re-enabling stabilization
Works on older iPhones - only iPhone 17 exhibits this behavior
Console shows these errors before freeze:
<<<< FigXPCUtilities >>>> signalled err=18446744073709534335 (-17281)
<<<< FigCaptureSourceRemote >>>> err=-17281
Is Thread.sleep on the main thread causing the freeze? Should all camera configuration be on a background queue?
Is there something specific about iPhone 17 ultra-wide camera that requires different handling?
Should we use session.beginConfiguration() / session.commitConfiguration() instead of direct device configuration? (See the sketch after these questions.)
Is calling setFrameRate from a property's didSet (which runs synchronously) problematic?
Are the FigCaptureSourceRemote errors (-17281) indicative of the problem, and what do they mean?
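For reference, here is a minimal sketch of the beginConfiguration()/commitConfiguration() pattern asked about above, applied on a dedicated session queue with no Thread.sleep. It is only a sketch of the configuration pattern, not a confirmed fix for the iPhone 17 freeze; sessionQueue, session, and device are assumed to exist in the camera controller, and real code should validate the requested rate against activeFormat.videoSupportedFrameRateRanges.

import AVFoundation

// Hypothetical helper inside a camera controller; `session`, `device`, and
// `sessionQueue` are assumptions, not names from the original post.
func setFrameRate(_ fps: Int32) {
    sessionQueue.async {                          // keep all capture configuration off the main thread
        self.session.beginConfiguration()
        do {
            try self.device.lockForConfiguration()
            let duration = CMTime(value: 1, timescale: fps)
            self.device.activeVideoMinFrameDuration = duration
            self.device.activeVideoMaxFrameDuration = duration
            self.device.unlockForConfiguration()
        } catch {
            print("Failed to lock device for configuration: \(error)")
        }
        self.session.commitConfiguration()        // changes are applied atomically; no Thread.sleep needed
    }
}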
Photos & Camera: Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.
A functioning Multiplatform app, which includes use of Continuity Camera on an M1 Mac mini running macOS Sequoia 15.5, works correctly capturing photos with AVCapturePhoto. However, that app (and a test app written just for Continuity Camera) crashes at the delegate callback when run on a 2017 MacBook Pro under macOS 13.7.5. The app was created with Xcode 16 (various releases) using Swift 6 (also tried with Swift 5). Compiling and running the test app with Xcode 15.2 on the 13.7.5 machine also crashes at the delegate callback.
The iPhone 15 Continuity Camera gets detected and set up correctly, and preview video works correctly. It's when the capturePhoto code runs that the crash occurs.
The relevant capture code is:
func capturePhoto() {
    let captureSettings = AVCapturePhotoSettings()
    captureSettings.flashMode = .auto
    photoOutput.maxPhotoQualityPrioritization = .quality
    photoOutput.capturePhoto(with: captureSettings, delegate: PhotoDelegate.shared)
    print("**** CameraManager: capturePhoto")
}
and the delegate callbacks are:
class PhotoDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    nonisolated(unsafe) static let shared = PhotoDelegate()

    // MARK: - Delegate callbacks
    func photoOutput(
        _ output: AVCapturePhotoOutput,
        didFinishProcessingPhoto photo: AVCapturePhoto,
        error: (any Error)?
    ) {
        print("**** CameraManager: didFinishProcessingPhoto")
        guard let pData = photo.fileDataRepresentation() else {
            print("**** photoOutput is empty")
            return
        }
        print("**** photoOutput data is \(pData.count) bytes")
    }

    func photoOutput(
        _ output: AVCapturePhotoOutput,
        willBeginCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings
    ) {
        print("**** CameraManager: willBeginCaptureFor")
    }

    func photoOutput(_ output: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
        print("**** CameraManager: willCapturePhotoFor")
    }
}
The significant parts of the crash report are:
Crashed Thread: 3 Dispatch queue: com.apple.cmio.CMIOExtensionProviderHostContext
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000000
Exception Codes: 0x0000000000000001, 0x0000000000000000
Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11
Terminating Process: exc handler [30850]
VM Region Info: 0 is not in any region. Bytes before following region: 4296495104
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
UNUSED SPACE AT START
--->
__TEXT 100175000-10017f000 [ 40K] r-x/r-x SM=COW ...tinuityCamera
Thread 0:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x7ff803aed552 mach_msg2_trap + 10
1 libsystem_kernel.dylib 0x7ff803afb6cd mach_msg2_internal + 78
2 libsystem_kernel.dylib 0x7ff803af4584 mach_msg_overwrite + 692
3 libsystem_kernel.dylib 0x7ff803aed83a mach_msg + 19
4 CoreFoundation 0x7ff803c07f8f __CFRunLoopServiceMachPort + 145
5 CoreFoundation 0x7ff803c06a10 __CFRunLoopRun + 1365
6 CoreFoundation 0x7ff803c05e51 CFRunLoopRunSpecific + 560
7 HIToolbox 0x7ff80d694f3d RunCurrentEventLoopInMode + 292
8 HIToolbox 0x7ff80d694d4e ReceiveNextEventCommon + 657
9 HIToolbox 0x7ff80d694aa8 _BlockUntilNextEventMatchingListInModeWithFilter + 64
10 AppKit 0x7ff806ca59d8 _DPSNextEvent + 858
11 AppKit 0x7ff806ca4882 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1214
12 AppKit 0x7ff806c96ef7 -[NSApplication run] + 586
13 AppKit 0x7ff806c6b111 NSApplicationMain + 817
14 SwiftUI 0x7ff90e03a9fb 0x7ff90dfb4000 + 551419
15 SwiftUI 0x7ff90f0778b4 0x7ff90dfb4000 + 17578164
16 SwiftUI 0x7ff90e9906cf 0x7ff90dfb4000 + 10340047
17 ContinuityCamera 0x10017b49e 0x100175000 + 25758
18 dyld 0x7ff8037d1418 start + 1896
Thread 1:
0 libsystem_pthread.dylib 0x7ff803b27bb0 start_wqthread + 0
Thread 2:
0 libsystem_pthread.dylib 0x7ff803b27bb0 start_wqthread + 0
Thread 3 Crashed:: Dispatch queue: com.apple.cmio.CMIOExtensionProviderHostContext
0 ??? 0x0 ???
1 AVFCapture 0x7ff82045996c StreamAsyncStillCaptureCallback + 61
2 CoreMediaIO 0x7ff813a4358f __94-[CMIOExtensionProviderHostContext captureAsyncStillImageWithStreamID:uniqueID:options:reply:]_block_invoke + 498
3 libxpc.dylib 0x7ff803875b33 _xpc_connection_reply_callout + 36
4 libxpc.dylib 0x7ff803875ab2 _xpc_connection_call_reply_async + 69
5 libdispatch.dylib 0x7ff80398b099 _dispatch_client_callout3 + 8
6 libdispatch.dylib 0x7ff8039a6795 _dispatch_mach_msg_async_reply_invoke + 387
7 libdispatch.dylib 0x7ff803991088 _dispatch_lane_serial_drain + 393
8 libdispatch.dylib 0x7ff803991d6c _dispatch_lane_invoke + 417
9 libdispatch.dylib 0x7ff80399c3fc _dispatch_workloop_worker_thread + 765
10 libsystem_pthread.dylib 0x7ff803b28c55 _pthread_wqthread + 327
11 libsystem_pthread.dylib 0x7ff803b27bbf start_wqthread + 15
Of course, the MacBook Pro is an old device - but Continuity Camera works with the installed Photo Booth app on it, so it should be possible.
Any thoughts on solving this situation would be appreciated.
Regards, Michaela
PHPhotoLibrary.authorizationStatus(for: .readWrite) == .authorized
Info.plist Privacy - Photo Library Usage Description is set.
I check authorization before attempting to get the photoPickerItem.itemIdentifier, but every time the return value from itemIdentifier is nil. It seems I'm missing some permissions, but I'm unsure why the system is still keeping _shouldExposeItemIdentifier set to false.
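For what it's worth, PhotosPickerItem.itemIdentifier is documented to stay nil unless the picker is created with an explicit photo library, which may be why the flag never flips. A minimal SwiftUI sketch of that configuration, assuming the SwiftUI PhotosPicker is being used (PHPickerViewController has the analogous PHPickerConfiguration(photoLibrary:) initializer); the view and state names here are made up for illustration:

import SwiftUI
import PhotosUI

struct PickerExample: View {
    @State private var selection: PhotosPickerItem?

    var body: some View {
        // Passing photoLibrary: .shared() is what allows itemIdentifier to be populated.
        PhotosPicker(selection: $selection, matching: .images, photoLibrary: .shared()) {
            Text("Pick a photo")
        }
        .onChange(of: selection) { _, item in   // two-parameter onChange requires iOS 17+
            // The identifier can then be used with PHAsset.fetchAssets(withLocalIdentifiers:options:).
            print("identifier:", item?.itemIdentifier ?? "nil")
        }
    }
}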
Topic: Media Technologies / SubTopic: Photos & Camera
I am new to Swift and iOS development, and I have a question about video capture performance.
Is it possible to capture video at a resolution of 4032×3024 while simultaneously running a vision/ML model on the video stream (e.g., using Vision or CoreML)?
I want to know:
whether iOS devices support capturing video at that resolution,
whether the frame rate drops significantly at that scale,
and whether it is practical to run a Vision/ML model in real-time while recording at such a high resolution.
If anyone has experience with high-resolution AVCaptureSession setups or combining them with real-time ML processing, I would really appreciate guidance or sample code.
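Not an authoritative answer, but one practical way to find out what a given device supports is to enumerate the capture device's formats and measure frame rates empirically; 4032×3024 is normally a still-photo size, and whether a video format at that resolution exists (and at what frame rates) varies by device. A rough sketch, with the session, delegate, and queue assumed to be set up by the caller:

import AVFoundation

// Enumerate the device's formats, pick the largest one, and feed frames to a
// Vision/Core ML pipeline via a video data output. Frame-rate impact at high
// resolutions is best measured on the target hardware.
func configureHighResolutionCapture(session: AVCaptureSession,
                                    delegate: AVCaptureVideoDataOutputSampleBufferDelegate,
                                    queue: DispatchQueue) throws {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else { return }

    // Largest format by pixel area; real code should also inspect
    // videoSupportedFrameRateRanges to see which frame rates it allows.
    let best = device.formats.max { a, b in
        let da = CMVideoFormatDescriptionGetDimensions(a.formatDescription)
        let db = CMVideoFormatDescriptionGetDimensions(b.formatDescription)
        return Int(da.width) * Int(da.height) < Int(db.width) * Int(db.height)
    }

    session.beginConfiguration()
    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) { session.addInput(input) }

    try device.lockForConfiguration()
    if let best { device.activeFormat = best }
    device.unlockForConfiguration()

    let output = AVCaptureVideoDataOutput()
    output.alwaysDiscardsLateVideoFrames = true   // drop frames rather than stall the ML pipeline
    output.setSampleBufferDelegate(delegate, queue: queue)
    if session.canAddOutput(output) { session.addOutput(output) }
    session.commitConfiguration()
}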
Hey,
There seems to be an inconsistency when capturing a photo using AVCapturePhotoOutput.QualityPrioritization.quality on the iPhone 17 Pro main wide lens. If you zoom above 2x, the output image always has a -2.0 EV bias in its metadata and looks underexposed. This does not happen at zoom levels below 2x, or if you set the quality prioritization to .balanced.
See the attached samples:
with .quality
with .balanced
This does not happen on the other lenses.
I'm using a simple setup, and the behavior is consistent across JPEG and ProRAW capture. I have a demo project if that is useful.
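If it helps anyone reproduce this, the setup described above amounts to roughly the following sketch (device, photoOutput, and delegate are assumed to come from an already-configured session; this is illustration only, not the demo project's code):

// Zoom above 2x on the wide camera, then capture with .quality prioritization.
func captureAboveTwoX(device: AVCaptureDevice,
                      photoOutput: AVCapturePhotoOutput,
                      delegate: AVCapturePhotoCaptureDelegate) throws {
    try device.lockForConfiguration()
    device.videoZoomFactor = 2.5                        // above 2x, where the -2.0 EV bias shows up
    device.unlockForConfiguration()

    let settings = AVCapturePhotoSettings()
    settings.photoQualityPrioritization = .quality      // switching this to .balanced avoids the issue
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}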
Thanks,
Alex
I'm using this library: https://github.com/Yummypets/YPImagePicker to capture photos.
I've modified it slightly, and I'm using an older version.
When testing on my iPhone 16e on iOS 26, whenever I take a photo, I get the following two error messages:
<<<< FigXPCUtilities >>>> signalled err=-17281 at <>:302
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:569) - (err=-17281)
These error messages appear, but as far as I can tell, the photo comes through OK, and I can save the data with no problem. I've even removed all my handling code to see if it was something I was doing.
I don't really want to ship with these errors showing, but I also have no idea what could be causing this error to appear. ChatGPT was not helpful in diagnosing it.
Does anyone know what can cause this error?
Is there a way I can see the source code to figure out if there's something I'm doing wrong here?
It really seems like this is an internal Apple error, or else I would have expected more details on the error relating to the code I've written. Any clues would be appreciated!
Topic: Media Technologies / SubTopic: Photos & Camera
Hi all,
I'm using the Apple sample code below to create an application using DockKit:
"Controlling a DockKit accessory using your camera app"
https://developer.apple.com/documentation/dockkit/controlling-a-dockkit-accessory-using-your-camera-app?changes=_8
I used Vision hand recognition and passed the observation data to dockAccessory.track, but the Belkin and Insta360 devices never move on an iPhone 16 Pro Max with iOS 18.3.
If I use other functions in the app, like face search (system tracking), those work OK.
I used a Belkin dock and an Insta360 Flow 2 Pro to reproduce the problem.
My friend also says that the custom tracking feature was working fine on the iOS 18 beta, but on the current iOS 18.3 that feature does not work.
If I could get the iOS 18.0 beta, we could test that feature, but I cannot revert my iPhone from iOS 18.3 to the iOS 18.0 beta.
Regards,
TO
We are planning to develop an app that connects to a UVC camera to capture and display video via AVFoundation. Could you please advise on which iPhone models support UVC cameras?
I'm receiving output from an AVCaptureSession and capturing an image using Vision, but the image is output in landscape orientation instead of portrait.
Even when I set the orientation to .up on the CIImage, CGImage, and UIImage, the image is still output in landscape orientation.
On iPhone 16 and earlier, the image is output in portrait orientation, but on iPhone 17 and later it is output in landscape orientation.
Please help.
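One thing worth checking (a sketch of a possible approach, not a confirmed explanation for the iPhone 17 difference): since iOS 17 the recommended way to keep capture output upright is AVCaptureDevice.RotationCoordinator together with videoRotationAngle on the output's connection, plus an explicit orientation passed to the Vision handler rather than relying on the default. The device and videoOutput names below are assumptions:

import AVFoundation
import Vision

// Keep the data output's rotation in sync with the device, and give Vision an
// explicit orientation instead of the default (.up).
func configureRotation(device: AVCaptureDevice,
                       videoOutput: AVCaptureVideoDataOutput) -> AVCaptureDevice.RotationCoordinator {
    let coordinator = AVCaptureDevice.RotationCoordinator(device: device, previewLayer: nil)
    let angle = coordinator.videoRotationAngleForHorizonLevelCapture
    if let connection = videoOutput.connection(with: .video),
       connection.isVideoRotationAngleSupported(angle) {
        connection.videoRotationAngle = angle
    }
    return coordinator    // keep a strong reference; the angle properties are key-value observable
}

func detect(in pixelBuffer: CVPixelBuffer) throws {
    // .right is the usual mapping for an unrotated rear-camera buffer held in portrait;
    // adjust it if the connection's rotation is already applied upstream.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right, options: [:])
    try handler.perform([VNDetectFaceRectanglesRequest()])
}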
Hello, We are trying to use the new Background Upload Extension to improve uploads of assets (Photos, Live Photos, Videos) in the background in our application.
1. The assets have finished uploading, but I'm unable to retrieve successful records using PHAssetResourceUploadJob.fetchJobs(action: .acknowledge, options: nil). When will the successful records be returned?
2. How do we retrieve the system's pending tasks? We want to cancel tasks handed over to the system when returning to the main app to avoid duplicate resource uploads.
3. When we set UploadJobExtensionEnabled = true, will tasks handed over to the system still execute after returning to the main app? Do we need to set UploadJobExtensionEnabled = false upon returning to the main app? If we set UploadJobExtensionEnabled = false, will previously submitted upload tasks be cleared?
Topic: Media Technologies / SubTopic: Photos & Camera
I am developing an iOS camera app that can record video directly to external storage connected to an iPhone.
To detect whether an external USB storage device is connected and to obtain its URL, I am considering using AVExternalStorageDeviceDiscoverySession.
However, when checking support using AVExternalStorageDeviceDiscoverySession.isSupported, I observe that it returns true only on Pro model iPhones, and false on non-Pro models in my environment.
I have reviewed Apple’s official documentation, but I could not find any clear description of the supported devices or requirements (for example, whether this API is limited to Pro models or requires specific hardware capabilities).
I would appreciate any information regarding the following points:
● The actual requirements for AVExternalStorageDeviceDiscoverySession to be supported:
  - Device limitations (Pro vs. non-Pro models)
  - Hardware requirements (USB controller, external recording capability, etc.)
  - iOS version dependencies
● Whether support for non-Pro models is planned in the future
Tested environments
iPhone 16 Pro (iOS 18.7.1) → isSupported == true
iPhone 16e (iOS 26.2) → isSupported == false
iPhone 17 (iOS 26.2) → isSupported == false
iPhone Air (iOS 26.2) → isSupported == false
If anyone has observed similar behavior or has official information from Apple regarding this API, I would greatly appreciate your insights.
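For anyone comparing devices, the check described above is roughly the following; this is a sketch based on my reading of the iOS 17+ external-storage API (shared, externalStorageDevices, displayName), so it is worth verifying against the current headers:

import AVFoundation

// Log whether external storage discovery is available and what is currently attached.
func logExternalStorageSupport() {
    guard AVExternalStorageDeviceDiscoverySession.isSupported else {
        print("External storage discovery is not supported on this device")
        return
    }
    guard let session = AVExternalStorageDeviceDiscoverySession.shared else { return }
    for device in session.externalStorageDevices {
        print("External storage: \(device.displayName ?? "unknown"), connected: \(device.isConnected)")
    }
}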
I’m writing to report a serious usability regression in the iOS 26 Photos app. Folders can still be created and albums can still be assigned to them, but folders can no longer be opened to view the albums they contain. A container that cannot be opened is not a container, and this breaks a fundamental information architecture model that has existed in Photos for well over a decade.
This change disproportionately harms users who maintain large, intentional photo libraries—travel archives, projects, professional work, or long-term personal documentation—where hierarchy and ordering are essential. Search and automated surfacing are not substitutes for deliberate structure. Removing the ability to browse folder → album hierarchy on iOS strips users of control while still exposing the UI for folder creation, which is internally inconsistent.
If this behavior is intentional, it should be clearly documented and the folder UI removed to avoid misleading users. If it is not intentional, it needs urgent correction. At minimum, iOS should retain parity with macOS Photos for basic navigation of folders and albums. This is not a niche request; it is a regression in core functionality.
Topic: Media Technologies / SubTopic: Photos & Camera
Hi Apple Developer Forums,
I’m developing an iOS camera app that processes RAW captures using Core Image. I’m seeing a large “first use” performance penalty specifically when creating the CIImage from CIRAWFilter.outputImage.
What’s slow (important detail)
I’m measuring the time for:
let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: hint)
let ciImage = rawFilter.outputImage
This is not CIContext.render(...) / createCGImage(...). It’s just the time to access outputImage (i.e., building the Core Image graph / RAW pipeline setup).
Observed behavior
First time accessing CIRAWFilter.outputImage: ~3 seconds
Second time (same app session, similar RAW): ~3 milliseconds
So something heavy is happening only on first use (decoder initialization, pipeline setup, shader/library compilation, caching, etc.).
Using Metal System Trace, I also noticed that during the slow first call there are many “Create MTLLibrary” events, while the second call doesn’t show this pattern.
Warm-up attempts using bundled DNG
I tried to "warm up" early (e.g., on camera screen entry) by loading a bundled DNG and accessing CIRAWFilter.outputImage before the user takes a photo:
Warm-up with a ~247 KB DNG → first real RAW outputImage cost drops to ~1.42s
Warm-up with a ~25 MB DNG → first real RAW outputImage cost drops to ~843ms
This helps, but it’s still far from the steady-state ~3ms.
Warm-up by capturing a real RAW (works, but concerns)
The only method that fully eliminates the delay is to trigger a real RAW capture programmatically before the user’s first photo, then use that captured rawData to warm up the CIRAWFilter.outputImage path. This brings the first user-facing capture close to the steady-state timing.
However:
In some regions, the camera shutter sound cannot be suppressed, so “hidden warm-up capture” is unacceptable UX.
I’m also unsure whether triggering a real capture without an explicit user action could raise compliance/privacy concerns, even if the image is immediately discarded and never saved/uploaded.
Questions
Is the large first-time cost of CIRAWFilter.outputImage expected (RAW pipeline initialization / shader compilation)?
Is there an Apple-recommended way to pre-initialize the Core Image RAW pipeline / Metal resources so the first outputImage is fast, without taking a real photo?
Are there any best practices (e.g. CIContext creation timing, prepareRender(...), specific options) that reliably reduce this first-use overhead for CIRAWFilter?
Attachments
Figure 1: First RAW capture with no warm-up (~3s outputImage time)
Figure 2: First RAW capture after warm-up with bundled DNG (improved but still hundreds of ms)
Thanks for any guidance or experience sharing!
Hi everyone,
I'm developing a camera application that requires precise, predictable control over the focus system. I'm encountering unexpected behavior with face-driven autofocus in continuous autofocus mode.
Issue:
When using AVCaptureDevice.FocusMode.continuousAutoFocus, the system continues to prioritize faces for focus even after attempting to disable face-driven autofocus with:
device.automaticallyAdjustsFaceDrivenAutoFocusEnabled = false
device.isFaceDrivenAutoFocusEnabled = false
Observations:
The behavior is inconsistent across different scenes.
In well-lit/properly exposed scenes: focus persistently locks onto faces, ignoring my configuration.
In underexposed scenes: the intended focus behavior is more consistently respected.
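For reference, here is a minimal sketch of how these two flags are usually applied, with the automatic adjustment disabled before the manual flag is set and everything wrapped in lockForConfiguration(); this reflects my reading of the API and is not a confirmed fix for the behavior described:

import AVFoundation

func disableFaceDrivenAutoFocus(on device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        // Disable the automatic adjustment first; otherwise the manual setting is rejected.
        device.automaticallyAdjustsFaceDrivenAutoFocusEnabled = false
        device.isFaceDrivenAutoFocusEnabled = false
        if device.isFocusModeSupported(.continuousAutoFocus) {
            device.focusMode = .continuousAutoFocus
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}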
Hi everyone, does anybody have any resources I could check out regarding the 48→12 MP binning behavior on supported sensors? I know the 48 MP sensor on iPhone can automatically bin pixels for better low-light performance, but I'm not sure how to reliably make this happen in practice.
On iPhone 14 Pro+ with a 48MP sensor, I want the best of both worlds for ProRAW:
∙ Bright light: 48MP full resolution
∙ Low light: 12MP pixel-binned for better noise
photoOutput.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
let settings = AVCapturePhotoSettings(rawPixelFormatType: proRawFormat, processedFormat: [...])
settings.photoQualityPrioritization = .quality
// NOT setting settings.maxPhotoDimensions — always get 12MP
When I omit maxPhotoDimensions, iOS always returns 12MP regardless of lighting. When I set it to 48MP, I always get 48MP.
Is there an API to let iOS automatically choose the optimal resolution based on conditions, or should I detect low light myself (via device.iso / exposureDuration) and set maxPhotoDimensions accordingly?
Any help or direction would be much appreciated!
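I'm not aware of a published API that lets the system pick between 48MP and 12MP automatically for ProRAW, so the manual approach mentioned in the question would look roughly like the sketch below. The ISO/exposure thresholds are made-up placeholders, and it assumes photoOutput.maxPhotoDimensions has already been set to 8064×6048 as in the snippet above:

import AVFoundation

// Choose ProRAW output size from the current exposure state before each capture.
func makeProRAWSettings(device: AVCaptureDevice, proRawFormat: OSType) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings(rawPixelFormatType: proRawFormat)
    settings.photoQualityPrioritization = .quality

    // Placeholder heuristic: treat high ISO or long exposure as "low light".
    let lowLight = device.iso > 800 || device.exposureDuration.seconds > 1.0 / 30.0
    settings.maxPhotoDimensions = lowLight
        ? CMVideoDimensions(width: 4032, height: 3024)   // 12MP, binned
        : CMVideoDimensions(width: 8064, height: 6048)   // 48MP full resolution
    return settings
}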
I'm adopting Liquid Glass in iOS 26. When I test VNDocumentCameraViewController document scanning with Liquid Glass enabled, there's a crash just after a photo is taken in VNDocumentCameraViewController; here's the screenshot from when it crashed.
The exception output in the Xcode console is this:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Layout requested for visible navigation bar, <UINavigationBar: 0x1240bde00; frame = (0 117; 390 54); opaque = NO; tintColor = UIExtendedSRGBColorSpace 1 1 0 1; layer = <CALayer: 0x120c21e60>> standardAppearance=0x12407b900 scrollEdgeAppearance=0x12407bb80 compactAppearance=0x12407b880 no-scroll-edge-support, when the top item belongs to a different navigation bar. topItem = <UINavigationItem: 0x1240bd800> style=navigator leftBarButtonItems=0x123d4e5f0 rightBarButtonItems=0x123d4d5a0, navigation bar = <UINavigationBar: 0x107b9ad00; frame = (0 47; 390 54); opaque = NO; autoresize = W; tintColor = UIExtendedSRGBColorSpace 1 1 0 1; layer = <CALayer: 0x120c20150>> delegate=0x10a805200 standardAppearance=0x107b2c300 scrollEdgeAppearance=0x107b2c280 compactAppearance=0x107b2c100, possibly from a client attempt to nest wrapped navigation controllers.'
*** First throw call stack:
(0x18e1db994 0x18b0f5814 0x18c092aa0 0x193b18660 0x193a7d540 0x193a7e020 0x1953ec4a0 0x1943b7d78 0x18ed83420 0x18ed82f74 0x18eb83134 0x18eb44c10 0x18eb70bc4 0x18eb7e74c 0x193ac8cd0 0x193ac8c04 0x193ad6afc 0x193ad5f8c 0x27b456560 0x18e12c4cc 0x18e15c0b0 0x18e15bfd8 0x18e133c1c 0x18e132a6c 0x22ed54498 0x193af6ba4 0x193a9fa78 0x193bcb68c 0x102cc2718 0x102cc2688 0x102cc2794 0x18b14ae28)
libc++abi: terminating due to uncaught exception of type NSException
We have a very strange issue that I am trying to solve or find the best practice for.
We have a SwiftUI view that uses the camera to preview. So, as suggested in Apple's docs, we check the authorisation status and then, if it's not determined, we request authorisation.
We also have the privacy entry in the info.plist
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .notDetermined:
    AVCaptureDevice.requestAccess(for: .video) { accessStatusAuthorised in
        if !accessStatusAuthorised {
            self.cameraStatus = .notAuthorised
        } else {
            self.isAuthorized = true
            self.cameraStatus = .authorised
            self.startCameraSession(cameraPosition: cameraPosition)
        }
    }
case .restricted:
    cameraStatus = .notAuthorised
    isAuthorized = false
case .denied:
    cameraStatus = .notAuthorised
    isAuthorized = false
case .authorized:
    cameraStatus = .authorised
    isAuthorized = true
    startCameraSession(cameraPosition: cameraPosition)
@unknown default:
    isAuthorized = true
    cameraStatus = .notAuthorised
}
However, when we call this code it freezes the camera feed, even after Allow has been tapped.
Here is the confusing part: if we do not call the code above, we still get the camera-access permission pop-up, and the camera works fine after allowing.
What I'm concerned about is changing the code to rely on that, and it being a possible Apple bug that gets fixed, after which none of the app's camera functions would be allowed.
I cannot see anywhere that the process has changed for iOS 26 / Xcode 26.
Can anyone shed any light on this, or has anyone had a similar experience?
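Not necessarily the cause of the freeze, but one thing worth ruling out: AVCaptureDevice.requestAccess(for:) is documented to call its completion handler on an arbitrary dispatch queue, so state that drives SwiftUI (and the session start) is usually hopped back to the main queue explicitly. A sketch of that pattern using the same property names as the snippet above:

case .notDetermined:
    AVCaptureDevice.requestAccess(for: .video) { granted in
        DispatchQueue.main.async {
            // Mutate UI-facing state on the main queue and start the session from a known context.
            self.isAuthorized = granted
            self.cameraStatus = granted ? .authorised : .notAuthorised
            if granted {
                self.startCameraSession(cameraPosition: cameraPosition)
            }
        }
    }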
Hello,
I have an iOS camera app that captures exposure brackets and performs custom HDR processing.
On iOS 26, I’m observing a visual difference between:
a single photo captured at –2 EV, and the –2 EV frame from an exposure bracket (–2 / 0 / +2 EV).
On iOS 26:
The single –2 EV image looks natural and consistent.
The –2 EV image from the bracket appears clamped / distorted, most noticeably in high dynamic range scenes (highlight compression and loss of detail).
On iOS 18, both approaches produce visually identical and correct –2 EV images.
The issue only appears for bracketed captures on iOS 26.
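For context, here is a minimal sketch of how such a –2 / 0 / +2 EV bracket is typically requested; photoOutput and delegate are assumed to exist, and whether iOS 26 applies extra tone mapping to these frames is exactly the open question below:

import AVFoundation

// Request a three-frame auto-exposure bracket at -2, 0, and +2 EV as processed JPEGs.
func captureExposureBracket(photoOutput: AVCapturePhotoOutput,
                            delegate: AVCapturePhotoCaptureDelegate) {
    let biases: [Float] = [-2.0, 0.0, 2.0]
    let bracketedSettings = biases.map {
        AVCaptureAutoExposureBracketedStillImageSettings.autoExposureSettings(exposureTargetBias: $0)
    }
    let settings = AVCapturePhotoBracketSettings(rawPixelFormatType: 0,   // 0 = no RAW output
                                                 processedFormat: [AVVideoCodecKey: AVVideoCodecType.jpeg],
                                                 bracketedSettings: bracketedSettings)
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}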
Attachments (examples)
iOS 26
Single capture –2 EV (JPEG):
/Users/danilobudimir/Downloads/ios26SingleImage/JPEG image-4006-8B77-51-0.jpeg
Single capture –2 EV — Capture report (dumped settings):
/Users/danilobudimir/Downloads/ios26SingleImage/UnderExposureDebug_CaptureReport_2026-01-09T15-59-20Z.md
Bracket capture –2 EV frame (JPEG):
/Users/danilobudimir/Downloads/bracket_iOS26/JPEG image-45CE-9793-A5-0.jpeg
Bracket capture — Capture report (dumped settings):
/Users/danilobudimir/Downloads/bracket_iOS26/UnderExposureDebug_CaptureReport_2026-01-09T15-55-42Z.md
iOS 18
Single capture –2 EV (JPEG):
/Users/danilobudimir/Downloads/ios18SingleImage/JPEG image-47FD-AF73-28-0.jpeg
Single capture –2 EV — Capture report:
/Users/danilobudimir/Downloads/ios18SingleImage/UnderExposureDebug_CaptureReport_2026-01-09T16-25-27Z.md
Bracket capture — –2 EV frame (JPEG):
/Users/danilobudimir/Downloads/bracket_iOS18/JPEG image-4A4C-9E93-46-0.jpeg
Bracket capture — Capture report:
/Users/danilobudimir/Downloads/bracket_iOS18/UnderExposureDebug_CaptureReport_2026-01-09T16-27-23Z.md
Question
Is there any new behavior in iOS 26 AVFoundation related to:
AVCapturePhotoBracketSettings,
tone mapping / HDR preprocessing,
or internal image processing applied specifically to bracketed frames?
Is there a new flag, format requirement or opt-out mechanism required to preserve linear underexposed frames in exposure brackets?