In some places in our app, we use NSAccessibilityElement subclasses to vend extra items to accessibility clients.
We need to know which item has the VoiceOver focus so we can keep track of it.
setAccessibilityFocused: does not get called when accessibility clients focus NSAccessibilityElements. This method is only called when accessibility clients focus view-based accessibility elements (i.e. when an NSView subclass gets focused).
At the same time, we need to programmatically move VoiceOver focus to those items when something happens. Those accessibility elements inherit from NSObject, so we can't make them first responder.
Is this the expected behavior? What are our options in terms of reacting to VoiceOver cursor moving around? What are our options in terms of programmatically moving the VoiceOver cursor to a different element?
Here's a sample project that demonstrates the first part of the issue: https://github.com/vendruscolo/apple-rdars/tree/master/DTS12368714%20-%20NSAccessibilityElement%20focus%20tracking
If you run the app, a window will show up. It contains a button and a red square. If you enable VoiceOver you'll be able to move the cursor over the red square, and a message will be logged. You'll also notice there's an extra element after the red square. That element is available to VoiceOver, but when it gets focused, no message is logged.
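For context, the vended element looks roughly like this (a hypothetical reconstruction, not the actual sample code; the override below is the one that never fires for plain NSAccessibilityElement instances):

import AppKit

// Hypothetical stand-in for the extra element the sample project vends.
class VendedElement: NSAccessibilityElement {
    // Called when view-based elements gain accessibility focus, but (as
    // described above) never for plain NSAccessibilityElement instances.
    override func setAccessibilityFocused(_ focused: Bool) {
        super.setAccessibilityFocused(focused)
        print("VoiceOver focus changed: \(focused)")
    }
}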
This has been an ongoing issue and continues in Tahoe. When dictating into Gmail in Safari, whole portions of sentences are copied and pasted, making the text a mess. I have reported this in Feedback for a couple of years, and it has never been resolved.
// Start listening to the microphone
public void StartListening()
{
    if (!isListening)
    {
        try
        {
#if UNITY_IOS || UNITY_TVOS
            microphoneInput = Microphone.Start(null, true, 10, 44100);
#else
            microphoneInput = Microphone.Start(null, true, 10, 16000); // Use 16,000 Hz instead of 44,100
            if (microphoneInput == null)
            {
                // Fall back to the system output sample rate if 16 kHz is unavailable
                microphoneInput = Microphone.Start(null, true, 10, AudioSettings.outputSampleRate);
            }
#endif
            isListening = true;
            Debug.Log(Microphone.devices.Length + " - Started listening...");
            debugText.text = Microphone.devices.Length + " - Started listening...";
        }
        catch (System.Exception e)
        {
            Debug.LogError($"Starting microphone failed: {e.Message}");
            debugText.text = $"Starting microphone failed: {e.Message}";
        }
    }
}
void Update()
{
    if (isListening && microphoneInput != null)
    {
        // Analyze the audio for voice activity
        float volume = GetAverageVolume();
        if (volume > detectionThreshold)
        {
            Debug.Log("User is speaking!");
            lastVoiceTime = Time.time;
            SoundDetected = true;
        }
        else if (Time.time - lastVoiceTime > silenceDuration)
        {
            // Only report silence once no voice has been detected for silenceDuration
            Debug.Log("User is silent.");
            debugText.text = volume.ToString() + " - User is silent.";
        }
        slider.value = volume;
    }
}
private float GetAverageVolume()
{
    float[] samples = new float[128];
    // Read the most recent samples: GetData's second argument is the offset
    // where reading starts, so back up from the current microphone position
    int position = Microphone.GetPosition(null) - samples.Length;
    if (position < 0) return 0f;
    microphoneInput.GetData(samples, position);
    float sum = 0f;
    foreach (float sample in samples)
    {
        sum += Mathf.Abs(sample);
    }
    return sum / samples.Length;
}
Problem:
When I build and run the app from Xcode, the microphone works fine, and I receive input. However, when running the app normally (outside of Xcode), I can’t seem to access the microphone. The debug logs indicate no microphone is detected.
Question:
Is there any additional configuration I need to do for the microphone to work in a normal (non-Xcode) run on Vision Pro? Or any common issues that could be causing the microphone access to fail in this scenario?
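One thing worth checking (an assumption on my part, not a confirmed fix): Unity only shows the system microphone permission prompt if the app actually requests authorization, and the built app needs an NSMicrophoneUsageDescription entry in its Info.plist. A minimal sketch of requesting access before starting the microphone:

using System.Collections;
using UnityEngine;

public class MicPermission : MonoBehaviour
{
    IEnumerator Start()
    {
        // Triggers the system microphone permission prompt if not yet granted;
        // requires an NSMicrophoneUsageDescription entry in the generated Info.plist
        yield return Application.RequestUserAuthorization(UserAuthorization.Microphone);
        if (Application.HasUserAuthorization(UserAuthorization.Microphone))
            Debug.Log("Microphone access granted");
        else
            Debug.LogWarning("Microphone access denied");
    }
}

If the prompt never appears in the standalone build, a missing usage description or authorization request would be consistent with the symptom described.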
Thanks in advance for any insights!
Best,
Siddharth
We use an embedded WKWebView for several screens in our app.
Recently, we have been testing keyboard navigation via Full Keyboard Access in our apps. On iOS 18, everything works pretty much as expected. On iOS 26, it does not.
On iOS 26, you can tab away from the web view but can never tab back into it to continue keyboard navigation.
Is this a known issue? Are there workarounds for this issue that anyone is aware of?
In our application we use a UITableView for data population, and each table view cell contains a button. When Full Keyboard Access is enabled, only the table view cell receives focus, not the button. We need to be able to focus the cell and the button separately.
In our application we use a search bar in a popover view, with Full Keyboard Access enabled and an external keyboard. When focus is on the search bar and the Tab key is pressed, the search bar is dismissed; focus then needs to shift to the next UI element.
In our application we use OTP login. With Full Keyboard Access enabled, when entering the OTP, focus moves to the next text field as expected on iOS 17, but on iOS 18 focus stays in the first OTP field and does not move to the next one.
In our application we use a UIAlertController, presented as a popover. With Full Keyboard Access enabled, pressing the Esc key on an external keyboard does not dismiss it. We need the alert controller to be dismissed by an Esc key press from the external keyboard.
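A possible workaround (a sketch under the assumption that key events reach the presenting view controller; not verified against Full Keyboard Access): register an Escape key command and dismiss the presented alert manually:

import UIKit

// In the presenting view controller; dismissAlert is a hypothetical helper
override var keyCommands: [UIKeyCommand]? {
    [UIKeyCommand(input: UIKeyCommand.inputEscape,
                  modifierFlags: [],
                  action: #selector(dismissAlert))]
}

@objc private func dismissAlert() {
    presentedViewController?.dismiss(animated: true)
}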
There is an issue with Help Books that started with the release of macOS 14.4. The issue is that when an app attempts to go directly to a Help Book page, the help viewer opens to the Help Book's main index page, rather than the specific page requested. As I investigated the issue I found that the requested page was actually part of help viewer's navigation history, and all I had to do was to click the Back navigation arrow and the requested page would be displayed. So it seems like the requested page is momentarily visited but is then (for whatever reason) quickly replaced by the main index page.
Our app uses the AHGotoPage() API for directly accessing our Help Book's pages. This is the same mechanism/code that our app has used for more than a decade and has never caused us any issues. Everything works fine on macOS 14.3.0 and earlier. I've scoured the documentation and can't find any newer APIs for accessing Help pages. I've also tried various other things (e.g. reworking the code, creating new indexes for the app's Help, etc.), but none of it seems to make a difference. As far as I can tell, the issue seems to stem from some change made to the OS.
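For reference, the call in question looks something like this (a sketch; the book name and page path are placeholders, not our actual values):

import Carbon // AHGotoPage lives in the Apple Help (Carbon) framework
import Foundation

// Hypothetical book name and page; real values come from the Help Book's Info.plist
let status = AHGotoPage("MyApp Help" as CFString,
                        "topics/getting-started.html" as CFString,
                        nil)
if status != noErr {
    NSLog("AHGotoPage failed with status %d", status)
}

On macOS 14.4 and later this lands on the main index page, with the requested page left in the Back history as described above.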
So my questions are:
Is this a known bug? And if so, is there any ETA on a fix?
Is there something different we should be doing for newer versions of the OS (create indexes differently, use a different API, etc.)?
Hello,
the AVSpeechSynthesisVoice class has an audioFileSettings attribute:
let utterance = AVSpeechUtterance(string: text)
utterance.voice = AVSpeechSynthesisVoice(identifier: voiceSelected!)
print("- voice \(utterance.voice!.audioFileSettings)")
["AVLinearPCMIsBigEndianKey": 0, "AVLinearPCMIsFloatKey": 1, "AVLinearPCMIsNonInterleaved": 1, "AVNumberOfChannelsKey": 1, "AVSampleRateKey": 22050, "AVFormatIDKey": 1819304813, "AVLinearPCMBitDepthKey": 32]
This is declared as:
AVSpeechSynthesisVoice {
    ...
    @available(iOS 13.0, *)
    open var audioFileSettings: [String : Any] { get }

    @available(iOS 17.0, *)
    open var voiceTraits: AVSpeechSynthesisVoice.Traits { get }
}
How can we specify the audioFileSettings attributes on an AVSpeechSynthesisProviderVoice? There is no such field in AVSpeechSynthesisProviderVoice:
AVSpeechSynthesisProviderVoice {
    open var name: String { get }
    open var identifier: String { get }
    open var primaryLanguages: [String] { get }
    open var supportedLanguages: [String] { get }
    open var voiceSize: Int64
    open var version: String
    open var gender: AVSpeechSynthesisVoiceGender
    open var age: Int
}
Regards
My team is designing an app for retail associates who need to share managed iPads. We keep the app in Guided Access mode on our login app until an auth token is obtained; then the iPad is opened for general use. Upon sign-out we need to re-enter Guided Access mode, which is easy to do for a manual sign-out. But with idle sign-out, i.e. after 60 minutes of inactivity, we need to act from the background (even in a locked state): sign out the user, clear the PIN code, and enter Single App Mode before restarting, so that once the device restarts, the app is in a locked state again until the next user provides credentials that can obtain a new auth token.
We are struggling to see if this is even possible. Our bosses will be displeased if we tell them it isn't, so anybody with any tips would be very appreciated.
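One API worth investigating (a sketch, with caveats: it requires a supervised device allowlisted for autonomous Single App Mode via MDM, and it likely cannot run from a fully suspended state): UIAccessibility.requestGuidedAccessSession can re-enter Single App Mode programmatically:

import UIKit

// Hypothetical idle sign-out path: lock the device back into this app
func lockAfterIdleSignOut() {
    UIAccessibility.requestGuidedAccessSession(enabled: true) { succeeded in
        if succeeded {
            // Locked into Single App Mode; show the login screen
        } else {
            // Failed (e.g. device not supervised, or app not allowlisted)
        }
    }
}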
SwiftUI provides the accessibilityCustomContent(_:_:) modifier to add additional accessibility information for an element. However, I couldn’t find a similar approach in UIKit.
Is there a way to achieve this in UIKit?
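For what it's worth, UIKit appears to offer a counterpart via the Accessibility framework's AXCustomContentProvider protocol (a sketch; ProfileCell and its content are made-up examples):

import Accessibility
import UIKit

// Hypothetical cell exposing extra accessibility details, analogous to
// SwiftUI's accessibilityCustomContent(_:_:)
class ProfileCell: UITableViewCell, AXCustomContentProvider {
    var accessibilityCustomContent: [AXCustomContent]! {
        get { [AXCustomContent(label: "Member since", value: "2021")] }
        set { }
    }
}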
Hello all.
Currently I am trying to get a WKWebView to scroll with a physical keyboard and it just will not work. I tried allowsKeyboardScrolling and it did not help. UIWebView works, but it's no longer supported. I'm trying to get Full Keyboard Access to work to make our app more accessible, but WKWebView does not want to play nice.
Has anyone else had issues trying to use WKWebView with an external keyboard, and if so did you find any solutions? Greatly appreciated!
After enabling Developer Mode on my iPhone and restarting it, the device asks me to press the Home button to confirm. Unfortunately, my Home button is broken, so I can’t access Developer Mode. The iPhone itself still works, but I can’t enable the mode. Is there any way to bypass this without the Home button?
Topic:
Accessibility & Inclusion
SubTopic:
General
I made a (very simple) custom tab bar in SwiftUI. It's simply an HStack containing two buttons. These buttons control the selection of a paged TabView. This works well, but in VoiceOver they don't behave like the bottom tab bar or e.g. a segmented picker. Specifically, VoiceOver does not say something like "tab one of two" when the first button is focused.
According to my research, in UIKit this can be accomplished by giving the container view the accessibility trait tabBar, hiding it as an accessibility element, and giving it the accessibility container type semanticGroup.
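Translated to code, that UIKit recipe would look roughly like this (a sketch of the approach described above, not verified against VoiceOver):

import UIKit

// tabBarContainerView is a hypothetical container holding the tab buttons
func configureAsTabBar(_ tabBarContainerView: UIView) {
    tabBarContainerView.accessibilityTraits = .tabBar
    tabBarContainerView.isAccessibilityElement = false
    tabBarContainerView.accessibilityContainerType = .semanticGroup
}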
In SwiftUI, there is also the trait isTabBar, but that does not seem to have any impact for VoiceOver. I don't see an equivalent of semanticGroup in SwiftUI. I tried accessibilityElement(children: .contain) but that also does not seem to have any impact.
So, is there any way in SwiftUI to make a button behave like a tab-button in VoiceOver? And how is SwiftUI's isTabBar accessibility trait supposed to be used?
Hello!
I'm trying to improve the accessibility of a UIKit login form in our iOS app. If an error occurs, an error message is shown in a label that is hidden by default. For our VoiceOver users, I want to move the focus to the error message label so that VoiceOver reads out the error message.
I'm trying to achieve this using UIAccessibility.post, but try as I might, it does not work. To better understand the problem, I created a very simple App which shows a button and a label (always visible), and on pressing the button, I post an accessibility notification:
UIAccessibility.post(notification: .layoutChanged, argument: label)
What I expect to happen is for the focus to move from the button to the label. What happens instead is the focus stays with the button and VoiceOver reads out the button's label again. So it seems to process the notification, but ignore the argument.
Am I misunderstanding how accessibility notifications work, or is this simply broken at the moment? I am testing this with my iPhone on the current iOS version, 18.2.1.
By the way, using the more modern variant leads to the same result:
AccessibilityNotification.LayoutChanged(label).post()
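Two variations that are commonly suggested for this situation (assumptions worth testing, not confirmed fixes): post .screenChanged instead of .layoutChanged, or delay the post slightly so the element has settled in the view hierarchy:

// Hypothetical variants: .screenChanged notification, plus a short delay
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
    UIAccessibility.post(notification: .screenChanged, argument: label)
}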
I'm looking into how to programmatically control color filters in the Accessibility settings under "System Settings" -> "Accessibility" -> "Color Filters"--in particular the "Intensity" and "Filter type" settings.
As far as I have gathered, changing this setting can only be accomplished using the CoreGraphics APIs or the Accessibility APIs (I've poked around GitHub, Stack Overflow, and queried some LLMs), but there doesn't seem to be a clear-cut example of doing this using public-facing APIs, without ripping off source code from another project wholesale or using private APIs.
My goal is to overlay a color filter at either a per-application or system level to help with accessibility. If there's a way to overlay this capability on an application-by-application basis as a third-party developer, that would be the most ideal scenario. For example, modifying the look and feel/UX for Launchpad, Photos, etc, as a third-party developer without accessing the source code of the application that I'm modifying the look/feel for (with appropriate user consent of course).
I have been working to remediate PDFs for a client. The documents/forms have many tables. When I correctly tag a table, using Foxit Editor Pro, it works beautifully on a PC reading it with NVDA. On Mac using VoiceOver the table isn't accessible. It doesn't matter if I try to read it in Adobe Acrobat, Foxit, or Preview. The reader often says the document is empty, omits column headers, and/or associates the wrong header with the column data.
The documents have essentially the same coding behind them as for the web. Why is it they perform so well on a PC with NVDA, but so poorly with Mac VoiceOver? I am a Quality Assurance Specialist; I review websites, apps, and documents for accessibility. Why can't I do my job using only my Mac system?
As a Mac user, it frustrates me that I can't use my preferred system for checking documents to see if they are accessible because VoiceOver doesn't work well. I actually have to recommend to my clients and their customers that they need to use a PC with NVDA or Jaws for these documents to be able to get all the information. Unfortunately, most people aren't able to have, or maintain, both systems. Overall, Mac products are very high quality. This, and other issues with VoiceOver, seems to be a large gap in Apple's offerings and functionality.
I would appreciate a human response to the original email I sent about this on 7/30/2025.
Hi,
I've wrapped AVRoutePickerView in SwiftUI using pretty much the code given here, with a few changes:
func makeUIView(context: Context) -> UIView {
    let routePickerView = AVRoutePickerView()

    // Configure the button's color.
    //routePickerView.delegate = context.coordinator
    //routePickerView.backgroundColor = .secondarySystemBackground
    routePickerView.tintColor = .accent
    routePickerView.activeTintColor = .accent

    // Indicate whether your app prefers video content.
    routePickerView.prioritizesVideoDevices = false

    return routePickerView
}
I commented out routePickerView.delegate = context.coordinator because it doesn't compile; context.coordinator is of type Void and I'm not sure how to fix that. I'm not sure if that has anything to do with the issue.
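On the coordinator issue specifically: context.coordinator is Void because the representable doesn't implement makeCoordinator(). A sketch of the missing piece, placed inside the UIViewRepresentable struct:

import AVKit // at file scope

func makeCoordinator() -> Coordinator {
    Coordinator()
}

class Coordinator: NSObject, AVRoutePickerViewDelegate {
    func routePickerViewWillBeginPresentingRoutes(_ routePickerView: AVRoutePickerView) {
        // Called when the route picker popover is about to be presented
    }
}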
Anyway, this works fine without VoiceOver; if I tap the button, I get the AirPlay popover. But in VoiceOver, if I select the button and double-tap, nothing happens… it just reads the button's accessibilityLabel again. How can I get the AirPlay popover to show in VoiceOver?
We currently have an odd issue with VoiceOver spelling a word letter by letter while the same word is spoken as a whole for other items.
The app is in German.
I have a View in SwiftUI whose button traits are removed; then a label "Start Tab 1 von 5" is added. "Tab" is spoken as a whole word here, all fine.
If I change the label to "Tab-Schaltfläche" or, for example, "SimplyGo Tab 3 von 5", then "Tab" is spoken as "T A B", letter by letter. Is there a way to force VoiceOver to speak it as a whole word?
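One modifier that may be worth trying (an assumption, not a verified fix): SwiftUI's speechSpellsOutCharacters controls character-by-character speech, so explicitly disabling it on the label might stop the spelling-out:

// Hypothetical mitigation for the letter-by-letter reading
Text("SimplyGo Tab 3 von 5")
    .speechSpellsOutCharacters(false)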