I've been playing around with keyboard scrolling of overflow regions, and I was interested to note how Firefox's native behavior doesn't expose any additional semantics -- i.e., it doesn't apply a role or accessible name when the scrolling region becomes focusable.
And I think that's the right thing to do -- that our standard workaround of including role="region" and aria-label or aria-labelledby (along with tabindex="0") creates unnecessary verbosity.
That's because navigating an element with the virtual cursor isn't affected by scrolling. Virtual navigation already provides keyboard access to the overflowing content, so scrolling makes no difference to navigation or spoken output.
Tab navigation is affected, of course, since tabindex determines whether the element creates a Tab stop. But I don't think that requires a label either, because Tabbing to such an element causes the first line to be read anyway.
I've created a demo script on that basis. It doesn't add a role or label; it simply adds or removes tabindex based on whether the region actually has overflowing content.
It's triggered by a ResizeObserver, so it continually updates in response to anything that changes the element's size (you can test this by resizing the window, or by increasing zoom or font size).
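The demo itself isn't shown here, but the approach described above can be sketched roughly as follows (the `.scroll-region` selector and function names are my own placeholders, not from the actual demo):

```javascript
// Detect whether an element's content overflows its visible box.
function hasOverflow(el) {
  return el.scrollHeight > el.clientHeight ||
         el.scrollWidth > el.clientWidth;
}

// Make the element a Tab stop only when it actually overflows;
// no role or accessible name is added.
function updateFocusability(el) {
  if (hasOverflow(el)) {
    el.setAttribute('tabindex', '0');
  } else {
    el.removeAttribute('tabindex');
  }
}

// Browser wiring: re-check whenever the observed element's size
// changes (per the post, window resizes, zoom, and font-size
// changes all end up triggering this).
if (typeof document !== 'undefined' && typeof ResizeObserver !== 'undefined') {
  const region = document.querySelector('.scroll-region');
  new ResizeObserver(() => updateFocusability(region)).observe(region);
  updateFocusability(region);
}
```

The key design point is that tabindex is removed again when the overflow disappears, so the page never carries a pointless Tab stop.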
Lol, folks. Listen to your article before you post it. Doesn't matter what voice. You'll catch things like this from macrumors.com. In the app's settings (accessed via ChatPGT ➝ Settings… in the menu bar when the app's main window ...
It's always so frustrating when all the web accessibility content only talks about text heavy websites and forms. Like yes, I get it, I should have alt text on images. But there's so little information about how to build accessible web apps. What do I do if 80% of my page is a WebGL canvas and the other 20% is all buttons/sliders? How do I structure this if there is basically no "regular text" on the entire page?
I was going to leave Feedback® about making SwiftUI .accessibilityLabel work with SSML, but I found out that we could cook it ourselves. #Accessibility #SwiftUI
BeMyEyes Privacy Policy 1/2: We record and store video streams and other images to enforce our Terms of Service, to promote and preserve safety, and to improve our Services and create new Services. We may provide recorded video streams or images to other organizations that are performing research or working to develop products and services that may assist blind and low-vision people or other members of the general public.
Begging creators and developers to use this tool to see if there are harmful flashing effects. I’m sick and tired of people slapping a “strobing effects” warning up front and calling it a day. #epilepsy #accessibility https://trace.umd.edu/peat/
The most egregious example of this is video essay editors using literally flashy effects that make their videos impossible to watch. You chose to use those filters. Stop it.
Tori Lacey, 26, chronicled the troubling incident on her #TikTok & #Instagram pages, where she usually posts content about her #travel exploits as a person who uses a #wheelchair.
I am once again a bronze supporter of Inclusive Design 24.
I want to see all of you presenting cool stuff (and making me look good as a result), but you won’t get that chance if you don’t submit before 7 June. https://inclusivedesign24.org/2024/
Prior years are up there if you want to see the range of talks that have been accepted in the past.
A Chrome / TalkBack bug I first reported in 2020, and which was fixed for a time (?), appears to be back. Looking for confirmation before I file yet another one.
A named region with a tabindex does not expose its contents. Chrome / TalkBack only announces its accName and role.
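Not from the report itself, but a minimal sketch of the pattern being described, with a made-up label and content:

```html
<!-- With the bug present, Chrome / TalkBack reportedly announces
     only the accessible name and role ("Recipes, region") and
     does not expose the contents inside. -->
<div role="region" aria-label="Recipes" tabindex="0">
  <p>Contents that screen reader navigation should reach, but doesn't.</p>
</div>
```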
I think I have an ugly workaround (“Shawarma” heading).
Access for All: Two friends helping change opportunities for blind people with an open-source screen reader for all. Now on Microsoft Unlocked: https://unlocked.microsoft.com/nvda/
Except when you need #accessibility features. Sorry, not sorry, but at that point it's just shit, in the way a well-meaning person grabs an unsuspecting other person by the arm and drags them across the street when really they were just about to turn into their own front door. Inconsistent, generally doesn't do what you want, and suddenly fucks off when you need it most. I wish it were better, I really do, but at the moment it really just isn't, and hasn't been for a very long time. Might this be the push for it to actually not suck? I can have dreams, but I sincerely doubt it after what I've seen so far. It generally tends to be a two steps forward, three steps back kind of situation. How's Orca with Wayland these days?
So, I know generative AI is supposed to be just the most incorrect thing ever, but I want you to compare two descriptions.

First: "A rock on a beach under a dark sky."

Second: "The image shows a close-up view of a rocky, cratered surface, likely a planet or moon, with a small, irregularly shaped moon or asteroid in the foreground. The larger surface appears to be Mars, given its reddish-brown color and texture. The smaller object, which is gray and heavily cratered, is likely one of Mars' moons, possibly Phobos or Deimos. The background fades into the darkness of space."

The first one is supposed to be the pure best thing that isn't AI, right? Like, it's what we've been using for the past five years or so. And yes, it's probably improved over those years. This is Apple's image description. It's, in my opinion, the best and most clear, and it sounds like the alt text it was made from, which people wrote, BTW; and the images it was trained on, which had to come from somewhere, were of very high quality, unlike Facebook's and Google's, which just plopped anything and everything into theirs.

The second was from Be My Eyes. Now, which one was more correct? Obviously, Be My Eyes. Granted, it's not always going to be, but goodness, just because some image classification tech is old doesn't mean it's better. And just because Google and Facebook call their image description bullshit AI doesn't mean it's a large language model. Because at this point in time, Google TalkBack does not use Gemini, but uses the same thing VoiceOver has. And Facebook uses that too, just a classifier.

Now, should sighted people be describing their pictures? Of course. Always. With care. And having their stupid bots use something better than "picture of cats." Because even a dumb image classifier can tell me that, and probably a bit more, lol. Cats sleeping on a blanket. Cats drinking water from a bowl. Stuff like that.
But for something quick, easy, and that doesn't rely on other people, shoot yeah, I'll put it through Be My Eyes. #accessibility #AI #LLM #BeMyEyes #blind
Just finished the presentation of my #TheWebConf History of the Web track paper on "Toward Making Opaque Web Content More Accessible: Accessibility From Adobe Flash to Canvas-Rendered Apps":