When using @MonaApp with #VoiceOver, is it possible to share an image attached to a post with another app? When I use the "view media" rotor action, and then triple-tap on what VoiceOver claims is the image, I get sharing options related to the alt text, or detected text, or something, but not the image itself. In the end I had to take a screenshot of the image and use that instead.
If you are #blind and you have been locked out of being able to use your InstantPot after the inaccessible app update, please email support@instantpot.com and refer to Case 02284154, asking that they restore #VoiceOver #accessibility to their #iOS app.
I just realized that if you're formatting a date with an abbreviated weekday, then VoiceOver (at least with Ava and Zoe Premium) will only fully pronounce some of the days.
I guess it's because the ones that aren't pronounced are actual words with other meanings? 🤔
Text(oneShowDay.formatted(.dateTime.weekday(.abbreviated)))
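A possible workaround (just a sketch; `oneShowDay` is assumed to be a `Date`, and the view name is made up): keep the abbreviated text on screen, but hand VoiceOver the full weekday name so the speech engine never has to guess at an abbreviation.

```swift
import SwiftUI

struct ShowDayLabel: View {
    // Assumed: a Date representing the day to display.
    let oneShowDay: Date

    var body: some View {
        // Visually show the abbreviated weekday ("Mon", "Tue", ...),
        // but give VoiceOver the full name ("Monday") to speak.
        Text(oneShowDay.formatted(.dateTime.weekday(.abbreviated)))
            .accessibilityLabel(
                oneShowDay.formatted(.dateTime.weekday(.wide))
            )
    }
}
```

This way sighted users still get the compact label, while the spoken output is unambiguous regardless of which voice is in use.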
I've just pushed a bunch of #accessibility changes for screen readers to the main branch of FediThready. (It splits long texts into post-sized chunks.)
I've run through it with #VoiceOver and it seems ok. HOWEVER it all feels like it's "usable" instead of "good".
If there's an #a11y geek out there who uses screen readers regularly, I would REALLY appreciate some suggestions on how to make this actually feel good for folks who rely on screen readers.
This tech could have revolutionized the voice-over industry in the 90's... but instead it was lost to time. Hear Al Lowe talk about Sierra On-Line's secret in-house studio weapon: https://youtu.be/oOZhpcJIyd8
It's June 10, 2024, at 4:00 PM Central US time. Almost every blind person that owns an iPhone has installed the iOS 18 beta. Some are playing retro games with the new, AI driven screen recognition. Others are gladly using DecTalk as their main voice, the Enhanced Siri voices that use ML to speak using emotion and context, as their reading voice, Eloquence as their notification voice (sent to one ear to minimize distractions), and finding it amazing that VoiceOver emphasizes italic text, and emboldens bold text. Others are finding it amazing that they can navigate their whole phone using Braille screen input, searching to find things by typing a few letters, or just swiping down through everything. A few are connecting their multi-line Braille displays, and feeling app icons and images, made much more understandable through touch, using an AI filter.
The next day, when news of all these features filters down to Android users, they quickly begin hammering Google, wanting DecTalk and Eloquence on their Pixel phones, like iOS users have. But Google is silent as always, only just now having given Chromebook users high quality Google TTS voices.
Note: great liberty has been taken to imagine the coolest outcome for the vague feature announcements Apple gave for VoiceOver users. We'll see just how cool, or not, they actually are on June 10.
DEVONthink To Go 3.8.2 is here. It supports the PDF bookmarklet and no longer applies default styling to Markdown when you use your own CSS. The new version also shows fewer notifications, checks for broken file permissions, and improves VoiceOver support. #devonthinktogo #devonthink #pkm #markdown #css #voiceover https://buff.ly/3yl1yyB
I noticed that with the iOS Weather app VoiceOver would say "Hourly Forecast" when you selected an hour in it but not say it again when you moved to another hour.
@bas helped me recreate this with an AccessibilityChildBehavior of "contain" and an accessibility label.
Now it says my label when I touch an hour in the Hourly Forecast or day in the Nearby Days.
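A minimal sketch of the behaviour described above (view name and placeholder data are illustrative, not from the Weather app): grouping the hour cells with the `.contain` child behaviour and labelling the container means VoiceOver announces the container's label when focus enters any of its children.

```swift
import SwiftUI

struct HourlyForecastView: View {
    // Placeholder data for illustration.
    let hours = ["9 AM", "10 AM", "11 AM"]

    var body: some View {
        HStack {
            ForEach(hours, id: \.self) { hour in
                Text(hour)
            }
        }
        // .contain keeps each child individually focusable while
        // treating the HStack as a named accessibility container,
        // so VoiceOver speaks "Hourly Forecast" on entering it.
        .accessibilityElement(children: .contain)
        .accessibilityLabel("Hourly Forecast")
    }
}
```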
With #VoiceOver on a Mac, after I press VO+Up to get out of the group I'm in, is there a way to quickly get back to where I was inside that group? VO+Down takes me to the first item in the group.
Hey if Sonos can just up and screw their UI then I can just up and decide to never use them for future speakers. Time to check the resale value of this garbage in case they can't fix the UI accessibility like yesterday.
@nick #Sonos has replaced its app not because they truly think the new app is better, but because they can replace specialised Android, iOS, Windows, and macOS teams with one generic team who know how to use cross-platform tools.
It goes beyond that, though. Look at the ideas behind the new home screen, which essentially can be described as: "put what you want on it". Is that primarily a user-facing improvement? No.
Rather, it's a reason to not rely on designers who can carefully think through information architecture, viewport sizes, user flows, and the best ways to present information. Make it the user's problem so that they can fire the people whose responsibility it used to be, or move them to another team where they won't be able to do their best work and will eventually quit and not be replaced.
This update goes way beyond #accessibility. It's a fundamental shift in how they do business, and it will be shit for everyone. That, more than the lack of #VoiceOver support, is what will probably cause me to move away from their ecosystem.
If you were wondering whether the new #Sonos app is as bad with #VoiceOver as people said, I can confirm that it is.
The first element that receives focus has no #accessible role or name, i.e. VoiceOver doesn't announce anything for it. The screen is split up into sections, like "Recently Played", "Your Services", and "Sonos Favourites", but none of these have headings. And, as previously noted, explore by touch doesn't work; VO seems to just see that blank element I mentioned as being stretched across the entire screen.
As a result of all this, the "Search" button requires 32 swipes from the top of the screen to reach, at least with my setup. If you have more services and/or more favourites, that number of swipes will be higher. #accessibility
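For context on why the missing headings matter: a section title only shows up in VoiceOver's headings rotor if the app explicitly marks it as a header. A minimal SwiftUI sketch of what appears to be missing (the view and its names are illustrative, not from the Sonos app):

```swift
import SwiftUI

struct SectionTitle: View {
    // e.g. "Recently Played", "Your Services", "Sonos Favourites"
    let title: String

    var body: some View {
        Text(title)
            .font(.headline)
            // Without this trait, VoiceOver users can't jump
            // between sections with the headings rotor and are
            // left swiping element by element.
            .accessibilityAddTraits(.isHeader)
    }
}
```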
Actors: If you’re an American or have American friends/colleagues and family, encourage them (and I encourage you) to contact your representative and both your senators and tell them your needs in regards to the No Fakes Act: https://www.congress.gov/event/118th-congress/senate-event/335702
Looking for a voice actor/narrator for an audiobook production. I need a woman with a low voice and a US accent. This is a paid project. No audio editing needed. I've got a team for that already.
You can find my email in the weblink in my bio to get in touch!
#VoiceDream apparently now has beta Kindle support in the main app, i.e. a beta version isn't required. There are reports on the mailing list that you need to be a subscriber to see it, that there should be a "Kindle beta" option in the "Settings" menu, and that you can't lock the screen or background the app while reading a Kindle book. I subscribed and updated the app, but it isn't showing up for me, so I don't know any more than that.
Kindle doesn't appear in #VoiceDream as a standard content source. Instead, Amazon's web reader is embedded into the app, and the book text is extracted from the web page to be spoken.
This means that Kindle isn't deeply integrated into the rest of the app, and results in the majority of Voice Dream features being unavailable. You can choose a book and start audio playback, but not access bookmarking, text highlighting, annotations, full-text search, the built-in dictionary, etc. More fundamentally, standard book navigation (e.g. by heading) is not possible either. You can skip by page, and that's all.
I don't know if the features aimed at other audiences, such as finger reading and word highlights, work or not. I would suspect not, given the webview-based architecture, but I haven't been able to verify either way.
Meanwhile:
Playing a Kindle book doesn't register it as your "currently reading" item. If you relaunch the app, or close the Kindle viewer, you have to locate the book in Amazon's web interface from scratch.
Speaking of Amazon's web interface, selecting a book to read happens entirely within it, bringing all of the accessibility issues along for the ride that you may expect from Amazon in 2024.
While Kindle content is playing, the #VoiceOver magic tap gesture causes whatever non-Kindle document you were reading elsewhere in Voice Dream to resume.
You can pause the playback of a Kindle book on AirPods. But when you try to resume, you also trigger the previous non-Kindle document.
On the two books I've tried, there are large pauses in the speech stream at frequent intervals, lasting almost a second. These don't seem to line up with page changes, and I'm not sure what causes them. Maybe something related to scrolling.
I pressed "next page" four or five times in quick succession, to jump past all of the copyright information in a book. Unfortunately, this caused playback to completely stop working, no matter how many times I toggled it.
Sometimes, you might think that previous #accessibility wisdom has been superseded by new "facts". Maybe someone told you that #screenReaders don't work well with a particular design pattern, but you tested #ScreenReader X and it seemed to work fine. Perhaps you heard that an interactive HTML input doesn't persist with forced colours styling, but you tried a High Contrast mode in Microsoft Edge and it seemed to be there.
There are three considerations usually missing here:
How are you defining and evaluating the working state? Do you have a functional, accurate understanding of the #accessTechnology or accessibility feature you are asserting things about?
You tested one thing in relation to a statement about multiple things, e.g. a statement is made about screen readers, plural, and you only tested with #VoiceOver (it's always VoiceOver). Beyond posting on the web-a11y Slack, how do you propose testing more broadly, if you plan to at all?
Possibly the most critical of all: is this question worth its overheads? If answering it conclusively would require me to test ten screen readers with 45 speech engines, or seven browsers with 52 permutations of CSS properties, maybe following the advice is "cheaper" than determining whether it is still completely relevant.
Important disclaimer: this relates specifically to cases where following the advice would not actively make things worse for users.
TL;DR: when you know doing a thing won't make things bad, doing the thing is usually quicker than evaluating whether not doing the thing is also bad.
New blog post! Free #accessibility consulting for #tidal! #music recommendations from yours truly! #ScreenReader testing! Click here and find out how #NVDA clearly wins this iteration of the series:
Hi dear #iOS users! I have a Health widget on my home screen. Is there a way to make it show steps instead of calories, percentage and what not, or will it be reason number 7001 why I don't like Apple's policies? And if it's the latter, please suggest a #VoiceOver-accessible app for that? Thanks!
It's April, and people are already talking about WWDC and looking forward to iOS 18.
The iOS 17.5 beta has dropped, and the serious issues with VoiceOver's Hebrew text-to-speech introduced in iOS 17 have still not been addressed by Apple.
I'm a fan of iOS devices and their commitment to accessibility.
However, half a year after the introduction of a bug that causes the Hebrew TTS to say a long string of gibberish (xb7xd78) every time it encounters an apostrophe, the lack of a fix makes me doubt Apple's commitment to its users outside the US.
Sounds like "j" or "ch" are written using a Hebrew letter plus an apostrophe, so every time VoiceOver sees the word Chat, it says C-xb7xd78-ats.
The blind iOS community here has distributed a custom punctuation pronunciation file that helps by telling VoiceOver not to pronounce apostrophes, but this still causes such words to be mispronounced: Tsats instead of Chats, but at least it's bearable.
I don't think such a glaring bug would have been ignored by Apple if it happened in English.
I got into Control Center (the little swipe-down quick control thing on iOS) with VoiceOver on, and now I can't pop my way out of it, because there's nothing to navigate to in order to dismiss it... is this an Apple bug, or is there a way to back out that I don't know about? #VoiceOver #iOS #a11y