
@jamiemccarthy @FreakyFwoof Yup, that's why Spoken Content does it. I think, but am not sure, that Apple uses different APIs for synthesizing speech in Spoken Content and VoiceOver. For example, Spoken Content uses the neural variants of the Siri voices, but VoiceOver only uses the concatenative ones, probably for battery reasons. You can try adding the languages you want it to recognize to the language rotor and making sure the rotor itself is set to default; that has the highest chance of working. But in general, VoiceOver and pronunciation are a bit hit or miss. I believe it's this ➡️ emoji that for me always gets read in Japanese. It's the one often used in written directory listings.


@jamiemccarthy @FreakyFwoof Sorry, it's these ones: ─ ━ │


@talon @jamiemccarthy @FreakyFwoof Unicode defines it, but barely any screen reader is sophisticated enough to do the proper thing, which would be to treat a block of those symbols like a font, because those symbols effectively are a font. It might say modifier bold 'text' once, rather than announcing the style for each letter. But that requires a heuristic that could quickly run into problems in any kind of dynamic flow, requiring an absolute ton of state and backtracking capability.
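
For what it's worth, the building blocks for that heuristic exist. A minimal sketch in Python (hypothetical, and assuming the run of styled symbols has already been segmented out, which is the hard, stateful part): Unicode character names in the Mathematical Alphanumeric Symbols block start with the style, and NFKC compatibility normalization recovers the plain letters.

```python
import unicodedata

def announce_styled_run(run: str) -> str:
    """Collapse a run of Mathematical Alphanumeric Symbols into a single
    style announcement plus the plain letters, instead of repeating the
    style for every character."""
    name = unicodedata.name(run[0], "")
    if not name.startswith("MATHEMATICAL"):
        return run  # not a styled run, read as-is
    # e.g. "MATHEMATICAL SANS-SERIF BOLD SMALL B" -> "mathematical sans-serif bold"
    style = name.split(" CAPITAL ")[0].split(" SMALL ")[0].lower()
    # NFKC compatibility normalization maps the styled code points
    # back to ordinary letters
    plain = unicodedata.normalize("NFKC", run)
    return f"{style} {plain}"

# "bold" written in the sans-serif bold variants (U+1D5EF etc.)
print(announce_styled_run("\U0001D5EF\U0001D5FC\U0001D5F9\U0001D5F1"))
# -> mathematical sans-serif bold bold
```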


@x0 @jamiemccarthy @FreakyFwoof Which kind of supports the initial point. No screen reader does it all well; there are quirks all across the board, from Apple's operating systems to Windows and Android to Linux. Making a speech system that can read all of that already seems very difficult, so a system that can also detect all the different ways in which Unicode can be used and abused to write text feels almost impossible.


@talon @jamiemccarthy @FreakyFwoof I'm unsure whether that combination of symbols even displays on most platforms, given that they're in some of the very recent blocks. The responsibility here should really fall to the API exposing these things, which would cover everything but basic text editors. These characters effectively carry font style metadata. Presumably the rendering engine translates them to some kind of monospace font under the hood, which would, I think, eventually resolve to regular glyph indexes. That is, the browser should convert a chain of these into a block of plain letters using the implied font.
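
As a concrete illustration (a Python sketch, not anything a browser actually does): the style really is carried in the code points themselves, and NFKC compatibility normalization already defines that chain-to-plain-letters mapping.

```python
import unicodedata

s = "\U0001D68C\U0001D68A\U0001D69D"     # "cat" written in the monospace variants
print(unicodedata.name(s[0]))            # MATHEMATICAL MONOSPACE SMALL C
print(unicodedata.normalize("NFKC", s))  # cat -- plain letters, style stripped
```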


@talon @jamiemccarthy @FreakyFwoof The generated markup that gets exposed to the accessibility tree, then, would be a span or something similar that specifies the font used, containing these letters. The screen reader may read out the font name if you have that setting on; otherwise it would just read the letters as-is.
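
Something like this, perhaps (a hypothetical Python sketch; the data-implied-font attribute is made up for illustration, not real browser output):

```python
import unicodedata
from html import escape

def styled_run_to_span(run: str) -> str:
    """Turn a run of styled symbols into markup a renderer could expose
    to the accessibility tree: plain letters plus a style attribute."""
    name = unicodedata.name(run[0], "")
    style = (name.split(" CAPITAL ")[0].split(" SMALL ")[0].lower()
             if name.startswith("MATHEMATICAL") else "normal")
    plain = unicodedata.normalize("NFKC", run)
    return f'<span data-implied-font="{escape(style)}">{escape(plain)}</span>'

print(styled_run_to_span("\U0001D5EF\U0001D5FC\U0001D5F9\U0001D5F1"))
# <span data-implied-font="mathematical sans-serif bold">bold</span>
```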


@x0 @jamiemccarthy @FreakyFwoof So what you're suggesting is something like a HarfBuzz equivalent for accessible text?


@talon @jamiemccarthy @FreakyFwoof I suppose; I don't quite know what HarfBuzz does. I'm thinking of a translation step in the existing rendering pipelines of apps like browsers, which support that kind of markup.


@talon @jamiemccarthy @FreakyFwoof And writing an NVDA add-on, for example, to do the conversion isn't quite as straightforward as you might think, as it requires storing a dictionary containing every single variation of those symbols, and then somehow tacking on attributes. If you just wanted to ignore the extra attributes and present them as letters, you could forcefully encode the text as ASCII, which would give the equivalent plain letters, but there are Greek letters in there too!
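
For the plain-letters half, at least, a hand-built dictionary may not be needed: Python's standard unicodedata module (NVDA add-ons are written in Python) already knows the mapping via NFKC. A sketch of both that and the Greek gotcha:

```python
import unicodedata

# NFKC maps every Mathematical Alphanumeric Symbol back to its base
# letter, so no hand-built dictionary is needed for this part.
print(unicodedata.normalize("NFKC", "\U0001D5EF\U0001D5FC\U0001D5F9\U0001D5F1"))  # bold

# ...but the block includes Greek, which normalizes to Greek, not ASCII:
alpha = unicodedata.normalize("NFKC", "\U0001D6C2")  # MATHEMATICAL BOLD SMALL ALPHA
print(alpha)                            # α (U+03B1 GREEK SMALL LETTER ALPHA)
print(alpha.encode("ascii", "ignore"))  # b'' -- forcing ASCII just drops it
```

Tacking the style back on as attributes would still be the part needing real work.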