I'm happy that we're getting more and more lifelike text-to-speech voices using AI, but here's something you might not know. These AI-based text-to-speech voices can be unpredictable. It's not that they say things wrong or mispronounce more than other speech synths do, but what definitely does happen is that they don't say the same string of text the same way twice. They might change the intonation, or even sometimes the speed of certain syllables, from utterance to utterance. I use my screen reader with the speed set very, very fast. Often I don't pay conscious attention to exactly what words are spoken, because I've gotten so used to the text-to-speech voices I use that my brain does this subconsciously. They have certain patterns that I can recognize, and these tell me what the synth just said without my having to understand every single syllable or word. This is important for reading short texts like names of buttons, window titles, web addresses, messages, usernames, etc.
I much prefer very algorithmic, synthetic speech for this. Not only is it very predictable in how it pronounces things, but it also handles being sped up much better. If you speed up, for example, Google's WaveNet voices, they start slurring words. That's obviously no good at all. It's authentic, sure, but it's annoying to me. I'm happy to use AI speech, for example the Siri voices that come with the new macOS, if I'm reading something longer like a book or a story. But for everyday use? No thanks. I think it's important that we don't get too carried away here. If I had the choice, I would choose a non-natural voice, and that by quite a big margin. Here's your fun fact of the day!
And let's not even talk about code. A natural voice reading code just... doesn't work. It feels totally wrong. I need to navigate through code very fast. Not only do AI voices have quite a bit of latency, but when I'm quickly scrolling through a file, I'm listening for familiar sequences of sounds just as much as for the actual words. AI-based TTS voices don't offer that, because things come out ever so slightly different each time.
This also means that cloud-based anything is absolutely out. If you're making web requests to get your screen reader to speak, then stop right now. I won't use it, you wouldn't use it, nobody would use it. I guess Apple can do this on their new devices because of the M1 platform, but even there you can absolutely feel the delay between pressing a key and the voice reacting to what you've done. The simpler the TTS, the faster the response time, the happier I am.
@talon Same here! And I think most screen reader users might agree.
This is why people are, after all these years, still using Eloquence. Because it just works! It reacts to various punctuation but doesn't turn every single string of text into an emotional experience. eSpeak gets close to that, at least with the inflection lowered a bit.
I wish there were more people creating synths like that now, things that are both fast and pleasant to listen to. Human-like voices are useful, and I'm happy even sighted people have started to use them for reading articles, but they can't replace everything.
@Mayana @talon Ugh, I've at least made modest peace with Microsoft's OneCore voices, but they randomly, and through no pattern I've discovered, replace "as" with "American Samoa". I'm not at all down with a voice that does this. And no, AFAICT it isn't "AS" vs. "as"; they just randomly swap one for the other. That's... disturbing, to say the least. Our aural culture is bad enough already.
@Mayana @talon OK, I've heard "American Samoa" before, and I can't find a string that triggers it. But this one triggers something interesting for me using Microsoft Mark and US English; I can't say what it would do with other combos:
10 The passage.mp3
I mean, that's a filename, so part of me is like "Whatever." But that's... a lot of intelligence to apply to how a string is presented, and I don't like TTS engines doing that. Mispronounce something predictably; don't add a million special cases that totally change the meaning of what is spoken without any indication that those changes are being made.
@nolan @Mayana oh my god, that made me snort. Yeah, that's awful. I do not like the OneCore voices. They're just... slow. On Windows I'm definitely guilty... and use Eloquence. On other systems it's either eSpeak or Vocalizer.
You know, I would be happy with eSpeak, but to me it just sounds too metallic and sharp. What I like about Eloquence is that it has a relatively warm voice tone, which makes it easier to listen to all day. eSpeak is much sharper, and its consonants have a weird attack. They stick out. Eloquence, and most other voices, soften them a lot. I prefer that.
@Mayana @talon Fair, Eloquence certainly does do that. But I don't think it's fair to track that all the way down. For all its flaws, the original DECtalk had some interesting quirks itself. My issue is that as those quirks get more sophisticated, they also get harder to learn your way around. I hate to assume people are stupid, but blind people are going to end up with some odd beliefs around American Samoa or the Northern Mariana Islands, and gods know what else, if TTS keeps interjecting itself into that pathway.