I’ve published Part 3 of “I Want to Love Linux. It Doesn’t Love Me Back.”
This one’s about the so-called universal interface: the console. The raw, non-GUI, text-mode TTY. The place where sighted Linux users fall back when the desktop breaks, and where blind users are supposed to do the same. Except — we can’t. Not reliably. Not safely. Not without building the entire stack ourselves.
This post covers Speakup, BRLTTY, Fenrir, and the audio subsystem hell that makes screen reading in the console a game of chance. It dives into why session-locked audio breaks espeakup, why BRLTTY fails silently and eats USB ports, why the console can be a full environment — and why it’s still unusable out of the box. And yes, it calls out the fact that if you’re deafblind, and BRLTTY doesn’t start, you’re just locked out of the machine entirely. No speech. No visuals. Just a dead black box.
There are workarounds. Scripts. Hacks. Weird client.conf magic that you run once as root, once as a user, and pray to PipeWire that it sticks. Some of this I learned from a reader of post 1. None of it is documented. None of it is standard. And none of it should be required.
This is a long one. Technical, and very real. Because the console should be the one place Linux accessibility never breaks. And it’s the one place that’s been left to rot.
Link to the post: https://fireborn.mataroa.blog/blog/i-want-to-love-linux-it-doesnt-love-me-back-post-3-speakup-brltty-and-the-forgotten-infrastructure-of-console-access/
@fireborn Haha well written and sadly accurate as always.
On my setup though, weirdly enough, I see one big difference from yours when trying to use espeakup and PipeWire/PulseAudio at the same time.
Instead of locking espeakup out, it does the opposite: espeakup is the only thing allowed to talk, because it grabs onto the ALSA device like sticky glue and won't ever let go of it. Meanwhile, my audio session itself can't make any sound at all.
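For what it's worth, the classic ALSA-side workaround for one client hogging a raw `hw` device is a dmix setup, so espeakup and the session mix into the same card instead of fighting over exclusive access. A minimal sketch of an `/etc/asound.conf` (the card address `hw:0,0` and the `ipc_key` value are assumptions for your hardware, and I haven't verified this against espeakup specifically):

```
# Route the default PCM through a software mixer so
# multiple clients can play at once.
pcm.!default {
    type plug
    slave.pcm "dmixer"
}

pcm.dmixer {
    type dmix
    ipc_key 1024        # any unique key; shared by all mixing clients
    slave.pcm "hw:0,0"  # adjust to your actual card/device
}
```

Whether PipeWire's ALSA emulation plays along with this is another question entirely, which is kind of the whole point of the post.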
I've been wanting to build a hardware synth for years so that we could just use it with both Speakup and Orca, but the project went almost nowhere after an attempted port of espeak-ng to a Cortex-M7 microcontroller that was never completed.
I'd want to make this available for a low cost obviously, not like the hardware synths of the 80s.
@xogium I made a similar thing using a Pi Zero and a WM8960 codec chip with 2-ohm speakers.
@fireborn Nice. I'd use an MCU over a Linux-capable board if I ever manage to build it, for several reasons.
One, it would boot much, much, much faster and be capable of realtime. Two... well, relying on a system that is notorious for destroying your audio whenever it feels like it? Yeah. Nope.
I totally understand why you went that route, but for an actual viable product I'd definitely go with a microcontroller.
@xogium I went that route because I was a broke high-school student at the time, and just needed to be able to read kernel boot logs and the initramfs shell.
@fireborn Hehehe yeah like I said, totally fair.
One thing this synth I still want to build could also do is not just read you the console via the Speakup protocol: you could plug it into a system that has no such thing, like an Arduino project emitting data over serial, or really any plain text coming over whatever serial port, and it would attempt to read it as-is. It might sound a bit weird, and you'd have little control over the output, if any, but you could still get an idea of what's happening.
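The "read whatever plain text arrives on the serial line" idea can be sketched on the host side in a few lines of Python. This is a hypothetical illustration, not the device firmware: it reads lines from any file-like stream and shells out to espeak-ng (assumed installed); the `speak` hook exists so the speaking step can be swapped out or tested.

```python
import subprocess

def speak_lines(stream, speak=None):
    """Read plain text lines from a serial-like stream and voice each one as-is.

    `stream` is any iterable of lines (a pyserial port, a file, a StringIO).
    `speak` is an optional callable replacing the default espeak-ng call.
    Returns the list of non-empty lines that were spoken.
    """
    spoken = []
    for raw in stream:
        # Serial data may arrive as bytes; normalize to text.
        if isinstance(raw, bytes):
            raw = raw.decode("utf-8", errors="replace")
        text = raw.strip()
        if not text:
            continue  # skip blank lines rather than speaking silence
        if speak is None:
            # Assumes the espeak-ng binary is on PATH.
            subprocess.run(["espeak-ng", text], check=False)
        else:
            speak(text)
        spoken.append(text)
    return spoken
```

With pyserial installed you could feed it a real port, e.g. `speak_lines(serial.Serial("/dev/ttyUSB0", 115200))` (device path and baud rate are assumptions). A microcontroller build would do the same loop in firmware, minus the OS in the middle.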