The fact that Apple's image recognition is usually more descriptive than Facebook's suggests to me that the accepted truth that ML must be done on servers is not as true as it seems. Perhaps, rather than chucking all possible data at a model, higher-quality data and higher-quality algorithms are the key. I mean, Apple's model runs on the *device*. Sure, it has to "learn" through software updates, but that doesn't stop it from working extremely well. If AI is to be the future, then Apple is the only company using it for accessibility with any real effectiveness.
@devinprater Agreed! On a related note: I’m quite pleased with the relatively privacy-friendly option of on-device image recognition that Apple devices offer. I was unaware of this functionality until I switched away from Android and was surprised to find that I could search for, say, “card” and have the Photos app instantly return an image of a birthday card I had taken several months ago. I don’t know if more recent versions of Android do this without having to upload images to Google servers.
@algeiaband Yeah, I believe Samsung or Google has an AI core, so they *could* do this if they wanted.
@devinprater I'm genuinely curious how that works. Like, I thought that for accessibility AI to get better at stuff, it would kind of need to collect and store data, right? Like, if a server isn't holding all that training data, how can an AI get better?
@csepp @devinprater I'm confused, because the internet is just one big user-generated pool of servers. If it's not coming from the devices, it's coming from the internet, right? Where people upload content? I guess I don't get why people freak out when stuff is on servers, because what do people think the internet is? A bunch of connected servers.
@blindscribe Sure, for it to get better, the model would have to get updated. But for the model just to *work*, that doesn't matter. Apple's image recognition model is only about 145 MB, and Screen Recognition is about 16 MB. And in these cases, retraining with more data doesn't have to happen very often. Also, learning happens on-device, so it learns from *your* data, like which apps you open most and at what time of day. I mean, the iPhone is a very powerful computer, and all it needs to learn from is you, which it's more than capable of doing.
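The split being described above, where training needs the big dataset but the shipped model is just a small frozen artifact the device can run offline, can be sketched with a toy example. Everything here (the threshold classifier, the function names) is made up purely for illustration and has nothing to do with Apple's actual models:

```python
# Toy illustration of the training/inference split: "server-side" training
# sees the labeled data, but the artifact that ships is only the learned
# parameters, which run entirely offline. All names here are hypothetical.
import pickle


def train(samples):
    """'Server-side': learn a 1-D threshold classifier from labeled data."""
    # samples: list of (feature, label) pairs, label in {0, 1}
    positives = [x for x, y in samples if y == 1]
    negatives = [x for x, y in samples if y == 0]
    threshold = (min(positives) + max(negatives)) / 2
    return {"threshold": threshold}


def export_model(model):
    """Freeze the model into a small blob, like a shipped model file."""
    return pickle.dumps(model)


def predict(blob, x):
    """'On-device': inference needs only the frozen blob, not the dataset."""
    model = pickle.loads(blob)
    return 1 if x >= model["threshold"] else 0


# Train once with the full dataset...
data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
blob = export_model(train(data))

# ...then classify offline using only the tiny blob.
print(len(blob) < 1024)     # the artifact is tiny compared to the data
print(predict(blob, 0.95))  # → 1
print(predict(blob, 0.05))  # → 0
```

The point of the sketch is just that `predict` never touches `data`: once the weights are frozen and shipped, the device can classify without any server, and updating the model is a separate, less frequent event.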
@devinprater The interesting thing to me is how quick people are to accept that we need to centralize. In the days of mainframes, it was the accepted truth that everything had to be done on the mainframe. Read the SF of the day and everyone just has terminals in their homes that connect to the central computer. Nobody predicted the PC. And then everyone predicted that it would die (Sun spent years selling thin clients to nobody). Now everyone is sure AI means mainframes all over again.