AI crap - Why ML will make the world worse, not better
(drewdevault.com)
Top quality luddite opinions right here. Plenty of fear and opprobrium being directed at the technology, while taking kleptocratic capitalism and kakistocracy as a given that can't be challenged.
That seems to be the theme of the era.
Yes, it is incompatible with the status quo. That's a good thing. The status quo is unsustainable. The status quo is on course to kill us all.
The only real danger AI brings is it will let our current corrupt leaders and corrupt institutions be more efficient in their corruption. The problem there is not the AI; it's the corruption.
Improving human efficiency is essentially the purpose of technology after all. Any new invention will generally have this effect.
What? It's the sociological definition of technology: a cultural tool used by a community to make a given task easier, faster, or more efficient.
Efficiency is an extremely broad term.
What's your counter definition of technology and efficiency that is leading you to disagree?
Extra spicy take: The Luddites were right. They were really always about opposing unethical use of technology, people who use their name as an insult were always all about "progress over people", and you should never feel bad for being called a Luddite.
Extra spicy, but the kind of spicy you get on your anus after shitting out a too-hot meal
There are definitely people who are harmed by FUD like this. For example the current writers strike, which has 11,000 people putting down tools... indefinitely shutting down global movie productions that employ millions of people and leaving them unemployed for who knows how long.
I don't have anything against you or your colleagues. You've got every right to strike if that's what you want to do.
But there are millions of people being harmed by the strike. That's a simple fact.
Journalists/etc need to do their job and provide good, balanced information on critical issues like this one. FUD like Drew DeVault posted inflames the debate and makes it nearly impossible for reasonable people to figure out what to do about Large Language Models... because like it or not, they exist, and they're not going away.
PS: while I'm not a film writer, I am paid to spend my day typing creative works and my industry is also facing upheaval. I also have friends who work in the film industry, so I'm very aware and sympathetic to the issues.
Simple example: a lot of artists would like their images not to be used for AI training and would like legislation to prevent that. The problem is that such legislation would grant a monopoly on AI to the Googles, Facebooks and Adobes of this world, as they are already sitting on mountains of data and have ToS that allow them to use it for training. Any open source project that doesn't have the data and would need to rely on web scraping would be illegal.
That's the issue. A lot of criticism of AI is extremely short-sighted and ignorant, often not even understanding the very basics of how it all works.
Another more fundamental problem: What are you going to do? AI is just a collection of algorithms and math. Do you want to outlaw math? Force humans to use less efficient tools? Technological progress is not something you can easily control, especially not in advance when you don't even know what changes it will bring.
We did, and nothing ever came of it. Projects like https://freenet.org/ or https://freedombox.org/ have been around for a decade or two. But the masses want convenience.
You are spreading FUD right in this post: "poorly regurgitating our shit without our permission"
You wanna be taken seriously? Stop repeating the same nonsense as everybody else.
And as for "replace us at our jobs", that's not a problem, that's called progress. If you want UBI or something along those lines, go fight for that; don't make stupid arguments against AI.
Well, maybe. I'm still waiting to read one that isn't, since everybody just keeps repeating the same nonsense, including you right now.
The only sensible one I've heard so far was from Hinton, who simply suggested putting about as much money into AI safety research as we do into AI research. Nothing wrong with that, though it will probably just show us more ways in which AI can go wrong and less on how to prevent it.
So you want it changed in such a way that it grants Google, Adobe and co. exclusive ownership over powerful AI models and kills all open source efforts? Congratulations for proving my point.
And you wonder why nobody takes you people seriously...
Not everything anti-AI is luddite; some of it is just poorly thought through or downright incorrect. This, though, is absolutely a luddite take.
But there's also real danger here today.
For example, ‘Life or Death:’ AI-Generated Mushroom Foraging Books Are All Over Amazon
These are easily avoidable problems. There are reputable authors on any topic, and why would a self-published foraging book by some random person be any better than an AI-generated one? You buy books written by experts, especially when it's about life or death.
"Easily avoidable" if you know to look for them or if they're labelled appropriately. This was just an example of a danger that autocomplete AI is creating today. Unscrupulous people will continue to shit out AI generated nonsense to try to sell when the seller does zero vetting of the products in their store (one of the many reasons I no longer shop at Amazon).
Many people, especially beginners, are not going to take the time to fully investigate their sources of knowledge, and to be honest they probably shouldn't have to. If you get a book about mushrooms from the library, you can probably assume it's giving valid information as the library has people to vet books. People will see Amazon as being responsible for keeping them safe, for better or worse.
I agree that generally there is a bunch of nonsense about ChatGPT and LLM AIs that isn't really valid, and we're seeing some amount of AI bubble happening where it's a self feeding thing. In the end it will shake out, but before that all happens you have some outright dangerous and harmful things occurring today.
I mean, people should at least check if the publisher is reliable for any information source.
People should learn how to approach information sources. If that’s not happening AI doesn’t really matter for this discussion.
Because that’s exactly the point.
No I’m just saying that this problem is not a new “AI problem™️” but a basic problem with media literacy that merely gained a new aspect.
I think the idea is that someone buying a basic book on foraging mushrooms isn't going to know who the experts are.
They're going to google it, and they're going to find AI-generated reviews (with affiliate links!) of AI-generated foraging books.
Now, if said AI is generating foraging books more accurate than humans, that's fine by me. Until that's the case, we should be marking AI-generated books in some clear way.
The problem is, the LLM AIs we have today literally cannot do this because they are not thinking machines. These AIs are beefed-up autocompletes without any actual knowledge of the underlying information being conveyed. The sentences are grammatically correct and read (mostly) like we would expect human written words to read, however the actual factual content is non-existent. The appearance of correctness just comes from the fact that the model was trained on information that was (probably mostly) correct in the first place.
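The "beefed-up autocomplete" point can be made concrete with a toy sketch. This is a trivial bigram model, nothing like a real transformer, and the names (`following`, `generate`) and the mushroom corpus are made up for illustration — but it shows the core issue: text generated purely from word co-occurrence statistics reads fluently while carrying no model of whether it is true.

```python
import random
from collections import defaultdict

# Toy bigram "autocomplete": each next word is chosen based only on
# which words followed the current word in the training text.
# The model tracks word statistics, not facts.
corpus = (
    "the red mushroom is edible and the red mushroom is tasty "
    "the white mushroom is deadly and the white mushroom is toxic"
).split()

# Count which words follow which in the corpus.
following = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur].append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` words, always picking a statistically
    plausible successor -- with zero regard for accuracy."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        nxt = following.get(words[-1])
        if not nxt:
            break
        words.append(random.choice(nxt))
    return " ".join(words)

print(generate("the", 8))
# The output is grammatical-looking, but the model can just as easily
# describe the deadly mushroom as "tasty" -- it only knows which words
# tend to appear together, not what any of them mean.
```

Scale this up by a few billion parameters and a web-sized corpus and you get much better fluency, but the same underlying mechanism: plausible continuations, not verified knowledge.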
I mean, we should still be calling these things algorithms and not "AI" as "AI" carries a lot of subtext in people's minds. Most people understand "algorithms" to mean math, and that dehumanizes it. If you call something AI, all of a sudden people have sci-fi ideas of truly independent thinking machines. ChatGPT is not that, at all.
I agree. And ML may never be able to cross that line.
That said, we've been calling it AI for decades now. It was weird enough to me when people started using ML more. I remember the AI classes I took in college, and the AI experts I met in my jobs. Then one day it was "just ML". In most situations, it's the same darn thing.
It's literally baked into the models themselves. AI will reinforce kleptocratic capitalism and kakistocracy as you so aptly put it because the very data it's trained on is a slice of the society it resembles. People on the internet share bad, racist opinions and the bots trained on this data do the same. When AI models are put in charge of systems because it's cheaper than putting humans in place, the systems themselves become entrenched in status-quo. The problem isn't so much the technology itself, but how the technology is being rolled out, driven by capitalistic incentives, and the consequences that brings.
*snicker* drewdevault is an avid critic of capitalism. That's entirely the point of this post, actually.
Then it is horrifically badly written. Maybe get an AI to give it a once over?