submitted 7 months ago by jeffw@lemmy.world to c/technology@lemmy.world
[-] stevedidwhat_infosec@infosec.pub 2 points 7 months ago* (last edited 7 months ago)

I do not want that for anyone. AI is a tool that should be kept open to everyone, and trained with consent. But as soon as people argue that it's only a tool that can harm, that's where I draw the line. That, in my opinion, is when govts/ruling class/capitalists/etc start to put in BS "safeguards" to prevent the public from making use of the new power/tech.

I should have been more verbose and less reactionary/passive-aggressive in conveying my message; it's something I'm trying to work on, so I appreciate your cool-headed response here. I took the "you clearly don't know what Luddites are" as an insult to what I do or don't know. I was specifically trying to draw attention to the notion that AI is solely harmful as being fallacious and ignorant of the full breadth of the tech. Just because something can cause harm doesn't mean we should scrap it. It just means we need to learn how it can harm, and how to treat that. Nothing more. I believe in consent, and I do not believe in the ruling minority/capitalist practices.

Again, it was an off-the-cuff response. I made a lot of presumptions about their views without ever actually asking them to expand/clarify, and that was ignorant of me. I will update/edit the comment to improve my statement.

[-] hellothere@sh.itjust.works 2 points 7 months ago

AI is a tool that should be kept open to everyone

I agree with this principle; however, the reality is that given the massive computational power needed to run many (but not all) models, control of AI is in the hands of the mega corps.

Just look at what the FAANGs are doing right now, and compare it to what the mill owners were doing in the 1800s.

The best use of LLMs, right now, is for boilerplating initial drafts of documents. Those drafts then need to be reviewed, and tweaked, by skilled workers, ahead of publication. This can be a significant efficiency saving, but does not remove the need for the skilled worker if you want to maintain quality.

But what we are already seeing is CEOs, etc., deciding to take "a decision based on risk" to gut entire departments and replace them with a chatbot, which then ~~invents~~ hallucinates the details of a particular company policy, leading to a lower-quality service but significantly increased profits, because you're no longer paying to ensure quality.

The issue is not the method of production, it is who controls it.

[-] stevedidwhat_infosec@infosec.pub 1 points 7 months ago

I can see where you're coming from - however, I disagree with the premise that "the control of AI is in the hands of the mega corps". AI research has never been done solely by huge corps; it is carried out by researchers who publish their findings. There are several options out there right now for consumer-grade AI where you download models yourself and run them locally (Jan, PyTorch, TensorFlow, Horovod, Ray, H2O.ai, stable-horde, etc. - many of which come from FAANG companies, but are still, nevertheless, open source and usable by anyone; I've used several to make my own AI models).
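To make the "run and train models locally" point concrete, here is a minimal sketch of training a tiny model entirely on local hardware with PyTorch (one of the open-source libraries named above). It assumes PyTorch is installed; the synthetic dataset and every name in it are illustrative, not from any specific project. No cloud service or corporate API is involved.

```python
# Minimal sketch: fit a small model locally with PyTorch.
# Everything here (data, model, hyperparameters) is illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic dataset: learn y = 3x + 1 from noisy samples
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 3 * x + 1 + 0.1 * torch.randn_like(x)

model = nn.Linear(1, 1)  # a one-layer linear model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Full-batch gradient descent on local hardware
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(model.weight.item(), model.bias.item())  # close to 3 and 1
```

The same loop scales from a toy regression up to whatever your own machine can handle, which is the point: nothing about the workflow requires a mega corp's infrastructure.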

Consumers and researchers alike have an interest in making this tech available to all, not just businesses. The vast majority of the difficulty in training AI is obtaining datasets large enough, with enough orthogonal 'features', to ensure the model's efficacy. Namely, tasks like image generation, editing, and recognition (huge for the medical sector, including finding cancers and other problems), document creation (to your credit), speech recognition and translation (huge for the differently-abled community and for globe-trotters alike), and education (trained on huge public research datasets, public-domain books and novels, etc.) are still definitely feasible for consumer-grade usage and operation. There are also some really neat usages like federated TensorFlow and distributed TensorFlow, which allow for, perhaps obviously, distributed computation, opening the door to stronger models run by anyone who will serve them.
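The federated idea mentioned above can be sketched without any real networking stack. The following is a toy NumPy simulation of federated averaging (the core idea behind frameworks like TensorFlow Federated), not the actual API of any library: several "clients" each take a few gradient steps on their own private data, and a "server" averages the resulting models. All names and data are made up for illustration.

```python
# Toy simulation of federated averaging (FedAvg) with NumPy.
# Real federated frameworks handle networking, privacy, and scheduling;
# this only shows the averaging idea.
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n=50):
    # Each client's private data follows y = 2x + 0.5 plus noise
    x = rng.uniform(-1, 1, size=n)
    y = 2 * x + 0.5 + 0.05 * rng.normal(size=n)
    return x, y

clients = [make_client_data() for _ in range(5)]

w, b = 0.0, 0.0  # global model parameters
lr = 0.1

for _ in range(100):  # communication rounds
    local_params = []
    for x, y in clients:
        lw, lb = w, b       # client starts from the current global model
        for _ in range(5):  # a few local gradient steps on private data
            err = lw * x + lb - y
            lw -= lr * 2 * np.mean(err * x)
            lb -= lr * 2 * np.mean(err)
        local_params.append((lw, lb))
    # Server averages the local models; raw data never leaves a client
    w = np.mean([p[0] for p in local_params])
    b = np.mean([p[1] for p in local_params])

print(w, b)  # close to 2 and 0.5
```

The appeal for non-corporate use is visible even in the toy version: each participant contributes compute and data, but only model parameters are shared.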

I just do not see the point in admitting total defeat/failure for AI because some of the asshole greedy little pigs in the world are also monetizing/misusing the technology. The cat is out of the bag, in my opinion; the best (not only) option forward is to bolster consumer-grade implementations, encouraging things like self-hosting and local operation/execution, and to create minimally viable guidelines to protect consumers from each other. Seatbelts. Brakes. Legal recourse for those who harm others with said technology.

[-] hellothere@sh.itjust.works 0 points 7 months ago

I think we're talking past each other. You seem to be addressing a point I have not made.

A piece of technology is not something that exists outside of a political context. As an example, your repeated use of "consumer" as a term for individuals is interesting to note.

Why do you view these people as consumers, rather than producers? Where is the power in that relationship? How does that implication shape the rest of your point?

[-] stevedidwhat_infosec@infosec.pub 1 points 7 months ago

Look man, I'm an adult, you may talk to me like one.

I used the term "consumer" when discussing things from a business standpoint, i.e. we're talking about big businesses and their implementations of technology. It's also in part due to the environment I live in.

You've also dodged my whole counterpoint to bring up a new point you could argue.

I think we're done with this convo, tbh. You're moving goalposts and trying to muddy the waters.

[-] hellothere@sh.itjust.works 2 points 7 months ago

I'm not moving the goalposts; I have consistently been talking about workers resisting the capture of their income by businesses mass-producing items at lower quality.

Your previous comment characterising individuals as only consumers is what I was continuing to challenge within the above context.

Either way, have a good weekend.

this post was submitted on 09 May 2024
102 points (77.1% liked)
