444 points (94.2% liked)
submitted 3 months ago by Stern@lemmy.world to c/technology@lemmy.world
[-] ptz@dubvee.org 184 points 3 months ago

Let's go, already!

How you can help: If you run a website and can filter traffic by user agent, get a list of the known AI scrapers' user-agent strings and selectively redirect their requests to pre-generated AI slop. Regular visitors will see the real content, while the LLM scraper bots will scrape their own slop and, hopefully, train on it.
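
A minimal sketch of that filter, assuming a Flask app (the user-agent substrings, file names, and route are illustrative; a real deployment would use a maintained crawler list and cover every route):

```python
# Hypothetical sketch: send known AI crawlers to a decoy page of
# pre-generated slop and serve everyone else the real content.
# The user-agent substrings below are illustrative, not exhaustive.
from flask import Flask, request

app = Flask(__name__)

AI_SCRAPER_AGENTS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

@app.route("/")
def index():
    ua = request.headers.get("User-Agent", "")
    if any(bot in ua for bot in AI_SCRAPER_AGENTS):
        return app.send_static_file("slop.html")   # decoy for scrapers
    return app.send_static_file("index.html")      # real page for humans
```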

[-] azl@lemmy.sdf.org 54 points 3 months ago

This would ideally become standardized among web servers with an option to easily block various automated aggregators.

Regardless, all of us combined are a grain of rice compared to the real meat and potatoes AI trains on - social media, public image storage, copyrighted media, etc. All those sites with extensive privacy policies that are signing contracts to license their content for training.

Without laws (and I'm not sure I support anything in this regard yet), I do not see AI progress slowing. Clearly, inbreeding AI models has an effect similar to inbreeding in nature. Fortunately, there is enough original digital content out there that this does not need to happen.

[-] ptz@dubvee.org 28 points 3 months ago* (last edited 3 months ago)

Regardless, all of us combined are a grain of rice compared to the real meat and potatoes AI trains on

Absolutely. It's more a matter of principle for me. Kind of like the digital equivalent of leaving fake Amazon packages full of dog poo out front to make porch pirates have a bad day.

[-] kevindqc@lemmy.world 17 points 3 months ago

They'll just start using a Chrome user agent

[-] Deebster@programming.dev 13 points 3 months ago

Only if enough people do it. Then again, loads of scrapers outside of AI already pretend to be normal browsers.

[-] barsquid@lemmy.world 118 points 3 months ago

It is their own fault for poisoning the internet with their slop.

[-] db2@lemmy.world 55 points 3 months ago

In case anyone doesn't get what's happening, imagine feeding an animal nothing but its own shit.

[-] BassTurd@lemmy.world 19 points 3 months ago

Not shit, but isn't that what brought about mad cow disease? Farmers were feeding cattle brain matter that contained infectious prions. Idk if it was cows eating cow brains or other animals though.

[-] _cnt0@sh.itjust.works 17 points 3 months ago

It was the remains of fish, which we ground into powder and fed to other fish and sheep, whose remains we ground into powder and fed to other sheep and cows, whose remains we ground into powder and fed to other cows.

[-] Stern@lemmy.world 18 points 3 months ago

I use the "Sistermother and me are gonna have a baby!" example personally, but I am an awful human, so...

[-] leftzero@lemmynsfw.com 13 points 3 months ago

A photocopy of a photocopy is my go-to metaphor for model collapse.

[-] EgoNo4@lemmy.world 80 points 3 months ago

More like... Degenerative AI *ba dum tsss*

[-] CarbonatedPastaSauce@lemmy.world 70 points 3 months ago

Model collapse is just a euphemism for “we ran out of stuff to steal”

[-] Snowclone@lemmy.world 34 points 3 months ago* (last edited 3 months ago)

It's more "we are so focused on stealing and eating content, we're accidentally eating the content we or other AIs made, which is basically like incest for AI, and they're all inbred to the point they don't even know people have more than two thumb-shaped fingers anymore."

[-] Adderbox76@lemmy.ca 56 points 3 months ago

Every single one of us, as kids, learned the concept of "garbage in, garbage out"; most likely in terms of diet and food intake.

And yet every AI cultist makes the shocked Pikachu face when they figure out that trying to improve your LLM by feeding it data generated by the very same inferior LLM you're trying to improve is an exercise in diminishing returns and generational degradation in quality.

Why has the world gotten both "more intelligent" and yet fundamentally more stupid at the same time? Serious question.

[-] LANIK2000@lemmy.world 29 points 3 months ago

Because the people with power funding this shit have pretty much zero overlap with the people making this tech. The investors saw a talking robot that aced school exams and could make images and videos, and just assumed it meant we'd have artificial humans in the near future, and, as always, ruined another field by flooding it with money and corruption. These people only know the word "opportunity", but don't have the resources or willpower to research that "opportunity".

[-] GamingChairModel@lemmy.world 21 points 3 months ago

Why has the world gotten both "more intelligent" and yet fundamentally more stupid at the same time? Serious question.

Because it's not actually always true that garbage in = garbage out. DeepMind's AlphaZero trained itself from a very bad chess player to significantly better than any human has ever been, simply by playing chess games against itself and updating its parameters for evaluating which positions were better than which. All the system needed was the rule set for chess, a way to define winners, losers, and draws, and then a training procedure that optimized for winning rather than drawing, and drawing rather than losing if a win was no longer available.
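
As a toy illustration of that self-play loop (everything here is made up for the example: one-pile Nim in place of chess, and a tabular value function in place of AlphaZero's network and tree search):

```python
import random
from collections import defaultdict

# Toy self-play learner for one-pile Nim: take 1-3 stones per turn,
# and taking the last stone wins. A value table stands in for the network.
values = defaultdict(float)   # stones remaining -> value for player to move
EPS, LR = 0.2, 0.1            # exploration rate, learning rate

def best_move(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPS:
        return random.choice(moves)
    # Leave the opponent in the position we value lowest for them.
    return min(moves, key=lambda m: values[stones - m])

def play_and_learn():
    stones, history = random.randint(4, 20), []
    while stones > 0:
        history.append(stones)
        stones -= best_move(stones)
    outcome = 1.0   # whoever moved last took the final stone and won
    for state in reversed(history):
        values[state] += LR * (outcome - values[state])
        outcome = -outcome   # flip perspective each ply back

for _ in range(20_000):
    play_and_learn()

# values[k] should drift toward -1 for multiples of 4 (lost positions)
# and toward +1 otherwise, learned purely from self-play outcomes.
```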

Face swaps and deep fakes in general relied on adversarial training as well, where they learned how to trick themselves, then how to detect those tricks, then improve on both ends.

Some tech guys thought they could bring that adversarial dynamic for improving models to generative AI, where they could train on inputs and improve over those inputs. But the problem is that there isn't a good definition of "good" or "bad" inputs, and so the feedback loop in this case poisons itself when it starts optimizing on criteria different from what humans would consider good or bad.

So it's less like other AI type technologies that came before, and more like how Netflix poisoned its own recommendation engine by producing its own content informed by that recommendation engine. When you can passively observe trends and connections you might be able to model those trends. But once you start actually feeding back into the data by producing shows and movies that you predict will do well, the feedback loop gets unpredictable and doesn't actually work that well when you're over-fitting the training data with new stuff your model thinks might be "good."

[-] RmDebArc_5@sh.itjust.works 47 points 3 months ago

This sounds like AI is literally biting its own tail

[-] AbidanYre@lemmy.world 24 points 3 months ago

ChatGPT, what is an ouroboros?

[-] casmael@lemm.ee 37 points 3 months ago

…………………. Good?

[-] aggelalex@lemmy.world 34 points 3 months ago

So AI:

  1. Scraped the entire internet without consent
  2. Trained on it
  3. Polluted it with AI-generated rubbish
  4. Trained on that rubbish without consent
  5. Is now in need of a lobotomy
[-] thejml@lemm.ee 30 points 3 months ago
[-] Telorand@reddthat.com 17 points 3 months ago

Oh, the artificial humanity!

[-] Davel23@fedia.io 11 points 3 months ago

Are you confusing the Habsburg Dynasty with the Hindenburg?

[-] Deebster@programming.dev 15 points 3 months ago

Perhapsburg they are

[-] Telorand@reddthat.com 10 points 3 months ago

No, I just thought they were vaguely similar enough words to make a dumb internet joke.

[-] rickdg@lemmy.world 29 points 3 months ago

Old news? This seems to have been the subject of several papers for some time now. Synthetic data has already been used successfully in very specific domains.

[-] BlackLaZoR@fedia.io 29 points 3 months ago

So they made garbage AI content without any filtering for errors, fed that garbage to the new model, and it turned out to produce more garbage. Incredible discovery!

[-] RunningInRVA@lemmy.world 19 points 3 months ago

Indeed. They discovered that:

shit in = shit out.

[-] gravitas_deficiency@sh.itjust.works 28 points 3 months ago* (last edited 3 months ago)

Uh, good.

As an engineer who cares a LOT about engineering ethics, it is absolutely fucking infuriating watching the absolute firehose of shit that comes out of LLMs and public-consumption audio, image, and video ML systems, juxtaposed with the outright refusal of companies and engineers who work there to accept ANY accountability or culpability for the systems THEY FUCKING MADE.

I understand the nuances of NNs. I understand that they’re much more stochastic than deterministic. So, you know, maybe it wasn’t a great idea to just tell the general public (which runs a WIDE gamut of intelligence and comprehension ability - not to mention, morality) “have at it”. The fact that ML usage and deployment in terms of information generating/kinda-sorta-but-not-really-aggregating “AI oracles” isn’t regulated on the same level as what you’d see in biotech or aerospace is insane to me. It’s a refusal to admit that these systems fundamentally change the entire premise of how “free speech” is generated, and that bad actors (either unrepentantly profit driven, or outright malicious) can and are taking disproportionate advantage of these systems.

I get it - I am a staunch opponent of censorship, and I say this as a software engineer. But the flippant deployment of literally society-altering technology, alongside the outright refusal to accept any responsibility, accountability, or culpability for what that technology does to our society, is unconscionable and infuriating to me. I am aware of the potential that ML has - it's absolutely enormous, and could absolutely change a HUGE number of fields for the better in incredible ways. But that's not what it's being used for, and it's because the field is essentially unregulated right now.

[-] pyre@lemmy.world 27 points 3 months ago* (last edited 3 months ago)

oh no are we gonna have to appreciate the art of human beings? ew. what if they want compensation‽

[-] ohellidk@sh.itjust.works 25 points 3 months ago

Cool, let's try to ruin it faster!

[-] draughtcyclist@lemmy.world 23 points 3 months ago

I've been assuming this was going to happen since it's been haphazardly implemented across the web. Are people just now realizing it?

[-] DeathbringerThoctar@lemmy.world 38 points 3 months ago

People are just now acknowledging it. Execs tend to have a disdain for the minutiae. They're like kids that only want to do the exciting bits. As a result things get fucked because they don't really understand what they're doing. As Muskrat would say "move fast and break things." It's a terrible mindset.

[-] pixxelkick@lemmy.world 11 points 3 months ago

"Move Fast and Break Things" is Zuckerberg/Facebook motto, not Musk, just to note.

[-] FaceDeer@fedia.io 14 points 3 months ago

No, researchers in the field knew about this potential problem ages ago. It's easy enough to work around and prevent.

People who are just on the lookout for the latest "aha, AI bad!" headline, on the other hand, discover this every couple of months.

[-] Lettuceeatlettuce@lemmy.ml 18 points 3 months ago
[-] Hugin@lemmy.world 18 points 3 months ago

The solution for this is usually counter-training. Granted, my experience is on the opposite end: training AI vision systems to ID real objects.

So you train up your detector AI on hand-tagged images. When it gets good, you use it to train a generator AI until the generator is good at fooling the detector.

Then you train the detector on new tagged real data plus the new AI-generated data. Once it's good at detection again, you train the generator AI against the new detector.

Repeat several times and you usually get a solid detector, and a good generator as a side effect.

The thing is, you need new real human-tagged data for each new generation. None of the companies want to generate new human-tagged data sets, as it's expensive.
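
A rough sketch of that loop (the classes are empty stand-ins invented for illustration, not a real training API; the point is the data flow and where fresh human-tagged data has to enter each round):

```python
# Schematic counter-training loop. Detector/Generator are placeholder
# classes made up for this example -- no real learning happens here.
class Detector:
    def fit(self, real_samples, fake_samples):
        pass  # train to separate human-tagged data from generated data

class Generator:
    def fit(self, detector):
        pass  # train to produce samples the detector accepts as real

    def sample(self, n):
        return [object() for _ in range(n)]  # placeholder outputs

def counter_train(rounds, get_fresh_human_data):
    detector, generator = Detector(), Generator()
    for r in range(rounds):
        real = get_fresh_human_data(r)  # NEW human-tagged data: the costly part
        fakes = generator.sample(len(real))
        detector.fit(real, fakes)   # detector learns the generator's tricks
        generator.fit(detector)     # generator learns to fool the new detector
    return detector, generator
```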

[-] TheReturnOfPEB@reddthat.com 18 points 3 months ago

have we tried feeding them actual human beings yet?

[-] NotInTheFace@lemmy.world 16 points 3 months ago

Looks like that artist who drew self-portraits as his Alzheimer's got worse and worse.

[-] NocturnalMorning@lemmy.world 13 points 3 months ago

It's basically AI Alzheimer's

[-] levzzz@lemmy.world 13 points 3 months ago

Fake news, just like that one time Nightshade "killed" Stable Diffusion (it literally had no effect). Flux came out not long ago and it's better than ever.

[-] pastermil@sh.itjust.works 12 points 3 months ago

More like degenerative AIs

[-] SkyNTP@lemmy.ml 12 points 3 months ago

I think anyone familiar with the laws of thermodynamics could have predicted this outcome.

[-] PrivacyDingus@lemmy.world 12 points 3 months ago

this headline truly is threatening me with a good time

[-] nullPointer@programming.dev 11 points 3 months ago

when all your information conflicts with itself, you really have no information at all.
