submitted 6 months ago by neme@lemm.ee to c/technology@lemmy.world
[-] FaceDeer@fedia.io 29 points 6 months ago

The problem with AI hallucinations is not that the AI was fed inaccurate information; it's that the AI comes up with information it wasn't fed in the first place.

As you say, this is a problem that humans have too. I'm not terribly surprised these AIs share it, because they're built in mimicry of how aspects of the human mind work. And in some cases it's desirable behaviour — for example, when you're using an AI as a creative assistant, you want it to come up with new stuff.

It's just something you need to keep in mind when coming up with applications.

[-] AdrianTheFrog@lemmy.world 4 points 6 months ago

Not in the case of Google's search AI. It quotes directly from unreliable sources.

[-] FaceDeer@fedia.io 4 points 6 months ago* (last edited 6 months ago)

Exactly, which is why I've objected in the past to calling Google Overview's mistakes "hallucinations." The AI itself is performing correctly: it's giving an accurate overview of the search results it's been told to summarize. It's just being fed incorrect information.

this post was submitted on 12 Jun 2024
393 points (95.4% liked)