Today's Large Language Models are Essentially BS Machines
(quandyfactory.com)
Apparently so are today's bloggers and journalists, since they just keep repeating the same nonsense and seem to lack any real understanding. I'm really starting to question whether humans are capable of original thought.
This does not compute. Bing Chat provides sources, as in links you can click on that actually work. It doesn't pull things out of thin air; it pulls information out of Bing search and summarizes it. That information is often wrong, incomplete, and misleading, since it only draws on a tiny number of websites, but so would most humans using Bing search. So it's not really a problem with the bot itself.
ChatGPT usually gives far better answers, since it bases them on knowledge gained from all of its sources rather than a specific few. But that also means it can't provide sources, and if you pressure it to give you some, it will make them up. And depending on the topic, it might also not know something for which Bing can find a relevant website.
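Roughly, the difference looks like this. A minimal sketch using the OpenAI Python client; the `web_search` helper and the model name are placeholders for illustration, not how Bing Chat is actually implemented:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def web_search(query: str) -> list[dict]:
    """Hypothetical helper: return a few {"url": ..., "snippet": ...} results.
    A real system would call Bing, Google, or some other search API here."""
    raise NotImplementedError


def grounded_answer(question: str) -> str:
    # Bing-Chat-style: paste a handful of search snippets into the prompt and
    # ask the model to summarize them, citing the URLs it used. The answer can
    # only be as good as the few pages that were retrieved.
    results = web_search(question)[:3]
    context = "\n\n".join(f"[{r['url']}]\n{r['snippet']}" for r in results)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only the sources below and cite their URLs."},
            {"role": "user",
             "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content


def parametric_answer(question: str) -> str:
    # ChatGPT-style: no retrieved context, so the model draws on whatever it
    # absorbed in training -- broader knowledge, but nothing concrete to cite.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content
```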
And guess which answer sounds the most reasonable? A correct one. People seriously seem to have a hard time grasping how freakishly difficult it is to generate plausible language and how much has to be going on behind the scenes to make that possible. That doesn't mean GPT will be correct all the time or be an all-knowing oracle, but you'd have to be rather stupid to expect that to begin with. It's simply the first chatbot that actually kind of works a lot of the time. And yes, it can reason and understand within its limits; making mistakes from time to time doesn't refute that, especially when it's badly prompted (e.g. asking it to solve a problem step by step can dramatically improve the answers).
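That last point is easy to try yourself. A minimal sketch, again assuming the OpenAI Python client with a placeholder model name: the same question asked twice, once directly and once with a step-by-step instruction.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


question = ("A bat and a ball cost $1.10 together. "
            "The bat costs $1.00 more than the ball. How much does the ball cost?")

# One-shot prompt: the model commits to an answer immediately and is more
# likely to blurt out the intuitive-but-wrong "$0.10".
print(ask(question))

# Step-by-step prompt: asking the model to show its working gives it room to
# lay out the arithmetic before committing to the final answer ("$0.05").
print(ask(question + "\n\nWork through this step by step before giving the final answer."))
```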
LLMs are not people, but neither are they BS generators. In plenty of areas they already outperform humans, and in others not so much. But you won't learn that from articles that treat every little mistake from an LLM like some huge gotcha moment.
No one is saying there are problems with the bots themselves (though I don't understand why you're being so defensive of them -- they have no feelings, so describing their limitations doesn't hurt them).
The problem is what humans expect from LLMs and how humans use them. Their purpose is to string words together in pretty ways; sometimes those ways also happen to be correct. Being aware of what they're designed to do, and of their limitations, seems important for using them properly.
These kinds of articles, which all repeat exactly the same extremely basic points and make plenty of fallacious ones, are absolute dogshit at describing the shortcomings of AI. Many of them don't even bother testing the AI themselves; they just repeat what they heard elsewhere. Even with this one I'm not sure what exactly they did, since Bing Chat works completely differently for me from what is reported here. It won't hurt the AI, but it certainly hurts me to read the same old minimum-effort content over and over and over again -- and they're the ones accusing AI of generating bullshit.
Yes, humans are stupid. They saw some bad sci-fi and now they expect AI to be capable of literal magic.
These AI systems do make up bullshit often enough that there's even a term for it: Hallucination.
Kind of a euphemistic term, like how religious people made up the word 'faith' to cover for the more honest term: gullibility.