AI Seeks Out Racist Language in Property Deeds for Termination
(news.bloomberglaw.com)
One of LLMs' main strengths over traditional text analysis tools is the ability to "understand" context.
They are bad at generating factual responses. They are amazing at analysing text.
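To make that distinction concrete, here's a minimal sketch of the "analyze text supplied in the prompt" pattern, assuming the OpenAI Python SDK (any chat-style LLM API works the same way); the model name, prompt wording, and sample clause are all illustrative, not taken from the article:

```python
# Sketch: asking an LLM to analyze text placed in the prompt,
# rather than to recall facts from its training data.
# Assumes the OpenAI Python SDK (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Illustrative excerpt in the style of a historical restrictive covenant.
deed_clause = (
    "No lot in said tract shall at any time be lived upon by a person "
    "whose blood is not entirely that of the Caucasian race."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "You review property deed text and flag any racially "
                "restrictive covenant language, quoting the offending passage."
            ),
        },
        {"role": "user", "content": deed_clause},
    ],
)

print(response.choices[0].message.content)
```

The point of the pattern is that everything the model needs sits in the prompt: it is classifying text in front of it, not generating claims from memory.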
LLMs neither understand nor analyze text. They are statistical models of the text they were trained on. A map of language.
And, like any map, they should not be confused for the territory they represent.
If you admit that they have issues with facts, why would you assume that the randomly generated facts their "analysis" produces must be accurate?
I mean they literally do analyze text. They're great at it. Give it some text and it will analyze it really well. I do it with code at work all the time.
Because they are two completely different tasks. Asking them to recall information from their training is a very bad use. Asking them to analyze information passed into them is what they are great at.
Give it a sample of code and it will very accurately analyse and explain it. Ask it to generate code and the results vary wildly in accuracy.
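As a sketch of that in-context code analysis (again assuming the OpenAI Python SDK; the snippet, prompt, and model name are made up for illustration):

```python
# Sketch: passing code into the prompt and asking for an explanation.
# The model analyzes the snippet it was handed instead of recalling
# anything from training. Assumes the OpenAI Python SDK and an API key
# in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Illustrative snippet to be explained.
snippet = '''
def dedupe(items):
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": "Explain what this function does and note any "
                       "edge cases:\n\n" + snippet,
        },
    ],
)

print(response.choices[0].message.content)
```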
I'm not assuming anything; you can literally go and use one right now and see.