If you went and actually looked at what it did, you would realise my implementation is not as bad as it sounds.
EDIT: Plus, it's FOSS, so you can go check the source.
The concept is inherently flawed when you introduce a component (an LLM) that can and will hallucinate (read: make shit up) while it's trying to present reality.
As far as I'm concerned, there is no place for that anywhere remotely close to news.
Correct, but humans also exaggerate and lie a lot in the news, so maybe this AI could look through different sources and identify inaccuracies.
I haven't looked at the source code though...
After checking the source code, well... it just summarizes the posts. Doesn't help much with the human error problem.
But as mentioned by OP, it's in an early stage of development, and they plan to add features to "find the missing perspectives on an issue" and analyze political alignment information. So in the future it could perhaps become a useful tool.
The model I have used gives a summary that is about 60% identical to one written by a human, and has an overall conceptual accuracy of >95%. I was very careful with my model selection and implementation to ensure hallucinations are extremely rare, if they happen at all. I'm not just feeding "summarise this: " to a general-purpose LLM (known for hallucinations); I break the article into chunks at sentence breaks, then summarise each chunk directly by passing it to a purpose-built summarisation model.
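To make the approach concrete, here is a minimal sketch of the chunk-then-summarise idea described above. The thread doesn't name the actual model or chunk size, so the DistilBART checkpoint, the ~300-word chunk limit, and the helper names below are illustrative assumptions, not the author's implementation.

```python
# Sketch of a chunk-then-summarise pipeline (assumed details, not the OP's code).
from transformers import pipeline

# A purpose-built summarisation model rather than a general-purpose chat LLM.
# Model choice is an assumption for illustration.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def chunk_at_sentence_breaks(text: str, max_words: int = 300) -> list[str]:
    """Split the article into chunks, only breaking at sentence boundaries."""
    sentences = [s.strip() for s in text.split(". ") if s.strip()]
    chunks, current = [], []
    for sentence in sentences:
        current.append(sentence)
        if sum(len(s.split()) for s in current) >= max_words:
            chunks.append(". ".join(current) + ".")
            current = []
    if current:
        chunks.append(". ".join(current) + ".")
    return chunks

def summarise_article(article: str) -> str:
    """Summarise each sentence-aligned chunk, then join the partial summaries."""
    parts = []
    for chunk in chunk_at_sentence_breaks(article):
        result = summarizer(chunk, max_length=80, min_length=10, do_sample=False)
        parts.append(result[0]["summary_text"])
    return " ".join(parts)
```

Splitting at sentence boundaries keeps each chunk within the summarisation model's context window and avoids cutting a sentence in half, which is where this kind of pipeline would otherwise tend to garble meaning.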
Have you spotted any hallucinations so far? I'm curious about what kind of hallucinations can be created when an LLM summarizes a text.
Yep.
The whole original article text was "Advertisment" and the AI spat out "Advertisement. A ad. Click here for a link to the edd sa s."
It obviously couldn't summarise one word into multiple words and thus tried its best.
Only one so far.
Ok that's funny XD
At least it won't be harmful in any way.