this post was submitted on 26 Dec 2024
113 points (100.0% liked)
Technology
Why would you ever need an LLM on an eBook reader? Do you just let it summarize your books so you don't have to read them?
I could see it being useful if you need the LLM to explain something. If you're reading something in a language that isn't your native one, or something that uses quite complex language and writing, it may be useful to have a paragraph or sentence explained to you. Or maybe the book references something you're not familiar with, and you can get a quick explanation from the LLM.
I've done this to give myself something akin to Cliff's Notes, to review each chapter after I read it. I find it extremely useful, particularly for more difficult reads. Reading philosophy texts that were written a hundred years ago and haphazardly translated 75 years ago can be a challenge.
That said, I have not tried to build this directly into my ereader and I haven't used Boox's specific service. But the concept has clear and tested value.
I would be interested to see how it summarizes historical texts about these topics. I don't need facts (much less opinions) baked into the LLM. Facts should come from the user-provided source material alone. Anything else would severely hamper its usefulness.
For a human, at that. I get that you feel it works for you, but personally I would trust an LLM to understand it (insofar as that's a thing they can do at all) even less.
I get that, and it's good to be cautious. You certainly need to be careful with what you take from it. For my use cases, I don't rely on "reasoning" or "knowledge" in the LLM, because they're very bad at that. But they're very good at processing grammar and syntax and they have excellent vocabularies.
Instead of thinking of it as a person, I think of it as the world's greatest rubber duck.
I'm not sure if this is how @hersh@literature.cafe is using it, but I could totally see myself using an LLM to check my own understanding: ask it to summarize a chapter, then compare its summary against my own reading.
Ironically, this exercise works better if the LLM "hallucinates"; noticing a hallucination in its summary is a decent metric for my own understanding of the chapter.
That's pretty much what I do, yeah. On my computer or phone, I split an epub into individual text files for each chapter using pandoc (or similar tools). Then after I read each chapter, I upload it into my summarizer and perhaps ask some pointed questions.

It's important to use a tool that stays confined to the context of the provided file. My first test when trying such a tool is to ask it a general-knowledge question that's not related to the file. The correct answer is something along the lines of "the text does not provide that information", not an answer pulled out of thin air (whether correct or not).
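An epub is just a zip archive of XHTML documents, so if pandoc isn't handy, the per-chapter split can be sketched with nothing but the Python standard library. (File layout varies between epubs, so treating each XHTML content document as one chapter is an approximation, not a guarantee.)

```python
import zipfile
from pathlib import Path


def extract_chapters(epub_path, out_dir):
    """Copy each XHTML content document out of an epub (a zip archive)
    into its own file -- roughly one file per chapter."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    with zipfile.ZipFile(epub_path) as z:
        for name in z.namelist():
            # Content documents are the .xhtml/.html entries;
            # styles, fonts, and images are skipped.
            if name.lower().endswith((".xhtml", ".html")):
                target = out / Path(name).name
                target.write_bytes(z.read(name))
                written.append(target.name)
    return written
```

From there you'd still want to strip the HTML tags (pandoc does this in one step), but each resulting file maps to one chunk you can feed the summarizer.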
Ooooh, that's a good first test / "sanity check"!
May I ask what you are using as a summarizer? I've played around with locally running models from Hugging Face, but never did any tuning, let alone straight-up training "from scratch". My (paltry) experience with the HF models is that they're incapable of staying confined to the given context.