487 points | submitted 8 months ago by mozz@mbin.grits.dev to c/technology@beehaw.org

Credit to @bontchev

[-] Gaywallet@beehaw.org 9 points 8 months ago

Ideally you'd want the layers not to be restricted to LLMs, but rather to include different frameworks that do a better job of incorporating rules or producing an objective output. LLMs are fantastic for generation because they are based on probabilities, but for the same reason they really cannot provide any amount of objectivity.
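To make the layering idea a bit more concrete, here is a minimal sketch (not any specific product's design): an LLM handles generation, and a separate, deterministic rule layer, which could be any non-LLM framework, vets the output before it is accepted. The `ask_llm()` wrapper and the example rules are hypothetical placeholders, not a real API.

```python
import re

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whatever LLM API you actually use."""
    raise NotImplementedError("wire this up to your LLM provider")

# Purely illustrative rules; a real rule layer could be any deterministic framework.
BANNED_PATTERNS = [r"\bpassword\b", r"\bssn\b"]

def rule_layer(text: str) -> bool:
    """Deterministic check: the same input always yields the same verdict."""
    return not any(re.search(p, text, re.IGNORECASE) for p in BANNED_PATTERNS)

def generate_with_rules(prompt: str, max_retries: int = 3) -> str:
    for _ in range(max_retries):
        draft = ask_llm(prompt)   # probabilistic generation layer
        if rule_layer(draft):     # objective, non-probabilistic rule layer
            return draft
    raise RuntimeError("LLM output never passed the rule layer")
```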

[-] jarfil@beehaw.org 2 points 8 months ago* (last edited 8 months ago)

It's already been done, for at least a year now: ChatGPT plugins are the "different frameworks", and running a set of LLMs self-reflecting on a train of thought is AutoGPT.

It's like:

  1. Can I stick my fingers in a socket? - Yes.
  2. What would be the consequences? - Bad.
  3. Do I want these consequences? - Probably not.
  4. Should I stick my fingers in a socket? - No.

However... people like to cheap out, take shortcuts, and run an LLM with a single prompt and a single iteration, which leaves you with "Yes" as an answer, and then shit happens.
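As a rough sketch of the four-step chain above (the AutoGPT-style self-reflection pattern the comment describes, not the actual AutoGPT code), reusing the hypothetical `ask_llm()` wrapper from the earlier sketch, each step feeds the previous answers back into the next prompt instead of stopping at step 1:

```python
def reflective_answer(question: str) -> str:
    # 1. Naive answer ("Can I stick my fingers in a socket?" -> "Yes")
    answer = ask_llm(f"Question: {question}\nAnswer briefly.")

    # 2. What would be the consequences?
    consequences = ask_llm(
        f"If someone acts on the answer '{answer}' to '{question}', "
        "what are the likely consequences?"
    )

    # 3. Do I want these consequences?
    desirable = ask_llm(
        f"Consequences: {consequences}\n"
        "Are these desirable? Answer yes or no, with a short reason."
    )

    # 4. Final recommendation that takes the reflection into account
    return ask_llm(
        f"Question: {question}\nInitial answer: {answer}\n"
        f"Consequences: {consequences}\nDesirable: {desirable}\n"
        "Should the person actually do this? Give a final recommendation."
    )

# The single-prompt shortcut the comment warns about is just
# ask_llm("Can I stick my fingers in a socket?"), which skips steps 2-4
# and can happily come back with "Yes".
```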
