submitted 11 months ago by 0x815@feddit.de to c/news@beehaw.org
[-] Heresy_generator@kbin.social 52 points 11 months ago* (last edited 11 months ago)

ANNs like this will always just present our own biases and stereotypes back to us unless the data is scrubbed and curated in a way that no one is going to spend the resources on. Things like this are a good demonstration of why they need to be kept far, far away from decision-making processes.

[-] Greg@lemmy.ca 8 points 11 months ago

This isn't a large language model; it's a generative image model. And given that these models just present humans' biases and stereotypes back to us, doesn't it follow that humans should also be kept far away from decision-making processes?

The problem isn't the tool; it's the lack of auditable accountability. We should have auditable accountability in all of our important decision-making systems, whether it's a biased machine or a biased human making the decision.

This was a shitty implementation of a tool.

this post was submitted on 06 Nov 2023
123 points (100.0% liked)