Plenty of actual photographs exist of Palestinian children wielding rifles and wearing Hamas headbands. Perhaps the AI was simply trained on those images as well?
Why does it matter what the excuse is?
You shouldn't get a stereotype (or in this case I suppose propaganda?) when you give a neutral prompt.
What I'm hearing is, "AI art shouldn't reflect reality." If this agent is repeating propaganda, it's propaganda that Palestinian kindergartens have been creating and putting out there on their own:
So what reality is this model reflecting then?
If you're going to make that claim, perhaps cite a source that isn't run by former Israeli intelligence, an outlet that has been producing propaganda for decades.
I don't trust MEMRI translations, but no translation is needed to understand what is happening in the above footage. I'm interested in any source that disputes the authenticity of the footage, which your link does not do. If you provide a credible one, I will edit my post accordingly. It seems to me that this is very real.
That's fair. Without getting too far into the weeds, the video apparently is authentic, and it's something Israelis do as well, so it isn't really telling about either side of the conflict, except to note that extremists everywhere will use children to push their views.
I wasn't aware of that, thanks for the link. It would be interesting to know how prevalent indoctrination and militarization of youth is in each of these nations. It can be hard to accurately judge magnitude in this conflict; it is so heavily propagandized.
There is absolutely no amount of data that could convince you otherwise. You’ve made it very clear you’ve made up your mind.
Maybe try presenting some rather than complaining about what you imagine I'd do, random internet stranger.
Oh is that why I followed up by saying the video is probably authentic?
Somehow I get the feeling that equating "reality" with "propaganda created by kindergartens" is the rhetorical equivalent of dividing by zero.
Actually... you kind of should. A neutral prompt should return the most common match from the training set, which is basically what a stereotype is: an abstraction of the most common pattern in a person's experience.
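To put that in toy form (a minimal sketch, not a description of how any real image model works; the labels and counts below are invented for illustration): a generator with nothing to condition on just reproduces the empirical distribution of its training data, so the most common pattern dominates the output.

```python
from collections import Counter
import random

# Hypothetical training data: each item is tagged with one attribute.
# The labels and counts are made up purely to illustrate class imbalance.
training_set = ["attribute_A"] * 800 + ["attribute_B"] * 150 + ["attribute_C"] * 50

def generate(n_samples=1000):
    """With a 'neutral' prompt there is nothing to condition on, so this toy
    generator just samples from the empirical distribution of its training data."""
    return Counter(random.choices(training_set, k=n_samples))

print(generate())
# Roughly 80% of the outputs reproduce attribute_A, the most common pattern,
# even though nothing in the "prompt" asked for it.
```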
Should, would, could. AI is trained on whatever it scrapes off the internet; it's only feeding the Augmented Idiocy that is already a problem.
To me, it should only “matter” for technical reasons - to help find the root of the problem and fix it at the source. If your roof is leaking, then fix the roof. Don’t become an expert on where to place the buckets.
You’re right, though. It doesn’t matter in terms of excusing or justifying anything. It shouldn’t have been allowed to happen in the first place.
I do agree that technical mistakes are interesting, but with AI the answer always seems to be creator bias. Whether it's incomplete training sets or one-sidedly moderated results doesn't really matter: it pushes the narrative in a certain direction, and people trust AIs to be impartial because they presume a machine simply interprets reality, when it never does.
...as seen by the machine.
It's amazing how easily people seem to forget that last part; they wouldn't trust a person to be perfectly impartial, but somehow they expect an AI to be.
It's amazing how easily people seem to forget that a machine uses the tools its creator provides. You can't trust AI to be impartial because it never is; it's a collection of choices made by people.
This is such a bore, having the same conversation over and over. The same thing happened with NFTs, and with whatever else is at the height of its tech hype cycle at the moment. Don't buy into the hype; recognize both AI's potential and its shortcomings.