505 of 700 OpenAI employees tell the board to resign.
(lemmy.dbzer0.com)
So they paid Kenyan workers $2 an hour to sift through some of the darkest shit on the internet.
Ugh.
What? And here I am doing it for free...
They could have just given 4chan a $1 bounty per piece and they would have gleefully delivered until Lambo.
They are probably the ones writing those pieces of literature.
In some countries, 2 bucks an hour puts you above the median.
"Above the median" should not be the standard for having to spend all day reading about racism and rape.
What about spending all day being abused by people in a call center?
I mean, sure, we'd all like to make enough money to live a full life with any job, but that's sadly not a reality, and the point you're missing is that economies don't work the same as the US's in every country.
I live in Argentina. I make 25k a year as a software developer, and I'm in the top 1% of earners in the country.
What about it? It's nowhere near the same as spending all day reading graphic rape and racist screeds, let alone looking at CSAM, which is what they're paying them to do now. Did you miss the part where they are psychologically damaged from this work and the counseling they have been offered is insufficient? Call centers don't usually result in that sort of thing.
Also, maybe you shouldn't expect and defend wages that low for being in the top 1%?
They're in the top 1% for Argentina, not globally. I mean, it would be nice if every worker made US wages. It's kinda fucked though that even the lowest paid workers in America can live like kings in the Philippines. I make $42k/yr as an electrical assembler at a plant that manufactures environmental test chambers. If I take my PTO and go to almost any other country, especially Argentina, I can live like royalty for a week.
I strongly disagree. I have read and seen a lot of messed up things on the internet, I much, much, prefer it to the couple weeks I spent helping out a friend at a part-time service job. (And I was doing it with good friends in a casual environment.)
You're welcome to strongly disagree that this:
Is not worth high pay, but I would say psychologically damaging your employees and then not even giving them the counseling tools to help them is absolutely worth high pay. You should not have to endure things like that for an "above the median" wage in a country where "the median" is still very poor. I see this as not much better than defending other corporations that make poor people in Africa work in mines for a decent wage relative to others in their country, but without giving them safety equipment. And they still die poor.
I obviously prefer that people aren't in poverty at all. But I have far more sympathy for the miner risking their life than for someone reading something disgusting/disturbing on the internet; it is not anywhere near close.
You don't understand how massive psychological damage can be as bad as seriously endangering someone's physical health?
Just because a graphic description of a dog being raped while a child watches doesn't bother you doesn't mean it won't bother anyone else. In fact, I would wager that it would be pretty disturbing for most people to read that, let alone read that sort of thing for hours every day.
And then there are the ones who are just as low-paid but have to look at images instead. Again, you may not be bothered by CSAM, but I would wager that most people would find looking at that all the time very hard to deal with and it could easily result in PTSD.
Getting crushed in a mine collapse harms everyone. As unfashionable as it is to say, the vast majority of people I know, at least, have experienced far more traumatic things than you could ever get from third-person observation.
I hate gore, I hate seeing people dying, I hate hearing about those sorts of things. They seriously upset me, but to compare that discomfort to anything like someone working (maybe enslaved) in a mine essentially anywhere in Africa is ridiculous. They risk, on a daily basis, painful death, painful suffering worse than death, likely slow death from dust inhalation, severe maiming, etc.
If you really believed reading it were that dangerous, it is evil of you to even summarize it as you did and risk serious harm to others.
PTSD leads to suicides. Very often. And even without suicide, people with poor mental health often live very short lives due to stress.
Also, please do not misrepresent what I said. I talked about not giving them safety equipment, not them dying in a mine collapse. Both involve not giving the workers protection they need for low pay and could easily lead to very poor health and short lives in exchange for being somewhat less poor than their neighbors but still poor. The miners are not given physical safety equipment and the workers for OpenAI are not given the mental safety equipment.
I think you trust too much in modern psychology if you think this job would lead to significant suicides that non-chemical therapy would prevent. Much more effective would just be pre-screening, or informing applicants of the duties (which may have been done).
Did you not read what was pasted?
They are not being given the psychological tools they need. That's a big part of the problem. Again, it is no different than not being given safety equipment.
That's actually about 3x what the average Kenyan makes, sadly.
This reminds me of an NPR podcast from 5 or 6 years ago about the people who get paid by Facebook to moderate the worst of the worst. They had a former employee giving an interview about the manual review of images that were CP and rape-related shit, iirc. Terrible stuff.
I'm shocked, and I shouldn't be... Poor people.
No, you're right, you should be. We don't want to normalize this shit, it should continue to shock and offend.
These are the dark sides of modern technology. The kids working cobalt mines. The workers being paid pennies to categorize data so bad that it is traumatic to even read it. I can't imagine how the people who have to look at pictures can do it.
I feel like I could handle some dark text here or there, but if I had to do it for 40-50 hours a week? Hundreds of passages every day. That would warp me pretty quickly.
I'm sure there's some loophole there, maybe between countries' laws. And if there isn't, Hey! We'll make one!
Isn't CSAM classed as images and videos which depict child sexual abuse? Last time I checked written descriptions alone did not count, unless they were being forced to look at AI generated image prompts of such acts?
This is the quote in question. They're talking about images
They could be working with the governments of relevant countries to develop filters and detection systems.
IIRC there are a few legitimate and legal reasons to seek CSAM, such as journalism, and definitely developing methods to prevent its spread.
I really find this a bit alarmist and exaggerated. Consider the motive and the alternative. You really think companies like that have any other options than to deal with those things?
Consider the impact on human psychology. Not everyone has the guts to read or even look through these. And even those who appear to still get scarred inside.
Maybe there is no alternative for now, but don't do that to people for such a low paycheck. Consider the background of these people, who may be working on these tasks not even to live, but to survive. I would have preferred to wait 10 years rather than inflict these horrifying tasks on those people.
I'm sure there are lots of people who are in jail for creating/sharing or even making a profit off of this content. They could do that work? But then again, even though it bothers me less than inflicting it on people who have no choice, that is still an idea I find ethically very questionable.
Very much yes, police authorities have CSAM databases. If what you want to do with it really is above board and sensible, they'll let you access that stuff.
I don't doubt that anything OpenAI could do with that stuff can be above board, but sensible is another question: any model that can detect something can be used to train a model which can generate it. As such, those models are under lock and key just like their training sets, held by (social) media platforms which have a use for these things and the resources to run them, under the watchful eye of the authorities. Think faceboogle. OpenAI could, in principle, try to get into the business of selling companies at that scale models they can, and have, trained themselves, but I don't really see that making sense from a business POV, either.
Hold on, why exactly do they need people to label this shit?
How else will the AI be able to recognize that such text is "bad"?
This is actually extremely critical work, if the results are going to be used by AIs that are deployed widely. This essentially determines the "moral compass" of the AI.
Imagine if some big corporation did the labeling and such, trained some huge AI with that data, and it became widely used. Then years pass, and eventually AI develops to such an extent that it can reliably be used to replace entire upper management. Suddenly, becoming a slave to an "evil" AI overlord moves from a crazy idea to a plausible one (years and years in the future, not now, obviously).
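To make the point above concrete: the human annotators' judgments literally become the training labels for a filter, so whatever standards they apply get baked into the model. Here's a minimal sketch using a scikit-learn text-classification pipeline; the example texts and labels are invented placeholders, not OpenAI's actual data or method.

```python
# Minimal sketch: human-labeled examples train a classifier that later
# filters content. All texts/labels here are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each (text, label) pair encodes one human annotator's judgment call.
texts = [
    "have a nice day",
    "you are a wonderful person",
    "I will hurt you",
    "graphic description of violence",
]
labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = flagged by the annotator

# Bag-of-words features + a linear classifier: the simplest possible
# version of "learning the annotators' moral compass" from labels.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# The trained filter now reproduces whatever standards the annotators
# applied, including their mistakes and biases.
print(clf.predict(["I will hurt you"]))
```

A real moderation model is a large neural network trained on vastly more labels, but the dependency is the same: the model's notion of "bad" is only as good as the judgments of the people who did the labeling.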
Extremely critical, but mostly done by underpaid workers in poor countries who have to look at the most horrific stuff imaginable and develop lifelong trauma, because it's the only job available and otherwise they and their families might starve. (Source) This is one of the main reasons I have little hope that, if OpenAI actually manages to create an AGI, it will operate in an ethical way. How could it, if the people trying to instill morality into it are so lacking in it themselves?
True. Though while it's horrible for those people, they might be doing more important work than they or we even realize. I also kind of trust the moral judgement of the oppressed more than the oppressor (since they are the ones who do the work). Though I'm definitely not condoning the exploitation of those people.
It's quite awful that this seems to be the best we can hope for here. I doubt Google or Microsoft are going to give very positive guidance on whether it's OK for people to suffer if it leads to more money for investors when they do their own labeling.