[-] DR_Hero@programming.dev 2 points 2 months ago

At this point I'm pretty sure their strategy is to take the hit in search engine quality, since they have a stranglehold there anyway, and spam everyone with AI so they can come out ahead on that front with human feedback. It's pretty shitty, and the exact reason we should be taking down big tech monopolies.

[-] DR_Hero@programming.dev 5 points 2 months ago

It's a dream I've considered many times. It can be cheaper* than life on land.

[-] DR_Hero@programming.dev 2 points 2 months ago

I was thinking more that they're trying to avoid lawsuits over further cementing their monopoly in adspace.

Being the world's leading advertiser while also making the browser 90% of people use gives them way too much control. There's no path to privacy with Chrome that doesn't end with Google as the sole gatekeeper. I mean, they already are the gatekeeper, but the current rate of lawsuits apparently seems like an acceptable cost of doing business.

[-] DR_Hero@programming.dev 11 points 5 months ago

Collective mass arbitration is my favorite counter to this tactic, and is dramatically more costly for the company than a class action lawsuit.

https://www.nytimes.com/2020/04/06/business/arbitration-overload.html

A lot of companies got spooked a few years back and walked back their arbitration agreements. I wonder what changed for companies to decide it's worth it again. Maybe the lack of discovery in the arbitration process is worth it even with the higher costs?

[-] DR_Hero@programming.dev 2 points 9 months ago

The responses aren't exactly deterministic; there are certain attacks that work 70% of the time, and you just keep trying.

I got past all the levels released at the time, including 8, when I was doing it a while back.

[-] DR_Hero@programming.dev 4 points 9 months ago

Excuse me, but the fuck is wrong with you?

[-] DR_Hero@programming.dev 2 points 11 months ago* (last edited 11 months ago)

The explanation that makes the most sense, from one of the articles I've read, is that they fired him after he tried to push out one of the board members.

Replacing that board member with an ally would have cemented his control over the board for a time. They might not have felt he was being honest in his motives for the ousting, so it was basically fire now, or lose the option to fire him in the future.

Edit: https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html

[-] DR_Hero@programming.dev 1 points 11 months ago

This one got me good

[-] DR_Hero@programming.dev 20 points 1 year ago* (last edited 1 year ago)

The worst part is that it took them years after it became a known risk before they actually sent me a replacement machine.

Having to choose between the risk of heart failure and the risk of cancer sure was fun...

[-] DR_Hero@programming.dev 3 points 1 year ago

Now I'm upset this wasn't the original haha

[-] DR_Hero@programming.dev 1 points 1 year ago

I've definitely experienced this.

I've used ChatGPT to write cover letters based on my resume, among other tasks.

I used to give ChatGPT some data and tell it to "do X with this data". It worked great.
In a separate chat, I told it to "do Y with this data", and it also knocked it out of the park.

Weeks later, excited about the tech, I repeated the process. I told it to "do X with this data". It did fine.

In a completely separate chat, I told it to "do Y with this data"... and instead it gave me X. I told it to "do Z with this data", and it once again would really rather just do X with it.

For a while now, I have had to feed it more context and tailored prompts than I previously had to.
