I agree with the message but I'm a bit miffed by the omission of Rosie the Riveter. And before anyone pipes in about how those women lost their jobs afterward -- it's propaganda! It's not meant to reflect reality.
C'mon, she's still an icon.
Docs and testing have no bravado, but they're important. If they're dragging you down, use your problem-solving brain and find a way to make them work for you.
Sure, but by randomly guessing code you'd get 0%. Getting 48% right is actually very impressive for an LLM compared to just a few years ago.
He's not an economist, so I'd be healthily skeptical of this exact economic solution. You should, however, be very concerned about his view of where AI is going, such that it may necessitate something like this.
It's kind of curious that the headline here is "UBI" given that he mentions AI poses an extinction-level risk.
All of these represent various social mores. I'd have no problem with my kids seeing content involving fantasy violence, but I respect that others might object. As a bisexual myself, I have less respect for those who object to their kids seeing homosexuality specifically, but I can tolerate their existence.
For fairness' sake, I wouldn't mind it if heterosexuality were on the list too.
I get the sentiment, but realistically I'll still pick the random man. A man could kill or rape me. A bear is likely to kill me.
We're just afraid of what A.I. will be.
Just a reminder for people casually browsing that unmarked spoilers are present here!
It may require intense passion and a manic episode to do something like that with one coder or a small team, which is hard to arrange bureaucratically.
This article is severely misleading. AI is a buzzword -- but mostly for chatbots. Bots. You can prove this easily: observe that chatbots such as ChatGPT type much faster than a human possibly could.
Much of the training and validation of AI requires outsourcing. Companies which just mindlessly slap an LLM into their product somewhere aren't usually outsourcing.
Don't be misled. When your CEO brings up integrating AI into your product, s/he isn't secretly talking about outsourcing.
How is this different from labeling god good?
It is very difficult to accept mortality if you don't believe in an afterlife. Religion brings comfort, and comfort improves mental health (at the cost of some delusion).