this post was submitted on 18 Sep 2023
814 points (96.5% liked)
Technology
It's just the beginning, for sure. This future will be the end of artists, and still everyone will clap for AI productions like fools.
No one cared when spreadsheets replaced a huge chunk of office workers.
If the results are the same, what's the issue?
Artists feel special because, until recently, computers couldn't automate their work. But it's the same as any other job.
The people who lost those jobs cared.
If not for the wages, people hardly have any attachment to most office jobs. But when it comes to artistic endeavors, a lot of people dream of being able to make a career in those fields. Frankly, that sort of comment seems like it comes from envy, as if artists ought to be taken down a peg for daring to work on something they're passionate about. I can't think of a single artist who bragged about being above automation.
As someone who works an office job, if AI could free me to work on something creative, that would be wonderful. But if it instead replaces already existing creatives and leaves us both with nowhere to work, that isn't really helping anybody but the executives profiting from it. What benefit does that even add to my life? Remixed porn? Meme generators? It's not the same level of benefit as industrial automation, if any. The human element of art enriches it in a unique way that an AI distilling a style from countless samples won't be able to match.
This hits the nail on the head. A major component of art is that it's an outlet of human creativity, something we find fulfilling to both produce and consume. If creativity is delegated to machines, what's left for us humans? At some point, we'll grow tired of Taco Bell and re-runs, and what then?
Making art is something people enjoy, for one thing. Good art also has something of the artist in it, something to it other than "it was made from this prompt".
Art is just combining previously learned techniques with a specific subject. Since AI essentially knows all the techniques, it could eventually be better.
Nothing is stopping people making art for fun.
If that's what you look for in art then sure, but I disagree with that definition. A child's drawing of her dad has aspects to it that a picture of that dad taken in a photo booth can never have. A poem about war is much more meaningful when it comes from a refugee. The Wikipedia page for art lists several 'purposes' and most of them are not something AI art can ever fulfil.
You can't say "ever." It could learn from every war diary and report ever written and produce amazing stuff. It's just a matter of time. Right now it's limited quite significantly by computing power.
It could do that, but its writing would be hollow because those stories are meaningful due to the lived experiences behind them. For example anyone who's read The Diary of a Young Girl could write something similar in Anne Frank's style, but it wouldn't be nearly as impactful because learning about an event is very different from living through it.
It's limited by the trends of human art. The art and text AI that we have are based on pattern processing. They output what is expected based on what we feed it. They aren't able to come up with entirely new styles or philosophies. They don't even have a cognitive ability to have any philosophy. An AI describing a tree or depicting an image of a tree doesn't have an understanding of what a tree is, they are not aware of the world, they can only replicate human words and images.
A breakthrough needs to happen for them to be capable of anything more, but that's going to be its own can of worms.
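The "they output what is expected based on what we feed it" point can be illustrated with a toy sketch. This is a bigram model, vastly simpler than a real LLM, and the training text is made up, but it shows the core idea of pattern-based generation: the model can only ever emit continuations it has already seen.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words have followed it in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8):
    """Emit a statistically 'expected' continuation, one word at a time."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # nothing ever followed this word: the model is stuck
            break
        out.append(random.choice(options))
    return " ".join(out)

model = train_bigrams("the tree is green and the tree is tall and the sky is blue")
print(generate(model, "the"))
```

No matter how often you sample, this model can never say anything about trees that wasn't already in its training data, which is the limitation the comment above is describing (real models generalize far better, but the dependence on training patterns remains).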
It's not the same as any job. It's putting your face and your words behind something you did not consent to. If someone spoofed your username and started posting offensive things, I've no doubt you would be upset. And that's just your username. Now add your real-life photo, your face, and your voice.
You would have to be a sociopath not to care if suddenly your friends and family received a video of you performing offensive acts or shilling for a political cause you are vehemently opposed to.
That's literally not how they work. They figure out a mathematical formula for generating things and apply it. Your analogy doesn't make any sense.
They aren't copying anything in reality. No more than the way an artist's brain changes when looking at other art.
In fact that is a much better analogy for how they work as they are modelled on our neurons.
We weren't talking specifically about the article, but about the "end of art," as the original commenter put it.
AI will annihilate most data entry workers in the next few years as well.
I wish I could trust AI to do data entry.
AI is very stupid and breaks in ways you wouldn't expect.
Some AIs are more intelligent than the average person.
Ask a normal person to do the tasks ChatGPT can and I bet the results would be even worse.
Ask ChatGPT to do things a normal person can, and it also fails. ChatGPT is a tool: a particularly dangerous Swiss Army chainsaw.
I use it all the time at work.
Getting it to summarize articles is a really useful way to use it.
It's also great at explaining concepts.
Is it? Or is it just great at making you think that? I've seen many ChatGPT outputs "explaining" something I'm knowledgeable of and it being deliriously wrong.
I agree. I have very specialized knowledge in certain areas, and when I've tried to use ChatGPT to supplement my work, it often misses key points or gets them completely wrong. If it can't process the information, it will err on the side of creating an answer, whether it is correct or not, and whether it is real or not. The creators call this "hallucination."
Yeah it is if you prompt it correctly.
I basically use it instead of reading the docs when learning new programming languages and frameworks.
That's great, it works until it doesn't and you won't know when unless you already are knowledgeable from a real source.
You know it doesn't work when you try it; and if you tell it that it doesn't work, it'll usually correct itself.
A coworker tried to use it with a well-established Python library and it responded with a solution involving a Class that did not exist.
LLMs can be useful tools but, be careful in trusting them too much - they are great at what I'd say is best described as "bullshitting". It's not even "trust but verify" it's more "be skeptical of anything that it says". I'd encourage you to actually read the docs, especially those for libraries as it will give you a deeper understanding of what's actually happening and make debugging and innovating easier.
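The "be skeptical of anything that it says" advice can be partly automated. A minimal sketch of sanity-checking an LLM-suggested API before trusting it: `json.JSONDecoder` is real, while `json.MagicParser` is a stand-in for the kind of class a model might invent (the second name is hypothetical).

```python
import importlib

def api_exists(module_name, attr_name):
    """Return True only if the named attribute really exists in the module."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False  # the module itself may have been hallucinated
    return hasattr(module, attr_name)

print(api_exists("json", "JSONDecoder"))  # True: this class is real
print(api_exists("json", "MagicParser"))  # False: plausible-sounding, but invented
```

This only confirms a name exists, not that the suggested usage is correct, so it complements rather than replaces reading the docs.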
I've had no problem using them. The more specific you get, the more likely they are to do that. You just have to learn how to use them.
I use them daily for refactoring and things like that without issue.
That's why QA will still exist.
Plus, when I say "AI will kill data entry jobs," I don't mean ChatGPT 3.5/4.0. I'm talking about either a dedicated SaaS offering or a future LLM intended for deployment in individual enterprise environments, trained specifically on company data alongside cloud and data engineering.
Keep downvoting the guy who literally works in IT and is seeing these changes happen in real time, I'm sure you all know better than I do.
Literally what computer programmes are. A large part of development is making sure end users do things correctly.
It's a perfect task for AI. In fact, most of it is achievable with standard coding.
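The "achievable with standard coding" point is worth making concrete: plain validation rules catch most bad data entry without any ML at all. A minimal sketch, with made-up field names and a deliberately simple email check:

```python
import re

def validate_record(record):
    """Return a list of problems found in a (hypothetical) data-entry record."""
    errors = []
    if not record.get("name", "").strip():
        errors.append("name is required")
    # Loose email shape check: something@something.something
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", record.get("email", "")):
        errors.append("email looks invalid")
    if not str(record.get("quantity", "")).isdigit():
        errors.append("quantity must be a non-negative integer")
    return errors

print(validate_record({"name": "Ada", "email": "ada@example.com", "quantity": "3"}))  # []
print(validate_record({"name": "", "email": "nope", "quantity": "-1"}))  # three errors
```

Rules like these are deterministic and auditable, which is exactly why an AI-assisted data entry pipeline would still want them as a backstop.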
These troglodytes probably couldn't even find their way around a terminal; don't worry about what they think can and can't be done with LLMs.
Not "knowing" doesn't have anything to do with AI performance. That's a very human-centric view.
As if people are 100% right about everything they say.
I would go as far as to say a big chunk of what people believe is false. A lot of what we learned at school is wrong now.
My point is no human is right even close to 100% of the time. Yet that's the artificial target you've set for AIs.
It will still outperform an average human in a huge range of tasks.
I work in enterprise IT networking and systems and don't give a fuck about your shitty home server.
AI will do the bulk of the work, and humans will QA it. It's not that fucking hard to understand. No one here except you is focusing on the fact that it can't actually think for itself, no one ever said it was going to do its job without any kind of oversight.
Go back to being a hobbyist and let us professionals decide what can and can't be done.
I'll bet you're an MSP monkey or a DC tech
I'm actually the architect at a network operations center for a company that supports over 3,000 users across the entire US, but I can see that I hurt a bunch of self-hosters' feelings around here by claiming that AI will take over the majority of unskilled computer labor.