
By "good" I mean code that is written professionally and concisely (and obviously works as intended). Apart from personal interest and understanding what the machine spits out, is there any legit reason anyone should learn advanced coding techniques? Specifically in an engineering perspective?

If not, learning how to write code seems a tad trivial now.

[-] Jimmycrackcrack@lemmy.ml 3 points 1 month ago* (last edited 1 month ago)

I don't know how to program, but I can, to a very limited extent, sorta kinda almost understand the logic of very short and simplistic code that's been written for me by someone who can actually code. I tried to get ChatGPT to write a shell script for me to work as part of an Apple Shortcut. It had no idea. It was useless and ridiculously inconsistent and forgetful. It was the first and only time I used ChatGPT. Not very impressed.

Given that it's smart enough to produce output that's kind of in the area of correct, albeit still wrong and logically flawed, I'd guess it could eventually be carefully prodded into making one small snippet of something someone might call "good". But at that point it feels much more like an accident, the same way someone who has memorised a lot of French vocabulary but never actually learned French might produce a coherent sentence once in a while: trying and failing 50 times before succeeding, then failing again immediately after, without ever knowing the difference.

[-] cley_faye@lemmy.world 2 points 1 month ago

For repetitive tasks, it can take a first template you write by hand and extrapolate multiple variations from it almost automatically.
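
Something like this hypothetical Python sketch (made-up converter names, not real project code): you write the first function by hand, and the completion extrapolates the variations from the pattern:

```python
# Hand-written template: convert a length in metres to feet.
def metres_to_feet(m: float) -> float:
    return m * 3.28084

# Variations the completion extrapolates from the pattern:
def metres_to_inches(m: float) -> float:
    return m * 39.3701

def metres_to_yards(m: float) -> float:
    return m * 1.09361
```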

Beyond that… not really. Anything beyond single-line completion quickly devolves into something messy, non-working, or worse, working but not as intended. For extremely common cases it will work fine; but extremely common cases are either moved out into shared code or take less time to write than to "generate" and check.

I've been using code completion/suggestion regularly, and there have been times when I was pleasantly surprised by what it produced, but even then I had to look it over and fix some things. And while I can't quantify how often it happens, a lot of the time it's convincing gibberish.

[-] anytimesoon@feddit.uk 1 points 1 month ago

I've also had some decent luck when using a new/unfamiliar language by asking it to make the code I wrote more idiomatic.

It's been a nice way to learn some tricks I probably wouldn't have bothered with before.
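
A hypothetical Python example of the kind of rewrite I mean (not my actual code):

```python
# What I'd write coming from another language:
squares = []
for i in range(10):
    if i % 2 == 0:
        squares.append(i * i)

# The idiomatic rewrite it typically suggests:
squares = [i * i for i in range(10) if i % 2 == 0]
```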

[-] Anticorp@lemmy.world 2 points 1 month ago

Absolutely, but they need a lot of guidance. GitHub Copilot often writes cleaner code than I do. I'll write the code and then ask it to clean it up and DRYify it for me.
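
Roughly this kind of thing, as a hypothetical Python sketch (made-up handler names):

```python
# Before: the same check pasted into every handler.
def create_user(payload: dict) -> None:
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    # ... create the user ...

def update_user(payload: dict) -> None:
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    # ... update the user ...

# After asking it to DRYify: one helper, called from both handlers.
def require_valid_email(payload: dict) -> str:
    email = payload.get("email", "")
    if "@" not in email:
        raise ValueError("invalid email")
    return email
```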

[-] Neon@lemmy.world 2 points 1 month ago

The LLM can type the code, but you need to know what you want and how you want to solve it.

[-] TranquilTurbulence@lemmy.zip 2 points 1 month ago

Yes and no. GPT usually gives me clever solutions I wouldn't have thought of. Very often GPT also screws up, and I need to fine-tune variable names, function parameters and such.

I think the best thing about GPT is that it knows the documentation of every function, so I can ask technical questions. For example: can this function really handle dataframes, or will it internally convert the variable into a matrix and then spit out a dataframe as if nothing happened? Such conversions tend to screw up the data, which explains some strange errors I bump into. You could read all of the documentation to find out, or you could just ask GPT. Alternatively, you could show it how badly the data got screwed up after a particular function, and GPT will tell you it's because that function uses matrices internally, even though it looks like it works with dataframes.
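
In pandas terms, the failure mode looks roughly like this (a hypothetical sketch; the R equivalent is a function silently calling as.matrix internally):

```python
import pandas as pd

df = pd.DataFrame({"height": [1.70, 1.85], "weight": [65.0, 80.0]},
                  index=["ann", "bob"])

# A function that works on a matrix internally effectively does this:
m = df.to_numpy()      # row labels and column names are gone
m = m * 2.2            # some numeric work on the raw matrix

# ...then hands back "a dataframe as if nothing happened":
out = pd.DataFrame(m)  # columns are now 0 and 1, index is 0 and 1
print(out)             # the labels that identified the rows are lost
```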

I think of GPT as an assistant painter some famous artists had. The artist tells the assistant to paint the boring trees in the background and the rough shape of the main subject. Once that’s done, the artist can work on the fine details, sign the painting, send it to the local king and charge a thousand gold coins.

[-] nikaaa@lemmy.world 1 points 1 month ago

My dad uses LLM Python code generation quite routinely; he says the output's mostly fine.

[-] Angry_Autist@lemmy.world 1 points 1 month ago

For snippets, yes. Ask him to tell it to make a complete terminal service and see what happens.

[-] Subverb@lemmy.world 3 points 1 month ago

I use LLMs for C code, most often when I know full well how to code something but don't want to spend half a day expressing and debugging it.

ChatGPT or Copilot will spit out a function or snippet that's usually pretty close to what I want. I patch it up and move on to the tougher problems LLMs can't do.

[-] PenisDuckCuck9001@lemmynsfw.com 1 points 1 month ago* (last edited 1 month ago)

AI is excellent at completing low-effort, AI-generated Pearson programming homework, while I spend all the time I saved on real projects that actually matter. My Hugging Face model is probably trained on the same dataset as their bot. It gets it correct about half the time, and another 25% of the time I just have to change a few numbers or brackets around. It takes me longer to read the instructions than it takes the AI bot to spit out the correct answer.

None of it is "good" code but it enables me to have time to write good code somewhere else.

[-] bear@lemmynsfw.com 1 points 1 month ago

No. To specify exactly what you want the computer to do for you, you'd need some kind of logic-based language that both you and the computer mutually understand. Imagine if you had a spec you could reference to know what the key words and syntax in that language actually mean to the computer.

[-] ImplyingImplications@lemmy.ca -1 points 1 month ago

Writing code is probably one of the few things LLMs actually excel at. Few people want to program something nobody has ever done before; most people are just reimplementing the same things over and over, with small modifications for their use case. If imports of generic code someone else wrote make up 90% of your project, what's the difference in getting an LLM to write 90% of your code?

[-] chknbwl@lemmy.world 1 points 1 month ago

I see where you're coming from, sort of like the phrase "don't reinvent the wheel". Ethically, though, that doesn't sound far off from plagiarism.

[-] dandi8@fedia.io 1 points 1 month ago

IMO this perspective that we're all just reimplementing the same basic CRUD applications is the reason why so many software projects fail.
