this post was submitted on 11 Aug 2023
639 points (93.7% liked)

Programmer Humor

32054 readers

Post funny things about programming here! (Or just rant about your favourite programming language.)

founded 5 years ago
 
all 39 comments
[–] Gompje@lemmy.world 42 points 1 year ago (2 children)

Haha indeed!

It’s funny when it starts to just invent things. Like packages, complete with version numbers, that... do not exist.

Or when it outputs code without using the variables...

The most annoying thing, imho, is that it keeps explaining everything all the time. Even when I prompt “you have a working app with vuejs..” and the like, it sometimes still explains how to set up the app.

That said: the tool has become a staple in my workflow whenever I need a starting point, or have to do some math/algorithmic things.

[–] kspatlas@artemis.camp 9 points 1 year ago

Like when it invented an entire package to do TOFU in Racket for me

[–] Sureito@feddit.de 3 points 1 year ago (1 children)

Are you using version 3.5 or 4?

[–] Pantoffel@feddit.de 1 points 1 year ago (1 children)

I think both are fine. But when I see 3.5 has difficulties, I usually switch to 4 and get the job done there.

[–] goodnessme@lemm.ee 2 points 1 year ago (1 children)

Why use 3.5 at all in that case?

[–] Pantoffel@feddit.de 1 points 1 year ago* (last edited 1 year ago)

Because I only get 50 messages every 3 hours

[–] silicea@lemmy.world 38 points 1 year ago (5 children)

And it feels like ChatGPT's output is getting worse every month.

[–] azvasKvklenko@sh.itjust.works 17 points 1 year ago (1 children)

It's learning from humans; that's to be expected

[–] glowie@infosec.pub 12 points 1 year ago (2 children)
[–] starman@programming.dev 16 points 1 year ago (2 children)
[–] glowie@infosec.pub 10 points 1 year ago

(⊃。•́‿•̀。)⊃

How appropriate

[–] the_beber@lemm.ee 7 points 1 year ago

AI is honestly just A most of the time.

[–] vox@sopuli.xyz 3 points 1 year ago

Yeah, the response quality is so much worse, and it has that weird "take my answer and shut up" attitude now.

[–] RightHandOfIkaros@lemmy.world 3 points 1 year ago (1 children)

I remember about 10 months ago or so ChatGPT used to output some surprisingly top-tier code. I'd ask it to create a method with some required functionality and it would output the code, fully commented and everything. I didn't have to edit the code, it just worked, and it was more or less efficient.

Now? I can't even get it to write comments for code I give to it.

[–] SpicaNucifera@lemm.ee 1 points 1 year ago

Interesting. You can't choose which "generation" you use?

[–] dan@upvote.au 0 points 1 year ago

The free version or the paid version? Part of it is that they're trying to push people towards the paid version, which is a much more sophisticated model.

[–] Bishma@discuss.tchncs.de 36 points 1 year ago (4 children)

Maybe it's because I'm only using it as plan B or C (after the documentation has already failed me), but I have never gotten any usable code out of ChatGPT.

And yet Copilot is able to finish my code perfectly after I type the first few characters... even though they're the same model.

[–] AmbientChaos@sh.itjust.works 10 points 1 year ago

I think Copilot works better because it has the context of the whole project for reference when suggesting completions. I've gotten a lot of unusable junk from it too, though.

[–] dan@upvote.au 9 points 1 year ago (2 children)

Copilot isn't using the same model. They're using a model that's been trained on a LOT of open-source code.

[–] thanks_shakey_snake@lemmy.ca 14 points 1 year ago

A lot of "open" source code ( ͡° ͜ʖ ͡°)

[–] downdaemon@lemmy.ml 4 points 1 year ago (1 children)
[–] dan@upvote.au 1 points 1 year ago

Not that I'm aware of. Even if the input is public data, the actual training scripts and resulting model tend to be closed-source. Meta's one of the only major companies I know of to release their models under a somewhat-open-source license.

[–] ReakDuck@lemmy.ml 7 points 1 year ago

Maybe the same model but different data

[–] legion02@lemmy.world 3 points 1 year ago

I go the other way with it. Give me something broken but close and I'll use the documentation to fix it.

[–] AdamEatsAss@lemmy.world 17 points 1 year ago (1 children)

I feel like for simple algorithms ChatGPT could be good, like as a reference for how to code something. But if it's simple code, I often find it faster to just write it myself than to reorganize whatever it makes to work with and match the style of other code in my codebase. And if it's complex code, I often find it harder to describe what I want than to just make it.

[–] flossdaily@lemmy.world 22 points 1 year ago (2 children)

In my experience, what makes GPT-4 great for coding is its astonishing knowledge of available software libraries, built-in interface features, etc.

I'll tell it the task I want done, and it will tell me where to find, and how to install, the necessary dependencies.

With zero experience in browser extension design, I had GPT-4 help me build an incredibly complicated Chrome extension: a vector database, a custom cloud-based server, web scraping with headless browsers, voice recognition, speech synthesis, wake-word capabilities, and a sophisticated user interface. I had ZERO experience with ANY of these.

For me, using GPT-4 was like collaborating with a just-okay programmer, but one who had extensive experience with literally every programming language, API, protocol, etc.

And it was a collaboration. We would talk through problems together. I would make observations and guesses about why a block of code wasn't working, and it would either tell me why I was wrong or tell me I was right and produce a fixed version.

[–] Pantoffel@feddit.de 3 points 1 year ago

So what are you building? A browser STT interface for chatting with GPT and other LLMs?

[–] Hazzia@discuss.tchncs.de 2 points 1 year ago (1 children)

GPT-4 is the paid version, right? I'll give it a go when my budget loosens up a bit.

[–] Even_Adder@lemmy.dbzer0.com 3 points 1 year ago (1 children)

People have had similar success with Bing Chat, and it's free and uses GPT-4.

[–] Pantoffel@feddit.de 3 points 1 year ago (1 children)

I feel that Bing Chat results are a lot worse than GPT-4's, though.

[–] Even_Adder@lemmy.dbzer0.com 1 points 1 year ago

It has different rules from OpenAI's GPT-4, so it might require a slightly different approach.

[–] kropeper@lemmy.ml 6 points 1 year ago

If there even are any docs... I use ChatGPT when I can't find any useful docs. Quite often it can find some information somewhere.

[–] Patient_runner@lemmy.world 5 points 1 year ago

This is so apt! Though it does help to get difficult syntax for small fragments working quickly, so you can get some proof of concept instead of struggling with syntax errors for an hour.

[–] flashgnash@lemm.ee 4 points 1 year ago* (last edited 1 year ago) (1 children)

Unless it's Microsoft documentation, in which case it feels more like Bill Gates beating me over the head with the frying pan until I give up and find an alternative way to achieve my goal.

[–] SpicaNucifera@lemm.ee 2 points 1 year ago

Or AWS documentation. Trying to use it feels like getting tortured with thumbscrews.

I hate it so much. It's a cyclical maze of ignorance and frustration.

Google, on the other hand, may have my hand in marriage.

[–] MajorHavoc@lemmy.world 1 points 1 year ago

Programming with AI help is like having the expert chef at my shoulder, giving me tips, but he's high as hell on three different mind-altering drugs.

Then he's like "That cake needs some lemon juice. Trust me."

[–] fox2263@lemmy.world 1 points 1 year ago

Is that Eddie Kingston?