this post was submitted on 19 Nov 2023
116 points (93.9% liked)

Technology

[–] Eheran@lemmy.world 6 points 11 months ago (2 children)

Ask it to write code that replaces every occurrence of "me" in every file name in a folder with "us", excluding occurrences that are part of a word (e.g. "medium" should not become "usdium"), and it will give you code that does exactly that.
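Something like this minimal sketch, for example, assuming the files sit in a hypothetical folder called `data` and that regex word boundaries decide what counts as "part of a word":

```python
import re
from pathlib import Path

folder = Path("data")  # hypothetical folder name, adjust as needed

for path in folder.iterdir():
    if not path.is_file():
        continue
    # \bme\b only matches "me" as a standalone word, so "medium" stays "medium"
    # (regex word characters include underscores, so "call_me.txt" is also left alone)
    new_name = re.sub(r"\bme\b", "us", path.name)
    if new_name != path.name:
        path.rename(path.with_name(new_name))
```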

You can ask it to write code that runs a heat simulation in an aluminum plate with one side heated and the other cooled. It will get there with some help. It works. That's absolutely fucking crazy.
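Roughly this shape, as a sketch (1-D explicit finite differences across the plate; the thickness, temperatures and grid here are assumed values, not anything from the actual prompt):

```python
import numpy as np

alpha = 9.7e-5             # thermal diffusivity of aluminum, m^2/s
thickness = 0.02           # plate thickness in m (assumed)
n = 41                     # grid points across the plate
dx = thickness / (n - 1)
dt = 0.4 * dx**2 / alpha   # keeps the explicit scheme stable (needs dt <= dx^2 / (2 * alpha))

T = np.full(n, 20.0)       # start the whole plate at room temperature, deg C
T[0], T[-1] = 200.0, 0.0   # heated side and cooled side held at fixed temperatures

for _ in range(20000):
    # interior update of dT/dt = alpha * d^2T/dx^2
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(T.round(1))          # approaches a straight line from 200 down to 0 deg C
```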

[–] sugar_in_your_tea@sh.itjust.works 5 points 11 months ago (1 children)

Maybe. That really depends on whether that task, or a very similar one, exists in sufficient quantity in its training set. Basically, you could get essentially the same result by searching online for code examples; the LLM might just make it a little faster (and probably introduce some errors as well).

An LLM can only generate text that exists in its training data. That's a pretty important limitation, which has all kinds of copyright-related issues associated with it (e.g. I can't just copy a code example from GitHub in most cases).

[–] Eheran@lemmy.world 0 points 11 months ago (1 children)

No, it does not depend on preexisting tasks, which is why I gave you those two random examples. You can come up with new, never-before-seen questions if you want to. How to stack a cable, a car battery, a beer bottle, a welding machine and a tea pot to get the highest tower. Whatever. It is not always right, but it is also much more capable than you think.

[–] huginn@feddit.it 2 points 11 months ago (1 children)

It is dependent on preexisting tasks; you're just describing the encoded latent space.

It's not explicit but it's implicitly encoded.

And you still can't trust it because the encoding is intrinsically lossy.

[–] Eheran@lemmy.world -1 points 11 months ago

It can come up with new solutions.

[–] huginn@feddit.it 3 points 11 months ago* (last edited 11 months ago)

Ask it to finish writing the code to fetch a permission and it will make a request with a non-existent code. Ask it to implement an SNS API invocation and it'll make up calls that don't exist.
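For contrast, a correct SNS publish with boto3 is only a few lines (the topic ARN below is just a placeholder):

```python
import boto3

# Real boto3 SNS API: create a client and publish a message to a topic
sns = boto3.client("sns")
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:my-topic",  # placeholder ARN
    Message="hello from the actual API",
)
```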

Regurgitating code that someone else wrote for an aluminum simulation isn't the flex you think it is: that's just an untrustworthy search engine, not a thinking machine.