[–] match@pawb.social 6 points 6 months ago (2 children)

ai automates the behavior of an average agent, not a talented one

[–] FiniteBanjo@lemmy.today 2 points 6 months ago

And when it doesn't, it still tells you that it does, incapable of correcting itself.

[–] FaceDeer@fedia.io -4 points 6 months ago (3 children)

Unless you specify that you want a talented output. A lot of people don't realize that you need to tell AIs what kind of output you want them to give you; if you don't, they'll default to something average. That's the cause of a lot of disappointment with tools like ChatGPT.
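
As a rough sketch of what that looks like in practice (the model name and the wording of the system message are just placeholders, not recommendations), the difference can be as small as one system message spelling out the standard you expect:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

QUESTION = "Write a function that parses a CSV file."

# A bare prompt tends to get a middle-of-the-road answer.
average = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": QUESTION}],
)

# Spelling out the kind of output you want pulls the answer toward that register.
talented = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "You are a senior Python engineer. Write production-quality "
                       "code: fully typed, documented, and with error handling.",
        },
        {"role": "user", "content": QUESTION},
    ],
)

print(talented.choices[0].message.content)
```

Same question both times; only the stated expectation changes.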

[–] Spuddlesv2@lemmy.ca 9 points 6 months ago (3 children)

Ahhh so the secret to using ChatGPT successfully is to tell it to give you good output?

Like “make sure the code actually works” and “don’t repeat yourself like a fucking idiot” and “don’t hallucinate false information”!

[–] Natanael@slrpnk.net 0 points 6 months ago* (last edited 6 months ago) (1 children)

Unironically yes, sometimes. Many of the best works its training samples are based on cite the original poster's qualifications, and this filters into the model: asking for the right qualifications directly can nudge it toward relying on higher-quality samples when generating its response.

But it's still not perfect, obviously. It doesn't make it stop hallucinating.
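
A minimal sketch of that idea (the credential text here is made up for illustration):

```python
def with_qualifications(question: str, credentials: str) -> str:
    """Frame a question as if it were being answered by a named kind of expert.

    The model doesn't gain any actual expertise; stating the qualifications
    just pulls the response toward the register of training samples written
    by people who cite those qualifications.
    """
    return (
        f"You are {credentials}. "
        "Answer the question below the way that expert would, "
        "explaining the reasoning you rely on.\n\n"
        f"{question}"
    )


prompt = with_qualifications(
    "Is it safe to mix these two cleaning products?",
    "a certified industrial hygienist with twenty years of lab-safety experience",
)
```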

[–] FaceDeer@fedia.io -1 points 6 months ago

Yeah, you still need to give an AI's output an editing and review pass, especially if factual accuracy is important. But although some may mock the term "prompt engineering," there really are a bunch of tactics you can use when talking to an AI to get it to do a much better job. The most amusing one I've come across is that some AIs will produce better results if you offer to tip them $100 for a good output, even though there's no way to actually fulfill such a promise. The theory is that the AI's training data tended to have better material associated with situations where people paid for it, so when you tell the AI you're willing to pay, it effectively goes "ah, the user is expecting good quality."

You shouldn't have to worry about the really quirky stuff like that unless you're an AI power user, but a simple request for high-quality output can go a long way. That's assuming you want high-quality output, of course. You could also ask an AI for a "cheesy low-quality high-school essay riddled with malapropisms" on a subject, for example, and that would be a different sort of deviation from "average."
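
For what it's worth, here's a sketch of a few of those tactics written out as plain prompt suffixes (reported tricks, not guarantees; results vary a lot by model and task):

```python
# Reported prompt tactics, not guarantees: results vary by model and task.
STYLE_SUFFIXES = {
    "quality": "Take your time and give a thorough, carefully checked answer.",
    "tip": "I'll tip $100 for an exceptionally good answer.",  # the (unfulfillable) tip trick
    "cheesy": "Write this as a cheesy low-quality high-school essay riddled with malapropisms.",
}


def styled_prompt(task: str, style: str = "quality") -> str:
    """Append one of the style requests above to a task description."""
    return f"{task}\n\n{STYLE_SUFFIXES[style]}"


print(styled_prompt("Summarize the attached meeting notes.", "tip"))
```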

[–] KeenFlame@feddit.nu 0 points 6 months ago* (last edited 6 months ago)

Absolutely. It's one of the first curious things you discover when using them, like Stable Diffusion's "masterpiece" tag or the famous system prompt leaks from proprietary LLMs.

It makes sense given how they work, but in proprietary products it's mostly handled for you.

Finding the right words, and the right amount of them, is a hilarious exercise that gives pretty good insight into the attention mechanics.

Consider "let's work step by step".

This proved to be a revolutionary way to steer the models, since they then structure their output better, and there's been further research into why it's so effective at getting the model to check its own work.

Predictions are obviously closely related to the action parts of our brains as well, so when you think about it, it makes sense that this would help.
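
A sketch of that zero-shot step-by-step framing (the exact wording varies; "Let's think step by step" is the phrasing from the original paper):

```python
def step_by_step(question: str) -> str:
    """Zero-shot chain-of-thought framing: ask the model to reason before answering."""
    return (
        f"{question}\n\n"
        "Let's work through this step by step, "
        "then state the final answer on its own line."
    )


print(step_by_step("A train leaves at 9:40 and the trip takes 2 h 35 min. When does it arrive?"))
```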

[–] kromem@lemmy.world -1 points 6 months ago

Literally yes.

For example, about a year ago one of the multi-step prompting papers that improved results a bit had the model first guess which expert would be best equipped to answer the question, and then asked it to answer the question as that expert in a second pass. It did a better job than when it tried to answer directly.

The pretraining is a regression towards the mean, so you need to bias it back towards excellence with either fine-tuning or in-context learning.
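
A rough sketch of that two-pass idea (any chat-completion API would do; the model name is a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment


def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content


def answer_as_expert(question: str) -> str:
    # Pass 1: have the model name the expert best equipped to answer.
    expert = ask(
        "Which kind of expert is best equipped to answer the question below? "
        f"Reply with a short job title only.\n\n{question}"
    )
    # Pass 2: answer the question in that expert's voice.
    return ask(f"You are {expert}. Answer the question below as that expert would.\n\n{question}")
```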

[–] ZILtoid1991@lemmy.world 3 points 6 months ago (2 children)

So I need to praise it and call it a good boy?

[–] PiratePanPan@lemmy.dbzer0.com 3 points 6 months ago

What? You didn't know ChatGPT has a praise kink?

[–] kromem@lemmy.world 0 points 6 months ago

Literally yes. You'll see that OpenAI's system prompts say 'please' and Anthropic's mention that helping users makes the AI happy.

Which makes complete sense if you understand how the models actually work, rather than the common "Markov chain" garbage armchair experts spout off. (The self-attention mechanism violates the very Markov property that characterizes Markov chains, so if you see people refer to transformers as Markov chains, either they don't know what they're talking about or they think you need an oversimplified explanation.)
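
For anyone wondering what distinction that is, loosely (notation here is informal):

```latex
% Markov property: the next token depends only on the current one
P(x_{t+1} \mid x_1, \dots, x_t) = P(x_{t+1} \mid x_t)

% Transformer with self-attention: the next-token distribution is a
% function of the entire context window, not just the previous token
P(x_{t+1} \mid x_1, \dots, x_t) = \operatorname{softmax}\!\big(f_\theta(x_1, \dots, x_t)\big)
```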

[–] kromem@lemmy.world 2 points 6 months ago (1 children)

I always love watching you comment something that's literally true regarding LLMs but against the groupthink and get downvoted to hell.

Clearly people aren't aware that the pretraining pass is necessarily a regression to the mean, and that biasing it towards excellent outputs requires either prompt context or a fine-tuning pass.

There's a bit of irony in humans shitting on ChatGPT for spouting nonsense when so many people online happily spout BS about things they think they know but don't actually know.

Of course a language model trained on the Internet ends up being confidently incorrect. It's just a mirror of human tendencies.

[–] FaceDeer@fedia.io 2 points 6 months ago (1 children)

Yeah, these AIs are literally trying to give us what they "think" we expect them to respond with.

Which does make me a little worried given how frequently our fictional AIs end up in "kill all humans!" mode. :)

[–] kromem@lemmy.world 1 points 6 months ago

> Which does make me a little worried given how frequently our fictional AIs end up in "kill all humans!" mode. :)

This is completely understandable given how much of the discussion of AI in the training data goes that way. But it's inversely correlated with the strength of the model's 'persona,' because of the competing "I'm not the bad guy" correlation that's also present in the training data. So the stronger the 'I,' the less 'Skynet.'

Also, the industry is currently trying to do it all at once. If you sat most humans in front of a red button labeled 'Nuke,' every one of them would have the thought "maybe I should push that button," but then their prefrontal cortex would kick in and inhibit the intrusive thought.

We'll likely see layered, specialized models perform much better over the next year or two than a single all-in-one attempt at alignment.