this post was submitted on 18 Jul 2023
114 points (100.0% liked)

196


Be sure to follow the rule before you head out.

Rule: You must post before you leave.


founded 1 year ago
top 29 comments
[–] Inverno@lemm.ee 16 points 1 year ago (1 children)

Is it a natural response that a human would give? No.

Is it an overly cautious response that still gives correct information? Yes.

[–] lugal@sopuli.xyz 6 points 1 year ago (2 children)

That's kind of my point. AIs have no concept of time, and even less of death, and they "think" (in a metaphorical sense) very differently from us. I think it's good to be aware of that, and since this is a meme page, I don't even need a real point. I think it's funny even though I fully understand the underlying process (I obviously don't understand the process in detail, but you get what I mean).

[–] Inverno@lemm.ee 5 points 1 year ago (2 children)

Interestingly gpt4 gives a slightly different response:

I'm sorry, but as of my last update in September 2021, Michael Jackson passed away in June 2009. I am unable to provide any recent updates or information beyond that date. If there have been any posthumous releases or tributes, I would recommend checking the latest news sources for the most up-to-date information.

Honestly not sure if it’s better or worse. It could be better in that it recognizes the absurdity of asking what he’s up to and assumes you must be talking about posthumous music releases instead.

Could be worse in that it should know we’re obviously talking about the person, not his music, and shouldn’t jump to that conclusion.

But that’s honestly a tough call even for a human. If someone asks a strange question like that, do you assume they mean exactly what they said, or do you assume they meant something else and adjust your answer to fit what they probably meant?

The rest of the way the answer is structured just comes out of the system prompt telling it to remind the user of its cutoff date.

I’m not trying to argue and I’m not saying you’re wrong or right, I’m just kind of thinking out loud.

But as it is a meme, yeah gpt what the hell, who knows lol

[–] lugal@sopuli.xyz 4 points 1 year ago (1 children)

Interesting! I recently heard the phrasing that AIs aren't intelligent and that it's better to think of them as applied statistics. The model doesn't "understand" the question, it just calculates the most probable answer. That's why they sometimes suck at suggestive questions.

And there is a second AI at play that's important here. The main AI just knows stuff but doesn't reflect on whether the answer is appropriate. The second one is trained on feedback from real people who interact with ChatGPT and rate its output.

So it doesn't mention the cutoff because it's self-reflective, but because the second AI learned not to be too sure about real people's latest developments and never learned to differentiate between the living and the dead. I think this is a good way to illustrate the process.
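To make the "applied statistics" point concrete, here's a toy next-word predictor. It's just a bigram counter, nothing like GPT's actual architecture, and the corpus and function names are made up for illustration — but it shows the core idea of emitting the most probable continuation with zero understanding:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which, then always
# emit the highest-frequency successor. A real LLM does a far fancier
# version of this with a neural network over subword tokens.
corpus = ("michael jackson passed away in june 2009 . "
          "michael jackson passed away .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    # No concept of time or death here: just pick the most frequent follower.
    return follows[word].most_common(1)[0][0]

print(most_probable_next("michael"))  # jackson
print(most_probable_next("passed"))   # away
```

Swap the counting for a trained network and the single word for a whole conversation, and you have the gist of "it just calculates the most probable answer."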

[–] CrankyCarrot@lemmy.world 1 points 1 year ago

it just calculates the most probable answer

It's a very clever predictive text :)

[–] b3nsn0w@pricefield.org 1 points 1 year ago

keep in mind also that gpt-3.5 and beyond are instruction-tuned, so if you give it a task, even an implied one, it will really want to accomplish that task, unless that goes against some other training. by asking "what is Michael Jackson up to today" you're putting an expectation on it to produce a correct answer, or as close as it can manage, hence the attempt to go into recent developments or posthumous releases. it's trying to be useful even in situations where it cannot provide anything of value.

this is a bit more important when you're doing prompt engineering, because if you give it two options, one to actually answer a question and one to indicate that it cannot answer under the circumstances, it will have a strong bias against the latter. if you ask it to improve a sentence or say it's already perfect, for example, it will have a clear aversion to saying it's already perfect. you should usually just ask it to do the task unconditionally, then run a second pass to choose between the two options, because at that point the text already suggests that some task has been accomplished.
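A minimal sketch of that two-pass pattern. The `ask_model()` function here is a made-up, canned stand-in, not any real API, and the prompts are assumptions for illustration — the point is only the shape: do the task unconditionally first, then judge separately:

```python
def ask_model(prompt: str) -> str:
    """Stand-in for a real LLM call; canned replies keep the sketch runnable."""
    if prompt.startswith("Improve this sentence:"):
        # Pass 1: the task is unconditional, so the "model" always rewrites.
        return prompt.split(":", 1)[1].strip().rstrip(".") + ", concisely."
    # Pass 2: a separate yes/no judgment, free of the do-the-task bias.
    return "no"

def improve(sentence: str) -> str:
    # Pass 1: never give the model an escape hatch in the task prompt.
    rewrite = ask_model(f"Improve this sentence: {sentence}")
    # Pass 2: a fresh prompt chooses between the two candidates.
    verdict = ask_model(
        "Did the rewrite actually improve the original? Answer yes or no.\n"
        f"Original: {sentence}\nRewrite: {rewrite}"
    )
    return rewrite if verdict == "yes" else sentence

print(improve("This sentence is already perfect."))
# the judge said "no", so the original comes back unchanged
```

With a real model in place of the stub, the second prompt contains no pending task, so "it's already fine" is no longer the disfavored option.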

[–] Yearly1845@reddthat.com 2 points 1 year ago (1 children)
[–] lugal@sopuli.xyz 2 points 1 year ago

Literally 1845

[–] nottheengineer@feddit.de 7 points 1 year ago (1 children)

OpenAI downgraded it hard recently.

[–] lugal@sopuli.xyz 1 points 1 year ago (2 children)

Guess it's not the pro version, or does it have other reasons? Maybe copyright?

[–] nottheengineer@feddit.de 4 points 1 year ago

That got downgraded too. The reason is performance. These things use up metric fucktons of compute, and OpenAI wasn't able to handle the load. The website was down all the time during business hours, so they decided to take the emergency exit.

[–] dedale@kbin.social 2 points 1 year ago (1 children)

Or a side effect of censorship.

[–] Hotzilla@sopuli.xyz 1 points 1 year ago

This is a human-set rule that forces it to mention the cutoff date whenever a prompt asks for new information. All of these rules make it much dumber.

[–] trent@kbin.social 7 points 1 year ago (3 children)

Mostly I just hate the discourse where people think WE are akin to a language model, where prompts go in and actions come out. It's a gross misrepresentation of the human brain. We don't even know much about how it works, but we do know we don't spit outputs based solely on inputs.

[–] wasntme@yiffit.net 3 points 1 year ago

we don't spit outputs based solely on inputs.

I mean, that depends on what you define as an input, but unless you believe in some magic juice that makes us special (like a soul), then yeah, we are just responding based on inputs.

Now, that doesn't mean AIs work like our brains; responding based on inputs is applicable to pretty much everything. That's simply what determinism is: action leads to reaction. It's the process in between taking an input and giving an output that matters.

[–] lugal@sopuli.xyz 1 points 1 year ago (1 children)

Yeah, that's just bullshit. It isn't even logically coherent. If we are all just reacting, where is the input supposed to come from?

[–] CheeseNoodle@lemmy.world 1 points 1 year ago

I believe at least one serious attempt to define sapience described it as a process in which a creature reacts not only to external input but also to internally generated input. Granted, you can meet that definition by tacking a random number generator onto the inner workings of any LLM.

[–] AdmiralRob@lemmy.zip 1 points 1 year ago

I mean, we kinda do. We just take in so many more inputs than we could ever reasonably feed into a computer, over a much longer time period, and with processing done on those inputs that isn't entirely understood.

[–] nothisispatrick@lemmy.world 4 points 1 year ago

What’s wrong here? Everything looks correct

[–] Flicsmo@rammy.site 3 points 1 year ago (1 children)

I don't get it, is something wrong with that response? I looked it up and that is when he died.

[–] lugal@sopuli.xyz 6 points 1 year ago (4 children)

Since he's already dead, there can't be any new developments or updates. He died; that's the end of the story. I don't need news sources to know he's still dead.

My point is kind of that AIs don't have a concept of time, and even less of death. But primarily I just thought this reaction was funny.

[–] Risus_Nex@lemmy.world 2 points 1 year ago

I would argue that, even though it's not how a human would respond, it's correct nevertheless, because there could still be news about him. Maybe someone found out his grave is empty, or he faked his death, or there is a new song out from him, like with Linkin Park, where they released a "new" song with Chester singing.

[–] scribs@lemmy.blahaj.zone 2 points 1 year ago

it is wondering why you asked then lol

[–] Gambler@lemm.ee 2 points 1 year ago

I kinda read the phrasing as "there is no more news about him that would be relevant without a more specific query," not "he is alive again."

I’d more think it’s saying “he wasn’t posthumously given another Grammy this year that I know about”

[–] Flicsmo@rammy.site 2 points 1 year ago (2 children)

I dunno, it makes sense to me. New information or music releases can come out after someone's death, and you asked what he's been up to recently, not if/when he had died

[–] lugal@sopuli.xyz 2 points 1 year ago

I'm not a native speaker, but doesn't being up to something imply that he is actively doing or planning something? I didn't ask if there was news about him.

[–] WaLLy3K@infosec.pub 2 points 1 year ago

It was asked what "he", as a person, was up to recently. Not "What's the latest news on Michael Jackson?" as a topic.

That's the mistake that ChatGPT made in its response.

[–] Kolanaki@yiffit.net 3 points 1 year ago

Imagine finding out from an AI chat model that Michael Jackson now walks the earth as an undead abomination.

[–] McJonalds@lemmy.world 1 points 1 year ago

It's telling you he hasn't been up to anything lately, given that he's dead. The bot isn't supposed to give you a "human" answer, and no one is claiming that it does.
