Deebster

joined 3 years ago
[–] Deebster@lemmy.ml 7 points 4 months ago

Between XKCD and Alec, the whole of human knowledge's pretty much covered.

[–] Deebster@lemmy.ml 14 points 4 months ago (6 children)

Yunanistan sounds made up

[–] Deebster@lemmy.ml 6 points 4 months ago

Works here too, but when I tried to save it to the Internet Archive, the saved page didn't have the AI results 😟

[–] Deebster@lemmy.ml 3 points 4 months ago

Rickroll: (v) to troll the youth using memes

Makes sense!

[–] Deebster@lemmy.ml 8 points 4 months ago (1 children)

Ironically, it's a pretty well-known one itself (you see people just refer to it by mentioning "today's 10000").

[–] Deebster@lemmy.ml 3 points 4 months ago

And a way to contact drivers if you'd left something in the car, and a system to notify others of your route and progress.

[–] Deebster@lemmy.ml 4 points 5 months ago (1 children)

The Dublin-NYC one's reopened now with automated blurring out of a bunch of stuff.

[–] Deebster@lemmy.ml 6 points 5 months ago (1 children)

Huh, so it is! Growing up in the UK, the US version seemed to be on more, and I'd assumed that that was the original.

[–] Deebster@lemmy.ml 10 points 5 months ago (1 children)

You've missed off the ! so Voyager thinks it's an email address.

!mullets@lemmy.ca

[–] Deebster@lemmy.ml 6 points 5 months ago* (last edited 5 months ago) (2 children)

I doubt that you can get your skin hot enough to denature those proteins without damaging yourself. I've given myself a blister before trying.

[–] Deebster@lemmy.ml 3 points 5 months ago (1 children)

Over many years, I've settled on hydrocortisone cream followed by an ice cube. Those little buggers love me.

[–] Deebster@lemmy.ml 3 points 5 months ago (1 children)

Hmm, I think they're close enough to be able to say a neural network is modelled on how a brain works - it's not the same, but then you reach the other side of the semantics coin (like the "can a submarine swim" question).

The plasticity part is an interesting point, and I'd need to research that to respond properly. I don't know, for example, if they freeze the model because otherwise input would ruin it (internet teaching them to be sweaty racists, for example), or because it's so expensive/slow to train, or high error rates, or it's impossible, etc.

When talking to laymen I've explained LLMs as glorified text autocomplete, but there's some discussion on the boundary of science and philosophy asking whether intelligence is a side effect of being able to predict better.
