Chthonic

joined 1 year ago
[–] Chthonic@slrpnk.net 1 points 10 months ago

This makes me wanna play Thea again

[–] Chthonic@slrpnk.net 11 points 10 months ago

Brilliant, very meta, love it

[–] Chthonic@slrpnk.net 1 points 10 months ago

It's no Borat in Skyrim, that's for sure

[–] Chthonic@slrpnk.net 5 points 11 months ago

Do you honestly believe that if Trump regains power they're going to nail him on state charges? We'll be lucky to ever have elections again, let alone see him face consequences for his crimes. If he wins, it's gonna be full-blown fascism

[–] Chthonic@slrpnk.net 5 points 11 months ago (1 children)

I like the lighting and composition but it looks a little fried, how hard did you sharpen?

[–] Chthonic@slrpnk.net 22 points 11 months ago (1 children)

ill give u a bone squirtle

[–] Chthonic@slrpnk.net 6 points 11 months ago

It's not that wild; is there anything more Republican than voting against your own best interests?

[–] Chthonic@slrpnk.net 17 points 11 months ago (4 children)

What's fucked up is that if you die here you die for real

[–] Chthonic@slrpnk.net 3 points 1 year ago

If he were smarter and/or not a walking ego then yeah, that would have been the move. Though if he were smart he probably wouldn't be in this mess.

[–] Chthonic@slrpnk.net 23 points 1 year ago* (last edited 1 year ago) (5 children)

It's not. He never wanted to buy Twitter; he just wanted to pump and dump the stock. But because he's stupid and the plan was obvious, they sued him to make him honor the deal.

So if he just turned around and shut the company down, it would give the SEC legal grounds to argue that his intention all along was market manipulation.

[–] Chthonic@slrpnk.net 25 points 1 year ago (8 children)

My understanding is that the SEC would have fucked him if he just shut it down, because it would indicate that he never intended to buy it in the first place and instead was just trying to manipulate the stock market (which is definitely what he was doing).

[–] Chthonic@slrpnk.net 1 points 1 year ago* (last edited 1 year ago) (1 children)

They don't reason; they're stochastic parrots. Their internal mechanisms are well understood, so no idea where you got the notion that the folks building these don't know how they work. It can be hard to predict or explain how an LLM produced a given output because of the huge training corpus and the statistical nature of neural nets in general.

LLMs work the same as any other net, just with massive sample sets. They have no reasoning capabilities of any kind. We're naturally inclined to ascribe humanlike thought processes to them because they produce human-sounding outputs.
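To make "statistical next-token prediction" concrete, here's a toy sketch: a hypothetical bigram model that picks the next word purely from co-occurrence counts. This is an illustration of the principle, not how production LLMs are actually implemented (those use neural nets over enormous corpora), but the point stands either way: no reasoning, just statistics over what tends to follow what.

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus" for the toy parrot.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words followed it and how often.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = bigrams[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

random.seed(0)
# After "the", the model can only ever emit "cat", "mat", or "fish",
# weighted toward "cat" (which followed "the" twice in the corpus).
print(predict("the"))
```

Scale the table up by a few billion parameters and smooth it with a neural net, and the outputs start sounding human, which is exactly why we're tempted to read thought into them.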

If you'd like the perspective of actual scientists instead of a "tech bro" like me, I'd recommend Emily Bender and Timnit Gebru: experts without a vested interest in the massively overblown hype about what LLMs are actually capable of.
