this post was submitted on 18 Nov 2024
21 points (100.0% liked)

TechTakes

1426 readers
346 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

[–] sailor_sega_saturn@awful.systems 4 points 3 hours ago* (last edited 3 hours ago)

Oh hey looks like another Chat-GPT assisted legal filing, this time in an expert declaration about the dangers of generative AI: https://www.sfgate.com/tech/article/stanford-professor-lying-and-technology-19937258.php

The two missing papers are titled, according to Hancock, “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance” and “The Influence of Deepfake Videos on Political Attitudes and Behavior.” The expert declaration’s bibliography includes links to these papers, but they currently lead to an error screen.

Irony can be pretty ironic sometimes.

[–] skillissuer@discuss.tchncs.de 7 points 4 hours ago* (last edited 4 hours ago) (3 children)

andrew tate's "university" had a leak, exposing ca. 800k usernames and 325k email addresses of people who failed to pay the $50 monthly fee

entire thing available at DDoSecrets, just gonna drop the tree of that torrent:

├── Private Channels
│   ├── AI Automation Agency.7z
│   ├── Business Mastery.7z
│   ├── Content Creation + AI Campus.7z
│   ├── Copywriting.7z
│   ├── Crypto DeFi.7z
│   ├── Crypto Trading.7z
│   ├── Cryptocurrency Investing.7z
│   ├── Ecommerce.7z
│   ├── Health & Fitness.7z
│   ├── Hustler's Campus.7z
│   ├── Social Media & Client Acquisition.7z
│   └── The Real World.7z
├── Public Channels
│   ├── AI Automation Agency.7z
│   ├── Business Mastery.7z
│   ├── Content Creation + AI Campus.7z
│   ├── Copywriting.7z
│   ├── Crypto DeFi.7z
│   ├── Crypto Trading.7z
│   ├── Cryptocurrency Investing.7z
│   ├── Ecommerce.7z
│   ├── Fitness.7z
│   ├── Hustler's Campus.7z
│   ├── Social Media & Client Acquisition.7z
│   └── The Real World.7z
└── users.json.7z

yeah i studied defi and dropshipping at andrew tate's hustler university

statements dreamed up by the utterly deranged

[–] sailor_sega_saturn@awful.systems 2 points 2 hours ago (1 children)

"Yeah I thought about going into civil engineering but the department of hustling really spoke to me y'know?"

[–] skillissuer@discuss.tchncs.de 1 points 51 minutes ago* (last edited 49 minutes ago)

i have never felt imposter syndrome since

nikhil suresh, probably

[–] o7___o7@awful.systems 3 points 4 hours ago* (last edited 4 hours ago)

The word "deranged" is getting a workout lately, ain't it?

[–] Soyweiser@awful.systems 2 points 4 hours ago

I'm just curious how many hits you would get if you searched for '4 hour work week', as IIRC that is where all these people stole the idea from. (Well, not totally: the idea they're stealing is selling others the idea of the 4-hour work week, but I hope you get what I mean. It's 4-hour work weeks all the way down.)

[–] rook@awful.systems 5 points 10 hours ago (1 children)

Interesting post, and corresponding mastodon thread, by cwebber on the non-decentralised-ness of Bluesky.

https://dustycloud.org/blog/how-decentralized-is-bluesky/

https://social.coop/@cwebber/113527462572885698

The author is keen about this particular “vision statement”:

Preparing for the organization as a future adversary.

The assumption being, stuff gets enshittified and how might you guard your product against the future stupid and awful whims of management and investors?

Of course, they don’t consider that it cuts both ways, which brings us to Jack Dorsey’s personal grumbles about Twitter: the risk from his point of view was the company he founded doing evil, unthinkable things like, uh, banning nazis. He’s keen for that sort of thing to never happen again on his platforms.

[–] dgerard@awful.systems 5 points 9 hours ago

note that cwebber wrote the ActivityPub spec, btw

[–] khalid_salad@awful.systems 13 points 21 hours ago (2 children)

how come every academic I have worked with has given me some variation of

they already have all of my data, I don't really care about my privacy

i'm in computer science 🙃

[–] Architeuthis@awful.systems 5 points 12 hours ago (1 children)

When people start going on about having nothing to hide usually it helps to point out how there's currently no legal way to have a movie or a series episode saved to your hard drive.

I suspect great overlap between nothing-to-hide-people and the people who watch the worst porn imaginable but think incognito mode is magic.

[–] self@awful.systems 9 points 11 hours ago

what’s wild is in the ideal case, a person who really doesn’t have anything to hide is both unimaginably dull and has effectively just confessed that they would sell you out to the authorities for any or no reason at all

people with nothing to hide are the worst people

[–] self@awful.systems 10 points 20 hours ago

the marketing fucks and executive ghouls who came up with this meme (that used to surface every time I talked about wanting to de-Google) are also the ones who make a fuckton of money off of having a real-time firehose of personal data straight from the source, cause that’s by far what’s most valuable to advertisers and surveillance firms (but I repeat myself)

[–] YourNetworkIsHaunted@awful.systems 8 points 23 hours ago (1 children)

Never thought I'd die fighting alongside a League of Legends fan.

How about an artist valuer?

Aye. That I could do.

[–] BlueMonday1984@awful.systems 6 points 23 hours ago (1 children)

You just know Netflix's inbox is getting flooded with the absolute worst shit League of Legends players can come up with right now

And having played more LoL than I care to admit in high school, that's some truly vile shit. If only it actually made it through the filters to whoever actually made the relevant choices.

[–] gerikson@awful.systems 9 points 1 day ago (5 children)

Dude discovers that one LLM is not entirely shit at chess, spends time and tokens proving that other models are actually also not shit at chess.

The irony? He's comparing it against Stockfish, a computer chess engine. Computers playing chess at a superhuman level is a solved problem. LLMs have now slightly approached that level.

For one, gpt-3.5-turbo-instruct rarely suggests illegal moves,

Writeup https://dynomight.net/more-chess/

HN discussion https://news.ycombinator.com/item?id=42206817

It's particularly hilarious how thoroughly they're missing the point. The fact that it suggests illegal moves at all means that no matter how good its openings are, the scaling laws and emergent behaviors haven't magicked up an internal model of the game of chess, or even of the state of the chess board it's working with. I feel like playing games is a particularly powerful example of this, because the game rules provide a very clear structure to model and it's very obvious when that model doesn't exist.

[–] BigMuffin69@awful.systems 8 points 23 hours ago* (last edited 23 hours ago)

I remember several months ago (a year ago?) when the news got out that gpt-3.5-turbo-papillion-grumpalumpgus could play chess at around ~1600 Elo. I was skeptical, suspecting the apparent skill was just a hacked-on patch to stop folks from clowning on their models on xitter. Like, if an LLM had just read the instructions of chess and started playing like a competent player, that would be genuinely impressive. But if what happened is they generated 10^12 synthetic games of chess played by stonk fish and used that to train the model, that ain't an emergent ability; that's just brute-forcing chess. The fact that larger, open-source models that perform better on other benchmarks still flail at chess is a glaring red flag that something funky was going on w/ gpt-3.5-turbo-instruct to drive home the "eMeRgEnCe" narrative. I'd bet decent odds that if you played with modified rules (knights move a one-space-longer L shape, you cannot move a pawn 2 squares after it last moved, etc.), gpt-3.5 would fuckin suck.

Edit: the author asks "why skill go down tho" on later models. Like isn't it obvious? At that moment of time, chess skills weren't a priority so the trillions of synthetic games weren't included in the training? Like this isn't that big of a mystery...? It's not like other NN haven't been trained to play chess...

[–] sc_griffith@awful.systems 14 points 1 day ago (1 children)

LLMs sometimes struggle to give legal moves. In these experiments, I try 10 times and if there’s still no legal move, I just pick one at random.

uhh
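For the curious, the quoted methodology is easy to sketch. Here's roughly what that retry-then-random fallback looks like (a minimal sketch; `ask_llm` and the legal-move set are stand-ins, not the author's actual code):

```python
import random

def pick_move(ask_llm, legal_moves, retries=10):
    """Ask the LLM up to `retries` times; fall back to a random legal move."""
    for _ in range(retries):
        move = ask_llm()
        if move in legal_moves:
            return move, "llm"
    # No legal suggestion after all retries: pick one at random,
    # exactly as the writeup describes.
    return random.choice(sorted(legal_moves)), "random"
```

Which means a model that never produces a legal move still "plays" a full game, just as a uniformly random mover.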

[–] mountainriver@awful.systems 4 points 8 hours ago

Battlechess both could choose legal moves and also had cool animations. Battlechess wins again!

[–] sailor_sega_saturn@awful.systems 7 points 1 day ago* (last edited 1 day ago)

Here are the results of these three models against Stockfish—a standard chess AI—on level 1, with a maximum of 0.01 seconds to make each move

I'm not a Chess person or familiar with Stockfish so take this with a grain of salt, but I found a few interesting things perusing the code / docs which I think makes useful context.

Skill Level

I assume "level" refers to Stockfish's Skill Level option.

If I mathed right, Stockfish roughly estimates Skill Level 1 to be around 1445 Elo (source). However, it says "This Elo rating has been calibrated at a time control of 60s+0.6s", so it may be significantly lower here.

Skill Level affects the search depth (it appears to use a depth of 1 at Skill Level 1). It also enables MultiPV 4 to compute the four best principal variations and randomly pick from them (more randomly at lower skill levels).
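To put an Elo figure like ~1445 in context: the standard Elo expected-score formula (the generic formula, not Stockfish's internal calibration) is a one-liner:

```python
def expected_score(rating_a, rating_b):
    """Standard Elo expected score (0..1) for player A against player B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
```

So, for example, an 1800-rated player would be expected to score somewhere close to 0.9 against a 1445-rated one; equal ratings give exactly 0.5. Given the time-control caveat above, the effective strength at ~10ms per move could be well below the calibrated number.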

Move Time & Hardware

This is all independent of move time. The author used a move time of 10 milliseconds (for Stockfish; no mention of how much time the LLMs got). ... or at least they did if they accounted for the "Move Overhead" option defaulting to 10 milliseconds. If they left that at its default, then 10ms - 10ms = 0ms, so 🤷‍♀️.

There is also no information about the hardware or the number of threads they ran this on, which I feel is important information.

Evaluation Function

After the game was over, I calculated the score after each turn in “centipawns” where a pawn is worth 100 points, and ±1500 indicates a win or loss.

Stockfish's FAQ mentions that they have gone beyond centipawns for evaluating positions, because the engine is strong enough that material advantage is much less relevant than it used to be. I assume it doesn't really matter at level 1 with ~0 seconds to produce moves, though.
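For readers unfamiliar with the unit, a centipawn score of the kind the author describes works roughly like this (the piece values here are conventional textbook numbers, not necessarily what the author or Stockfish used, and the author's actual per-turn scores presumably came from an engine evaluation rather than a bare material count):

```python
# Conventional centipawn piece values (pawn = 100). Exact values vary
# by engine and era; these are illustrative only.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def material_cp(white_pieces, black_pieces):
    """Material balance in centipawns; positive means White is ahead."""
    score = sum(PIECE_VALUES[p] for p in white_pieces)
    score -= sum(PIECE_VALUES[p] for p in black_pieces)
    # The writeup treats +/-1500 as a won/lost game, so clamp there.
    return max(-1500, min(1500, score))
```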

Still, since the author has Stockfish handy anyway, it'd be interesting to use it in its non-handicapped form to evaluate who won.

[–] pikesley@mastodon.me.uk 8 points 1 day ago

@gerikson @BlueMonday1984 the only analysis of computer chess anybody needs https://youtu.be/DpXy041BIlA

[–] misterbngo@awful.systems 9 points 1 day ago

Stack Overflow, now with sponsored crypto blogspam: "Joining forces: How Web2 and Web3 developers can build together"

I really love the byline here. "Kindest view of one another". Seething rage at the bullshittery these "web3" fuckheads keep producing certainly isn't kind for sure.

[–] self@awful.systems 12 points 2 days ago* (last edited 2 days ago)

a better-thought-out announcement is coming later today, but our WriteFreely instance at gibberish.awful.systems has reached a roughly production-ready state (and you can hack on its frontend by modifying the templates, pages, static, and less directories in this repo and opening a PR)! awful.systems regulars can ask for an account and I'll DM an invite link!

[–] swlabr@awful.systems 9 points 2 days ago (1 children)

When the reporter entered the confessional, AI Jesus warned, “Do not disclose personal information under any circumstances. Use this service at your own risk.

Do not worry my child, for everything you say in this hallowed chamber is between you, AI Jesus, and the army of contractors OpenAI hires to evaluate the quality of their LLM output.
