this post was submitted on 16 Dec 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this.)

top 50 comments
[–] corbin@awful.systems 8 points 4 days ago (2 children)

Today's "Luigi isn't sexy" poster is Thomas Ptacek. The funniest example is probably this reply on the orange site:

That's an extrapolation from a poll, not literally 50 million people…

A cryptographer not believing in statistical analysis! I can't stop giggling, sorry.

[–] TinyTimmyTokyo@awful.systems 3 points 4 days ago (1 children)

One thing to keep in mind about Ptacek is that he will die on the stupidest of hills. Back when Y Combinator president Garry Tan tweeted that members of the San Francisco board of supervisors should be killed, Ptacek defended him to the extent that the mouth-breathers on HN even turned on him.

[–] froztbyte@awful.systems 2 points 4 days ago

I was trying to remember at which point I unfollowed him, and I think it was exactly this nonsense

[–] mawhrin@awful.systems 3 points 4 days ago

I liked ptaček better when he still knew he didn’t know everything.

[–] rook@awful.systems 7 points 4 days ago

And, whilst I’m here, a post from someone who tried using copilot to help with software dev for a year.

I think my favourite bit was

Don’t use LLMs for autocomplete, use them for dialogues about the code.

Tried that. It’s worse than a rubber duck, which at least knows to stay silent when it doesn’t know what it’s talking about.

https://infosec.exchange/@david_chisnall/113690087142854474

(and also https://en.m.wikipedia.org/wiki/Rubber_duck_debugging for those who haven’t come across it)

[–] skillissuer@discuss.tchncs.de 13 points 5 days ago* (last edited 5 days ago) (3 children)

ai fan asks chempros about their use of lying boxes: majority opinion is that this shit is useless, leaks confidential information and is a massive legal liability https://www.reddit.com/r/Chempros/comments/1hgxvsj/ai_in_the_workplace_how_have_chemistsscientists/

top response:

It’s a good trick to be instantly dismissed. No, really, that’s the latest I had in terms of company policy. If you’re caught using AI for anything, you’re out the door. It’s a lawsuit waiting to happen (and a lawsuit we cannot defend against). Gross misconduct, not eligible for rehire, and all that. Same as intentionally misrepresenting data (because it is). (Pharma)

[–] blakestacey@awful.systems 9 points 5 days ago (1 children)

From the replies:

In cGMP and cGLP you have to be able to document EVERYTHING. If someone, somewhere messes up the company and authorities theoretically should be able to trace it back to that incident. Generative AI is more-or-less a black box by comparison; plus how often it’s confidently incorrect is well known and well documented. To use it in a pharmaceutical industry would be teetering on gross negligence and asking for trouble.

Also suppose that you use it in such a way that it helps your company profit immensely and—uh oh! The data it used was the patented IP of a competitor! How would your company legally defend itself? Normally it would use the documentation trail to prove that they were not infringing on the other company’s IP, but you don’t have that here. What if someone gets hurt? Do you really want to make the case that you just gave Chatgpt a list of results and it gave a recommended dosage for your drug? Probably not. When validating SOPs are they going to include listening to Chatgpt in it? If you do, then you need to make sure that OpenAI has their program to the same documentation standards and certifications that you have, and I don’t think they want to tangle with the FDA at the moment.

There’s just so, SO many things that can go wrong using AI casually in a GMP environment that end with your company getting sued and humiliated.

And a good sneer:

With a few years and a couple billion dollars of investment, it’ll be unreliable much faster.

[–] skillissuer@discuss.tchncs.de 4 points 4 days ago (1 children)

for anyone wondering cgmp/cglp means current good manufacturing/laboratory practices and it's mostly a set of paperwork concerning audits etc and repeatability of everything

[–] Soyweiser@awful.systems 5 points 4 days ago* (last edited 4 days ago) (1 children)

I assume a few of these good practices were discovered after a certain price in blood was paid.

[–] skillissuer@discuss.tchncs.de 4 points 4 days ago

everything has to be validated, certified, calibrated, written down and accessible for audit, on top of, you know, actual physical side of good manufacturing like keeping everything clean and in spec. some of that is to control for random fuckups and some is for cover-your-ass purposes. but yeah, good couple thousand people died before it became an actual globally enforced thing

[–] sailor_sega_saturn@awful.systems 5 points 4 days ago (2 children)

Days since last comparison of Chat-GPT to shitty university student: zero

More broadly I think it makes more sense to view LLMs as an advanced rubber ducking tool - like a broadly knowledgeable undergrad you can bounce ideas off to help refine your thinking, but whom you should always fact check because they can often be confidently wrong.

Seriously why does everyone like this analogy?

[–] blakestacey@awful.systems 6 points 4 days ago

As a person whose job has involved teaching undergrads, I can say that the ones who are honestly puzzled are helpful, but the ones who are confidently wrong are exasperating for the teacher and bad for their classmates.

[–] skillissuer@discuss.tchncs.de 6 points 4 days ago* (last edited 4 days ago)

good question, i have no clue, especially since i wasn't like this as an undergrad. it's really not hard to say "i don't know, boss" or "more experimental data is needed", and chatgpt will never say this

a shitty undergrad probably won't leak confidential info either (maybe on the sender side, but never on the receiver side, as in receiving unexplained stolen confidential info from cosmic noise)

[–] YourNetworkIsHaunted@awful.systems 8 points 5 days ago (2 children)

AI could be a viable test for bullshit jobs as described by Graeber. If the disinformatron can effectively do your job, then doing it well clearly doesn't matter to anyone.

[–] skillissuer@discuss.tchncs.de 4 points 5 days ago (1 children)

idk, genai can fuck up a couple of these too

[–] YourNetworkIsHaunted@awful.systems 5 points 5 days ago* (last edited 5 days ago)

It's not an exhaustive search technique, but it may be an effective heuristic if anyone is planning The Revolution(tm).

[–] rook@awful.systems 6 points 4 days ago (1 children)

Interesting article about netflix. I hadn’t really thought about the scale of their shitty forgettable movie generation, but there are apparently hundreds and hundreds of these things with big names attached and no-one watches them and no-one has heard of them and apparently Netflix doesn’t care about this because they can pitch magic numbers to their shareholders and everyone is happy.

“What are these movies?” the Hollywood producer asked me. “Are they successful movies? Are they not? They have famous people in them. They get put out by major studios. And yet because we don’t have any reliable numbers from the streamers, we actually don’t know how many people have watched them. So what are they? If no one knows about them, if no one saw them, are they just something that people who are in them can talk about in meetings to get other jobs? Are we all just trying to keep the ball rolling so we’re just getting paid and having jobs, but no one’s really watching any of this stuff? When does the bubble burst? No one has any fucking clue.”

What a colossal waste of money, brains, time and talent. I can see who the market for stuff like sora is, now.

https://www.nplusonemag.com/issue-49/essays/casual-viewing/

[–] istewart@awful.systems 5 points 4 days ago

I feel like before Redbox went under, it was also a dumping ground for this sort of thing. For instance, that mid-budget Western "Rust" where Alec Baldwin killed the camerawoman on set felt like it was destined for this sort of distribution strategy. Who's clamoring to go out to the theater to see a Western with Alec Baldwin these days? But it might stand out among all the other slop when you're looking to turn your brain off on a Saturday night.

See also the rise of the "geezer-teasers," where a random 80s/90s action star signs up to appear in the first and last 10 minutes of a generic action movie filmed someplace inexpensive, most likely eastern Europe or southeast Asia. There were a lot of those. Perhaps my favorite, that I still want to watch someday, was Danny Trejo and Danny Glover in "Bad-Ass 2: Bad-Asses."

[–] maol@awful.systems 5 points 4 days ago (1 children)

I had to use clipchamp for something recently and my god, what an awful, enshittified piece of software. It's sending me emails now!

[–] froztbyte@awful.systems 8 points 4 days ago* (last edited 4 days ago)

tangentially: I've been getting reminded of a bunch of services existing, by way of pointless "your year in review" bullshit

fuck spotify for starting that misfeature, and fuck everyone else for falling over themselves to get On Trend

[–] Soyweiser@awful.systems 3 points 4 days ago

Lol lmao. (For the people not into Dutch: our main alt-right politician lost a lot of money investing in the Luna cryptocurrency. Of course he is into crypto, and of course this site, which is a pro-crypto site, pivots to his bitcoin holdings. No shock there; we know people pay the cryptofash in crypto. And the site is using the 'register now and get the first 10 bucks free!' trick casinos also pull.)

[–] rook@awful.systems 16 points 6 days ago (1 children)

In further bluesky news, the team have a bit of an elon moment and forget how public they made everything.

https://bsky.app/profile/miriambo.bsky.social/post/3ldq2c7lu6c25 (only readable if you are logged in to bluesky)

Good morning. Let me check if I’ve got this right. Juni created a bot that shows what Aaron (head of trust and safety) likes. His likes are public information. Aaron likes a porn post. Trust and safety ban the bot and creator in 16 minutes. Creator appeals and ban is upheld.

[–] blakestacey@awful.systems 13 points 6 days ago

the team have a bit of an elon moment

"Oh shit, which one of them endorsed the German neo-Nazis?"

Aaron likes a porn post

"Whew."

[–] blakestacey@awful.systems 7 points 6 days ago* (last edited 6 days ago) (3 children)

Not A Sneer But: "Princ-wiki-a Mathematica: Wikipedia Editing and Mathematics" and a related blog post. Maybe of interest to those amongst us whomst like to complain.

[–] blakestacey@awful.systems 3 points 4 days ago

I saw this floating around fedi (sorry, don't have the link at hand right now) and found it an interesting read, partly because it helped codify why editing Wikipedia is not the hobby for me. Even when I'm covering basic, established material, I'm always tempted to introduce new terminology that I think is an improvement, or to highlight an aspect of the history that I feel is underappreciated, or just to make a joke. My passion project — apart from the increasingly deranged fanfiction, of course — would be something more like filling in the gaps in open-access textbook coverage.

[–] sc_griffith@awful.systems 4 points 5 days ago

very interesting, thank you for sharing

[–] khalid_salad@awful.systems 9 points 6 days ago (15 children)

Y'all, with Proton enshittifying (scribe and wallet nonsense), I think I am never going to sign up for another all-in-one service like this. Now I gotta determine what to do about:

  • Proton Mail
  • Proton VPN
  • Proton Drive
  • Proton Calendar

and I'd be forced to reassess my password manager if I hadn't already been using BitWarden when Proton Pass came out.

Self-hosting is a non-starter (too lazy to remember a new password for my luggage). Any thoughts? Are other Proton users here jumping ship? Should I just resign myself to using Proton until they eventually force some stupid ass "Chatbot will look at the contents of your Drive and tell you which authorities to surrender yourself to"?

[–] maol@awful.systems 4 points 4 days ago

I am no tech expert but I use tuta for email and disroot for forms, pads and file sharing.

[–] rook@awful.systems 7 points 6 days ago (1 children)

For VPNs, at least, I can offer some suggestions. If you wanted to securely access a specific box or network of yours, tailscale is pretty great and very painless to use. If you wanted to do stuff without various folk noticing then that’s a bit trickier but I’ve been happy using mullvad… they’re not the cheapest, though they have some splendid anonymous payment mechanisms (you can literally mail them a wad of banknotes with a magic code on a bit of paper… you don’t even need to muck about with bitcoin).

[–] khalid_salad@awful.systems 5 points 6 days ago* (last edited 6 days ago) (1 children)

I have a subscription for Private Internet Access that I was using before subscribing to Proton Mail (which comes with Proton VPN). I figured it was all the same (they all have a slightly skeezy feel to me).

Then I checked out Mullvad's website and it's really quite awesome. Everything about their service has a "we want to make this accessible to everyone" vibe, which I appreciate. I am going to try it out. <3

[–] froztbyte@awful.systems 5 points 6 days ago (1 children)

Oh yeah I forgot to mention that in my comment: drop PIA. Never touch anything owned by PIA or Kape. Ever.

[–] rook@awful.systems 10 points 6 days ago* (last edited 6 days ago) (2 children)

Bluesky’s approach to using domain names to mean identity is now showing cracks that everyone can see: https://tedium.co/2024/12/17/bluesky-impersonation-risks/

(it was always shaky, but mostly only shown by infosec folks who signed up as amazon s3, etc)

TL;DR: scammer buys .com domain for journalist’s name, registers it on bluesky, demands money to hand it over or face reputational damage, uses other fake accounts with plausible names and backgrounds to encourage the mark to pay up. Fun stuff. The best bit is when the sockpuppets got one of the real people they were pretending to be banned from bluesky.

[–] Soyweiser@awful.systems 10 points 6 days ago

It seems like it is a neat addition to a robust verification system, sadly they picked it as a replacement for a verification system. Ah the libertarian desire to build a thing but not be responsible for it.

[–] froztbyte@awful.systems 8 points 6 days ago

this is such a mess, holy shit

and only on .com? I have some very pointed questions about the maturity of the verification program/design
