In other news, Hindenburg Research just put out a truly damning report on Roblox, aptly titled "Roblox: Inflated Key Metrics For Wall Street And A Pedophile Hellscape For Kids", and the markets have responded.
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
i wouldn't want to sound like I'm running down Hinton's work on neural networks, it's the foundational tool of much of what's called "AI", certainly of ML
but uh, it's comp sci which is applied mathematics
how does this rate a physics Nobel??
They're reeeaallly leaning into the fact that some of the math involved is also used in statistical physics. And, OK, we could have an academic debate about how the boundaries of fields are drawn and the extent to which the divisions between them are cultural conventions. But the more important thing is that the Nobel Prize is a bad institution.
a friend says:
effectively they made machine learning look like an Ising model, and you honestly have no idea how much theoretical physicists fucking love it when things turn out to be the Ising model
does that match your experience? if so i'll quote that
That sounds about right, yeah.
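For anyone wondering how literal the Ising comparison is, here's a rough sketch (my gloss, not from the thread): the energy function of a Hopfield network has the same form as the Ising Hamiltonian, with binary neuron states playing the role of spins and learned weights playing the role of couplings.

```latex
% Ising model: spins s_i \in \{-1, +1\}, couplings J_{ij}, external fields h_i
H(\mathbf{s}) = -\sum_{i<j} J_{ij}\, s_i s_j \;-\; \sum_i h_i s_i

% Hopfield network: neuron states s_i \in \{-1, +1\}, symmetric weights w_{ij}, thresholds \theta_i
E(\mathbf{s}) = -\tfrac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j \;-\; \sum_i \theta_i s_i
```

Same quadratic energy over binary variables; the main difference is that the Hopfield couplings are learned from data rather than fixed by the physics, which is roughly why the statistical-physics machinery transfers so directly.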
https://www.bbc.com/news/articles/c62r02z75jyo
It’s going to be like the Industrial Revolution - but instead of our physical capabilities, it’s going to exceed our intellectual capabilities ... but I worry that the overall consequences of this might be systems that are more intelligent than us that might eventually take control
😩
This work getting the physics nobel for “using physics” is reeeeeeal fuckin tangential
Also: TL note: 11,000,000 Swedish Krona equals 11,386,313.85 Norwegian Krone
Don't know how much this fits the community, as you use a lot of terms I'm not inherently familiar with (is there a "welcome guide" of some sort somewhere I missed?).
Anyway, Wikipedia moderators are now realizing that LLMs are causing problems for them, but they are very careful to not smack the beehive:
The purpose of this project is not to restrict or ban the use of AI in articles, but to verify that its output is acceptable and constructive, and to fix or remove it otherwise.
I just... don't have words for how bad this is going to go. How much work this will inevitably be. At least we'll get a real world example of just how many guardrails are actually needed to make LLM text "work" for this sort of use case, where neutrality, truth, and cited sources are important (at least on paper).
I hope some people watch this closely, I'm sure there's going to be some gold in this mess.
The purpose of this project is not to restrict or ban the use of AI in articles, but to verify that its output is acceptable and constructive, and to fix or remove it otherwise.
Wikipedia's mod team definitely haven't realised it yet, but this part is pretty much a de facto ban on using AI. AI is incapable of producing output that would be acceptable for a Wikipedia article - in basically every instance, it's getting nuked.
lol i assure you that fidelitously translates to "kill it with fire"
Yeah, that sounds like text which somebody quickly typed up for the sake of having something.
it is impossible for a Wikipedia editor to write a sentence on Wikipedia procedure without completely tracing the fractal space of caveats.
Welcome to the club. They say a shared suffering is only half the suffering.
This was discussed in last week's Stubsack, but I don't think we mind talking about the same thing twice. I, for one, do not look forward to browsing Wikipedia exclusively through pre-2024 archived versions, so I hope (with some pessimism) their disappointingly milquetoast stance works out.
Reading a bit of the old Reddit sneerclub can help understand some of the Awful vernacular, but otherwise it's as much of a lurkmoar as any other online circlejerk. The old guard keep referencing cringe techbros and TESCREALs I've never heard of while I still can't remember which Scott A we're talking about in which thread.
Scott Computers is married and a father but still writes like an incel and fundamentally can't believe that anyone interested in computer science or physics might think in a different way than he does. Dilbert Scott is an incredibly divorced man. Scott Adderall is the leader of the beige tribe.
Don't know how much this fits the community, as you use a lot of terms I'm not inherently familiar with (is there a "welcome guide" of some sort somewhere I missed?)
first impression: your post is entirely on topic, welcome to the stubsack
techtakes is a sister sub to sneerclub (also on this instance, previously on reddit) and that one has a bit of an explanation. generally any (classy) sneerful critique of bullshit and wankery goes, modulo making space for chuds/nazis/debatelords/etc (those get shown the exit)
the mozilla PR campaign to convince everyone that advertising is the lifeblood of commerce and that this is perfectly fine and good (and that everyone should just accept their viewpoint) continues
We need to stare it straight in the eyes and try to fix it
try, you say? and what's your plan for when you fail, but you've lost all your values in service of the attempt?
For this, we owe our community an apology for not engaging and communicating our vision effectively. Mozilla is only Mozilla if we share our thinking, engage people along the way, and incorporate that feedback into our efforts to help reform the ecosystem.
are you fucking kidding me? "we can only be who we are if we maybe sorta listen to you while we keep doing what we wanted to do"? seriously?
How do we ensure that privacy is not a privilege of the few but a fundamental right available to everyone? These are significant and enduring questions that have no single answer. But, for right now on the internet of today, a big part of the answer is online advertising.
How do we ensure that traffic safety is not a privilege of the few but a fundamental right available to everyone? A big part of the answer is drunk driving.
How do we prevent huge segments of the world from being priced out of access through paywalls?
Based Mozilla. Abolish landlords. Obliterate the commodity form. Full luxury gay communism now.
the purestrain corporate non-apology that is “we should have communicated our vision effectively” when your entire community is telling you in no uncertain terms to give up on that vision because it’s a terrible idea nobody wants
"it's a failure in our messaging that we didn't tell you about the thing you'd hate in advance. if we were any good we would've gotten out ahead of it (and made you think it's something else)"
and the thing is, that's probably exactly the lesson they're going to be learning from this :|
PC Gamer put out a pro-AI piece recently - unsurprisingly, Twitter tore it apart pretty publicly:
I could only find one positive response in the replies, and that one is getting torn to shreds as well:
I did also find a quote-tweet calling the current AI bubble an "anti-art period of time", which has been doing pretty damn well:
Against my better judgment, I'm whipping out another sidenote:
With the general flood of AI slop on the Internet (a slop-nami as I've taken to calling it), and the quasi-realistic style most of it takes, I expect we're gonna see photorealistic art/visuals take a major decline in popularity/cultural cachet, with an attendant boom in abstract/surreal/stylised visuals.
On the popularity front, any artist producing something photorealistic will struggle to avoid blending in with the slop-nami, whilst more overtly stylised pieces stand out all the more starkly.
On the "cultural cachet" front, I can see photorealistic visuals becoming seen as a form of "techno-kitsch" - a form of "anti-art" which suggests a lack of artistic vision/direction on its creators' part, if not a total lack of artistic merit.
Another upcoming train wreck to add to your busy schedule: O’Reilly (the tech book publisher) is apparently going to be doing ai-translated versions of past works. Not everyone is entirely happy about this. I wonder how much human oversight will be involved in the process.
https://www.linkedin.com/posts/parisba_publications-activity-7249244992496361472-4pLj
translate technically fiddly instructions of the type where people have trouble spotting mistakes, with patterned noise generators. what could go wrong
Earlier today, the Internet Archive suffered a DDoS attack, which has now been claimed by the BlackMeta hacktivist group, who says they will be conducting additional attacks.
Hacktivist group? The fuck can you claim to be an activist for if your target is the Internet Archive?
Training my militia of revolutionary freedom fighters to attack homeless shelters, soup kitchens, nature preserves, libraries, and children's playgrounds.
Someone shared this website with me at work and now I am sharing the horror with you all: https://www.syntheticusers.com/
Reduce your time-to-insight
I do not think that word means what they think it means.
Emily Bender devoted a whole episode of Mystery AI Hype Theater 3000 to this.
I have nothing to add, save the screaming.
Synthetic Users uses the power of LLMs to generate users that have very high Synthetic Organic Parity. We start by generating a personality profile for each user, very much like a reptilian brain around which we reconstruct its personality. It’s a reconstruction because we are relying on the billions of parameters that LLMs have at their disposal.
They could've worded this so many other ways
But I suppose creepiness is a selling point these days
Any mild pushback to the claims of LLM companies sure brings out the promptfondlers on lobste.rs
https://lobste.rs/s/qcppwf/llms_don_t_do_formal_reasoning_is_huge
Plenty of agreement, but also a lot of "what is reasoning, really" and "humans are dumb too, so it's not so surprising GenAIs are too!". This is sure a solid foundation for multi-billion-dollar startups, yes sirree.
Online art school Schoolism publicly sneers at AI art, gets standing ovation
And now, a quick sidenote:
This is gut instinct, but I'm starting to get the feeling this AI bubble's gonna destroy the concept of artificial intelligence as we know it.
Mainly because of the slop-nami and the AI industry's repeated failures to solve hallucinations - both of those, I feel, have built an image of AI as inherently incapable of humanlike intelligence/creativity (let alone Superintelligence^tm^), no matter how many server farms you build or oceans of water you boil.
Additionally, I suspect that working on/with AI, or supporting it in any capacity, is becoming increasingly viewed as a major red flag - a "tech asshole signifier" to quote Baldur Bjarnason for the bajillionth time.
For a specific example, the major controversy that swirled around "Scooby Doo, Where Are You? In... SPRINGTRAPPED!" over its use of AI voices would be my pick.
Eagan Tilghman, the man behind the ~~slaughter~~ animation, may have been a random indie animator, who made Springtrapped on a shoestring budget and with zero intention of making even a cent off it, but all those mitigating circumstances didn't save the poor bastard from getting raked over the coals anyway. If that isn't a bad sign for the future of AI as a concept, I don't know what is.
Not a sneer, but I saw an article that was basically an extremely goddamn long list of forum recommendations and it gave me a warm and fuzzy feeling inside.
Nobody likes Bryan Johnson’s breakfast at the Network School
A cafe run by immortality-obsessed multi-millionaire Bryan Johnson is reportedly struggling to attract customers with students at the crypto-funded Network School in Singapore preferring the hotel’s breakfast buffet over “bunny food.”
I did not expect to be tricked into reading about the nighttime erections of the man with the most severe midlife crisis in the world.
he has 80% fewer gray hairs, representing a “31-year age reversal”
According to Wikipedia this guy is 47. Sorry about your hair as a teenager I guess? I hope the early graying didn't lead to any long term self-esteem issues.
neil turkewitz coming in with a wry comment about AI's legal issues:
And, because this is becoming so common, another sidenote from me:
With the large-scale art theft that gen-AI has become thoroughly known for, the way the AI slop it generates frequently competes directly with the original work it was trained on (Exhibit A), the solid legal case for treating the AI industry's Biblical-scale theft as copyright infringement, and the bevy of lawsuits that can and will end in legal bloodbaths, I fully expect this bubble will end up strengthening copyright law a fair bit, as artists and megacorps alike endeavor to prevent something like this ever happening again.
Precisely how, I'm not sure, but to take a shot in the dark I suspect that fair use is probably gonna take a pounding.
Many thanks to @blakestacey and @YourNetworkIsHaunted for your guidance with the NSF grant situation. I've sent an analysis of the two weird reviews to our project manager and we have a list of personnel to escalate with if we can't get any traction at that level. Fingers crossed that we can be the pebble that gets an avalanche rolling. I'd really rather not become a character in this story (it's much more fun to hurl rotten fruit with the rest of the groundlings), but what else can we do when the bullshit comes and finds us in real life, eh?
It WAS fun to reference Emily Bender and On Bullshit in the references of a serious work document, though.
Edit: So...the email server says that all the messages are bouncing back. DKIM failure?
Edit2: Yep, you're right, our company email provider coincidentally fell over. When it rains, it pours (lol).
Edit3: PM got back and said that he's passed it along for internal review.
Just something I found in the wild (r/machine learning): Please point me in the right direction for further exploring my line of thinking in AI alignment
I'm not a researcher or working in AI or anything, but ...
you don't say
from this article
Amazon asked Chun to dismiss the case in December, saying the FTC had raised no evidence of harm to consumers.
ah yes, the company that's massively monopolized nearly all markets, destroyed choice, constantly ships bad products (whose existence is incentivised by programs of its own devising), and that has directly invested in enhanced price exploitation technologies? that one? yeah, totes no harm to consumers there
And on the subject of AI: strava is adding ai analytics. The press release is pretty waffly, as it would appear that they’d decided to add ai before actually working out what they’d do with it so, uh, it’ll help analyse the reams of fairly useless statistics that strava computes about you and, um, help celebrate your milestones?
In other news, an AI booster got publicly humiliated after prompting complete garbage and mistaking it for 8-bit animation:
And now, another sidenote, because I really like them apparently:
This is gut instinct like my previous sidenote, but I suspect this AI bubble will cause the tech industry (if not tech as a whole) to be viewed as fundamentally lacking in art skills/creativity, if not outright hostile to artists and incapable of making (or even understanding) art.
Beyond the slop-nami flooding the Internet with soulless shit that exists directly because of tech companies like OpenAI, it's also given us shit like:
- Google's unholy 'Dear Sydney' ad, and the nuclear backlash it got
- Apple crushing human creativity for personal gain and being forced to apologise for it
- Mira Murati openly shitting on artists as gen-AI steals their artwork and destroys their livelihoods
- Gen-AI boosters producing complete shit and calling it gold (with Proper Prompter and Luma Labs providing excellent examples)
- And so much goddamn more, most of which I've likely forgotten
This is gut instinct like my previous sidenote, but I suspect this AI bubble will cause the tech industry (if not tech as a whole) to be viewed as fundamentally lacking in art skills/creativity, if not outright hostile to artists and incapable of making (or even understanding) art.
As a programmer who likes to see himself more adjacent to artists (and not only because I only draw stuff — badly — and write stuff — terribly — as a hobby, but also because I hold the belief that creating something with code can be seen as artistic too) this whole attitude which has been plaguing the tech industry for — let's be real here — the last 15 years at least but probably much longer makes me irrationally angry. Even the parts of the industry where creativity and artistry should play a larger role, like game dev, have been completely fucked over by this idea that everything is about efficiency and productivity. You wanna be successful? You need to be productive all the time, 24/7, and now there's tools that help you with that, and these tools are now fucking AI-powered! Because everything is a tool for our lord and savior productivity.
(I really should get to this toxic productivity write-up I've been meaning to do for a year now.)
This response from her. Lol. Lmao.
can you and all your anti-ai bots just block me? we get it, you hate ai art
What a misunderstood wee lil smol bean, waah
Just ignore the inconsistent theming, blurry cars, people phasing in and out of existence, nonsense traffic signals, unnatural leaf rustling, the car driving on the wrong(?) side of the road and about to plow into a tree, the weirdly oversized tree, the tree missing a trunk, the nonsense traffic paint, the shoddy textures, and the fact that the scene is entirely derivative and no one feels any joy from watching it.
Phew
If you ignore all that it could be the end of animators!!