in the continuing series of russian bitching over kernel maintainers: Russia says it might build its own Linux community after removal of several kernel maintainers
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
So apparently the US government has been compromised by rustheads
https://thenewstack.io/feds-critical-software-must-drop-c-c-by-2026-or-face-risk/
https://www.cisa.gov/resources-tools/resources/product-security-bad-practices
Going outside awful's wheelhouse for a bit:
Logan Paul doxed and harassed a random employee for posting a sign saying Lunchly was recalled
You want my take, the employee in question (who also got a GoFundMe) should sue Logan for defamation - solid case aside, I wanna see that blonde fucker get humbled for once.
I know it's Halloween, but this popped up in my feed and was too spooky even for me 😱
As a side note, what are people's feelings about Wolfram? Smart dude for sho, but some of the shit he says just comes across as straight-up pseudoscientific gobbledygook. But can he out-guru Big Yud in a 1v1 on Final Destination (Fox only, no items)? 🤔
Microsoft found a fitting way to punish AI for collaborating with SEO spammers in generating slop: make it use the GitHub code review tools. https://github.blog/changelog/2024-10-29-refine-and-validate-code-review-suggestions-with-copilot-workspace-public-preview/
we really shouldn’t have let Microsoft both fork an editor and buy GitHub, of course they were gonna turn one into a really shitty version of the other
anyway check this extremely valuable suggestion from Copilot in one of their screenshots:
The error message 'userId and score are required' is unclear. It should be more specific, such as 'Missing userId or score in the request body'.
aren’t you salivating for a Copilot subscription? it turns a lazy error message into… no that’s still lazy as shit actually, who is this for?
- a human reading this still needs to consult external documentation to know what userId and score are
- a machine can’t read this (see the sketch after this list)
- if you’re going for consistent error messages or you’re looking to match the docs (extremely likely in a project that’s in production), arbitrarily changing that error so it doesn’t match anything else in the project probably isn’t a great idea, and we know LLMs don’t do consistency
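for contrast, here's a rough sketch of what an actually machine-readable error could look like. purely hypothetical — the field names (userId, score) come from the screenshot, and the error code, interface, and helper function are all made up for illustration:

```typescript
// Hypothetical sketch only: a structured, machine-readable validation error.
// The field names (userId, score) come from the Copilot screenshot; the error code,
// interface, and helper are invented for illustration.

interface ValidationError {
  code: "MISSING_FIELDS";   // stable, documented code a client can branch on
  missing: string[];        // exactly which required fields were absent
  message: string;          // human-readable text derived from the same data
}

function validateScoreSubmission(body: Record<string, unknown>): ValidationError | null {
  const required = ["userId", "score"];
  const missing = required.filter((field) => body[field] === undefined);
  if (missing.length === 0) return null;
  return {
    code: "MISSING_FIELDS",
    missing,
    message: `Missing required field(s): ${missing.join(", ")}`,
  };
}

// e.g. { code: "MISSING_FIELDS", missing: ["score"], message: "Missing required field(s): score" }
console.log(validateScoreSubmission({ userId: "u123" }));
```

point being: a client, human or machine, gets the same structured data every time, instead of whichever phrasing the autocomplete felt like that day.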
I want someone to fork the Linux kernel and then unleash like 10 Copilots to make PRs and review each other. No human intervention. Then plot the number of critical security vulnerabilities introduced over time, assuming they can even keep it compilable for long enough.
Does a kernel that crashes itself before it can process any malicious inputs count as secure?
today i found out that openai wanted to make their own chips https://neuters.de/technology/artificial-intelligence/openai-builds-first-chip-with-broadcom-tsmc-scales-back-foundry-ambition-2024-10-29/
can you imagine what kind of disaster that would be
that article misses one of the delicious parts of that story: they called saltman a “podcast bro” in derision
OpenAI considered building everything in-house and raising capital for an expensive plan to build a network of factories known as "foundries" for chip manufacturing.
Oh man, that's a delicious understatement. If the allegations are true, this was a plan that would make the military-industrial complex envious.
FastCompany: "In Apple’s new ads for AI tools, we’re all total idiots"
It's interesting that not even Apple, with all their marketing know-how, can come up with a convincing reason why users might need "Apple Intelligence"[1]. These new ads aren't quite as terrible as that earlier "Crush" AI ad, but the one with the birthday especially... I just find it alienating.
Whatever one may think about Apple and their business practices, they are typically very good at marketing. So if even Apple can't find a good consumer pitch for GenAI crap, I don't think anyone can.
[1] I'd like to express support for this post from Jeff Johnson proposing we call it "iSlop"
google got legally beat down a fair bit in the app store arena https://www.eff.org/deeplinks/2024/10/court-orders-google-monopolist-knock-it-monopoly-stuff
Quick update - Brian Merchant's list of "luddite horror" films ended up getting picked up by Fast Company:
To repeat a previous point of mine, it seems pretty safe to assume "luddite horror" is gonna become a bit of a trend. To make a specific (if unrelated) prediction, I imagine we're gonna see AI systems and/or their supporters become pretty popular villains in the future - the AI bubble's produced plenty of resentment towards AI specifically and tech more generally, and the public's gonna find plenty of catharsis in watching them go down.
The AI lawsuit's going to discovery - I expect things are about to heat up massively for the AI industry:
Is there a group that more consistently makes category errors than computer scientists? Can we mandate Philosophy 101 as a pre-req to shitting out research papers?
Edit: maybe I need to take a break from Mystery AI Hype Theater 3000.
https://www.infoworld.com/article/3595687/googles-flutter-framework-has-been-forked.html/
I’m currently using Flutter. It’s good! And useful! Much better than AI. It being mostly developed by Google has been a bit of a worry since Google is known to shoot itself in the foot by killing off its own products.
So while it’s no big deal to have an open source codebase forked, just wanted to highlight this part of the article:
Carroll also claimed that Google’s focus on AI caused the Flutter team to deprioritize desktop platforms, and he stressed the difficulty of working with the current Flutter team.
Described as “Flutter+” by Carroll, Flock “will remain constantly up to date with Flutter,” he said. “Flock will add important bug fixes, and popular community features, which the Flutter team either can’t, or won’t, implement.”
I hope this goes well!