TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Can we all take a moment to appreciate this absolutely wild take from Google's latest quantum press release (bolding mine) https://blog.google/technology/research/google-willow-quantum-chip/
Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25 or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.
The more I think about it, the stupider it gets. I'd love it if someone with an actual physics background were to comment on it. But my layman take is that it reads as nonsense to the point of being irresponsible scientific misinformation, whether or not you believe in the many-worlds interpretation.
"Quantum computation happens in parallel worlds simultaneously" is a lazy take trotted out by people who want to believe in parallel worlds. It is a bad mental image, because it gives the misleading impression that a quantum computer could speed up anything. But all the indications from the actual math are that quantum computers would be better at some tasks than at others. (If you want to use the names that CS people have invented for complexity classes, this imagery would lead you to think that quantum computers could whack any problem in EXPSPACE. But the actual complexity class for "problems efficiently solvable on a quantum computer", BQP, is known to be contained in PSPACE, which is strictly smaller than EXPSPACE.) It also completely obscures the very important point that some tasks look like they'd need a quantum computer — the program is written in quantum circuit language and all that — but a classical computer can actually do the job efficiently. Accepting the goofy pop-science/science-fiction imagery as truth would mean you'd never imagine the Gottesman–Knill theorem could be true.
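To make the Gottesman–Knill point concrete: here is a minimal stabilizer-tableau simulator, a toy sketch (not any real library's API) of the classical technique the theorem is about. Circuits built only from H and CNOT look thoroughly "quantum", yet a classical computer can track them in polynomial time by updating n stabilizer generators instead of 2^n amplitudes. The update rules below are the standard conjugation rules for Clifford gates.

```python
# toy stabilizer simulator: track generators (x bits, z bits, sign)
# instead of a 2^n-dimensional state vector

def new_tableau(n):
    """|0...0> is stabilized by Z on each qubit."""
    gens = []
    for i in range(n):
        x = [0] * n
        z = [0] * n
        z[i] = 1
        gens.append([x, z, 0])  # sign bit: 0 -> '+', 1 -> '-'
    return gens

def apply_h(gens, q):
    # H conjugation: X <-> Z, and Y -> -Y picks up a sign
    for g in gens:
        x, z = g[0], g[1]
        g[2] ^= x[q] & z[q]
        x[q], z[q] = z[q], x[q]

def apply_cnot(gens, c, t):
    # CNOT conjugation: X_c -> X_c X_t, Z_t -> Z_c Z_t (plus sign rule)
    for g in gens:
        x, z = g[0], g[1]
        g[2] ^= x[c] & z[t] & (x[t] ^ z[c] ^ 1)
        x[t] ^= x[c]
        z[c] ^= z[t]

def pauli_string(g):
    x, z, s = g
    label = {(0, 0): "I", (1, 0): "X", (0, 1): "Z", (1, 1): "Y"}
    return ("-" if s else "+") + "".join(label[(a, b)] for a, b in zip(x, z))

# 3-qubit GHZ circuit: H(0), CNOT(0,1), CNOT(1,2)
gens = new_tableau(3)
apply_h(gens, 0)
apply_cnot(gens, 0, 1)
apply_cnot(gens, 1, 2)
print([pauli_string(g) for g in gens])  # ['+XXX', '+ZZI', '+IZZ']
```

The whole entangled GHZ state is captured by three short Pauli strings, no septillion-year supercomputer (and no multiverse) required. That's exactly the kind of "looks quantum, is classically easy" case the parallel-worlds imagery hides.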
To quote a paper by Andy Steane, one of the early contributors to quantum error correction:
The answer to the question ‘where does a quantum computer manage to perform its amazing computations?’ is, we conclude, ‘in the region of spacetime occupied by the quantum computer’.
Tangentially, I know next to nothing about quantum mechanics, but lately I've been very annoyed, alone in my head, at (the popular perception of?) many-worlds theory in general. From what I understand of it, there are two possibilities: either it's pure metaphysics, in which case who cares? Or it's a truism, i.e. if we model things that way, then we can talk about reality in this way. This... might be true of all quantum interpretations, but many-worlds annoys me more because it's such a literal vision trying to be cool.
I don't know, tell me if I'm off the mark!
Unfortunately "states of quantum systems form a vector space, and states are often usefully described as linear combinations of other states" doesn't make for good science fiction compared to "whoa dude, like, the multiverse, man."
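The unsexy version really is that short. A toy sketch of what "linear combination of states" amounts to, with no multiverse anywhere in sight:

```python
# a qubit state is just a unit vector; superposition is a linear
# combination of basis states (toy illustration)
import numpy as np

ket0 = np.array([1.0, 0.0])        # |0>
ket1 = np.array([0.0, 1.0])        # |1>
plus = (ket0 + ket1) / np.sqrt(2)  # |+> = (|0> + |1>) / sqrt(2)

probs = np.abs(plus) ** 2          # Born rule: measurement probabilities
print(probs)                       # [0.5 0.5]
```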
There's a whole lot of assuming-the-conclusion in advocacy for many-worlds interpretations — sometimes from philosophers, and all the time from Yuddites online. If you make a whole bunch of tacit assumptions, starting with those about how mathematics relates to physical reality, you end up in MWI country. And if you make sure your assumptions stay tacit, you can act like an MWI is the only answer, and everyone else is being ~~un-mutual~~ irrational.
(I use the plural interpretations here because there's not just one flavor of MWIce cream. The people who take it seriously have been arguing amongst one another about how to make it work for half a century now. What does it mean for one event to be more probable than another if all events always happen? When is one "world" distinct from another? The arguments iterate like the construction of a fractal curve.)
Humans can't help but return to questions the presocratics already struggled with. Makes me happy.
Does it also destroy all the universes where the question was answered wrong?
One of these days we'll get the quantum bogosort working.
"lends credence"? yeah, that smells like BS.
some marketing person probably saw that the time estimate of the conventional computation exceeded the age of the universe multiple times over, and decided that must mean multiple universes were somehow involved, because big number bigger than smaller number
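A back-of-envelope check of the press release's own numbers, for the record (ages and runtimes as stated, nothing else assumed):

```python
# how many universe-ages is the claimed classical runtime?
age_of_universe = 1.38e10   # years, ~13.8 billion
claimed_runtime = 1e25      # years, per the press release

ratio = claimed_runtime / age_of_universe
print(f"{ratio:.1e}")       # ~7.2e14 universe-ages
```

"Big number is about 700 trillion universes big" is not, it turns out, evidence for 700 trillion universes.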
the grok AI is now available to free twitter users, evidently not enough paying users were interested
it's somewhat more tedious than Gemini and that's saying something
Really wise decision to open up a system that costs a lot of money per question to the world, especially when it brings in none. Wonder if there are people working on the low orbital cannon equivalent of trying to mess with Twitter's finances.
Jfc, when I saw the headline I thought this would be a case of the city being too cheap to hire an actual artist and instead using autoplag, but no. And the guy they commissioned isn't even some tech-brain LARPing as an artist; he has 20+ years of experience and a pretty huge portfolio, which somehow makes this worse on so many levels.
OK so we're getting into deep rat lore now? I'm so sorry for what I'm about to do to you. I hope one day you can forgive me.
LessWrong diaspora factions! :blobcat_ohno:
https://transmom.love/@elilla/113639471445651398
if I got something wrong, please don't tell me. gods I hope I got something wrong. "it's spreading disinformation" I hope I am
My pedantic notes, modified by some of my experiences, so bla bla epistemic status, colored by my experiences and beliefs, take with a grain of salt, etc. Please don't take this as a correction, just some small minor notes of mine. As a general 'trick more people into staring into the abyss' guide it is a good post; mine is more an addition, I guess.
SSC / The Motte: Scott Alexander's devotees. once characterised by interest in mental health and a relatively benign, but medicalised, attitude to queer and especially trans people. The focus has since metastasised into pseudoscientific white supremacy and antifeminism.
This is a bit wrong tbh; SSC was always anti-feminist. Scott's old (now deleted) LiveJournal writings, where he talks about larger discussion/conversation tactics in a broad meta way (the meditations on superweapons), always had attacking feminism as the object-level target. For an example, using the Wayback Machine, see the sixth meditation (this is the one I have bookmarked). He himself always seems to have had a bit of a love/hate relationship with his writings on anti-feminism and the fame and popularity they brought him.
The grey tribe bit is missing that guy who called himself grey tribe, in Silicon Valley I think it was, who wanted to team up with the red tribe to get rid of all the progressives. Might be important to note, because they look like centrists but, shock horror, they team up with the right to do far-right stuff.
I think the extropians might even have different factions, like the one around Natasha Vita-More/Max More. But that one is more LW-adjacent, and it predates LW rather than being a spinoff faction (the extropians mailing list came first, iirc). Singularitarians and extropians might be a bit closer together; Kurzweil wrote The Singularity Is Near after all, which is the book all these folks seem to get their AI doom ideas from (if you ever see a line made up out of S-curves, that is from that book). Kurzweil is also an exception among all these people in that he actually has achievements: he built machines for the blind, image recognition things, etc.; he isn't just a writer. Nick Bostrom also seems to be missing; he is one of those X-risk guys. Also missing is Robin Hanson, who created the great filter idea and the prediction markets thing, and whose Overcoming Bias was a huge influence on Rationalism; he could be considered the part of Rationalism less focused on science fiction ideas. But that was all a bit more 2013 (check the 2013 map of the world of the Dark Enlightenment on the RationalWiki Neoreaction page).
"the Protestants to the rationalists' Catholicism" I lolled.
Note that a large part of sneerclubbers is (was) not ex-rationalists, nor people who were initially interested in it. It actually started on Reddit because badphil got so many rationalist submissions that they created a spinoff (at least so the story goes), so it was started by people who actually had some philosophy training. (That also makes us the most academic faction!)
Another minor thing in a long list of minor things: it might also be useful to mention that RationalWiki has nothing to do with these people and is more aligned with the sneerclub side.
There are also so many Scotts. Anyway, this post grew a bit out of my control, sorry for that; hope it doesn't come off too badly, and do note that my additions make a short post way longer, so they are probably not that useful. I don't think any of your post was misinformation, btw. (I do think that several of these factions wouldn't call themselves part of LW, and there is a bit of a question of who influenced whom; the Mores seem to be outside of all this, for example, and a lot of extropians predate it, etc. But that kind of nitpicking is for people who want to write books on these people.)
E: reading the thread, this is a good post and good to keep in mind btw. I would add not just what you mentioned but also mocking people for personal tragedy: some people end/lose their lives due to rationalism, or have mental health episodes, and we should be careful to treat those topics well. Which we mostly try to do, I think.
wasn't that grey tribe guy just Balaji Srinivasan https://newrepublic.com/article/180487/balaji-srinivasan-network-state-plutocrat
Adam Christopher comments on a story in Publishers Weekly.
Says the CEO of HarperCollins on AI:
"One idea is a “talking book,” where a book sits atop a large language model, allowing readers to converse with an AI facsimile of its author."
Please, just make it stop, somebody.
Robert Evans adds,
there's a pretty good short story idea in some publisher offering an AI facsimile of Harlan Ellison that then tortures its readers to death
Kevin Kruse observes,
I guess this means that HarperCollins is getting out of the business of publishing actual books by actual people, because no one worth a damn is ever going to sign a contract to publish with an outfit with this much fucking contempt for its authors.
Saw something about "sentiment analysis" in text. While writers have discussed "death of the author" and philosophers and linguists have discussed what it even means to derive meaning from text, these fucking AI dorks are looking at text in a vacuum and concluding "this text expresses anger".
print("I'm angry!")
the above python script is angry, look at my baby skynet
Openai are you angry? Yes -> it is angry. No -> it is being sneaky, and angry.
sentiment analysis is such a good example of a pre-LLM AI grift. every time I’ve seen it used for anything, it’s been unreliable to the point of being detrimental to the project’s goals. marketers treat it like a magic salve and smear it all over everything of course, and that’s a large part of why targeted advertising is notoriously ineffective
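for flavor, here's roughly what lexicon-based sentiment scoring amounts to under the hood (a toy sketch, not any real product's API; the word list is made up for illustration):

```python
# toy lexicon-based sentiment scorer -- the kind of thing that gets
# marketed as "sentiment analysis" (illustrative sketch only)
ANGRY_WORDS = {"angry", "livid", "furious", "hate"}

def naive_sentiment(text: str) -> str:
    # bag of words, no context, no speaker, no irony
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return "angry" if words & ANGRY_WORDS else "neutral"

print(naive_sentiment("I AM BEYOND FUCKING LIVID AT EVERYONE"))  # angry
print(naive_sentiment("I'm angry!"))                             # angry
print(naive_sentiment("send memes"))                             # neutral
```

it has no channel for social context, so all-caps in-joke venting and a one-line python script that prints "I'm angry!" both register as genuine rage, which is kind of the whole problem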
It's built upon such a nonsensical ontology. The sentiment expressed in a piece of language is at least partially a social function, which is why I can add the following
I AM BEYOND FUCKING LIVID AT EVERYONE IN THIS FUCKING INSTANCE
to this response and no one will actually assume I'm really angry (I am though, send memes).
Edit: not one meme. Not. One.
Edit2: thank you for the memes, @skillissuer@discuss.tchncs.de. This one is my favorite. It feels Dark Souls-y.
Image description
Live crawfish with arms spread in front of bowl of cooked crawfish with caption "Stand amongst the ashes of a trillion dead souls and ask the ghosts if honor matters".
OpenAI whistleblower found dead in San Francisco apartment.
Thread on r/technology.
edited to add:
From his personal website: When does generative AI qualify for fair use?
found a new movie plot threat https://www.science.org/doi/10.1126/science.ads9158
funded by open philanthropy, but not only, and they also got some other biologists on board. 10 out of 39 authors had open philanthropy funding in the last 5 years, so they're likely EAs. highly speculative as of now and not anywhere close to being made; from my understanding we'll be dead from global warming before this gets anywhere near. also the starting materials would be hideously expensive, because all of this has to be synthetic and enantiopure, and every technique has to be remade from scratch in unnatural-enantiomer form. it even has a LW thread by now hxxps://www.lesswrong.com/posts/87pTGnHAvqk3FC7Zk/the-dangers-of-mirrored-life
it hit news https://www.nytimes.com/2024/12/12/science/a-second-tree-of-life-could-wreak-havoc-scientists-warn.html https://www.theguardian.com/science/2024/dec/12/unprecedented-risk-to-life-on-earth-scientists-call-for-halt-on-mirror-life-microbe-research
Mirror bacteria? Boring! I want an evil twin from the negaverse who looks exactly like me except right hande-- oh heck. What if I'm the mirror twin?
I read the headline yesterday and thought, "This is 100% fundraising bullshit."
This strikes me as being exact same class of thing OpenAI does when they pronounce that their product will murder us all.
What do we call this? Marketerrorism?
i see how it's critihype but i don't understand where the money is in this one
I'm definitely out of my depth here, but how exactly does a lefty organism bypass immune responses and still interact with the body? Seems like if it has a way to mess up healthy cells then it should have something that antibodies can connect to, mirrored or not. Not that I'm arguing we shouldn't be careful about creating novel pathogens, but other than being a more flashy sci-fi premise I'm not really seeing how it's more dangerous than the right-handed version.
Also I think this opens up a beautiful world of new scientific naming conventions:
- Southpaw Paramecium
- Lefty Naegleria
- Sinister Influenza
the way i understand it: because the immune system is basically constantly fuzzing all potentially new things, what matters is how the antigen looks on the surface. what it is made from matters less, and whether the aminoacids there are l- (natural) or d- (not) shouldn't matter that much; antibodies are generated against nonnatural achiral things all the time, including things like PEG and chloronitrobenzene. then the complement system puts holes in the bacterial membrane and that's it, that's not survivable for a bacterium and doesn't depend on anything chiral. normally all the components are promptly shredded afterwards; it's a good question whether that would happen here too, but that might not matter too much. point is, there's a way for the immune system to smite this thing
the potential problem is that peptides made from d-aminoacids are harder to cut via hydrolases, and that's part of some more involved immune response, idk the details. there's plenty of stuff that's achiral, like glycerol, glycine, beta-alanine, TCA cycle components, fatty acids, that mirrored bacteria could feed on without problems. some normal bacteria also use d-aminoacids, so normal l-aminoacids should be usable by d-protein bacteria. there's also a transaminase that takes d-aminoacids, and along with other enzymes it can turn these into l-aminoacids. but even more importantly, we're perhaps 30 years away from making any of this close to feasible; it's all highly speculative. there's a report if you want to read it https://stacks.stanford.edu/file/druid:cv716pj4036/Technical%20Report%20on%20Mirror%20Bacteria%20Feasibility%20and%20Risks.pdf
also look up the cost of these things. unnatural aminoacids, especially those with the wrong configuration but otherwise normal, are expensive. l-tert-leucine is unnatural but can be made in a biotechnological process, so it's cheaper. for example on sigma-aldrich, d-glutamine costs 100x more than l-glutamine, and for sugars it's even worse because these have more chiral centers
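a back-of-envelope illustration of why more chiral centers is worse (the center counts are textbook stereochemistry; this says nothing about any vendor's actual pricing):

```python
# each chiral center doubles the number of possible stereoisomers,
# so the one enantiopure form you want is a shrinking fraction 2^-n
def stereoisomers(n_chiral_centers: int) -> int:
    return 2 ** n_chiral_centers

print(stereoisomers(1))  # glutamine: 1 center -> 2 forms (l and d)
print(stereoisomers(4))  # glucose, open chain: 4 centers -> 16 stereoisomers
```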
besides, it's probably not really worth it? it would take decades and cost more than ftx wiped out, and other than making it work just to make it work, all the worthwhile components can be made synthetically. maybe there's some utility in d-proteins, more likely d-peptides; tiny amounts of these can be made by SPPS (for screening) and larger amounts by normal chemical synthesis (for use). these might be slightly useful if the slowed-down degradation of d-peptides could be exploited in some kind of pharmaceutical, but you know how we can get that effect another way? don't put amide bonds there in the first place and just make a small-molecule pharmaceutical, like we (as in, organic chemists) already can
another part of the concern is that these things could transform organic carbon into a form unusable by other organisms. but nature finds a way, and outside of fires etc there are bacteria that feed on nylon and PET, so i think this situation wouldn't last long