blakestacey

joined 2 years ago
[–] blakestacey@awful.systems 15 points 1 week ago (1 children)

"Drinking alone tonight?" the bartender asks.

[–] blakestacey@awful.systems 7 points 1 week ago* (last edited 1 week ago)

I don't see what useful information the motte and bailey lingo actually conveys that equivocation and deception and bait-and-switch didn't. And I distrust any turn of phrase popularized in the LessWrong-o-sphere. If they like it, what bad mental habits does it appeal to?

The original coiner appears to be in with the brain-freezing crowd. He's written about the game theory of "braving the woke mob" for a Tory rag.

[–] blakestacey@awful.systems 10 points 1 week ago (4 children)

In the department of not smelling at all like desperation:

On Wednesday, OpenAI launched a 1-800-CHATGPT (1-800-242-8478) telephone number that anyone in the US can call to talk to ChatGPT via voice chat for up to 15 minutes for free.

It had a very focused area of expertise, but for sincerity, you couldn't beat 1-900-MIX-A-LOT.

[–] blakestacey@awful.systems 9 points 1 week ago (4 children)

Petition to replace "motte and bailey" per the Batman clause with "lying like a dipshit".

[–] blakestacey@awful.systems 44 points 1 week ago (4 children)

Wojciakowski took the critiques on board. “Wow, tough crowd … I’ve learned today that you are sensitive to ensuring human readability.”

Christ, what an asshole.

[–] blakestacey@awful.systems 20 points 1 week ago

Max Kennerly's reply:

For a client I recently reviewed a redlined contract where the counterparty used an "AI-powered contract platform." It had inserted into the contract a provision entirely contrary to their own interests.

So I left it in there.

Please, go ahead, use AI lawyers. It's better for my clients.

[–] blakestacey@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Adam Christopher comments on a story in Publishers Weekly.

Says the CEO of HarperCollins on AI:

"One idea is a “talking book,” where a book sits atop a large language model, allowing readers to converse with an AI facsimile of its author."

Please, just make it stop, somebody.

Robert Evans adds,

there's a pretty good short story idea in some publisher offering an AI facsimile of Harlan Ellison that then tortures its readers to death

Kevin Kruse observes,

I guess this means that HarperCollins is getting out of the business of publishing actual books by actual people, because no one worth a damn is ever going to sign a contract to publish with an outfit with this much fucking contempt for its authors.

[–] blakestacey@awful.systems 9 points 2 weeks ago (1 children)

There's a whole lot of assuming-the-conclusion in advocacy for many-worlds interpretations — sometimes from philosophers, and all the time from Yuddites online. If you make a whole bunch of tacit assumptions, starting with those about how mathematics relates to physical reality, you end up in MWI country. And if you make sure your assumptions stay tacit, you can act like an MWI is the only answer, and everyone else is being ~~un-mutual~~ irrational.

(I use the plural interpretations here because there's not just one flavor of MWIce cream. The people who take it seriously have been arguing amongst one another about how to make it work for half a century now. What does it mean for one event to be more probable than another if all events always happen? When is one "world" distinct from another? The arguments iterate like the construction of a fractal curve.)

[–] blakestacey@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago)

The peer reviewers didn't say anything about it because they never saw it: It's an unilluminating comparison thrown into the press release but not included in the actual paper.

[–] blakestacey@awful.systems 18 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

"Quantum computation happens in parallel worlds simultaneously" is a lazy take trotted out by people who want to believe in parallel worlds. It is a bad mental image, because it gives the misleading impression that a quantum computer could speed up anything. But all the indications from the actual math are that quantum computers would be better at some tasks than at others. (If you want to use the names that CS people have invented for complexity classes, this imagery would lead you to think that quantum computers could whack any problem in EXPSPACE. But the actual complexity class for "problems efficiently solvable on a quantum computer", BQP, is known to be contained in PSPACE, which is strictly smaller than EXPSPACE.)

It also completely obscures the very important point that some tasks look like they'd need a quantum computer — the program is written in quantum circuit language and all that — but a classical computer can actually do the job efficiently. Accepting the goofy pop-science/science-fiction imagery as truth would mean you'd never imagine the Gottesman–Knill theorem could be true.
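To make the Gottesman–Knill point concrete, here is a minimal sketch (my own illustration, not from the comment) of classically simulating a Clifford circuit with a stabilizer tableau. It omits signs and measurements for brevity, which is safe here since every generator in this circuit keeps a + phase. The circuit prepares a 4-qubit GHZ state, and the simulation tracks n Pauli generators in O(n²) bits rather than 2^n amplitudes:

```python
# Stabilizer-tableau sketch of the Gottesman-Knill theorem: Clifford
# circuits (H, CNOT, phase) are efficiently simulable on a classical
# computer. Each stabilizer generator is a Pauli string stored as a pair
# of bit vectors (x, z); signs are omitted in this simplified sketch.

def identity_tableau(n):
    # |0...0> is stabilized by Z_1, ..., Z_n
    return [([0] * n, [1 if j == i else 0 for j in range(n)])
            for i in range(n)]

def hadamard(tab, q):
    # Conjugation by H swaps X and Z on qubit q
    for x, z in tab:
        x[q], z[q] = z[q], x[q]

def cnot(tab, c, t):
    # Conjugation by CNOT: X_c -> X_c X_t, Z_t -> Z_c Z_t
    for x, z in tab:
        x[t] ^= x[c]
        z[c] ^= z[t]

def pauli_string(x, z):
    return "".join(
        "I" if not xi and not zi else
        "X" if xi and not zi else
        "Z" if zi and not xi else "Y"
        for xi, zi in zip(x, z))

# Prepare a 4-qubit GHZ state: H on qubit 0, then CNOTs fanning out
n = 4
tab = identity_tableau(n)
hadamard(tab, 0)
for t in range(1, n):
    cnot(tab, 0, t)

stabilizers = [pauli_string(x, z) for x, z in tab]
print(stabilizers)  # ['XXXX', 'ZZII', 'ZIZI', 'ZIIZ']
```

The whole run touches only n² tableau bits per gate, which is the theorem's punchline: a circuit that "looks quantum" (superposition, entanglement, the lot) can still cost a classical computer only polynomial time.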

To quote a paper by Andy Steane, one of the early contributors to quantum error correction:

The answer to the question ‘where does a quantum computer manage to perform its amazing computations?’ is, we conclude, ‘in the region of spacetime occupied by the quantum computer’.
