blakestacey

joined 2 years ago
[–] blakestacey@awful.systems 1 points 3 months ago* (last edited 3 months ago)

Yeah, Krugman appearing on the roster surprised me too. While I haven't pored over everything he's blogged and microblogged, he hasn't raised any red flags that I recall. E.g., here he is in 2009:

Oh, Kay. Greg Mankiw looks at a graph showing that children of high-income families do better on tests, and suggests that it’s largely about inherited talent: smart people make lots of money, and also have smart kids.

But, you know, there’s lots of evidence that there’s more to it than that. For example: students with low test scores from high-income families are slightly more likely to finish college than students with high test scores from low-income families.

It’s comforting to think that we live in a meritocracy. But we don’t.

And in 2014:

There are many negative things you can say about Paul Ryan, chairman of the House Budget Committee and the G.O.P.’s de facto intellectual leader. But you have to admit that he’s a very articulate guy, an expert at sounding as if he knows what he’s talking about.

So it’s comical, in a way, to see [Paul] Ryan trying to explain away some recent remarks in which he attributed persistent poverty to a “culture, in our inner cities in particular, of men not working and just generations of men not even thinking about working.” He was, he says, simply being “inarticulate.” How could anyone suggest that it was a racial dog-whistle? Why, he even cited the work of serious scholars — people like Charles Murray, most famous for arguing that blacks are genetically inferior to whites. Oh, wait.

I suppose it's possible that he was invited to an e-mail list in the late '90s and never bothered to unsubscribe, or something like that.

[–] blakestacey@awful.systems 1 points 3 months ago

I often use prompts

Well, there's your problem

[–] blakestacey@awful.systems 1 points 3 months ago (2 children)

and hot young singles in your area have a bridge in Brooklyn to sell

on the blockchain

[–] blakestacey@awful.systems 0 points 3 months ago (13 children)

When you don’t have anything new, use brute force. Just as GPT-4 was eight instances of GPT-3 in a trenchcoat, o1 is GPT-4o, but running each query multiple times and evaluating the results. o1 even says “Thought for [number] seconds” so you can be impressed how hard it’s “thinking.”

This “thinking” costs money. o1 increases accuracy by taking much longer for everything, so it costs developers three to four times as much per token as GPT-4o.

Because the industry wasn't doing enough climate damage already... Let's quadruple the carbon we shit into the air!

[–] blakestacey@awful.systems 1 points 4 months ago

I actually don’t get the general hate for AI here.

Try harder.

[–] blakestacey@awful.systems 1 points 4 months ago* (last edited 4 months ago)

We have had readily available video communication for over a decade.

We've been using "video communication" to teach for half a century at least; Open University enrolled students in 1970. All the advantages of editing together the best performances from a top-notch professor, moving beyond the blackboard to animation, etc., etc., were obvious in the 1980s when Caltech did exactly that and made a whole TV series to teach physics students and, even more importantly, their teachers. Adding a new technology that spouts bullshit without regard to factual accuracy is necessarily, inevitably, a backward step.

[–] blakestacey@awful.systems 1 points 4 months ago (1 children)

AI can directly and individually address that frustration and find a solution.

No, it can't.

Quod grātīs asseritur, grātīs negātur. (What is asserted without evidence may be denied without evidence.)

[–] blakestacey@awful.systems 1 points 4 months ago (1 children)

I'd offer congratulations on obfuscating a bad claim with a poor analogy, but you didn't even do that very well.

[–] blakestacey@awful.systems 1 points 4 months ago

People just out here acting like a fundamentally, inextricably unreliable and unethical technology has a "use case"

smdh

[–] blakestacey@awful.systems 1 points 9 months ago

When the hashtag says "God is good" but the picture says "God is dead"

[–] blakestacey@awful.systems 1 points 9 months ago (1 children)

Let's invite Taylor & Francis to the party. This book chapter has a "results" section that reads like the whole thing came out of GlurgeBot, with the beginning clumsily edited to hide that fact:

An AI language model do not have access to data or specific research findings. However, in a research paper on advancing early cancer detection with machine learning, the experimental results would typically involve evaluating the performance of machine learning models for early cancer detection.

[–] blakestacey@awful.systems 1 points 9 months ago

I don't know why the Journal of Advanced Zoology would be publishing "Lexico-Stylistic Functions of Argotisms in English Language", but there you go:

I apologize for the confusion, but as an AI language model, I don't have access to specific articles or their sections, such as the «Introduction» section of the article «Lexico-stylistic functions of argotisms in the English language». I can provide you with a general outline of what an introduction section might cover in an article on this topic
