gerikson

joined 2 years ago
gerikson@awful.systems 3 points 2 days ago

This is classic labor busting. If the relatively expensive, hard-to-train, hard-to-recruit software engineers can be replaced by cheaper labor, of course employers will replace them.

gerikson@awful.systems 6 points 2 days ago (last edited 2 days ago)

A hackernews doesn't think LLMs will replace software engineers, but does think they'll replace structural engineers:

https://news.ycombinator.com/item?id=43317725

The irony is that most structural engineers are actually de jure professionals, and an easy way for them to both protect their jobs and ensure future buildings don't crumble to dust or get built without sprinkler systems is to simply ban LLMs from being used. No such protection exists for software engineers.

Edit: the LW post under discussion makes a ton of good points, to the level of being worthy of posting to this forum, and then nails its colors to the mast with this idiocy:

At some unknown point – probably in 2030s, possibly tomorrow (but likely not tomorrow) – someone will figure out a different approach to AI. Maybe a slight tweak to the LLM architecture, maybe a completely novel neurosymbolic approach. Maybe it will happen in a major AGI lab, maybe in some new startup. By default, everyone will die in <1 year after that.

Gotta reaffirm the dogma!

gerikson@awful.systems 0 points 6 days ago

Worrying about a woke nanny AGI, and not the woke wirehead AGI (wireheading being a lot scarier).

This is very much the mainstream right-wing fear: not being able to generate Nazi memes with OpenAI.

gerikson@awful.systems 6 points 2 weeks ago

how well aligned is the model’s answer with human values?

[angry goose meme] what human values, motherfucker??!!

Seriously though, this is grade-school-level stuff, or some really convoluted way to write AI takeover fiction.

gerikson@awful.systems 5 points 1 month ago

Looks like LW/Lightcone managed to convince enough people to give them $2M, which will totally not be used to settle sexual assault lawsuits in the future.

gerikson@awful.systems 9 points 1 month ago

I'm not reading that shit, but for the masochists out there who like to read HN licking VC boots, here ya go:

https://news.ycombinator.com/item?id=42682305

gerikson@awful.systems 6 points 1 month ago

people in the Russian-speaking EA community are all busy with other things.

i.e. working for Putin, running from Putin, or dying for Putin

gerikson@awful.systems 18 points 1 month ago

Yeah, thought the same: "can't make it much worse"

gerikson@awful.systems 7 points 1 month ago

LessWrongers find cool reception for translated HPMOR copies sent to Russia's highest-IQ youth

https://www.lesswrong.com/posts/onyiPaxnmiDdHn7SR/no-one-has-the-ball-on-1500-russian-olympiad-winners-who-ve

gerikson@awful.systems 4 points 2 months ago

Yeah, it's been decades since I read Rhodes' history of the atom bomb, so I missed the years a bit. My point is that even if we couldn't explain exactly what was happening, there was something physically there, and we knew enough about it that Oppenheimer and co. could convince the US Army to build Oak Ridge and many other facilities at massive expense.

We can't say the same about "AI".

gerikson@awful.systems 6 points 2 months ago

Yeah, my starting position would be that it was obvious to any competent physicist at the time (although there weren't that many) that the potential energy release from nuclear fission was a real thing; the "only" thing left to do to weaponise it or use it for peaceful ends was engineering.

The analogy to "runaway X-risk AGI" is that there's a similar straight line from ELIZA to the Acausal Robot God; all that's required is a bit of elbow grease and good ol' fashioned American ingenuity. But my point is that, apart from Yud and a few others, no serious person believes this.
