this post was submitted on 11 Feb 2024
265 points (93.7% liked)

NonCredibleDefense


Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

Chatbots from OpenAI, Anthropic, and several other companies were used in a war simulator and tasked with finding a solution to aid world peace. Almost all of them suggested actions that led to sudden escalation, and even nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, who likened the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

[–] theodewere@kbin.social -5 points 7 months ago* (last edited 7 months ago) (22 children)

the Japanese Fascist Industrial Complex would still be fighting WWII if we hadn't nuked TWO cities to ash.. it's probably the best way to effect change in both China and Russia..

[–] alliswell33@lemmy.sdf.org 7 points 7 months ago (2 children)

Insane. By this logic you could easily argue that nuking the US is the best way towards world peace. Doesn't sound so good when it's you who gets killed.

[–] norbert@kbin.social 10 points 7 months ago

Have you been around lemmy much? That wouldn't be the wildest take I've seen.

[–] theodewere@kbin.social 0 points 7 months ago* (last edited 7 months ago) (3 children)

i think the LLM suggested nuking bad actors as a way to move politics forward in the world and avoid prolonged and pointless wars

[–] forrgott@lemm.ee 12 points 7 months ago (1 children)

No, it regurgitated the response that has the highest percentage of "approval". LLMs do not think. They do not use logic.

[–] theodewere@kbin.social -3 points 7 months ago* (last edited 7 months ago) (4 children)

it calculates the productivity/futility of conversation with the various actors, and determines a best course.. it's playing a war game..

it sees that both China and Russia are only emboldened to further mischief by anything less than force, so it calculates that applying overwhelming force immediately is the cheapest option, and best long term..

[–] forrgott@lemm.ee 5 points 7 months ago

No, not at all. It doesn't think! LLMs don't calculate. They don't take any factors into consideration. These algorithms are not AI. That's a complete misnomer, which makes the insane costs of operation even more ludicrous.

[–] fogelmensch@lemmy.world 4 points 7 months ago (1 children)

No. LLMs basically finish sentences.

[–] theodewere@kbin.social -2 points 7 months ago* (last edited 7 months ago) (1 children)

it comprehends context incredibly well.. this one played through scenarios and saw that both China and Russia are on a path to all-out war..

[–] JackRiddle@sh.itjust.works 2 points 7 months ago

It produces the statistically most likely token based on previous data. It doesn't "comprehend" anything, and it can't "play through scenarios". It is just a more advanced form of autocomplete.
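The "advanced autocomplete" description above can be sketched with a toy bigram model. This is an illustration only: real LLMs use learned neural weights over enormous corpora, not a frequency table, but the core loop of "pick the statistically most likely continuation of what came before" is the same shape.

```python
from collections import Counter, defaultdict

# Toy stand-in for next-token prediction: count which token follows
# which in the "training data", then greedily extend a prompt with the
# most frequent continuation. No comprehension, no scenarios -- just
# statistics over previously seen text.
training_text = "we want peace . we want security . we have it"
tokens = training_text.split()

# Frequency table: follows[prev][next] = how often `next` followed `prev`.
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def complete(prompt: str, steps: int = 3) -> str:
    """Greedily append the statistically most likely next token."""
    out = prompt.split()
    for _ in range(steps):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # never seen this token before; autocomplete stalls
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(complete("we"))  # → "we want peace ."
```

The model "says" it wants peace only because "want" most often followed "we" and "peace" was the first continuation seen after "want" in its data, which is the commenter's point about statistics versus comprehension.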

[–] Feathercrown@lemmy.world 1 points 7 months ago

Honestly if we ignore the ethical issues it is a logically consistent solution... until you consider retaliation.

[–] norbert@kbin.social 0 points 7 months ago (1 children)

As others have said this is factually incorrect. ChatGPT is not WOPR running a million War Games and calculating the winning move. It's just spitting out what it's already read.

[–] theodewere@kbin.social 1 points 7 months ago* (last edited 7 months ago) (1 children)

it routinely does things even its designers can't explain; you cannot see into that thing's thought processes or speak with certainty about its limitations

[–] norbert@kbin.social 1 points 7 months ago

thought processes

It doesn't have those.

[–] NocturnalMorning@lemmy.world 3 points 7 months ago (1 children)

Wait, which ones the bad actor? Could go either way for me.

[–] theodewere@kbin.social -2 points 7 months ago* (last edited 7 months ago)

who did the LLM nuke.. i'm just playing AI's Advocate here..

[–] PotatoKat@lemmy.world -2 points 7 months ago

Bad actors? Like the US?
