this post was submitted on 31 May 2024
Networks in China and Iran also used AI models to create and post disinformation but campaigns did not reach large audiences

In Russia, two operations created and spread content criticizing the US, Ukraine and several Baltic nations. One of the operations used an OpenAI model to debug code and create a bot that posted on Telegram. China’s influence operation generated text in English, Chinese, Japanese and Korean, which operatives then posted on Twitter and Medium.

Iranian actors generated full articles that attacked the US and Israel, which they translated into English and French. An Israeli political firm called Stoic ran a network of fake social media accounts which created a range of content, including posts accusing US student protests against Israel’s war in Gaza of being antisemitic.

[–] eveninghere@beehaw.org 0 points 5 months ago (1 children)

I already wrote one reply stating my main point. But whatever argument you come up with, I don't think it will match reality as seen by AI researchers. If you give me specific, short questions, I'd be happy to engage in a discussion, time permitting.

In any case, I won't engage with metaphoric arguments like yours about guns, because metaphoric arguments are very difficult to treat scientifically. Every situation is different. I mean that anybody can always end the discussion by saying "that's apples vs oranges", and every time that happens you have no objective way to counter it.

[–] frog@beehaw.org 1 points 5 months ago (1 children)

The metaphoric argument is exactly on point, though: the answer to "bad actors will use it for evil" is not "so everybody should have unrestricted access to this really dangerous thing." Sorry, but in no situation you can possibly devise is giving everyone access to a dangerous tool the correct answer to bad people having access to it.

[–] eveninghere@beehaw.org 1 points 5 months ago (1 children)

I can say it's both on point and not. For the "not": the UK can ban guns, and it will be very difficult to smuggle one in from the continent. Problem solved. But the same is not true for AI: if the UK government bans AI, Russia can still deliver it over the internet.

And then I can counter-argue that point, and you can counter-argue my counter-argument in turn. See what a mess metaphoric arguments bring.

[–] frog@beehaw.org 1 points 5 months ago

Had OpenAI not released ChatGPT, making it available to everyone (including Russia), there are no indications that Russia would have developed their own ChatGPT. Literally nobody has made any suggestion that Russia was within a hair's breadth of inventing AI and so OpenAI had better do it first. But there have been plenty of people making the entirely valid point that OpenAI rushed to release this thing before it was ready and before the consequences had been considered.

So effectively, what OpenAI have done is start handing out guns to everyone, and is now saying "look, all these bad people have guns! The only solution is everyone who doesn't already have a gun should get one right now, preferably from us!"