this post was submitted on 23 Jul 2023
81 points (92.6% liked)

Technology

Key Facts:

  • The AI system uses ten categories of social emotions to identify violations of social norms.

  • The system has been tested on two large datasets of short texts, validating its models.

  • This preliminary work, funded by DARPA, is seen as a significant step in improving cross-cultural language understanding and situational awareness.
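The article does not describe the system's actual model or feature set, so purely as an illustrative sketch: detecting norm violations in short texts can be framed as multi-class text classification over social-emotion categories. The category names, toy data, and classifier below are assumptions for the example, not details taken from the paper.

```python
# Illustrative sketch only: the article does not describe the actual model, and
# these category names are assumptions, not the paper's ten social emotions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy training set purely to make the sketch runnable.
texts = [
    "Thanks so much for helping me move last weekend!",
    "He cut in line and shoved the old man out of the way.",
    "She laughed at the mourners during the funeral.",
    "I forgot my boss's name in front of the whole team.",
    "They lied to their parents about where the money went.",
]
labels = ["no_violation", "anger", "disgust", "embarrassment", "shame"]

# Bag-of-words baseline; a real system would more likely fine-tune a
# pretrained language model than use TF-IDF features.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

print(clf.predict(["He ignored the invitation and never replied."]))
```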

top 35 comments
[–] captainlezbian@lemmy.world 44 points 1 year ago (3 children)

This will absolutely be used to oppress the neurodivergent at some point

[–] p03locke@lemmy.dbzer0.com 4 points 1 year ago

Then it's all Butlerian Jihad, bay-bee!

[–] Madrigal@lemmy.world 3 points 1 year ago

Everyone, starting with the neurodivergent. Or some other favoured boogieman-of-the-day such as LGBT+ people.

[–] betterdeadthanreddit@lemmy.world 0 points 1 year ago (2 children)

Could be helpful if it silently (or at least subtly) warns the user that they're approaching those boundaries. I wouldn't mind a little extra assistance preventing those embarrassing after-the-fact realizations. It'd have to be done in a way that preserves privacy though.

[–] overzeetop@lemmy.world 16 points 1 year ago

Like most scientific and technical advances, it could be an amazing tool for personal use. It won’t, of course. It will be used to make someone rich even richer, and to control or oppress people. Gotta love humanity.

[–] CheeseNoodle@lemmy.world 4 points 1 year ago

Still dangerous: an authority could subtly shift those boundaries to slowly push your behaviour in a desired direction.

[–] NoufauxRiche@lemmy.blahaj.zone 27 points 1 year ago (1 children)

Making an AI that detects violations of social norms itself sounds like a violation of social norms

[–] FaceDeer@kbin.social 5 points 1 year ago

Odd, the AI says it isn't. It must be you that is in violation.

[–] Desistance@lemmy.world 17 points 1 year ago

Why are we letting an AI determine social norms? Social norms change like every 5 years.

[–] MasterBlaster@lemmy.world 14 points 1 year ago (1 children)

As long as this doesn't get repurposed to regulate "social credit", this is fine.

[–] hoshikarakitaridia@sh.itjust.works 6 points 1 year ago (2 children)

That's the scary part of it. Idc how good it is, but if it starts to be used to censor information and rate humans, that's the line.

[–] kroy@lemmy.world 4 points 1 year ago

The line will come far far FAR before that

[–] graphite@lemmy.world 4 points 1 year ago

but if it starts to be used to censor information and rate humans, that's the line.

That line has already been crossed. Since it's already been crossed, it's inevitable that this will be used in that way.

[–] Fedizen@lemmy.world 14 points 1 year ago (1 children)

there is no application for this that is actually good

[–] RetroEvolute@lemmy.world 1 points 1 year ago (2 children)

It could help identify and measure people on the autistic spectrum or similar.

[–] n00b001@lemmy.world 11 points 1 year ago (1 children)

And what good comes of possibly covertly subjecting individuals to an autism test?

What does the examiner do with the results? Or what does their boss do with the results?

No good!

[–] Sethayy@sh.itjust.works 5 points 1 year ago

Well I mean in a functioning system it'd be private medical documents, used to give the best treatment per patient

In our system it'll be used by a private company as "their" data and sold to whoever will pay

[–] captainlezbian@lemmy.world 5 points 1 year ago

Yeah I don’t think anyone using this to do that is going to do good things with that information

[–] RagingNerdoholic@lemmy.ca 9 points 1 year ago* (last edited 1 year ago)

My dumb autistic ass be all like

[reaction image]

[–] luthis@lemmy.world 7 points 1 year ago (3 children)

Can we like.. maybe have some good, as in morally good, use cases for AI?

I know we had the medical diagnosis one, that was nice. Maybe some more like that?

[–] starman2112@lemmy.world 4 points 1 year ago (1 children)

I'm extremely skeptical of medical diagnosis AIs. Without being able to explain why it reaches a conclusion, how do we know it isn't just latching onto spurious correlations? One example I heard of recently was an AI that was extremely good at detecting TB... based on the age of the machine that took the x-ray. Because it turns out places with older machines tend to be poorer, and poorer places tend to have more TB.
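Purely to illustrate the failure mode described above (toy numbers only, nothing to do with the actual TB study), here is a sketch in which a confounding feature predicts the label about as well as the genuine signal does:

```python
# Toy illustration of the shortcut problem: the label (TB) is correlated with a
# confounder (x-ray machine age) in the training data, so a model can look
# accurate while relying on nothing medically meaningful. All numbers made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Confounder: older machines sit in poorer regions with higher TB prevalence.
machine_age = rng.uniform(0, 20, n)
has_tb = rng.random(n) < (0.05 + 0.02 * machine_age)

# The genuinely relevant signal is deliberately weak and noisy here.
lung_opacity = has_tb * 0.5 + rng.normal(0, 1.0, n)

X = np.column_stack([lung_opacity, machine_age])
X_tr, X_te, y_tr, y_te = train_test_split(X, has_tb, random_state=0)

# Compare a model that only sees the confounder with one that only sees the signal.
conf_only = LogisticRegression().fit(X_tr[:, [1]], y_tr)
sig_only = LogisticRegression().fit(X_tr[:, [0]], y_tr)
print("AUC, machine age only :", roc_auc_score(y_te, conf_only.predict_proba(X_te[:, [1]])[:, 1]))
print("AUC, lung opacity only:", roc_auc_score(y_te, sig_only.predict_proba(X_te[:, [0]])[:, 1]))
# If the confounder-only model scores comparably, a deployed model can fail
# silently wherever the machine-age/TB link no longer holds.
```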

The only positive use I can think of is time saving measures. A researcher can feed a study to ChatGPT and have it write a rough first draft of the abstract. A Game Master could ask it for inspiration on the next few game sessions if they're underprepared. An internet commenter could ask it for a third example of how it could save time.

But for anything serious, until it can explain why it comes to the conclusions it comes to, and can understand when a human says "no, you're doing it wrong," I can't see it being a real force for good.

[–] Priestofazathoth@lemmy.world 1 points 1 year ago

Ehh...at least we know we don't understand how the AI reached its conclusion. When you study human cognition long enough you discover that our beliefs about how we reach our conclusions are just stories the conscious mind makes up to justify after the fact.

"No, you're doing it wrong" isn't really a problem - it's fundamental to most ML processes.

[–] Melpomene@kbin.social 3 points 1 year ago

I was thinking that a good use of AI might be eliminating the human race, but I guess my Ultron is showing a little bit.

I tend to stay up late sometimes worrying about the moment AI becomes self-aware. I hope, by then, we're not so selfish, destructive, and petty.

[–] HikuNoir@lemmy.world -1 points 1 year ago (1 children)

What? Like ones that can quarantine you for being asymptomatic of Sea horse flu et al? Great idea.

[–] luthis@lemmy.world 5 points 1 year ago (1 children)

No, more like the ones that give early warning signs of like, dementia or something.

[–] HikuNoir@lemmy.world 1 points 1 year ago

It would be unethical for a calculator to offer up an alleged determination, given the effects of nocebo. AI will never understand irony, let alone health and well-being...

[–] PineapplePartisan@lemmy.world 5 points 1 year ago

Oh good, the Family Feud method of social norms. No value judgement, just "Survey says!"

[–] FlickOfTheBean@lemmy.world 5 points 1 year ago

Unless this is just for identifying social norm violations in written communication for the purpose of government-to-government communication, this seems vastly... infeasible, I guess. Because norms change over time, and you're going to have to keep updating this model whenever someone finally notices that a change has occurred. If anything, it might generate a completely new set of grammar/phrasing expectations due to the feedback from this likely-to-not-change-very-much ruleset... As in, if you thought politically correct phrasing was annoying now, just wait until the AI says you're not doing it well enough.

Idk though, this isn't my specialty area, anyone care to tell me how I'm wrong? What good can this really do?

(I swear I did read the article, it just isn't clicking over the sound of my loud pessimism)

[–] DLSchichtl@lemmy.world 5 points 1 year ago

Nice use of the scary word 'violation,' really gets the blood pumping. A more apt description would be 'deviations from social norms,' which better matches DARPA's stated goal of helping with language and cultural barriers. It could give ~~invad-~~ er, liberating American forces a heads up when they are about to commit a cultural SNAFU. Helps build better relationships with the locals.

[–] KbinItTogether@kbin.social 4 points 1 year ago (1 children)

This sounds like a prequel to the show Psycho-Pass.

[–] FaceDeer@kbin.social 2 points 1 year ago

Psycho-Pass raises a lot of interesting questions and dilemmas because the technology depicted actually works in that setting (aside from a few outlying cases that help drive interesting plots). If there was a scanner of some sort that actually genuinely could detect violent intention before it was acted on then I think it would be reasonable and moral to use it in at least some manner.

[–] coco@lemmy.world 4 points 1 year ago

Like Black Mirror!!

[–] DarkThoughts@kbin.social 3 points 1 year ago

That's definitely not going to be abused, at all.

[–] Arotrios@kbin.social 2 points 1 year ago

Damn, now I want them to sample me just to see how much I can drive the model insane.