This post was submitted on 31 Jul 2023
44 points (100.0% liked)

Technology


Prof. Yair Neuman and engineer Yochai Cohen of Ben-Gurion University of the Negev have designed an AI system that identifies social norm violations. They trained the system to identify ten social emotions: competence, politeness, trust, discipline, caring, agreeableness, success, conformity, decency, and loyalty.

The system, tested on two massive datasets of short texts, successfully classified each written situation into one of these ten categories and could determine whether it was positive or negative. The researchers claim that their models achieve significant predictive performance and show that even complex social situations can be functionally analyzed with modern computational tools.
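A commenter below notes that the system reportedly relies on ChatGPT and a zero-shot classifier. As a rough, minimal sketch of what zero-shot classification over these ten categories could look like (not the authors' actual pipeline; the model choice and the "violation" second pass are illustrative assumptions):

```python
from transformers import pipeline

SOCIAL_CATEGORIES = [
    "competence", "politeness", "trust", "discipline", "caring",
    "agreeableness", "success", "conformity", "decency", "loyalty",
]

# Off-the-shelf zero-shot classifier; the model is an assumption, not from the paper.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def classify_situation(text: str) -> dict:
    """Assign a short text to the best-fitting social category, then guess
    whether it reads as a positive example or a violation of that category."""
    category_result = classifier(text, candidate_labels=SOCIAL_CATEGORIES)
    top_category = category_result["labels"][0]  # labels are returned sorted by score

    # Second pass: category vs. its violation (an illustrative heuristic only).
    valence_result = classifier(
        text, candidate_labels=[top_category, f"violation of {top_category}"]
    )
    is_positive = valence_result["labels"][0] == top_category
    return {"category": top_category, "positive": is_positive}

print(classify_situation("She interrupted the speaker and mocked his accent."))
```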

top 3 comments
[–] itsAllDigital@feddit.de 23 points 1 year ago

Welcome to the future. Get ready to conform or perish

[–] lobster_teapot@lemmy.blahaj.zone 15 points 1 year ago (1 children)

This is one of the worst cases of "tech dude tries to solve social science with math" I've ever read. The paper isn't just bad as a whole; it deliberately disregards 200 years of research in at least 3 different academic fields and instead quotes Borat.
It then goes on to gleefully describe how the authors built a giant machine to reproduce their own (dangerous) biases about the universality of emotion-voicing with just ChatGPT and a zero-shot classifier, would you look at that? Yay science, I guess?

[–] heartlessevil@lemmy.one 3 points 1 year ago* (last edited 1 year ago)

Yeah, this is complete GPT hype. I would like to never hear about GPT again, just like we never hear about NFTs now.

I can't even filter out GPT posts because people call them AI instead of LLM to sound cool :(