this post was submitted on 21 Jul 2023
241 points (100.0% liked)

Technology

 

This is the humanless future, hurray!

[–] TwilightVulpine@kbin.social 8 points 1 year ago (8 children)

Could a language model actually, independently discern whether a source is trustworthy? That seems difficult to determine, especially when it comes to possible leaks. The kinds of AIs we have today can't really conceptualize a world outside the texts they process; they can only check against other texts and user input.

[–] FaceDeer@kbin.social 1 points 1 year ago (5 children)

It would need to be told to do so, of course. I can think of a couple of approaches. You could have it use a database to track the identities of information sources, so the AI would know whether a claim was coming from a new or a well-established source. It could also check whether the same news is appearing in other sources. A lot of this isn't strictly large-language-model capability, but it would use LLMs to interpret its inputs.
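A minimal sketch of that kind of setup (all names, schema, and thresholds here are hypothetical, not anything from the comment above): keep a small reputation table for sources, treat unknown sources as untrusted, and only accept a claim from an unknown source if a better-established source reports the same thing. The LLM step (deciding whether two texts report the same event) is stubbed out with a crude word-overlap check.

```python
import sqlite3

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Tiny source-reputation store (hypothetical schema)."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sources ("
        "name TEXT PRIMARY KEY, first_seen TEXT, reputation REAL DEFAULT 0.0)"
    )
    return conn

def source_reputation(conn: sqlite3.Connection, name: str) -> float:
    row = conn.execute(
        "SELECT reputation FROM sources WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else 0.0  # unknown/new sources start with no trust

def claims_match(a: str, b: str) -> bool:
    # Placeholder for the LLM step ("do these two texts report the same event?").
    # A real version would prompt a language model; here, crude word overlap.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1) > 0.3

def is_credible(claim: str, source: str, other_reports: list[dict],
                conn: sqlite3.Connection, trust_cutoff: float = 0.5) -> bool:
    """Trust a claim if its source is established, or if an established
    source independently reports the same thing."""
    if source_reputation(conn, source) >= trust_cutoff:
        return True
    return any(
        source_reputation(conn, r["source"]) >= trust_cutoff
        and claims_match(claim, r["text"])
        for r in other_reports
    )
```

The LLM only handles interpretation (matching claims, extracting who said what); the trust bookkeeping itself is ordinary database work, which is the point being made above.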

[–] MagicShel@programming.dev 1 points 1 year ago (4 children)

Analysis of social media through the lens of tracking source reliability would be damned useful even without AI, and if it could easily be done I think it would already be happening. I've thought about this for about five years, thinking we could track bots and disinformation based on the patterns of who promotes/upvotes it, but it's beyond my meager means.
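A purely illustrative sketch of that "who promotes/upvotes it" idea (data and threshold made up): accounts whose upvote histories overlap far more than chance might be part of a coordinated ring, which you can surface with something as simple as Jaccard similarity over the sets of posts each account upvoted.

```python
from itertools import combinations

def suspicious_pairs(upvotes: dict[str, set[str]], threshold: float = 0.8):
    """upvotes maps account name -> set of post ids it upvoted.
    Returns account pairs whose vote histories overlap suspiciously."""
    flagged = []
    for a, b in combinations(upvotes, 2):
        union = upvotes[a] | upvotes[b]
        if not union:
            continue
        jaccard = len(upvotes[a] & upvotes[b]) / len(union)
        if jaccard >= threshold:
            flagged.append((a, b, jaccard))
    return flagged

# Example: two accounts that always vote together get flagged.
votes = {
    "alice": {"p1", "p2", "p3"},
    "bot_a": {"p1", "p2", "p3", "p4"},
    "bot_b": {"p1", "p2", "p3", "p4"},
}
print(suspicious_pairs(votes))  # [('bot_a', 'bot_b', 1.0)]
```

Real detection would need to control for popular posts that everyone upvotes, but the basic pattern analysis doesn't require an LLM at all.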

[–] nickajeglin@lemmy.one 2 points 1 year ago* (last edited 1 year ago)

I think certain places (reddit?) have been using algorithms to find and stamp out bots/vote manipulation for quite a while. I remember at least one major wave of bans for smurfed accounts participating in manipulation.
