this post was submitted on 28 Jul 2023
25 points (85.7% liked)

Lemmy


Everything about Lemmy; bugs, gripes, praises, and advocacy.

For discussion about the lemmy.ml instance, go to !meta@lemmy.ml.

I'm sure this is a common topic, but the timeline is pretty fast these days.

With bots looking more human than ever, I'm wondering what's going to happen once everyone starts using them to spam the platform. Lemmy, with its simple username/text layout, seems to offer the perfect ground for bots: verifying that someone is real would mean scrolling through all their comments and reading them carefully, one by one.

Lmaydev@programming.dev · 3 points · 1 year ago

It's because it isn't really fed facts. Words are converted into numbers, and the model learns the relationships between them.

It has absolutely no understanding of facts, just how words are used with other words.

It's not like it's looking up things in a database. It's taking the provided words and applying a mathematical formula to create new words.
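To make that concrete, here is a minimal, purely illustrative sketch in Python. The vocabulary, the random weights, and the `next_token_distribution` helper are all invented for this example and don't come from any real model; the point is only that the "knowledge" is vectors of numbers, and producing the next word is arithmetic over them rather than a lookup in a database of facts.

```python
# Toy sketch only: random weights and invented names, nothing from a real model.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "sky", "is", "blue", "green"]        # tiny made-up vocabulary
tok2id = {w: i for i, w in enumerate(vocab)}

dim = 8                                              # embedding size (arbitrary)
embeddings = rng.normal(size=(len(vocab), dim))      # "words converted into numbers"
w_out = rng.normal(size=(dim, len(vocab)))           # stand-in for the rest of the network

def next_token_distribution(context):
    """Turn context words into vectors, do arithmetic, score every word as the next token."""
    vecs = embeddings[[tok2id[w] for w in context]]  # look up each word's vector
    hidden = vecs.mean(axis=0)                       # crude stand-in for attention/transformer layers
    logits = hidden @ w_out                          # plain matrix math, no table of facts anywhere
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                           # softmax: a probability for each vocabulary word

probs = next_token_distribution(["the", "sky", "is"])
for word, p in sorted(zip(vocab, probs), key=lambda pair: -pair[1]):
    print(f"{word:>6}: {p:.2f}")
```

With random weights the ranking it prints is meaningless, which is the point: until the numbers are shaped by training on text they encode nothing, and even after training they encode statistical relationships between words, not a table of verified facts.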

RoundSparrow@lemmy.ml · 1 point · 1 year ago

> It's because it isn't really fed facts.

That's an interesting theory of why it works that way. Personally, I think rights usage, as in copyright, is a huge problem for OpenAI and Microsoft (Bing)... they are trying to avoid paying for the training material they use, and if the models quoted source material accurately, they would run into the licensing costs they are trying to avoid.

!aicopyright@lemm.ee