Technology
This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and for civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads; such posts are otherwise subject to removal.
Rules:
1: All Lemmy rules apply
2: No low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies' actions affecting a wide range of people)
6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
And who gets to decide what is misinformation?
If the information is verifiably false, it doesn't matter who proves it.
Verified by whom? Our own parents lie to us sometimes, and even spouses do, so what makes you think the media or gov't won't?
My point is that this information is typically verifiable in an independent sense. This gets at the essential question of what truth is: the truth we should all agree on no matter who's providing it to us. We can verify it for ourselves. As an example, if we have 14 videos of the same incident and they show roughly the same events, then we can be fairly confident that the event occurred, because we trust that those 14 sources are real and non-colluding.
That isn't how human beings work. Where have YOU been? China, where you can be thrown in prison for not having the same "truth" as another person? You do know that not all scientists even agree on "truth", right? I think it was Feynman who showed that Einstein was wrong once upon a time.
Those 14 videos would have to be from 14 different sources, but I guess that is what you meant. Even that only makes the event more probable, but doesn't guarantee it's true.
Rather than step in as arbiters of truth, they should provide tools, links, and APIs to let their users decide what is true and what is misinformation. If their users express some consensus, like 90%+ of real human users, then I think they're right to demote or put warning labels on particular links.