this post was submitted on 25 Apr 2022

Technology

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.

Ah yes. Buying a company is totally subversive! /s

[–] Ephera@lemmy.ml 11 points 2 years ago

Usually, the ranking algorithms of big platforms are needlessly complex, because they need to be resilient against people trying to game the system. So, yeah, it may be good at what it does, but I doubt it would be terribly useful for e.g. Mastodon to adopt...

[–] Ferk@lemmy.ml 3 points 2 years ago* (last edited 2 years ago)

Personally, I wouldn't say that an algorithm that relies on obscurity (needless complexity being a form of obscurity) would be a good algorithm, not when it's public. I guess we'll see.

It's possible that the algorithms will have to be heavily refactored, cleaned up, and maybe simplified before they are publicly released, since I expect many of those approaches would be useless against someone with access to the code and the ability to systematically run tests against it to "game the system".

[–] Ephera@lemmy.ml 4 points 2 years ago

Yeah, if you open-source an obscure algorithm, you lose the "security by obscurity".

Much like with encryption algorithms, you could push the obscurity out into parametrisation, but that only makes transparent how the algorithm works in theory.
In practice, it will still be obscured, which is exactly where Musk supposedly wants more transparency.
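To sketch what "pushing obscurity into parametrisation" looks like: the code below uses HMAC as an analogy, since it's a well-known case where the algorithm is fully public while the outputs stay unpredictable because only a parameter (the key) is secret. A ranking algorithm could be published the same way while its tuned weights stay private. Everything here is illustrative, not any platform's actual code.

```python
# Public algorithm, secret parameter: anyone can read this code,
# but without SECRET_KEY nobody can predict or forge the outputs.
# A published ranking algorithm with private weights would be
# "transparent" in the same limited sense.
import hashlib
import hmac

SECRET_KEY = b"server-side secret"  # the one thing never published

def tag(message: bytes) -> str:
    # HMAC-SHA256 over the message; deterministic given the key,
    # opaque to anyone who only has the algorithm.
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

print(tag(b"hello"))  # 64 hex characters, unguessable without the key
```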

So, yeah, either he doesn't open-source it, the open-sourcing is useless for transparency or we'll watch Twitter burning to the ground. 🙂

[–] Ferk@lemmy.ml 3 points 2 years ago* (last edited 2 years ago)

There's still the chance that they have/make an algorithm that can actually be transparent without being exploitable in ways that are detrimental (which is what I would consider a "good algorithm")... but I agree that this is the least likely outcome.

Still, I couldn't care less about any of the other outcomes. I have nothing to lose whether Twitter burns or stays as it is 😁

[–] Ephera@lemmy.ml 2 points 2 years ago

Well, I'm of the opinion that creating such an algorithm isn't possible, because it is fundamentally possible to game the system (e.g. by creating multiple accounts), and making it transparent why a post is promoted necessarily also makes that transparent to anyone wanting to game the system.

Having said that, it seems Musk wants to require that all users verify as a real, unique person. That would make it harder to game the system, and then they could use an algorithm akin to those used for governmental elections.

But yeah, then that algorithm again isn't useful by itself.
I also doubt the EU will be amused by his plans to nuke user privacy for no real reason.
I'm not opposed to him burning down Twitter either, though. ¯\_(ツ)_/¯

[–] Ferk@lemmy.ml 2 points 2 years ago* (last edited 2 years ago)

Hmm... that's interesting, actually. Requiring users to authenticate might help curb some instances of trolling and abuse, but at the same time the identification creates problems for privacy.

A middle ground would be allowing non-verified users to participate, but giving them a lower influence on the relevance of content, perhaps with caps limiting how much non-verified influence can affect a post's weighted relevance (so content promoted by unverified accounts would have lower priority, and pushing it with a farm of non-verified bot accounts would not have much of an impact).
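As a minimal sketch of that middle ground: the function below weights verified votes fully, discounts unverified votes, and hard-caps the total unverified contribution relative to the verified score, so a bot farm hits a ceiling. All names, weights, and cap ratios here are made-up assumptions for illustration, not a proposal for any real platform.

```python
# Hypothetical relevance scoring with capped unverified influence.
# The constants are illustrative assumptions, not tuned values.
VERIFIED_WEIGHT = 1.0
UNVERIFIED_WEIGHT = 0.2
# Unverified votes can never add more than this fraction of the
# verified score, so unverified bot farms have bounded impact.
UNVERIFIED_CAP_RATIO = 0.5

def post_relevance(verified_votes: int, unverified_votes: int) -> float:
    verified_score = verified_votes * VERIFIED_WEIGHT
    unverified_score = unverified_votes * UNVERIFIED_WEIGHT
    # Cap the unverified contribution relative to the verified score.
    cap = verified_score * UNVERIFIED_CAP_RATIO
    return verified_score + min(unverified_score, cap)

# 100 verified votes vs. 10,000 unverified (bot-farm) votes:
# the unverified side is capped at 50 instead of contributing 2000.
print(post_relevance(100, 10_000))  # → 150.0
```

One consequence of tying the cap to the verified score: a post with zero verified votes scores zero no matter how many unverified accounts push it, which is exactly the anti-bot-farm property described above (at the cost of muting genuinely unverified-only communities).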

Of course, there's likely gonna be some level of bias based on who would go through the trouble of verifying themselves... but that's not the same thing as not being transparent. Bias is a problem you cannot escape no matter what: if a social network is full of idiots, the algorithm isn't gonna magically make their conversations any less idiotic. So I think the algorithm could still be a good and useful thing to come out of this, even if the social network itself isn't.