Not really my opinion, but there is a reasonable argument to be made that even benevolent algorithms ultimately increase your engagement with online content and alienate you from your physical surroundings and the people near you. Just because you set it up yourself does not mean it is healthy for you.
This is the exact same logic people use for drug prohibition, and it's a completely wrong way to look at it. The problem in both cases is that people feel the need to escape their physical surroundings. That problem must be solved by making the environments we live and work in better: making work less stressful, giving people more free time to socialize, providing public services like parks, sports centres, and so on.
i think when it comes to algorithms that save you time, simple filters do the job perfectly. like only people you follow vs. specific hashtags, or just full posts vs. replies included, or chronological vs. "good friends" (like in instagram) first. part of the reason modern algorithms are so complex is so they can confuse us and we end up spending more time on the platform. if you're making an algorithm for ease of use, it should be the opposite of confusing. it should probably be clarified what people mean by algorithms, since that's a very general word, but most of the time they probably mean the complex and confusing stuff modern social media uses, rather than the simple filters that most of the fediverse uses.
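the kind of simple filter described above really is just a few lines of code. here's a minimal sketch (the post fields and data are invented for illustration, not any real fediverse API):

```python
# Hypothetical post records; field names are assumptions for illustration.
posts = [
    {"author": "alice", "is_reply": False, "tags": ["fediverse"], "time": 3},
    {"author": "bob",   "is_reply": True,  "tags": ["linux"],     "time": 2},
    {"author": "carol", "is_reply": False, "tags": ["linux"],     "time": 1},
]

following = {"alice", "carol"}

def simple_feed(posts, following, include_replies=False):
    """Only people you follow, optionally without replies, newest first."""
    kept = [
        p for p in posts
        if p["author"] in following
        and (include_replies or not p["is_reply"])
    ]
    # chronological, newest first -- no engagement scoring involved
    return sorted(kept, key=lambda p: p["time"], reverse=True)

feed = simple_feed(posts, following)
```

nothing here tries to guess what will keep you scrolling; every rule is one the user could state in a sentence.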
more complex algorithms might be useful for a site like YouTube, since it's an entertainment platform not a socialization platform, so you just want to see anything that will be entertaining, and discover new content whenever possible.
I've always wished that social media sites would have their algorithm be user customizable through some kind of basic syntax. There could of course be a default - but the user would be able to see what it is, how it works, and be able to customize it to their liking. Of course, this would be complicated, but it's not like these algorithms don't already exist. They're just hidden.
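To make the idea concrete, a user-editable syntax could be as small as "action target weight" lines. This is a toy sketch; the rule format, keywords, and weights are all invented for illustration, not any real platform's configuration:

```python
# Hypothetical user-editable feed rules (invented syntax, for illustration):
#   boost <condition> <multiplier>  -- scale a post's score
#   mute  <condition>               -- drop the post entirely
RULES = """
boost tag:linux 2.0
mute author:spammer
boost followed 1.5
"""

def parse_rules(text):
    rules = []
    for line in text.strip().splitlines():
        parts = line.split()
        action, target = parts[0], parts[1]
        weight = float(parts[2]) if len(parts) > 2 else 0.0
        rules.append((action, target, weight))
    return rules

def score(post, rules, following):
    """Score one post against the user's rules; 0.0 means hidden."""
    s = 1.0
    for action, target, weight in rules:
        if action == "mute" and target == f"author:{post['author']}":
            return 0.0
        if action == "boost":
            if target.startswith("tag:") and target[4:] in post["tags"]:
                s *= weight
            elif target == "followed" and post["author"] in following:
                s *= weight
    return s
```

The point is less the syntax than the transparency: the user can read the whole ranking policy, and the default would just be one such rule file they are free to edit.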
The inherently wrong thing about these algorithms is that, since they are made to maximize profit by maximizing engagement, they produce addiction. I don't see anything wrong with an opt-in option for algorithms as long as there is a warning that they may produce addiction. The problem is that for an algorithm to work, more data needs to be gathered, and since much FOSS software is privacy-focused, it doesn't tend to gather much data.
Addiction is a goal of the algorithms as they are designed. Better algorithms increase your screen-on time and mindless scrolling.
A good algorithm could, hypothetically, limit the amount of content you see, or save you the time spent scrolling to find something interesting.
I agree. As you said, an algorithm doesn't necessarily need to be addictive or maximize your wasted time; that just happens to be the most common type we see today, due to the big platforms' drive for financial growth. It is perfectly reasonable to want an algorithm that helps you by filtering the massive pool of content available on the Internet into the most relevant stuff for you, while also being respectful of your time. The idea you mentioned of an entry-limited algorithm is certainly very interesting, in my opinion.
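An entry-limited feed could be sketched in a few lines. This is a hypothetical illustration (the caps and field names are assumptions, not any real platform's behavior): take an already-ranked list, cap repeats from any one author, and stop at a fixed total instead of scrolling forever:

```python
def entry_limited_feed(ranked_posts, total_cap=20, per_author_cap=3):
    """Return at most total_cap posts, at most per_author_cap per author.

    The feed simply ends when the cap is reached -- no infinite scroll.
    """
    seen = {}
    feed = []
    for post in ranked_posts:
        author = post["author"]
        if seen.get(author, 0) >= per_author_cap:
            continue  # this author already filled their quota
        seen[author] = seen.get(author, 0) + 1
        feed.append(post)
        if len(feed) >= total_cap:
            break  # hard stop: the algorithm refuses to serve more
    return feed
```

The design choice is that the limit lives in the algorithm itself, so respecting your time is a property of the feed rather than a matter of self-control.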
My big problem with "algorithms" (by which I don't mean the pedantic "well, pushing top-rated content to the top is ackshyouallee an algorithm, technically") for controlling feeds is that algorithms are biased in subtly devastating ways. We like to think that "algorithms" are neutral because computers are neutral, but the truth is "algorithms" are designed and implemented by human beings and reflect what those human beings think is "normal" or "correct" or "important" or the like. Indeed there's one huge, GLARING bias baked straight in from the outset: numericity. If it can't be factored in some way into a number, it isn't important to the "algorithm" because at its core the "algorithm" can't work without numbers of some form or another.
Every "algorithmically"-curated system I deal with I can break with ease just by thinking a wee bit outside the box and flustering the poor so-called AI by selecting things on criterion that they're likely not programmed for because in the biased view of the AI's programmers the criterion wasn't "important".
At some point years ago Facebook started defaulting to relatives/family algorithmically. This is extremely biased and problematic. It makes a lot of sensitive assumptions, as everyone's family structure is different. So "devastating" is a good choice of words.
Yep. The Lemmy feed, for example, is algorithmic.
Not really unless you have a really broad definition of algorithmic. It is just up and downvotes (and personalized subscriptions).