[–] jarfil@beehaw.org 35 points 6 months ago (9 children)

Only small services with fewer than 50 employees and annual turnover of under €10 million (around $10.8 million) are exempt.

Soon: large platforms firing all but 49 employees, and outsourcing the rest of their operations to hundreds of companies with fewer than 50 employees each... all owned by the same shareholders.

[–] rimu@piefed.social 21 points 6 months ago (2 children)

Or they could just comply with the law:

sites will have to provide a reason to users when their content or account has been moderated, and offer them a way of complaining and challenging the decision. There are also rules around giving users the ability to flag illegal goods and services found on a platform.

Doesn't seem like a big deal to me.
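Mechanically it isn't much either: a "statement of reasons" is basically a small record attached to each moderation action, plus an appeal route the user can follow. A minimal sketch of what such a record could look like (the field names are my own illustration, not taken from the regulation text):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class StatementOfReasons:
    """Record sent to a user when their content or account is moderated.

    Field names are illustrative; the law requires the substance
    (grounds, basis, appeal route), not this exact shape.
    """
    action: str                 # e.g. "content_removed", "account_suspended"
    content_id: str             # the item the decision applies to
    grounds: str                # human-readable reason for the decision
    legal_or_tos_basis: str     # statute or terms-of-service clause relied on
    automated: bool             # whether automated means were used
    appeal_url: str             # where the user can challenge the decision
    issued_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Every moderation action would emit one of these to the affected user:
notice = StatementOfReasons(
    action="content_removed",
    content_id="post/12345",
    grounds="Listing offers counterfeit goods.",
    legal_or_tos_basis="Terms of Service §4.2 (illegal goods)",
    automated=False,
    appeal_url="https://example.com/appeals/new",
)
```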

[–] tuhriel@discuss.tchncs.de 1 points 6 months ago (1 children)

It is at the scale they're working at; there's a reason you can't get an actual person to contact you... it's too expensive to have actual people working these cases.

[–] Zworf@beehaw.org 1 points 6 months ago* (last edited 6 months ago)

It's mostly actual people. I know some of them at different platforms (for some reason this city has become a bit of a moderation hub). Most of these companies take moderation very seriously, and where AI is involved, it's so far only in an advisory capacity. Twitter is the exception because... well, Elon.

But their work is strictly regulated internally, based on a myriad of policies (most of which are not made public, precisely to prevent bad actors from working around them). There usually isn't much to discuss with a user, nor could such a discussion really go anywhere. Before a ban gets issued, the case has already been reviewed by at least two people, and reviewers' "accuracy" is constantly monitored by QA staff.
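Schematically the flow is something like the sketch below. The two-reviewer threshold is the real part from what I described; everything else, including the QA sampling rate, is made up for illustration:

```python
import random

# Toy sketch of the review flow: a ban only goes out once at least two
# reviewers have independently agreed, and QA re-checks a random sample
# of closed cases to track reviewer accuracy over time.
REQUIRED_AGREEMENTS = 2      # "reviewed by at least two people"
QA_SAMPLE_RATE = 0.10        # fraction of closed cases QA re-reviews (illustrative)

def decide_ban(reviewer_verdicts: list[bool]) -> bool:
    """Issue a ban only if enough independent reviewers voted to ban."""
    return sum(reviewer_verdicts) >= REQUIRED_AGREEMENTS

def send_to_qa(case_id: str) -> bool:
    """QA spot-checks a random slice of closed cases."""
    return random.random() < QA_SAMPLE_RATE

verdicts = [True, True]          # two reviewers independently agreed
if decide_ban(verdicts):
    print("ban issued")
    if send_to_qa("case-42"):
        print("case queued for QA re-review")
```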

Most are also very strict with their employees: no remote work, no phones on the work floor, strong oversight, etc., to make sure cases are handled personally and employees don't share screenshots of private data.

And most of them have a psychologist on site 24/7. It's not much fun watching the stuff these people have to deal with on a daily basis. I don't envy them.
