Mercy craned forward, took a deep breath and loaded another task on her computer. One after another, disturbing images and videos appeared on her screen. As a Meta content moderator working at an outsourced office in Nairobi, Mercy was expected to action one “ticket” every 55 seconds during her 10-hour shift. This particular video was of a fatal car crash. Someone had filmed the scene and uploaded it to Facebook, where it had been flagged by a user. Mercy’s job was to determine whether it had breached any of the company’s guidelines that prohibit particularly violent or graphic content. She looked closer at the video as the person filming zoomed in on the crash. She began to recognise one of the faces on the screen just before it snapped into focus: the victim was her grandfather.

New tickets appeared on the screen: her grandfather again, the same crash over and over. Not only the same video shared by others, but new videos from different angles. Pictures of the car; pictures of the dead; descriptions of the scene. She began to recognise everything now. Her neighbourhood, around sunset, only a couple of hours ago – a familiar street she had walked along many times. Four people had died. Her shift seemed endless.

We spoke with dozens of workers just like Mercy at three data annotation and content moderation centres run by one company across Kenya and Uganda. Content moderators are the workers who trawl, manually, through social media posts to remove toxic content and flag violations of the company’s policies. Data annotators label data with relevant tags to make it legible for use by computer algorithms. Behind the scenes, these two types of “data work” make our digital lives possible. Mercy’s story was a particularly upsetting case, but by no means extraordinary. The demands of the job are intense.

  • Flying Squid@lemmy.worldM

    I honestly don’t know the solution to this beyond, obviously, paying these people more. Because if you don’t train an AI to go through this sick shit and get rid of it, you need to get people to do it. The only other option is to just leave it up, and I don’t think that’s a good option.

    AI sucks for a lot of applications, and I’m sure it won’t be effective enough here to take humans out of the process entirely, but if we don’t try to get humans out of this process, we’re forcing people to look at murders and child porn to get rid of it.

    • LadyAutumn@lemmy.blahaj.zone

      Yeah. I think one fantastic step is not to outsource this to people in countries who can be paid several orders of magnitude less than the minimum wage in the States (which is already pitifully low).

      I also feel like this can’t be someone’s full-time job. You just can’t do this full time. People who do content moderation should be rotated on and off of checking content. They also shouldn’t have KPI metrics. They should have enough time to process after seeing one of these things. The whole thing is criminally inhumane.

      And yeah, idk why AI can’t auto-remove video/image content and it’s only human reviewed if it’s appealed.

      • KevonLooney@lemm.ee

        A real solution is to require every single employee in the company to review these videos equally. If the Executive Leadership Team experiences this once, the issue will be fixed. Probably by banning repeat problem users.