Is seeing still believing?
Social media content moderators perform a critical function in modern civic society: they act as first responders on social media platforms, confronted with harmful digital content on a daily basis. Without them, the Internet would be at the mercy of an unpredictable global social climate, and would likely be a less safe, less peaceful place to be.
Nevertheless, the screening of disturbing content cannot simply be handed over to a planned full integration of artificial intelligence. Content moderation is not a purely technical challenge that artificial intelligence can solve on its own. Automated tools built to recognise creative permutations of language, for example, routinely fail at the task: cultural nuances and linguistic specificities remain difficult for algorithms to parse.
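As a minimal sketch of this failure mode, consider a naive keyword-based filter. The blocklist term and the example posts below are hypothetical illustrations, not any platform's actual moderation pipeline; the point is only that literal matching breaks down the moment language is creatively permuted.

```python
# Minimal sketch of a naive keyword-based filter (hypothetical example,
# not any platform's real moderation pipeline).

BLOCKLIST = {"scam"}  # hypothetical banned term


def naive_filter(post: str) -> bool:
    """Flag a post if any blocklisted word appears verbatim."""
    words = post.lower().split()
    return any(word in BLOCKLIST for word in words)


# The literal spelling is caught...
print(naive_filter("this is a scam"))         # True

# ...but trivial creative permutations slip through.
print(naive_filter("this is a sc4m"))         # False: digit substitution
print(naive_filter("this is a s c a m"))      # False: letter spacing
print(naive_filter("totally legit, honest"))  # False: sarcasm needs cultural context
```

Even far more sophisticated classifiers face the same underlying problem in harder form: meaning shifts with context, community, and culture, which is precisely where human judgement outperforms pattern matching.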
For this reason, and despite all the struggles moderators face in screening content, it is important to keep content moderation a human-led process: algorithms are not infallible in the content screening workflow. The design concept explores how social media content moderators can tackle the emergence of deepfakes by relying less on algorithms and computing power and more on human skills.