Kenyan AI Workers Recount 'Bullet-Like' Trauma from Chatbot Labeling Work

Kenyan AI workers are shedding light on the mental trauma and psychological toll they endure while working to make artificial intelligence (AI) systems safe for users.


The global proliferation of AI software, including powerful chatbot services such as ChatGPT, has led companies to employ 'moderators' who fine-tune these bots to keep users safe. The aim is to teach the AI to recognize extreme content and prevent its dissemination.


The chatbot industry is growing rapidly into a multibillion-dollar market, generating jobs not only in tech powerhouses like China and the US, but also in lower-income parts of the world such as Africa, India, and the Philippines. However, building these safeguards into chatbots can take a heavy toll on the people tasked with reviewing distressing content.


Modbot, 27, said: "It felt like life was over. Hope vanished, and I felt like I'd lost everything."


He described how the trauma shattered his marriage and personal relationships and left him with depression. Other colleagues also spoke of experiencing post-traumatic stress disorder (PTSD) following exposure to traumatic content.


Modbot worked at 'Sama,' a company contracted to provide data-labeling services for ChatGPT.


As a 'data labeler,' Modbot was assigned alongside colleagues to label extreme content, verifying and assessing whether labeled data was suitable. Every day, he had to review content involving violence, hate speech, self-harm, and sexual content.


"It was truly horrifying and shocking."


"After seeing this content for four months, my perspective changed."


"My behavior changed, my family and wife left, and I still grapple with depression."

Modbot, alongside fellow workers in similar roles, submitted a petition to the Kenyan parliament. The petition called for an investigation into working conditions at Kenyan tech firms that take on content moderation and other AI work outsourced by foreign tech giants.


He asserts that employees lacked adequate training to handle such extreme textual content and were not provided sufficient professional counseling. However, Sama strongly disputes these claims.


According to Dr. Veronica Eung-Ah Park, a psychologist specializing in PTSD, exposure to extreme online content causes what is known as 'secondary trauma.' In severe cases, it can produce effects similar to the 'primary trauma' experienced by direct victims of abuse.


"The danger with secondary trauma is the profound sense of helplessness."


"There's no next step. You're just watching without anyone to report to. There's nothing you can do."


"Additionally, content moderation work is unpredictable. You don't know what you'll see and how the effects will amplify."


Symptoms of secondary trauma, as Dr. Park explained, include nightmares, withdrawal from others, diminished empathy, and anxiety about anything that might remind the person of what they have read or heard.


Dr. Park emphasized that providing support in this context is "essential" for companies.


"Efforts should be made not only to shield employees from their tasks but also to ensure their overall well-being."

The plight of Kenya's content moderators is not a new issue.


In May, more than 150 African content moderators, who have provided services for AI systems used by major tech companies, voted to form the first-ever content moderators' association.


What Modbot desires from big tech companies and their billions of users is this kind of recognition and understanding.


"People need to know that content moderators are the ones making platforms safe. Many people don't even realize this role exists." Sama countered all allegations and stated that all applicants underwent a "resilience assessment" and reviewed examples of potentially distressing content before being hired.


Furthermore, before the project began, employees reviewed and signed consent forms that once again emphasized the potentially distressing nature of the content.


A company spokesperson stated, "Employees selected for the project were provided with psychosocial support throughout their participation, and the company's wellness professionals offered 24/7 on-site support, accessible year-round and through the company's health insurance."


Modbot now awaits the parliament's consideration of the petition. While striving to recover his mental well-being, he finds solace in reflecting on what his time as a data labeler accomplished.


"I am proud of myself. I felt like a soldier. Thanks to me and my colleagues taking the bullets, now anyone can use ChatGPT safely."
