Workers who helped reduce ChatGPT’s harmful outputs are asking lawmakers to stop suspected Big Tech exploitation. Kenyan employees who helped remove offensive content from ChatGPT, OpenAI’s generative AI chatbot that produces text in response to user prompts, have petitioned the country’s lawmakers to open probes into Big Tech’s outsourcing of content moderation and AI work to Kenya.
The petitioners demand investigations into the “nature of work, the conditions of work, and the operations” of large tech firms that subcontract work to firms like Sama in Kenya. Sama is at the center of several legal battles over claims of exploitation, union-busting, and unauthorized mass layoffs of content moderators.
The petition follows a Time article that exposed the Sama employees’ low pay and the nature of their labor, which required them to read and classify graphic text, including descriptions of murder, bestiality, and rape. According to the article, OpenAI hired Sama in late 2021 to “label textual descriptions of sexual abuse, hate speech, and violence” as part of an effort to build a tool, later integrated into ChatGPT, to identify toxic content.
The employees claim they were exposed to harmful material that caused “severe mental illness,” while being exploited and receiving no psychological support.
The workers say legislators should “regulate the outsourcing of harmful and dangerous technology” and safeguard those who perform it. They are urging lawmakers to pass laws regulating companies that “outsource harmful and dangerous technology work” and protecting “workers who are engaged through such engagements.”
Sama says its clients include 25% of the Fortune 50, among them Google and Microsoft. The San Francisco-based company’s core business is computer vision data annotation, curation, and validation, and it employs over 3,000 people across its centers, including the one in Kenya. Earlier this year, Sama laid off 260 employees when it stopped providing content moderation services to concentrate on computer vision data annotation.
Responding to the allegations of exploitation, OpenAI acknowledged that the work is difficult, adding that it had established and shared ethical and wellness guidelines with its data annotators to ensure the work was completed “humanely and willingly.”
The company said human data annotation is one of many streams of work it uses to gather human feedback and steer its models toward safer real-world behavior as it pursues safe and beneficial artificial general intelligence.
An OpenAI representative stated, “We know this is tough work for our researchers and annotation workers in Kenya and throughout the world – their efforts to assure the safety of AI systems have been enormously helpful.”
TechCrunch was informed by Sama that it was willing to cooperate with the Kenyan government “to ensure that baseline protections are in place at all companies.” It added that employees have multiple ways to voice concerns and that it has “performed numerous external and internal evaluations and audits to ensure we are paying fair wages and providing a working environment that is dignified.” It stated that it welcomes third-party audits of its working conditions.