Essays explore the hellscape of freelance AI model training [1]
By Jennifer Sandlin
Date: 2024-05-12
Ever wondered what it's like to train AI models? Sounds cutting-edge and cool, maybe? Seems like interesting work where you might pick up some useful new skills, right? Well, according to some people who have recently done this work for one of the biggest AI companies in the world, training AI models is chaotic and inconsistent at best. And, according to Cathy Glenn in a new piece about her work training models at Outlier, which is part of Scale AI, AI model trainers are subjected to "predatory labor practices" that "create authoritarian cultural conditions for workers, not just abroad, but also here in the U.S."
In that piece, Glenn describes her very recent work at Outlier, which sounds frustrating, to say the least. Here are some excerpts from her eye-opening essay (which is absolutely worth reading in full):
In the US and abroad, complaints about Scale-AI's workplace practices abound. The lack of communication channels for workers, inaccurate information offered by management, lack of communication from management, withheld or missing compensation without cause or recourse, lack of consistent performance standards, lack of consistent work quality standards, and the constant threat of losing access to the Outlier or Remotasks platform without justification or recourse are norms for workers in these workplace cultures . . .

When I started at Outlier at the end of January, there were approximately 33K members in the Slack channels. Currently, there are approximately 173K members, a 424% worker increase over just three months. The number of workers at the bottom far outnumbers team leads, so many new workers are left without a team lead, a project or group, or any support. Scale's industry employment numbers, however, appear impressive from the outside . . .

Over two months, I was moved 18 times to different projects . . .

Extensive training and four evaluation tasks were necessary for me to be allowed to work on the Ostrich project for Open-AI. Before starting my first two tasks, the only training was reading the convoluted instructions. Everyone working toward the project was promised feedback on their first and second tasks so that we could adjust and improve our performance on the following two tasks. No evaluation criteria were offered, and the promised reviews were not accessible.

After the Ostrich team admitted to losing the first two tasks from workers who completed them – each task takes up to 6 hours to complete – anxiety, fear, frustration, and chaos ensued on the Slack channels. No reviews of work – or rushed, hostile reviews that made no sense – were the norm for hundreds working toward admission to OpenAI's Ostrich. Without apology for the lost work, Jad Faraj, Scale's new Strategic Projects Lead, unilaterally decided to "adjust the parameters" by throwing away the first two tasks and considering only the 3rd and 4th, which were undertaken without training beyond reading instructions or feedback on previous work. Not only did this choice create fear, confusion, and anxiety, but devalued and demeaned workers whose work was lost. Mr. Faraj is directly responsible for modeling the authoritarian practices that create chaos in Outlier-AI's workplace culture.

Currently, lower hourly earners (Tier 1 and 2, $15-25 an hour) have been moved to all projects except Ostrich. Meanwhile, hundreds of $40 an hour (Tier 3 of 3) contributors – professional experts in their fields – are ostensibly moving toward Ostrich project admission and wait without work in virtual lines behind a backlog of lost tasks, missing or specious reviews from opaque sources, and inaccurate information offered to keep them hoping and waiting.

(UPDATE: On May 10th, Scale's Outlier cut pay to all T3 experts on its platform, from $40 to $25 per hour without justification and without recourse.)
Last summer, Josh Dzieza wrote a great piece for The Verge highlighting his own experiences working for the same company, alongside those of AI trainers he interviewed in Kenya. Here are some excerpts from his piece (the whole thing is definitely also worth a read):
According to workers I spoke with and job listings, U.S.-based Remotasks annotators generally earn between $10 and $25 per hour, though some subject-matter experts can make more. By the beginning of this year, pay for the Kenyan annotators I spoke with had dropped to between $1 and $3 per hour. That is, when they were making any money at all.

The most common complaint about Remotasks work is its variability; it's steady enough to be a full-time job for long stretches but too unpredictable to rely on. Annotators spend hours reading instructions and completing unpaid trainings only to do a dozen tasks and then have the project end. There might be nothing new for days, then, without warning, a totally different task appears and could last anywhere from a few hours to weeks. Any task could be their last, and they never know when the next one will come.

This boom-and-bust cycle results from the cadence of AI development, according to engineers and data vendors. Training a large model requires an enormous amount of annotation followed by more iterative updates, and engineers want it all as fast as possible so they can hit their target launch date. There may be monthslong demand for thousands of annotators, then for only a few hundred, then for a dozen specialists of a certain type, and then thousands again. "The question is, Who bears the cost for these fluctuations?" said Jindal of Partnership on AI. "Because right now, it's the workers."
Finally, earlier this week a Reddit user who has also been tasking on Outlier wrote up their experiences as a sort of overview/warning for people who are new or are considering signing up as freelancers. The full thread is also worth a read, as it outlines some truly astounding(ly bad) company practices. Here's an excerpt:
There are hundreds and in some cases thousands of people across the world being brought in at any given time. The first Slack group I was put in had 965 people. They hire in mass. You aren't special, even if they hired you at the Tier 3 $40/hr level. I was brought in at that level with, if I'm right, about 300 other people on the same day. Because of that volume, your individual questions in Slack will rarely be answered until you happen to get put on a team with a Team Leader (TL). I've seen people be put in the general onboarding Slack channel and plead and beg for someone to respond to them for sometimes weeks at a time. I'm impressed they kept trying. Fact is, the volume is such that people fall through the cracks . . .

You will be assigned to and removed from projects without warning. You will be placed in and pulled out of Slack channels without warning. You will "train" (sometimes without being paid) for projects that you will never have a chance to work in. (Normally this happens because you'll be placed on a different project just after or during the period you're reading the training materials.) Training materials and procedures will change without warning or notification . . .

You will be told you will receive feedback on tasks or training tasks, but it never happens. (Sometimes you will receive feedback that makes little sense or seems contradictory to the training. This is because the "taskers" are often "reviewers" as well, and the quality of the reviews and feedback depends on the person who happens to be reviewing your work. Sometimes they will be reviewing you on outdated versions or understanding of the training. Sometimes this means that you will have your pay Tier lowered or even be let go unfairly. There is, to my knowledge, no reliable way to appeal this. Some people have stories about doing it, but when you try to repeat their steps, the system or platform may have changed.)

"Team Leaders" or other supervisors will disappear, be furloughed, be turned into regular taskers like you, without notice or explanation. (They also, as a rule, don't know what's going on. They have usually only been there for a month or two before you and are obviously working on instructions given them immediately prior to passing along information, so they're not really "part of the company," either.)
Yikes! For all the talk of AI's futuristic, utopian promise, the experiences recounted above sure give off decidedly oppressive and dystopian vibes.
Previously: Photobucket archives may sell for billions to train AI
---
[1] Url:
https://boingboing.net/2024/05/12/essays-explore-the-hellscape-of-freelance-ai-model-training.html
Published and (C) by BoingBoing. Content licensed under Creative Commons BY-NC-SA 3.0.