Have you ever read an article or a piece of work in your field and thought that it isn’t quite right? Yes, it looks like something someone worked on, but it isn’t quite deep enough to actually be useful for whatever you’re working on.
Say hello to AI workslop.
What is it, and is it here to stay? Keep reading to see what top publications and experts have to say about it.
Allison Morrow analyzed the shortcomings of AI for CNN and came away with workslop as a real problem, not for AI, but for workers.
Now, exactly what is AI workslop? It’s the buzzy name for deliverables that might look good at first glance, might be well-formatted and hit every necessary point, and yet aren’t deep enough to cover the work that needs to be done.
“While some people are using AI tools to ‘polish good work,’ others are using them to ‘create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand.’”
The problem is that this is really prevalent, with 40% of workers saying they have received workslop in the past month. And here’s the kicker: it is estimated to lose companies more money than it generates. By Harvard Business Review’s estimates, for a company with 10,000 employees, workslop costs $9 million a year in fixing AI’s mistakes.
Not only that, but an MIT study has OpenAI in hot water: it found that 95% of AI pilots failed in the first stages, yet that didn’t deter investors from pouring more money in.
It’s not only CNN that says AI is in trouble; Gene Marks from the Guardian agrees that workslop is a big problem, with employees wasting time at work. The problem, obviously, is that this is exactly the scenario AI was supposed to prevent.
Studies keep piling up, with KPMG finding that only 8.5% of people surveyed trust AI search results. All the while, McKinsey found that 80% of companies that have invested in AI have seen no significant bottom-line impact, with 42% of projects abandoned in the early stages.
Enter workslop into the picture. This is part of the reason people are worried about the AI bubble bursting: AI hasn’t brought in the promised profits, and yet companies keep investing in it, and now it’s costing them.
“But in the workplace, the buck always stops with the boss. The responsibility for AI’s ‘workslop’ lies fully at the feet of the employer.”
The problem with AI seems to be that it is not being properly utilized. Many companies are rolling out AI projects or demanding that employees use AI to improve productivity, except that employees haven’t been adequately trained on how to use it. Not only that, but those who do use it have to waste time cleaning up the slop it produces. This suggests AI might be a failed project for big tech, one that many companies bought into with no clear goal in sight other than a nebulous gain in productivity.
Now, according to a CNBC piece written by Jennifer Lui based on a BetterUp study, part of the problem is knowing what AI workslop actually is, how to detect it upon delivery, and what to do about it.
“It created a situation where I had to decide whether I would rewrite it myself, make him rewrite it, or just call it good enough.”
So, the problem seems to be, as with AI art, that workslop is undetectable to the untrained eye. Think not of the bug-eyed cats, but of deepfakes, where only subtle clues reveal that what you’re watching is not real. In professional services and tech work, how can you tell? Purple prose, or long answers that could have been a simple bullet point yet don’t actually say anything.
The problem is that this erodes trust and productivity within teams, with workers finding themselves unsure of what to do with it. One retail director recounted deciding to address workslop, losing time to setting up meetings with other supervisors to discuss it, and then having to redo the work.
Not only that, but coworkers stop trusting one another, which adds an emotional cost and leads them to rethink their colleagues’ abilities.
This ultimately leads to a fork in the road: companies want workers to use AI or be replaced, yet if people do use it, they’ll be judged for it. The best way forward is clear: companies should train workers on how to use AI properly, including cleaning up any slop before handing it over as finished work.
AI has been under fire for the last few months. Once thought to be an invaluable tool, it has turned out to be much harder for companies to actually make a profit on. Still, they’re sticking to their guns, even as it has begun costing them money rather than saving it. The best way forward should be to teach workers how to use it well, and thus avoid the slop and rework that end up costing time and money.