ChatGPT Is About to Dump More Work on Everyone

Artificial intelligence could spare you some effort. Even if it does, it will create a lot more work in the process.

[Illustration: a hand holding marionette strings; the hand is filled with binary code. Tyler Comrie / The Atlantic]

Have you been worried that ChatGPT, the AI language generator, could be used maliciously—to cheat on schoolwork or broadcast disinformation? You’re in luck, sort of: OpenAI, the company that made ChatGPT, has introduced a new tool that tries to determine the likelihood that a chunk of text you provide was AI-generated.

I say “sort of” because the new software faces the same limitations as ChatGPT itself: It might spread disinformation about the potential for disinformation. As OpenAI explains, the tool will likely yield a lot of false positives and negatives, sometimes with great confidence. In one example, given the first lines of the Book of Genesis, the software concluded that it was likely to be AI-generated. God, the first AI.

On the one hand, OpenAI appears to be adopting a classic mode of technological solutionism: creating a problem, and then selling the solution to the problem it created. But on the other hand, it might not even matter whether either ChatGPT or its antidote actually "works," whatever that means (in addition to its limited accuracy, the detector handles only English text and needs at least 1,000 characters to work with). This machine-learning technology and others like it are creating a new burden for everyone. Now, in addition to everything else we have to do, we also have to make time for the labor of distinguishing human writing from AI writing, and for the bureaucracy that will be built around that labor.


If you are a student, parent, educator, or individual with internet access, you may have caught wind of the absolute panic that has erupted around ChatGPT. There are fears (It's the end of education as we know it! It passed a Wharton MBA exam! We must defend against rampant cheating!) and retorts to those fears: If your class can be gamed by an AI, then it was badly designed in the first place!

An assumption underlies all these harangues: that education needs to "respond" to ChatGPT, to make room for it and address it. At the start of this semester at Washington University in St. Louis, where I teach, our provost sent all faculty an email encouraging us to be aware of the technology and to consider how to react to it. Like many institutions, ours also hosted a roundtable to discuss ChatGPT. In a matter of months, generative AI has sent secondary and postsecondary institutions scrambling to find a response, any response, to its threats or opportunities.

That work heaps atop an already overflowing pile of duties. Their budgets cut, schoolteachers often crowdsource funds and materials for their classrooms. The coronavirus pandemic changed assumptions about attendance and engagement, making everyone renegotiate, sometimes weekly, where and when class will take place. Managing student anxiety and troubleshooting broken classroom technology are now part of most teachers' everyday work. That's not to mention all the emails, and the training modules, and the self-service accounting tasks. And now comes ChatGPT, and ChatGPT's flawed remedy.

The situation extends well beyond education. Almost a decade ago, I diagnosed a condition I named hyperemployment. Thanks to computer technology, most professionals now work a lot more than they once did. In part, that's because email and groupware and laptops and smartphones have made taking work home much easier: You can work around the clock if nobody stops you. But technology has also allowed, and even required, workers to take on tasks that might otherwise have been carried out by specialists as their full-time jobs. Software from SAP, Oracle, and Workday forces workers to do their own procurement and accounting. Data dashboards and services make office workers part-time business analysts. On social media, many people are now de facto marketers and PR agents for their divisions and themselves.

No matter what ChatGPT and other AI tools ultimately do, they will impose new regimes of labor and management atop the work required to put the supposedly labor-saving tools to use. OpenAI's AI detector introduces yet another thing to do and to deal with.

Is a student trying to cheat with AI? Better run the work through the AI-cheater check. Even educators who don't want to use such a thing will be ensnared in its use: subject to debates about the ethics of sharing student work with OpenAI to train the model; forced to adopt procedures that make handling the matter institutional practice, and to reconfigure lesson plans for the "new normal"; obligated to read emails about those procedures and to consider implementing them.

At other jobs, different but similar situations will arise. Maybe you outsourced some work to a contractor. Now you need to make sure it wasn't AI-generated, to prevent fiscal waste, legal exposure, or online embarrassment. As such cases appear, prepare for an all-hands meeting, and a series of email follow-ups, and maybe eventually a compulsory webinar and a compliance assessment in the new learning-management system, and on and on.


New technologies meant to free people from the burden of work have added new types of work to do instead. Home appliances such as the washing machine freed women to work outside the home, which in turn left less time for housework (a job that still fell largely to women) even as the standards for home perfection rose. Photocopiers and printers reduce the burden on the typist but create the need to self-prepare, collate, and distribute the reports in addition to writing them. The automated grocery checkout assigns the job of cashier to the shopper. Email makes it possible to communicate rapidly and directly with collaborators, but then your whole day is spent processing emails, a burden that renews itself the next day. Zoom makes it possible to meet anywhere, but in doing so it begets even more meetings.

ChatGPT has held the world’s attention, a harbinger of—well, something, but maybe something big, and weird, and new. That response has inspired delight, anxiety, fear, and dread, but no matter the emotion, it has focused on the potential uses of the technology, whether for good or ill.

The ChatGPT detector offers the first whiff of another, equally important consequence of the AI future: its inevitable bureaucratization. Microsoft, which has invested billions of dollars in OpenAI, has declared its hope to integrate the technology into Office. That could help automate work, but it's just as likely to create new demands for Office-suite integration, just as previous add-ons such as SharePoint and Teams did. Soon, maybe, human resources will require the completion of AI-differentiation reports before approving job postings. Procurement may adopt a new Workday plug-in to ensure that vendor-work-product approvals follow AI best practices, a task you will now have to perform in addition to filling out your expense reports, not to mention doing your actual job. Your Salesforce dashboard may offer your organization the option to add a required AI-probability assessment before a lead is qualified. Your kids' school may send a "helpful" guide to policing your children's work at home for authenticity, because "if AI deception is a problem, all of us have to be part of the solution."

Maybe AI will help you work. But more likely, you’ll be working for AI.

Ian Bogost is a contributing writer at The Atlantic.