Labor-Saving Technology That Creates More Work

ChatGPT is just the latest example of a perverse phenomenon.


In The Atlantic, professor and video game designer Ian Bogost explains why “ChatGPT Is About to Dump More Work on Everyone.” After some preliminary introduction to the software, he observes,

The machine-learning technology and others like it are creating a new burden for everyone. Now, in addition to everything else we have to do, we also have to make time for the labor of distinguishing between human and AI, and the bureaucracy that will be built around it.

Like so many “labor-saving” devices before it, it winds up creating more work.

If you are a student, parent, educator, or individual with internet access, you may have caught wind of the absolute panic that has erupted around ChatGPT. There are fears—It’s the end of education as we know it! It passed a Wharton MBA exam!—and retorts to those fears: We must defend against rampant cheating! If your class can be gamed by an AI, then it was badly designed in the first place!

An assumption underlies all these harangues, that education needs to “respond” to ChatGPT, to make room for and address it. At the start of this semester at Washington University in St. Louis, where I teach, our provost sent all faculty an email encouraging us to be aware of the technology and consider how to react to it. Like many institutions, ours also hosted a roundtable to discuss ChatGPT. In a matter of months, generative AI has sent secondary and postsecondary institutions scrambling to find a response—any response—to its threats or opportunities.

That work heaps atop an already overflowing pile of duties. Budgets cut, schoolteachers often crowdsource funds and materials for their classrooms. The coronavirus pandemic changed assumptions about attendance and engagement, making everyone renegotiate, sometimes weekly, where and when class will take place. Managing student anxiety and troubleshooting broken classroom technology is now a part of most teachers’ everyday work. That’s not to mention all the emails, and the training modules, and the self-service accounting tasks. And now comes ChatGPT, and ChatGPT’s flawed remedy.

You’ve likely seen quite a bit of this discussion already, as it has been all the rage in recent weeks. Still, you might reason, if you’re not a teacher or educational administrator, this really only affects you at the margins. Not so, argues Bogost.

The situation extends well beyond education. Almost a decade ago, I diagnosed a condition I named hyperemployment. Thanks to computer technology, most professionals now work a lot more than they once did. In part, that’s because email and groupware and laptops and smartphones have made taking work home much easier—you can work around the clock if nobody stops you. But also, technology has allowed, and even required, workers to take on tasks that might otherwise have been carried out by specialists as their full-time job. Software from SAP, Oracle, and Workday forces workers to do their own procurement and accounting. Data dashboards and services make office workers part-time business analysts. On social media, many people are now de facto marketers and PR agents for their division and themselves.

No matter what ChatGPT and other AI tools ultimately do, they will impose new regimes of labor and management atop the labor required to carry out the supposedly labor-saving effort. ChatGPT’s AI detector introduces yet another thing to do and to deal with.

This isn’t a novel observation, of course. While technology often makes jobs easier and faster, it tends to either shift work onto someone else or indirectly create additional work. Off the top of my head, here are some examples from my own experience:

Word processing software has made it much easier to create readable documents. I’m old enough to remember having to type reports and the like and either having to backspace and erase, x-out, Wite-Out, or otherwise correct typos. Or, in the case of those requiring footnotes, start over multiple times.

But this has simultaneously increased the demand that everything be “typed,” upped the standard for what an acceptable product looks like, and led to the elimination of a vast, skilled clerical workforce, transferring the load to senior workers who are much more highly paid. Executive-level managers might still have secretaries, but the rest of us—including the likes of college professors, Marine colonels, and corporate vice presidents—are now typists, layout designers, and data entry clerks.

Self-service gasoline pumps have turned us all into gas station attendants.

UPC scanners turned cashiers from semi-skilled to unskilled laborers and, eventually, all of us into unpaid cashiers.

And, while automatic clothes washers and dryers have been around as long as I can remember, they have increased the amount of laundry we do. People own a lot more personal clothing than they used to as a direct result, and we tend to wash clothes that aren’t actually dirty out of habit and laziness.

Readers will likely supply countless other examples.

Back to ChatGPT:

Is a student trying to cheat with AI? Better run the work through the AI-cheater check. Even educators who don’t want to use such a thing will be ensnared in its use: subject to debates about the ethics of sharing student work with OpenAI to train the model; forced to adopt procedures to address the matter as institutional practice, and to reconfigure lesson plans to address the “new normal”; obligated to read emails about those procedures to consider implementing them.

On the one hand, listing some of this stuff comes across as whiny. How hard is it to read a couple of emails? On the other, it makes a larger point about the sheer amount of unseen work that has crept into our lives as a function of technology. None of these tasks will make it onto the professor’s CV or annual appraisal. They don’t really fall into the traditional Teaching, Research, and Service categories on which the profession is assessed. But they have to be done.

And, again, it’s not just academia:

At other jobs, different but similar situations will arise. Maybe you outsourced some work to a contractor. Now you need to make sure it wasn’t AI-generated, in order to prevent fiscal waste, legal exposure, or online embarrassment. As cases like this appear, prepare for an all-hands meeting, and a series of email follow-ups, and maybe eventually a compulsory webinar and an assessment of your compliance with the new learning-management system, and on and on.

Bogost closes:

The ChatGPT detector offers the first whiff of another, equally important consequence of the AI future: its inevitable bureaucratization. Microsoft, which has invested billions of dollars in OpenAI, has declared its hope to integrate the technology into Office. That could help automate work, but it’s just as likely to create new demands for Office-suite integration, just as previous add-ons such as SharePoint and Teams did. Soon, maybe, human resources will require the completion of AI-differentiation reports before approving job postings. Procurement may adopt a new Workday plug-in to ensure vendor-work-product approvals are following AI best practices, a requirement you will now have to perform in addition to filling out your expense reports—not to mention your actual job. Your Salesforce dashboard may offer your organization the option to add a required AI-probability assessment before a lead is qualified. Your kids’ school may send a “helpful” guide to policing your children’s work at home for authenticity, because “if AI deception is a problem, all of us have to be part of the solution.”

Maybe AI will help you work. But more likely, you’ll be working for AI.

That’s assuming the AI doesn’t simply replace you.

FILED UNDER: Science & Technology
About James Joyner
James Joyner is Professor and Department Head of Security Studies at Marine Corps University's Command and Staff College. He's a former Army officer and Desert Storm veteran. Views expressed here are his own. Follow James on Twitter @DrJJoyner.

Comments

  1. gVOR08 says:

    I’m old enough to remember having to type reports and the like and either having to backspace and erase, x-out, Wite-Out, or otherwise correct typos. Or, in the case of those requiring footnotes, start over multiple times.

    I wrote a lot of test procedures back in the day. It wasn’t cut and paste, it was cut and scotch tape. Hand write a draft, get it typed, proofread it and get it retyped, then put it through review, cut out what you could use, hand write the new material, get it typed, proofed, retyped, then cut and tape it all together, rinse, and repeat. Fortuitously, we were getting our first copying machines, so you didn’t have to pass around the single, in some places six sheets thick, draft. (Yes, I’m that old. My first programming class used punch cards. I still have a box of pencil drafting equipment. And get your ass off my lawn.)

  2. Stormy Dragon says:

    we also have to make time for the labor of distinguishing between human and AI, and the bureaucracy that will be built around it

    Outside of a few industries (academia, publishing, etc.), are most businesses actually going to care if the monthly status report was generated by a human or an AI as long as it fulfills its purpose?

  3. Kathy says:

    I recall worries along these lines about students cutting and pasting Wikipedia articles as homework.

    Back in the day, I did ok with a typewriter. The work took far longer, though, as I never learned proper typing (amazing how few have learned it even to this day, when keyboards are far more ubiquitous). The word processor software even in the old, old Apple ][e was a huge improvement.

    I think the last time I used a typewriter was in 2008-9, when we had to fill out a form and it couldn’t be done by hand.

  4. JKB says:

    If your class can be gamed by an AI, then it was badly designed in the first place!

    Sorry to inform you, but bad design or not, your class has long been gamed by the students. The incentive of schooling is to get good grades, not real learning. (Paul Graham, ‘The Lesson to Unlearn’) In fact, going beyond what the professor wants you to repeat is a good way to get a bad grade, even barring teacher malevolence, since you’ll likely give too much nuance and not the “textbook” answer.

    With AI out in the open now, schooling, especially college, is going to have to become more about learning than grades. And real learning is hard and time-consuming to impose on students against their will.

  5. Kathy says:

    A bit off-topic, but I recall hearing some pundit in the 90s advise people to buy stock in paper manufacturers whenever they heard of the next revolutionary paperless office software.

    I thought it pure cynicism until we began to do online proposals.

    Old in-person style: you copied large bunches of documents, as well as printed others on letterhead paper, and presented anywhere from two to five binders full of paper.

    New online style: you print the letterhead stuff anyway, because you sign it and scan it for upload. You scan rather than print documents. You upload the whole thing. Then the government agency downloads and prints all your uploads.

    Net result: slightly more paper is used, as the letterhead stuff gets printed twice.

    Granted, the objective of the online proposals wasn’t to reduce or eliminate paper use. It did very sharply reduce the need to attend question meetings, or to be somewhere far away in person very early in the morning with a large box of binders in tow. We’d actually like to see more states adopt an online model. Few have.

  6. Just nutha ignint cracker says:

    @Stormy Dragon:

    are most businesses actually going to care if the monthly status report was generated by a human or an AI as long as it fulfills its purpose?

    Read several articles about this topic while I was in Korea (which makes it roughly a decade or so ago). Turns out the answer is “no.”

  7. Just nutha ignint cracker says:

    @JKB: “The incentive of schooling is to get good grades, not real learning. (Paul Graham, ‘The Lesson to Unlearn’)”

    As much as I hate to disillusion you (heh heh), this incentive dates back to my days in junior high school (which has been called middle school for about 45-50 years now, so how old does that make me?). It has to do with GPA being the measure of “real” learning. Yeah, it’s been and continues to be a problem, but it’s not new by any measure. But you DO get kudos for using a contemporary source this time.*

    *Which prompts me to wonder whether you pick your sources according to the relevance of the information or if you pick them for closest match to your argument.

  8. Just nutha ignint cracker says:

    @Just nutha ignint cracker: Nah. I don’t actually wonder about that.

  9. Kurtz says:

    @JKB:

    In fact, going beyond what the professor wants you to repeat is a good way to get a bad grade, even barring teacher malevolence, since you’ll likely give too much nuance and not the “textbook” answer.

    In my experience, this is not the case for most teachers, secondary or post-secondary.

    Of course, this could turn on what you mean by beyond. I mean, if you take a physics class and try to argue for the Electric Universe, you will likely get a poor grade, unless you manage to prove General Relativity wrong or something. Good luck with that.

    If you go into a Biology class and try to argue that the Earth is 6000 years old and evolution is a lie planted by Satan, then I hope you get an F–not out of support for tyranny, but because the science of Biology is not the forum for a view that is directly contradicted by observable evidence. If one wants to hold on to that belief, there are departments for that, but they fall outside of science.

    I’ve had very few teachers (if any) that expected verbatim repetition. If a student takes Introduction to IR and is asked to describe the tenets of realism, there is nothing wrong with evaluating the student on whether they correctly describe it. After all, if one is to criticize that paradigm, one should probably know the basic tenets, no? But even if a student was asked to write an evaluation of the weaknesses of Realism, I suspect few teachers would expect a regurgitation of anything in particular.

    Is this true across the board? I have no idea. Neither do you. But my experience is the opposite of your portrayal.

    Correct me if I’m wrong, but I’m not sure that Graham is arguing your second claim about what teachers expect of their students. I would agree his criticism of grades is probably on the mark, but using it to justify your particular view of the education system is something else entirely.

  10. @JKB:

    In fact, going beyond what the professor wants you to repeat is a good way to get a bad grade, even barring teacher malevolence, since you’ll likely give too much nuance and not the “textbook” answer.

    So, if I ask a student to answer a question about the role of the legislative branch and they instead give me a good answer on the role of the executive branch, what would you want me to do with the grade?

    (But, sure, in any given class any given student is more interested in the grade as a means to an end than they are in learning. Is this supposed to be some kind of revelation?)

  11. @Kurtz:

    But even if a student was asked to write an evaluation of the weaknesses of Realism, I suspect few teachers would expect a regurgitation of anything in particular.

    Indeed. Having asked such questions I would be looking for a) an understanding of realism (which could be demonstrated in any number of ways) and b) a logical critique (which could be done in any number of ways).