Behold the decade of mid tech!
That is what I want to say every time someone asks me with breathless anticipation, “What about AI?” I’m far from a Luddite. It is precisely because I use new technology that I know mid (average or mediocre) when I see it.
Academics are rarely good stand-ins for typical workers. But the mid-technology revolution is an exception. It has come for us first. Some of it has even come from us, genuinely exciting academic inventions and research science that could positively contribute to society. But what we’ve already seen in academia is that the use cases for artificial intelligence across every domain of work and life have started to get silly really fast.
Most of us aren’t using AI to save lives faster and better. We are using AI to make mediocre improvements, such as emailing more. Even the most enthusiastic papers about AI’s power to augment white-collar work have struggled to come up with something more exciting than “A brief that once took two days to write will now take two hours!”
Mid tech’s best innovation is a threat.
AI is one of many technologies that promise transformation through iteration rather than disruption. Consumer automation once promised seamless checkout experiences that empowered customers to bag their own groceries. It turns out that checkout automation is pretty mid — cashiers are still better at managing points of sale. AI-based facial recognition similarly promised a smoother, faster way to verify who you are at places like the airport. But the Transportation Security Administration’s adoption of the technology (complete with unresolved privacy concerns) hasn’t particularly revolutionized the airport experience or made security screening lines shorter. I’ll just say, it all feels pretty mid to me.
‘So-so’ technologies
Economists Daron Acemoglu and Pascual Restrepo call these kinds of technological fizzles “so-so” technologies. They change some jobs. They’re kind of nifty for a while. Eventually they become background noise or are flat-out annoying, say, when you’re bagging two weeks’ worth of your own groceries.
Artificial intelligence is supposedly more radical than automation. Tech billionaires promise us that workers who can’t or won’t use AI will be left behind. Politicians promise to make policy that unleashes the power of AI to do … something, though many of them aren’t exactly sure what. Consumers who fancy themselves early adopters get a lot of mileage out of AI’s predictive power, but they accept a lot of bugginess and poor performance to live in the future before everyone else.
The rest of us are using this technology for far more mundane purposes. AI spits out meal plans with the right amount of macros, tells us when our calendars are overscheduled and helps write emails that no one wants. That’s a midrevolution of midtasks.
Of course AI, if applied properly, can save lives. It has been useful for producing medical protocols and spotting patterns in radiology scans. But crucially, that kind of AI requires people who know how to use it. Speeding up interpretations of radiology scans helps only people who have a medical doctor who can act on them. More efficient analysis of experimental data increases productivity for experts who know how to use the AI analysis and, more important, how to verify its quality. AI’s most revolutionary potential is helping experts apply their expertise better and faster. But for that to work, there have to be experts.
That is the big danger of hyping mid tech. Hype isn’t held to account for being accurate, only for being compelling. Mark Cuban exemplified this in a recent post on the social media platform Bluesky. He imagined an AI-enabled world where a worker with “zero education” uses AI and a skilled worker doesn’t. The worker who gets on the AI train learns to ask the right questions, and the numbskull of a skilled worker does not. The former will often be, in Cuban’s analysis, the more productive employee.
The problem is that asking the right questions requires the opposite of having zero education. You can’t just learn how to craft a prompt for an AI chatbot without first having the experience, exposure and, yes, education to know what the heck you are doing. The reality — and the science — is clear that learning is a messy, nonlinear human development process that resists efficiency. AI cannot replace it.
AI is already promising that we won’t need institutions or expertise. It does not just speed up the process of writing a peer review of research; it also removes the requirement that one has read or understood the research it is reviewing. AI’s ultimate goal, according to boosters like Cuban, is to upskill workers — make them more productive — while delegitimizing degrees. Another way to put that is that AI wants workers who make decisions based on expertise without an institution that creates and certifies that expertise. Expertise without experts.
AI’s darker side
That tech fantasy is running on fumes. We all know it’s not going to work. But the fantasy compels risk-averse universities and excites financial speculators because it promises the power to control what learning does without paying the cost for how real learning happens. Tech has aimed its midrevolutions at higher education for decades, from TV learning to smartphone nudges. For now, AI as we know it is just like all of the ed-tech revolutions that have come across my desk and failed to revolutionize much. Most of them settle for what anyone with a lick of critical thinking could have said they were good for. They make modest augmentations to existing processes. Some of them create more work. Very few of them reduce busywork.
Mid-tech revolutions have another thing in common: They justify employing fewer people and ask those left behind to do more with less.
If you want to see the actual revolutionary use case for AI, don’t look to biological sciences or universities. Look at Elon Musk’s so-called Department of Government Efficiency, which has reportedly considered using AI to help it find waste. Whether workers and their work are wasteful is a subjective call that AI cannot make. But it can justify what a decision-maker wants to do. If Musk wants waste, AI can give him numbers to prove that waste exists.
This sort of mid tech would, in a perfect world, go the way of classroom TVs and massive open online courses. It would find its niche and mildly reshape the way white-collar workers work, and Americans would mostly forget about its promise to transform our lives.
But we now live in a world where political might makes right. DOGE’s monthslong infomercial for AI reveals the difference that power can make to a mid technology. It does not have to be transformative to change how we live and work. In the wrong hands, mid tech is an anti-labor hammer.
Tressie McMillan Cottom is a New York Times columnist.