This entire article by Kevin Drum is worth reading, at least for people who don’t pay much attention to AI threat (if you do pay attention, it will be old news). But I’d especially highlight one quote:
#3: Okay, maybe we will get full AI. But it only means that robots will act intelligent, not that they’ll really be intelligent. This is just a tedious philosophical debating point. For the purposes of employment, we don’t really care if a smart computer has a soul—or if it can feel love and pain and loyalty. We only care if it can act like a human being well enough to do anything we can do. When that day comes, we’ll all be out of jobs even if the computers taking our places aren’t “really” intelligent.
Drum is writing about technological unemployment, the threat that robots will one day take our jobs. He devotes a portion of his essay to rebutting common objections to this scenario, and this particular objection really needs a good strong rebuttin’. Philosophical chestnuts like this are a constant source of distraction.
So far as immediate threats are concerned–putting aside, that is, the truly apocalyptic sci-fi scenarios–it doesn’t matter whether machines ever become recognizably human. It doesn’t even matter whether they become approximately human. All they have to do is get really good at doing a few human things.
I think of the objection above as the “but can it enjoy a sandwich?” argument. Sure, AI can play Go. Sure, AI can write news articles. Sure, AI can outfox military strategists. Sure, AI can drive cars. Sure, AI can pick stocks. Sure, AI can write music. Sure, AI can parse and respond to speech. But can it enjoy a sandwich?
This isn’t necessarily a trivial point. It would be really, really difficult to build a machine that could demonstrate, to the satisfaction of informed human observers, that it had truly eaten and enjoyed a sandwich. Enjoyment is the ultimate embodied experience, blending body chemistry and subjective self-awareness. Simple digestion isn’t enough. Nor is it sufficient to have a program spit out some serviceable approximation of things humans say when they enjoy sandwiches. To replicate this ordinary event would be a daunting challenge. And so long as machines don’t enjoy sandwiches, it will be true that organic beings have some claim to distinction.
So what? That doesn’t have much bearing on the questions AI researchers are worried about. And to the extent it does bear on those questions, it offers scant comfort. It throws us back on quasi-religious conceptions of intrinsic worth. Try showing up to a job interview and telling the hiring committee, “Well, no, I don’t have any relevant skills–but what about the irreducible miracle of simply being me?”*
AI doesn’t need to replicate the human mind to do most things humans do, and do them better. Most work in an industrialized market economy is already mechanical and repetitive. If it weren’t, workers wouldn’t be able to barter services for steady income. Machines don’t have to think human thoughts to do our jobs. Quite the opposite. They have to think the kinds of thoughts that most of us spend all day trying to think, while our pesky human minds get in the way.
This is true even for creative work. It’s true in part because a lot of so-called creative work isn’t especially creative at all, as with the intricate corporate systems that churn out our television shows and pop songs. But it’s also true because any useful kind of creativity is by nature programmable. Detecting patterns, recombining familiar elements to generate new patterns, producing–i.e. “brainstorming”–huge sets of combinations and using applied rubrics to weed out weaker entries, running elaborate simulations to model hypothetical events–or as it’s sometimes called, “using the imagination”–these are all things computers are rather good at. I suspect any confusion on this point comes from conceptual slippage between different notions of creativity. There’s self-help creativity: a vague, fuzzy handle for whatever makes people feel special. And then there’s real creativity: a grueling regimen of practice, study, experimentation, and revision that produces new art and ideas.
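If that recipe for “real creativity” sounds abstract, it’s simple enough to sketch in a few lines of code. Everything below is invented for illustration (the word lists, the scoring rule): generate a huge set of recombinations, then apply a rubric to weed out the weaker entries.

```python
import itertools

# Toy "brainstorming" engine: recombine familiar elements into new
# candidates, then score them with a rubric and keep the best few.
# The vocabulary and the scoring rule are made up for illustration.

ADJECTIVES = ["melancholy", "neon", "rustic", "glitchy"]
NOUNS = ["ballad", "skyline", "harvest", "algorithm"]

def brainstorm():
    """Produce every adjective-noun pairing (the 'huge set' of combinations)."""
    return [f"{a} {n}" for a, n in itertools.product(ADJECTIVES, NOUNS)]

def rubric(phrase):
    """A crude stand-in for taste: here, just favor longer, more liquid phrases."""
    return len(phrase) + 5 * phrase.count("l")

def create(top_n=3):
    """Brainstorm, then weed: sort all candidates by the rubric, keep the top few."""
    return sorted(brainstorm(), key=rubric, reverse=True)[:top_n]

print(create())
```

The point isn’t that this snippet is creative; it’s that the procedure it follows (generate, score, cull) is the same skeleton underneath the grueling human version.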
Many people are skeptical–rightly, in my view–that we’ll ever have machines that resemble human beings in more than a few respects. But that’s partly because such machines would be impractical, uneconomical, and pointlessly redundant. Why take the trouble to create a single integrated program endowed with all the various aptitudes and oddities of the human mind when you can build a hundred different machines, each as powerful as a human brain, each tailored for a particular task? Why bother teaching a machine to feel human hunger, human lust, human fatigue, human bowel discomfort–all our many bodily sensations–when you can edit out such encumbrances? Why invest a truck-driving module with religious longings, a combat drone with cowardly scruples, a stock-picking program with suicidal tendencies? Why go to all the trouble of recreating humans when by most calculations we already have too many of them?
So, yes, Drum’s raising important issues. Still, I have some objections to the way he develops his argument.
First, I think Drum is way too optimistic–or pessimistic, as the case may be–about implementation. Even if we have superior robot surgeons by 2053, that doesn’t mean human surgeons will be out of work by 2060. And the same goes for truck drivers, novelists, CEOs, doctors, and everyone else he mentions in his article.
I have little doubt a computer could do my own job right now, for instance. A significant portion of my job actually consists of trying to arrange for a computer to replace me, since so much of what I do is repetitive to a mind-numbing degree. I do clerical work for a library, for Christ’s sake. Most people assume my job is already outdated.
So what’s the holdup? What’s taking so long? Why am I still in the office every day, dithering around on a desktop?
One answer: bureaucracy. It ought to be possible, using current technology, to have a machine do everything I do. But someone would have to requisition, design, and pay for that machine, a huge up-front hassle. More importantly, someone would have to pull off the legal and administrative wizardry needed to embed that machine in the institutional architecture of my workplace, which would involve a lot more than writing a few scripts. It would mean bending the ear of influential people at six or seven different organizations, convincing them to change the way they run things. It would mean retraining other staff members. It would mean having someone–perhaps another machine–troubleshoot a host of new problems.
Another answer: physics. The cognitive aspects of my job are simple. The interpersonal aspects are marginally more complex. But I also do a lot of dumb stuff like pushing carts around, putting things on shelves, opening boxes, loading staplers, fishing paperclips out of cracks, going to meetings, dealing with people who don’t like dealing with computers, and so on. I know, I know: big deal. But this kind of stuff gets overlooked precisely because it’s so ubiquitous. And it’s much harder to build robots for these kinds of varied daily tasks than to extend and upgrade software.
There’s no reason all this can’t be automated. It’s the essence of unskilled labor. But plugging these innumerable little automation gaps becomes increasingly expensive and unwieldy. And most jobs are full of this stuff.
None of this is fatal to Drum’s argument. It just means technological unemployment will be a slow disaster instead of a fast one. Even when machines can do certain jobs, those jobs won’t suddenly disappear. They’ll slowly dwindle away, becoming less common and more soul-crushing, as large forces of workers give way to skeleton crews of reps and troubleshooters–human handmaidens to a suite of applications.
Second, I think Drum is too easy on the Industrial Revolution, which caused a lot more disruption–and over a longer period of time–than he lets on. Drum mentions the early Luddites, who smashed mechanical looms to protest weaving jobs they’d lost. But he makes it sound as if this was a short-term hiccup:
The Industrial Revolution was all about mechanical power: Trains were more powerful than horses, and mechanical looms were more efficient than human muscle. At first, this did put people out of work: Those loom-smashing weavers in Yorkshire—the original Luddites—really did lose their livelihoods. This caused massive social upheaval for decades until the entire economy adapted to the machine age. When that finally happened, there were as many jobs tending the new machines as there used to be doing manual labor. The eventual result was a huge increase in productivity: A single person could churn out a lot more cloth than she could before. In the end, not only were as many people still employed, but they were employed at jobs tending machines that produced vastly more wealth than anyone had thought possible 100 years before. Once labor unions began demanding a piece of this pie, everyone benefited.
To be fair, Drum’s writing this way out of deference to his critics, who pooh-pooh AI Cassandras by citing the Industrial Revolution as a counterexample. What’s so scary about automation? It worked out fine last time, right?
To which Drum responds, Last time, yes. But this will be different.
I see his point. But this blithe historical summary concedes too much to his critics. The Industrial Revolution didn’t just put weavers and buggy-whip manufacturers out of business. It devastated small communities, displaced huge rural populations, replaced cottage craftsmanship with hired-hand wagework, ramped up environmental destruction, undermined political systems, forced workers to adapt themselves to dehumanizing routines, provoked a backlash in the form of Communism from the left and nativism from the right, and contributed to all the discontents and upsets of modernity, many of which we’re still grappling with today.
If the upheavals of the first Industrial Revolution lasted centuries, destabilized the world, and plunged humanity into a lingering existential crisis, and the second Industrial Revolution will be even bigger–hoo, boy.
That’s not to downplay the benefits of automation, by the way. But we’re still struggling to distribute those benefits in a fair and sustainable way. We haven’t come near to solving the problems of the first revolution, and now we’re getting hit with a bigger one.
My third objection is that I think Drum underestimates how creative people can be about attaching value to, well, bullshit. He acknowledges that this is true for material goods:
Intelligent robots will be able to manufacture material goods and services cheaply, but there will still be scarcity. No matter how many robots you have, there’s only so much beachfront property in Southern California. There are only so many original Rembrandts. There are only so many penthouse suites.
Yes, sure. But I have screens in my house right now that can simulate the view from a luxury location. I have cheap transportation that can take me to the beach. I have access to as many Rembrandt reproductions as I want, along with countless other images.
If a Rembrandt can still be considered scarce when the pleasure of seeing a Rembrandt isn’t scarce at all, why shouldn’t the same be true for people?
This already applies to huge tracts of our economy, where money, jobs, and whole careers are conjured out of sentimental, wishy-washy concepts like celebrity and glamour and authenticity. Sure, you can hear Beyonce’s music for free, but have you seen her in concert? Have you met the key influencers who have what it takes to help you become a key influencer? Have you talked to the helpers and coordinators and assistants who control access to the influencers?
Postmodern professors, anyone? Modern artists? Models? Gatekeepers? Rent-seekers? The flocks of factotums that staff our culture industries? The world’s many flocks of flacks, quacks, and hacks? Drum would probably say that to the extent these people offer anything valuable, they’ll eventually be replaced by machines. But that implies that these jobs have clear, measurable outputs–or, to put it bluntly, that these people are genuinely useful.
Right now, huge chunks of the economy work like pyramid schemes, with hordes of hopeful people borrowing and wasting money so they can gamble for a chance to become rich superstars. And most middle-to-upper-class careers are shaped to some extent by vague advantages: contacts, signaling, status, reputation. These are very different from personal traits like charisma and beauty, which can be manufactured (and already are). The essence of reputation is that it floats free of merit. You might get a reputation for being a great musician, but you can lose that reputation, through some unfair fillip of fate, without actually losing your skills.
All the money that feeds this game of aspirer’s poker can be traced back to older parts of the economy, the parts that make stuff and service stuff and grow stuff. That’s dangerous for everyone, because the system is fueled with infusions of daily wages. When the supply dips too low, it has to be topped off with resource surpluses and complex credit arrangements and other volatile supplements. But if the revolution Drum foresees comes about, and wage-work becomes a trivial part of the economy …
Well, why can’t everyone have a bullshit job? Sure, machines will do all the building and designing and growing and transporting, will even handle most forms of service and entertainment, but there will be a parallel human economy in which value is indistinguishable from status. Suzie will make handcrafted whatever which is incredibly rare because it is, by definition, only made by Suzie. Rajit and Bob and Sunil will be Suzie’s assistants, who get paid for having lunch with other movers and shakers. Kwame and Lee will be waiters at the restaurant where power lunches take place, adding their human je ne sais quoi to a fleet of robot waiters. The restaurant will be owned by Julie, who uses her ineffable taste and discernment to select menu options from the most fashionable AI chefs. Those chefs will be a subject of obsessive study for people like Bohi and Mandy and Tim, who also comment on and critique the words of AI food critics, explaining how these critics, despite their popularity, fail to comprehend the finer distinctions of the human palate. And yadda yadda yadda, and yakkety-yak-yak, through countless groups and social classes, all the way down to the equivalent of people who pass their time critiquing someone else’s comments on a let’s-play video. All of this will be ludic and narcissistic and otiose, to be sure. But this is already how thousands of people–including Kevin Drum himself–spend their time. So far as I can tell, they love it.**
I don’t think Drum’s a hypocrite; he would probably argue that he himself, as a pundit, is eminently replaceable. And we’ll soon reach a point at which software can perform all the superficial functions of his job: making predictions, composing clear sentences, interpreting data. But is that what makes punditry profitable? Don’t make me laugh.
A pundit’s value can’t be meaningfully disentangled from his authenticity: the idea that there’s some human guy, made up of the same juicy stuff as the rest of us, who’s thinking and feeling various loony or perspicacious things. Heck, I could probably write a chatbot this weekend that would tell me all the things I like to hear from pundits. “That Trump, he be very bad human.” “Democrats guilty of big epic tone-fail.” “More slime make better movie.” But you know what? It wouldn’t be the same.
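And the weekend chatbot really would be about this trivial. A minimal sketch, with every “take” invented for the joke:

```python
import random

# A bare-bones pundit bot: a fixed menu of hot takes, dispensed at random.
# The takes are the ones quoted above; the names are made up.

TAKES = [
    "That Trump, he be very bad human.",
    "Democrats guilty of big epic tone-fail.",
    "More slime make better movie.",
]

def punditry(prompt):
    """Ignore the question entirely and serve up a take."""
    return random.choice(TAKES)

print(punditry("What does this election mean for America?"))
```

Which only proves the point: the software is easy. What it can’t supply is the juicy human guy behind the takes.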
This won’t change when robots get better at providing services, however “service” is defined. It will only change when people place so little value on their own humanity that they lose interest in the humanity of others. It will only happen, that is to say, when people take no interest in their own lives. And how likely is that?
As it happens, Drum thinks it’s not only likely, but inevitable, though he doesn’t say so in his article. In a separate blog post, Drum argues that once machines take our jobs, people will realize that we never had much worth in the first place:
This is actually the scenario I consider most likely. After a while, humans will finally be forced to accept that, yes, robots are so much smarter and more knowledgeable that we’ll never even come close to catching up with them. That literally leaves us with no purpose. Over time, we’ll get listless and depressed, stop having children, and eventually just die out of our own accord. This will take a little while, but probably only two or three hundred years. This might explain why we’ve never seen signs of life elsewhere in the universe. For biological life, the window of time between the invention of advanced technology (i.e., things that can be detected across long distances, like radio signals) and the end of the race is only a few centuries. Every few million years there’s a very brief spark of intelligent biological life and then it winks out. The odds of two of them happening at the same time is slim.
He goes on to predict that intelligent machines will eventually die of futility, too–naturally, since astrophysicists haven’t seen any sign of robot ETs, either. And here we come to my fourth, last, and biggest objection to his thinking, though to call it something as mild as an objection hardly seems apt.
One of my favorite catchphrases from online arguments is “I just don’t understand,” as in:
“If someone hasn’t actually said the word ‘Yes,’ they haven’t consented. I just don’t understand how people fail to grasp this simple concept.”
“I just don’t understand why people make it so complicated. A man has a penis and a woman has a vagina. Simple.”
“I just don’t understand how people can still believe in God, after everything science has taught us.”
I just don’t understand why you refuse to accept that you’re a rapist, or pathologically delusional, or a benighted primitive. Why is that so hard?
I find that people who sound the alarm about AI are especially guilty of this rhetorical maneuver. They begin their arguments in seductive fashion. Wake up, folks: AI’s improving by leaps and bounds, and it’s really going to shake things up. Then they turn up the heat: it could even be a threat to our very civilization.
But they’re never content to leave things there. They always take that fatal extra step.
AI is coming, people, and it’s going to harness all matter in the universe for the construction of a giant computer implementing an arbitrarily chosen program for eternity!
AI is coming, and it’s going to reinvent the world as a giant simulation!
AI is coming, and in fact it probably already came, and we’re living in a simulation right now, but only one in a hierarchy of simulations of increasing and various complexity, merely to contemplate the existence of which is to risk disrupting its stability!
AI is coming, and it will teach us all that life as we know it is empty and pointless, a ruthless grind of meaningless competition that ultimately terminates in a hollow and empty death, and that nothing has mattered or could ever matter, and that the only reality is futile suffering and the word for truth is Void.
And then, the inevitable addition: I just don’t understand why people don’t take this stuff seriously.
Gee, I wonder? Why do so many people have this idea that people who worry about AI are really seeking an outlet for their own hidden fears? Could it be becau–
OH MY GOD LIFE IS MEANINGLESS WITHOUT MY CAREER AND SUPERIOR SMARTS I WILL BE WORTHLESS NO ONE WILL EVER LOVE ME ONLY SOULLESS EMPTINESS WILL REMAIN AFTER THE RUTHLESS LAWS OF DARWINISM EXTERMINATE US ALL!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Whoops. ‘Scuse me. Don’t know what happened there.
This post has gone on long enough, so I’ll wrap up. In the past, technological revolutions–agricultural, informational, industrial–brought a lot of power to human groups, but at a cost to human individuals. They threatened to reduce millions of people to the equivalent of pack animals, or mechanisms, or voiceless masses. The solution in each case was to develop new systems of meaning and morality–new religions, philosophies, laws–to restore the dignity and agency that had been lost.
Those changes involved much more than better distributions of resources. They depended on new justifications for distributing both resources and power: new visions of human worth.
If the AI revolution has the effects that many of us expect, we’ll need to do much more than come up with a more progressive tax policy. We’ll need to develop better ideals.
The deeper thinkers on the subject already understand this. But they tend to focus on training computers to comprehend our existing morals. That’s not enough. We need to change. As in the past, we need to become worthy of our creations.
This isn’t something that can be done in a day, or by one person. Certainly it can’t be done in a single article or blog post. Baby steps. And the first of those steps, I think, is to scrub away the taint of misanthropy that sours so much writing on this subject.
Soon, Kevin Drum argues, our computers will be smart enough to teach us that our lives are worthless after all.
Call me crazy, but I’d say that’s probably not the best banner to bring to a revolution.
*Here’s another way to look at it. Humans are, most people would agree, smarter than spider monkeys. Does this mean a human can simulate the behavior and thoughts of a spider monkey, become virtually indistinguishable from a spider monkey, blend into spider monkey society? Isn’t there something about the immanent experience of spider monkeys that is irreducibly and irreplaceably spider monkeyish? If all the spider monkeys went extinct, we could probably hire humans to live in the jungle and imitate aspects of spider monkey behavior. But this imitation would be imperfect. Wouldn’t something have been lost?
Absolutely. I’m of the belief that something inexpressibly precious would have been lost. But that doesn’t stop humans from putting spider monkeys in zoos, destroying spider monkey habitats, killing spider monkeys at will, and generally treating spider monkeys as if their dubious economic value translates to a total lack of intrinsic value.
**It’s an essential feature of this vision that human crafts and services are inferior to machine-made counterparts. Suzie’s handcrafts are cruder than AI-made equivalents, human waiters are sloppier, human writers less lucid, and so on. These goods are valued over robot offerings only for having been made by real people.
If your response is to say, “But that’s silly: once Suzie’s ‘crude’ handcrafts come into fashion, smart machines will simply learn to imitate her style”–whoa, stop. Can’t you see you’ve just raised the value of an original Suzie even further by flooding the market with cheap knockoffs? What’s more, because AI imitations are so good, only a few people know the difference between a knockoff and an original. How do they come by their special knowledge? By tracing the provenance of Suzie’s artworks. But computers also generate the records of artistic provenance, as well as brilliant forgeries of those records. And if AIs can learn to detect those forgeries, they can also learn to produce better forgeries. And on and on forever. Ultimately, everything comes down to knowing the right people: an expert who knows someone who knows someone who is willing to vouch that a given Suzie was actually made by Suzie, because he was in the room when she produced it, or because he personally knows her agent, or because he employs a computer scientist who works with an AI that’s trained to detect forgeries, or whatever. Reputation itself becomes the final scarce resource.
And if you say, “That’s all well and good for art enthusiasts, but what is everyone else going to do?” then I beg you to expand your definition of the term “art enthusiast.” The basis of this kind of reputational economy is a group of people with enough leisure to bid up the value of trivial stuff. Because people who had this leisure in the past secured it through systems like land inheritance and education, we associate leisure with aristocratic habits: high fashion, deipnosophistry, appreciation of fine wine and art. But that’s old hat. When other people get leisure time, they gossip about tabloids and games and TV shows, and build up status around those pursuits. As we used to say when I was in middle school: “Same difference.”
Finally, if you’re tempted to say something like, “But all this falls short of true AI, which involves digital creations that have a will of their own … or augmented humans that have been uploaded into machines … or godly supercomputers that can predict the future”–well, then, I think mass unemployment is the least of our worries.