Yes, I’m a Luddite

In this Slate article, David Auerbach makes the case for a nuanced Ludditism. He does a good job, I think, of reclaiming the label from techno-zealots who use it as a term of abuse, and a brilliant job of summing up Luddite history in a short space. But I don’t think he goes quite far enough in advocating Ludditism as a moral position.

Auerbach gets at the heart of the matter, which is that our current, much ballyhooed clash between techno-optimism and Ludditism is really part of an old, old debate about materialism. Are humans merely grinding, pumping, pulsing, locomoting meat-machines? Or is there something more to us? As Auerbach points out, you don’t have to have a satisfactory answer to that question, or even a sophisticated answer, to feel uneasy about materialism as a worldview. And if you are uneasy about materialism, or about its social implications, or about the way it’s manifested in what we call technology, then you are, at heart, something of a Luddite.

I’d go farther. The problem isn’t just that some technologies have harmful effects, or that technology has become the handmaiden of a strain of reductive positivism. The concept of technology itself entails an unworkable contradiction. This makes anything called “technology” inherently suspect. If Ludditism is defined as an instinctive distrust of technology, the only reasonable option is for us all to be Luddites.

Here’s what I mean. Auerbach structures his argument as follows. Technology is the manipulation of physical systems for human ends. The body is one such physical system. Thus, from a technological perspective, the body comes to be seen as a kind of machine–a tool, as Auerbach puts it–in a way that ends up stripping human beings of dignity.

But if the body is a tool, who’s using it? You? Is your mind using the tool that is your body? That can’t be right; from a technological perspective, the mind is reducible to the brain, and the brain is part of the body. Your peers? But they’re just machines, too, other versions of the same kind of tool. So who’s using all these tools? The government? Corporations? The nation? Society? But these are all just names for groups of tools, systems containing other systems. Why should these abstractions have agency if an individual person does not?

We’ve come back to those old philosophical chestnuts: agency, consciousness, free will. What I want to stress right now is that technology, as we use the word, is a kind of rhetorical trick for making all these issues disappear. If I say that something is an example of technology, I mean that it’s designed and used by someone for some purpose. The very word technology assumes agency, will, intent, subjectivity, consciousness–all those qualities and qualia that stand in opposition to strict materialism. Technology is a fundamentally spiritual concept. It implies the work of an intelligent designer, but leaves unstated who that designer might be.

Yet Auerbach is surely right that the concept of technology, in our own culture, is married to another set of assumptions: that nothing is sacred, that the mind is merely an artifact of the body, and that the body is a mechanical and chemical construct–in essence, that we can change and manipulate ourselves just as we change and manipulate our environment. The body, in this view, may be something more than a tool, but it is certainly less than a divine creation. Likewise, the brain may be a machine of baffling complexity, but that needn’t dissuade us from tinkering with its parts. In short, this attitude holds that because all systems in the world are material, all systems can be redesigned–including whatever system is doing the designing.

All of which raises a rather important question: where, in all this mechanical reductivism, do we locate the agency on which technology depends?

The answer seems to be that we can locate it wherever we feel like locating it, so long as the most important assumptions are left unexamined.

If we’re arguing from an economic perspective, we assume that consumers and sellers have essentially unlimited agency, and that the use of technology just reflects the will of the marketplace.

If we’re arguing from a psychological perspective, we assume that people don’t have much agency, that they can be manipulated by techniques and boondoggles they don’t comprehend, and that one application of technology is to guide and control people in ways they can’t understand.

If we’re arguing from a scientific or medical perspective, we assume that subjects and patients have virtually no agency, that they’re just complex physical systems like storms or ponds or computers, and that they can be controlled and modified to pretty much any degree by outside intervention.

If we’re arguing from a criminal justice perspective, we make whatever assumptions suit our particular aims–usually, we grant people agency when we want to punish them, and we deny them agency when we want to exonerate them.

And so on, across a range of fields and intellectual purlieus that usually combine many different unstated ideas. (For example, in pharmacological contexts we assume that consumers have total agency when they choose to buy painkillers, that they have less agency when they’re subjected to unscrupulous advertising about painkillers, that they have a very limited kind of agency when they’re addicted to painkillers, and that they have virtually no agency while under the effects of an overdose of painkillers.)

In this moil of contradictory philosophies and practices, the concept of technology serves mostly as a hedge for concealing what sort of agency we’re talking about. This makes it an ideal tool for granting agency to some people while denying it to others, in whatever way is most convenient for the speaker. And that, to me, makes the concept of technology inherently suspicious. It’s all too often a devious dodge for ladling out blame or escaping culpability.

Consider. When Facebook tweaks its algorithms as part of an experiment on its users, how should we interpret that event? Should we take an economic perspective and say that consumers have full agency, then sit back to let the market handle the problem? Should we take a psychological perspective and assume that Facebook has agency in its running of the experiment, while users are effectively subjects under manipulation? Should we take a legal perspective and assume that agency in such a case is portioned out by statutes and precedents? Should we take a humanist perspective, dismiss constructs like “Facebook” and “corporations” as social fictions, assume that only individuals have agency, and blame whoever actually planned and implemented the scheme? Should we take a political perspective and blame whoever among those involved has the most personal authority (e.g. Mark Zuckerberg, the head of the FCC, Barack Obama)? Should we take a social perspective, assume that individual people don’t matter, and grant agency mostly to larger groups? Should we take a historical perspective and combine all the above perspectives? Or should we take a scientific perspective, assume that there is no agency in the matter at all, and wave it all away as a bunch of stuff that happened in accordance with inviolable physical laws, statistical principles, and predictable trends?

Better to reach for that magic word, that empty signifier, and paste it over the whole confusing mess.

It’s no one’s fault. It’s everyone’s fault. It’s society’s fault. It’s the CEO’s fault. It’s nothing we can change. It’s an invented problem. It’s what we asked for. It’s what we never asked for. It’s the province of rare geniuses. It’s historically inevitable. It’s what gave us civilization. It’s what civilization gave us. It’s everyone’s responsibility. It’s no one’s responsibility.

It explains everything and it explains nothing, and it will make you sound smart and important and up-to-date.

Blame technology.
