Men Are Alt-Right MRA Misogynist Dudebro Pigs, Women are Oppressive Totalitarian PC Ideological Feminazis

A Google employee recently ended up in hot water for arguing that women and men tend to seek out different activities and careers because they have, on average, different values and interests.

This is an idea that–I exaggerate only slightly–every single person who has ever lived has probably entertained at some time.

So why did this guy’s memo provoke a firestorm of angry commentary?

We can talk about swelling resentment toward the tech industry, anger at the claims of evolutionary psychology, longstanding anxiety around sex and feminism. But having read the memo, I think the writer made three critical decisions.

First, he wrapped his argument about sex differences in a drawn-out attack on PC thought-policing, presenting himself as a victim of liberal groupthink. He picked a fight, and he got one.

Second, he used the trigger word “biology,” which always sets off rhetorical microwars between scientistic contrarians and postmodernist ideologues.

Third, he set up his discussion of sex and personality by asking what makes women different from men. No matter how you answer that question, liberals get angry, because it implies that men are the standard sex from which women represent a deviation. The memo writer might have had better luck flipping his argument. What, he should have asked, is different about men?

Consider what the reaction might have been like if he’d written something along these lines.

Google is admirably committed to hiring and advancing more women. Currently, the company pursues this goal through forms of diversity training meant to reduce discrimination. The hope is to make the workplace more welcoming by changing employee attitudes and behavior.

But what if that’s not enough? Are there other methods that might help our company–and the tech industry in general–recruit and retain more talented women?

That’s a question worth asking. Mandatory diversity training has documented drawbacks. It can result in superficial compliance while provoking a backlash that results in more discrimination behind the scenes. It puts the onus for change on individual employees, without addressing institutional factors that might be driving women out of the field. It draws inspiration from scientific research that has recently come under fire (see, for example, the literature on stereotype threat and implicit bias). And it focuses only on one half of the problem. Instead of asking, “What drives women away from computer science?” we at Google might want to ask, “What can help to draw them in?”

Here, too, current research can offer guidance. Tech is famously a male-dominated field. And the psychological literature reports that men are, on average, more likely to exhibit certain traits, preferences, and handicaps.

Many of these will sound depressingly familiar. Men have been found to be less cooperative. Men are more aggressive, and, as a result, perhaps, more willing to put up with stress and pain. Studies have found that men are less agreeable, less gregarious, and less empathetic.

Significantly, men have also been found, on average, to show less interest in other people. Sometimes this is described as a preference for things over feelings, sometimes as a taste for systems over relationships. Whatever the terms, the trend is the same. Overall, men put a lower value on their interactions with others.

None of this is true for all men. The average differences are often small, and individuals show wide variation. And it’s not entirely clear where these traits come from. Some have been linked to high levels of prenatal testosterone. Evolutionary psychology suggests that, in an unkind phrase, men are simply more “disposable” from a genetic perspective, therefore more likely to engage in risky and solitary behaviors. It goes without saying that culture plays a role; boys are often encouraged to pursue status at all costs and punished for exhibiting stereotypically feminine traits.

What can be said with certainty is this. At the age when young people are contemplating a choice of profession, men are more likely to consider jobs that are stressful, combative, abstract, and lonely.

That sounds a lot like the tech industry.

So what can be done?

We should all be cognizant of the baleful effect that discrimination has on workplace culture. But we should also entertain the possibility that structural and systemic factors help to make tech a male-centric field.

Instead of putting the blame for our lopsided labor force entirely on employee attitudes, here are steps Google might take to attract a more diverse pool of candidates.

  1. Make software engineering more people-oriented with pair programming and collaboration. Find more outlets and uses for employees who value cooperation. The tech industry in general should make a greater effort to attract talented candidates who thrive on social connections.
  2. Offer more stress reduction courses and similar benefits, and work to make leadership positions less stressful. The tech industry is famous for its punishing demands; this drives away sensitive but capable recruits.
  3. Offer more flexible hours and more opportunities for part-time work, making accommodations for employees who want a reasonable work/life balance.
  4. Focus critical attention on male gender roles, challenging cultural influences that drive men to seek status at the expense of human connection.

In pursuing these goals, we should be mindful of the company’s larger mission. It makes sense for competitive, hardworking employees to get ahead at Google, whether they’re women or men. As always, we have to balance the benefits of greater diversity with the costs of changing the way we do business.

But isn’t change what we do? In striving for greater diversity, we can’t allow ourselves to lapse into a dogmatic insistence on what’s always been done. We should experiment with new approaches, new views, new sources of information. We should let science guide and inspire us. Above all, we should entertain a diversity of views in our pursuit of a diverse workplace–and, as always, keep an open mind.

If he’d written that memo, people might have objected to its gender essentialism. Critics would have quibbled with its facts, assailed its logic. Men’s rights advocates might have griped about its depiction of loutish undersocialized men. But would furious debates have blazed across the internet? I doubt it. And yet the substance of the argument is largely the same.

There’s a lesson here about the ways in which different groups compete for victim status. But I’m more interested in the tech angle. As I’ve written many times, the web is something of a giant gossip machine, and nothing illustrates this better than our endless online tiffs and spats over some person’s rhetorical choices. There are probably millions of extant documents saying that men tend to be less empathetic, or that women are more likely to put a high value on intimacy, or that women are prone to beat themselves up over trivial matters, or whatever. God knows everyone talks about this in private life. But it doesn’t light up the circuits of cyberspace.

Then some day, somehow, someone pushes all the right buttons, and the gossip machine goes into effect, sputtering, churning, shutting down servers, doxxing names, spitting out takes and takedowns and downvotes, elevating some reputations, ruining others, doing its work as loudly and efficiently as a woodchipper. Pop, fizzle, like, tweet, ding. At the end of the process, a few more ads have been clicked, a few gigabytes of data have been harvested, a few new ulcers have formed, and a few more citizens have been politically radicalized.

Does it have to be this way? Does a worldwide network have to devote some share of its bandwidth to running a virtual gossip machine? Or is this a function of the way we built out the software layer of the internet, going back to Google’s original use of links as an index of popularity, popularity as an index of relevance, and relevance as an index of informativeness? Almost everything about the current design of the web–everything about the way it organizes information–is based on a few crude assumptions:

  1. Something popular is preferable to something unpopular.
  2. Something current is preferable to something old.
  3. Something fast is preferable to something slow.
  4. Something familiar is preferable to something unfamiliar.

These masquerade as rules for sorting information well. But they’re really just shortcuts for sorting lots of information quickly.
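To see how little these assumptions amount to, it helps to write them down. Here is a deliberately crude sketch of a feed ranker built on nothing but those four shortcuts (every field name and weight here is my own invention, not any real system's):

```python
import time

def crude_rank_score(item, seen_topics, now=None):
    """Score an item by the four crude assumptions:
    popular > unpopular, current > old, fast > slow, familiar > unfamiliar."""
    now = time.time() if now is None else now
    popularity = item["clicks"]                                  # popular beats unpopular
    recency = 1.0 / (1.0 + (now - item["posted_at"]) / 3600.0)   # current beats old
    speed = 1.0 / (1.0 + item["read_minutes"])                   # fast beats slow
    familiarity = 1.0 if item["topic"] in seen_topics else 0.1   # familiar beats unfamiliar
    return popularity * recency * speed * familiarity

def sort_feed(items, seen_topics, now=None):
    """Sort a feed by the crude score, 'best' first."""
    return sorted(items, key=lambda it: crude_rank_score(it, seen_topics, now), reverse=True)
```

A dozen lines, and nothing in it knows whether an item is true, useful, or worth anyone's time.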

I don’t deny that, as shortcuts, they do a good job of approximating human decision making. If I’m presented with an overwhelming range of choices–a mountain of unsorted books, a bevy of similar newspaper stories–I’m more likely to choose something popular, new, short, and familiar. But why should I want my web tools to emulate me at my most mercurial, scatterbrained, hasty, and impatient? I want tools that help me make better decisions, not tools that always play to my worst tendencies.

It’s bad enough that the web works this way with content sorting, product recommendations, site rankings, and news. But when it comes to social connections–reputation, relationships, group affiliations, any kind of conversation–this quick-and-dirty approach is a disaster. The gossip machine takes in the signal of human intelligence and spits out the noise of human stupidity. It brute-sorts people by the roughest metric available, asking of every user: what drives you crazy? It’s like a piano that only makes music when you bang the keys with a sledgehammer. It’s like a bathroom scale that thinks you weigh more when you’re shouting. It’s like a superpowered calculator that only calculates common denominators.

Worse, it’s a machine that treats its own design as input. The system’s built to favor whatever’s recent, popular, and familiar. That’s what users see, so that’s what they respond to. Whatever’s popular gets more popular, whatever’s familiar becomes overfamiliar, and the timeframe for what qualifies as recent narrows to a razor-thin margin. The Google bosses like to imagine that the web will one day evolve into a vast AI. But it’s already an AI, thinking one idiot thought over and over: “Wow, people sure love trivial shit!”

I really believe we’re living in a dismal age for information technology. Not a dark age, exactly–it’s not like we’ve forgotten how to compute. A white-hot age, perhaps, an age illuminated by the light of ten billion digital suns, until our eyes are fried and the atmosphere ignites. We’re over the early flirtations, past the engagement, through with the honeymoon, recovering from the warm glow of infatuation. Now the house is full of unwashed laundry, there are dirty diapers under the bed, something’s burning in the oven, and the sink just broke. Someone you used to love is yelling that it’s all your fault, and the air smells like despair and charred shit.

It’ll get worse, I think, before it gets better. Like that part of the agricultural revolution when Egyptian peasants and slaves were dying by the thousands to build temples and tombs for crazy despots who happened to have a monopoly on literacy. Like that part of the industrial revolution when troops of workers were marching into the mouths of Satanic mills and children’s corpses were piling up in Irish workhouses.

We now have proofs of concept for the dream devices of science fiction–a worldwide network, virtual worlds, portable computers in every pocket–but the tools themselves are worse than bad, because they all rely on cheap fixes for major challenges. We’ve sorted the world’s information, but we used the crudest possible methods. We have computers in our pockets, but the interface is so constrained and clunky that all you can do is poke at them like an impatient toddler. We have virtual worlds that look increasingly lovely, but we get around them using clunky overlays that constrict attention and stupefy the senses. We traded our privacy for a giant pile of data, and we have no idea what to do with it.

Almost everything about the way we use computers today comes down to a simple trick: making computers better by making people worse. We haven’t mastered the nuances of human behavior, so we reached for the easy money. We staked everything we had on addiction, anxiety, distractibility, and anger. We can’t even come close to emulating knowledge, but we’ve invented ten gazillion workarounds.

The crazy thing is that I think engineers know this. They know they’ve built a bloated, teetering cybereconomy on a clumsy hack of the human pleasure center. They know they’ve created an enormous complex of flashy but inflexible devices that can only function if users are trained to repeat a few compulsive and restricted actions. They know that those two features will only work if they work together, that the world of popular software is mostly a pile of Skinner boxes, training users to keep pounding at a few simple levers because the system would fall apart if they tried to do anything else.

They know, above all, that they’ve tapped into a tiny, tiny part of what computers or humans can do.

They know this is their world, and it drives them crazy, which is why they’re so frantic about taking the money they’ve made off their gossip machine and pouring it into moonshot projects, AI research, robotics, and space exploration. When I look at the Google bosses or Mark Zuckerberg, I see a bunch of aging pop stars using the money they made licensing bubblegum hits for car commercials to self-fund an experimental jazz career. “Look,” they’re telling the world. “Tech doesn’t have to be top-forty all the time. Computer science can be so much more.”

But the public isn’t listening. The public is stroking its beloved smartphone, a miracle device that lets you do almost anything as long as it involves poking clumsily at large icons. The public is writing rhapsodies about the latest video game, which looks absolutely amazing as virtual wallpaper but mostly consists of following simple instructions to earn stupid upgrades. The public is on Facebook or Snapchat or Twitter, where everyone and his second cousin just posted a link to a poorly worded argument that you absolutely have to respond to. The public is cheerfully feeding the gossip machine one kind of information, over and over and over–here’s the easiest, cheapest, quickest, simplest way to get our attention–and the engineers have no choice but to respond. Because they built this goddamn thing. And now they’re stuck inside it, with the rest of us.


Osita Nwanevu Almost Gets It

I’ve been thinking about David Brooks’s sandwich gaffe.

A few weeks ago, Brooks wrote a column arguing that subtle cultural codes–what novelists call “manners”–were a bigger drag on social mobility than so-called structural barriers. As an illustration now made notorious by an orgy of viral snark, Brooks offered a visit to a bourgie sandwich shop as an example:

Recently I took a friend with only a high school degree to lunch. Insensitively, I led her into a gourmet sandwich shop. Suddenly I saw her face freeze up as she was confronted with sandwiches named “Padrino” and “Pomodoro” and ingredients like soppressata, capicollo and a striata baguette. I quickly asked her if she wanted to go somewhere else and she anxiously nodded yes and we ate Mexican.

Liberals had a field day. What a joke! How condescending! What a trite and tone-deaf anecdote!

Osita Nwanevu, at Slate, had a smarter take. He pointed out that this is the point intersectionalists have been making for years about race and gender:

The concept of intersectionality, which Brooks dismisses in his column as an empty cultural signifier no more meaningful than a membership at a barre studio, is partially rooted in the idea that class can erect invisible barriers to mobility and respect similar to—and in fact linked to—the barriers imposed by race, gender, sexual orientation, ability, and other components of individual identity … Social justice warriors thus have no difficulty incorporating the discomfort working-class people feel in unfamiliar situations into their broader analyses of how society leaves them behind.

Hmmm … In my experience, social justice warriors usually bring up the subject of class to explain how poor white men can still have privilege. As in: “Hey white doodz, just because you lack class privilege doesn’t mean you don’t have race and gender privilege, okay? *drops mic*.” That’s a valid argument, but not exactly what Nwanevu has in mind here.

Still, the larger point stands. If Brooks cares so much about class-based microaggressions, why not other microaggressions? Indeed, it’s easy to imagine a Tumblr-style version of Brooks’s anecdote:

Listen up, white people!

I know you think it’s totally cool and OK and not at all problematic to take friends out to eat wherever you feel like, because you absolutely need your soppressata or pomodoro or whatever high-class shit you want for lunch, and you just can’t imagine why people wouldn’t be delighted to bask in your hip white foodie taste. But let me tell you, this way you get about your food, this is straight-up white supremacist bullshit, and it is Not Fine. And pretending like it is–like you just want to go out and get an innocent bite to eat–that is some deeply ignorant liberal bullshit there, and I’m calling you on it.

I’m not even talking about cultural appropriation. Because I know how you get when you hear those words. I’m talking about stuff you think is totally mainstream (gotta love that word, “mainstream”), when you go bringing friends to some Eurocentric sandwich shop where the whole menu’s in Italian (read: white-centric) and the breads are some kind of organic hand-peeled-oat stuff (read: white-privilege) and anything not from Europe is advertised as “exotic” or some colonial bullshit like that, and even the water bottles are done up like little signifiers of capitalist status anxiety, and the whole place is basically a giant exercise in code-switching for anyone who didn’t grow up in the radiant gated epicenter of postcolonial race-segregated America …

I’m not saying I don’t have my own shit to work on. I’m not saying I’m immune to this stuff. In a racist society No One is immune. But you really need to hear this. Pulling a move like that, taking people to a place like that, it is NOT innocent. It is NOT harmless. And I don’t want to hear some kind of white-fragility whining about universality and intentionality and all that kind of liberal-tears shit. If you think ignorance is an excuse, if you need to come at the people fighting this fight and ask them to explain why that kind of colonial bougie foodie-culture is wrong and evil in eight thousand ways, if you’re going to lay that burden on marginalized people along with everything else they’re carrying, well, all I can say is you really need to unpack why you feel that way. Because when you pretend like a sandwich shop is just a sandwich shop–when you can’t even see the codes written there, codes that for a lot of people are powerful triggers of personal and historical trauma–that’s the pathology of privilege, right there. That’s aggression, dominance, power, dehumanization. That’s an act of implicit violence, and you need to deal with it.

If David Brooks had written something like that … well, all I can say is, it will be a delightful day when David Brooks writes that column. But is there any doubt that if this had popped up on Medium under a pseudonym, a conservative who read it would start foaming at the mouth with alarmist rants about totalitarian snowflakes? And that a centrist liberal who saw it would either stay conspicuously silent or pen a hesitant and fretful defense of Western humanism? And that an intersectional leftist responding to the piece would applaud its bravery and relevance, or make criticisms not of the argument itself but of the writer’s presumed identity? And that an alt-leftist reading the piece would denounce it for putting trendy virtue signaling ahead of collective class interests?

And yet this little pastiche of mine makes essentially the same argument as the Brooks column, with only a slight shift in subject matter and a major shift in style. So what makes David Brooks a moral exemplar to conservatives and a convenient punching bag for leftists?

In his column, Nwanevu looks for the answer in Brooks’s past writings. He makes a good case. I’d point to three aspects of the sandwich anecdote itself:

  1. Topic. Brooks wrote about class instead of race, which these days, thanks to Trump, reads as “putting the desires of poor, bigoted whites ahead of the needs of minorities.” Not good.
  2. Voice. Brooks wrote his piece in a confessional mode: I exercised my privilege in a clueless fashion, I later came to regret it, now I’m reflecting on the experience, etc. etc. … In lefty culture, this is a big no-no, because it comes across as a form of humblebragging. Let me show you how much privilege I have by fretting about my treatment of those with less privilege. Yerk. The accepted protocol–even for upper-class whites–is to call out other privileged people and then tag on a reminder that you’re working on your own biases, too.
  3. Style. Brooks eschews the left’s academic jargon, even blithely dismisses it as an empty class marker. But what really stands out is the high-minded earnestness of his style, as if he’s stroking his chin or adjusting his spectacles after every line. The favored rhetorical style on the social-justice left, by contrast (in online circles, anyway), is a kind of impassioned tongue-lashing, equal parts anger and scathing contempt–a mashup of spoken-word feminism, Black Power sermons, and the drawling, acerbic wit of gay TV characters, with perhaps an occasional dash of the voluble dudespeak associated with David Foster Wallace. The original aim of these various rhetorical modes, I think, was to embody alternatives to the self-important detachment of WASP culture–which arrogantly assumes that personal opinions can be legitimately recast as dispassionate social analysis–and so Brooks’s adoption of the dry culture-critic style makes him an irresistible target.

I don’t think the importance of that last point can be overstated. It’s hard not to see this clash itself as a triumph of cultural style over structural substance. Everyone agrees that Americans are trussed and trammeled and tangled in shifting webs of norms and fashions. But efforts to escape or resist those webs inevitably become norms and fashions of their own.

Brooks’s studied impersonation of a sociologist makes him the perfect foil for people who see the whole post-Enlightenment intellectual tradition as an elaborate sham. Leftists are justifiably leery of the stuffy pseudo-objectivity of conservative scolds. But in response, they’ve settled on a slangy, hyperpersonal style that often makes them sound like snotty teenagers giving hell to their parents. That style has itself become a powerful shibboleth in certain circles, and frankly, I don’t think it can ever be more. The rhetorical modes of the twenty-first century Left are well-suited for dressing down pompous authority figures. But what happens when you’re the one in authority?


On the World-Threatening Danger of Supersized Machines

One of the great things about having a blog is that you get to push back against junk science.

Scientists like to say that their “research” and “reason” and “empiricism” lend them credibility. But every smart person knows this is ridiculous, and that scientists are just as ignorant as the rest of us. More ignorant, in fact, because they don’t even realize how ignorant they are.

Fortunately, if enough ordinary people speak up, we can smash scientific groupthink and purge the world of their dangerous ideas.

As a case in point, take this “scientific” paper. The authors argue that it’s impossible to create supersized machines. What do they mean by this? Any machines larger than a human. They literally think it’s impossible to build machines larger than a person. They put this idea in print!

I know, right? Crazy. And this is supposed to be a reasoned analysis. With charts and everything. The mind reels.

But it’s even worse than that. Not only is this argument totally ridiculous and uninformed, it’s actively dangerous.

The truth is that supersized machines aren’t just possible. They pose an existential threat to humanity, life on Earth, and quite possibly the entire cosmos.

We only need to look at the authors’ reasoning to see why.

Garfinkel et al. begin by observing that the “history of life is often understood as one of growth,” starting with chains of tiny molecules and working up to big mammals like humans. That’s true, of course. But the history of machines is also one of growth, and machines have grown in size much, much faster than biological organisms.

The best way to understand this is to think back through history and pick out a few random examples of large and small machines, then line them up along a crude timeline. It quickly becomes apparent that machine size is increasing at exponential rates.

Acheulean handaxes held comfortably in the palm eventually gave way to axes as long as limbs, to saws as long as people, and finally to gigantic hewers and choppers bigger than most dinosaurs. The coracles in which our ancestors used to ply coastal waters, and the nets and poles with which they fished, have given way to gargantuan trawlers and nets that scrape the ocean floor.

Enlarging our scope a little, we note that even dwellings, bridges, and cities–in essence, large stationary machines–have sprawled and swelled and ramified, until whole swathes of the globe might fairly be considered large complexes of interlinked devices.

That’s not all. The trend is accelerating. The pace of machine growth itself grows apace. The shoe served for tens of millennia before the larger saddle arrived. Primitive saddles ruled roadways for a few thousand years before the carriage appeared. The carriage endured for a single millennium before yielding to the semi and the double-decker bus. Only in the last hundred years have kites and projectiles–small, windborne tools–lengthened into jet planes and ballooned into zeppelins.

Small devices are still with us, of course. But their presence is, in a phrase, small comfort. The boundary on machine size moves ever upward, ever outward. Why shouldn’t it go on expanding forever?

So obvious is this trend that only our small size keeps us from seeing it. Comfortable with our own modest amplitude, we fail to take large entities seriously. The giant is reduced to a folkloric phantasm, the elephant to a figure of fun. Adapted to sluggish rates of natural growth–the child’s gain in stature, the gourmand’s gain in girth–we struggle even to imagine a process of accelerating expansion. Like an ant seeing only the ground on which it crawls, not the colossal shoe descending toward it, we’re inhibited by our relative smallness from noticing the threat largeness poses.

Most people walk by a construction crane without even glancing twice. After all, what harm is it doing? What do they have to fear? Why should anyone be worried about supersized machines?

This indifference is a danger in itself. It’s quite possible that one day, large machines will destroy us all.

How do we know this?

Garfinkel et al. note that size can be measured in many ways: by height, by volume, by weight, etc. They also note that there are different kinds of size, including very subtle kinds. They write:

[The] second kind of largeness is the one evoked whenever someone is described as “larger than life” or “living large” (Tom, 2004). Largeness of this sort is a non-physical (i.e. non-natural) property, separate from the mundane physical property that “largeness” most often denotes. To build a large machine, then, in the meaningful sense, we would first need to solve the “hard problem” of determining what this non-physical property is and how it arises.

This is exactly why scientists can’t be trusted, with their pointless pedantry and academic quibbles. Who cares if size can’t be precisely defined? When it comes to size, the only question that should concern us is this:

Why does size matter? Why is it important? What is it about size that makes it worthy of attention?

Let’s think it through. Looking at large organisms, we see that apart from their size–whatever exactly that means–they have three qualities in common. First, they consume more resources. Second, they’re more capable. Third, they’re more complex.

This is true whether we’re talking about amoebas and alligators, ants and orangutans, or tulips and trees. The big organisms use more material and energy–more stuff. They have more adaptations, more tricks for surviving. And in terms of structure and behavior, they’re more complex.

All these traits are interrelated. Harnessing more resources allows for more complexity, which allows for a greater variety of adaptations, which in turn demands more resources. And so on.

This is also true for machines. A large nail contains more iron than a small nail. A larger engine consumes more fuel. If we consider space itself as a resource–and why shouldn’t we?–then large machines are by definition more demanding.

And capability? Large machines came into existence precisely because they can do more than small machines. Why else would their inventors have put in all those extra resources? And large machines aren’t just more powerful; they have a greater range of powers, too. A dinghy is little, does little, and has little use. A modern cargo ship is larger, travels farther, almost pilots itself, and can be used to transport almost anything imaginable.

So we come to the third property, which is by far the most important. Some rude machines, like hammers and levers, can attain huge sizes without evincing greater complexity. But as a rule, the larger the machine, the greater its sophistication. Even apparently simple devices like giant shovels and huge drills are usually coupled to elaborate motors and regulatory systems, and it’s something of a truism that large machines are basically intricate combinations of smaller machines. So notable is this tendency that I’ll leave it to naysayers to come up with persuasive counterexamples.

These, then, are the three Cs of size as a meaningful quality: consumption, capability, and complexity. It doesn’t matter whether these traits are intrinsic to size, or only correlated with size. What matters is that they all go together. It’s only logical to subsume them in a general property of sizeliness, or sizism, or sizitude–or, more scientifically, a General Size Factor (GSF).

Now we can see why size is so important, and why large machines should make us afraid. GSF isn’t just a static property. It’s a feedback loop. As GSF increases, machines become more complex and more capable. This leads them to consume more resources, which in turn makes them more complex and more capable, which drives a need for still more resources, and on and on. Eventually a critical point is reached. We find ourselves facing a Transformative Size Explosion (TSE), the consequences of which are unimaginable.
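In the spirit of the paper’s own rigor, the feedback loop is easy enough to simulate. A toy model (the starting value, growth rate, and critical threshold below are every bit as fictional as GSF itself):

```python
def years_until_tse(gsf=1.0, growth=1.5, critical=1_000_000.0):
    """Toy model of the GSF feedback loop: each year, consumption,
    capability, and complexity feed one another, multiplying GSF.
    Returns how many years until the (entirely fictional) critical point."""
    years = 0
    while gsf < critical:
        gsf *= growth  # more resources -> more complexity -> more capability -> ...
        years += 1
    return years
```

With compounding at fifty percent a year, the explosion arrives in thirty-five years; double the growth rate and it arrives in twenty. The numbers mean nothing, which is rather the point.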

Remember: machines are going through this process at a much faster rate than the organisms of evolutionary history. It took evolution billions of years to get from animalcules to Brachiosaurus. Machines went from pulleys to hundred-foot cranes in just a couple of thousand.

So the question isn’t whether a TSE will occur. The question isn’t even when. The question is: Will it be soon, or very soon? And will we be prepared?

At present, we can only make an educated guess. Judging by the growth of machines to date, I estimate that a TSE will occur on or before Father’s Day 2024. In fact, I think it would be irresponsible to say otherwise.

Nevertheless, a few objections must be addressed.

In their paper, Garfinkel et al. note that humans already augment our size in various ways: by increasing caloric intake, by wearing sweaters, and by standing on one another’s shoulders. What they don’t note–another failure of scientific thinking!–is that none of these measures increases GSF.

In fact, considering the history of life on Earth, and surveying examples of large machines, we see that it’s easy to increase size in superficial respects without a corresponding increase in GSF.

For example, there are simple fungal growths larger than any mobile organism. Rocks and trees can become very large without notable gains in GSF. Some species of whales are much larger than others, yet it’s not obvious that they have higher levels of GSF. Looking only at humans, we see that they vary widely in GSF–even though humans on average have a much higher GSF than, say, sparrows.

Even large machines have certain limitations. Enormous aircraft carriers, with all their capabilities, make poorer paperweights than the humblest toy boat.

And it goes without saying that today’s entire complement of large machines, most of which have reached superhuman size by simple metrics, still fail to rival humans in the critical quality of GSF.

This is the paradox of Artificial Size (AS) research: that benefits accruing to GSF are obvious and frightening–as we see in the advantages a human has over a toad, or an ocean liner over a paddleboard. But no simple measure of size can be equated with General Size Factor, which remains, for now, a poorly understood quantity.

In particular cases, then, GSF seems unimportant, even harmless. But if we take the long view of historical trends, statistical averages, and probabilities, it’s all-important. Size is power. The surprising thing is that, while other entities in the universe are, by various measures, larger or smaller than human beings, no entity in the known universe can do what human beings do.

How can this be? How is it that humans are so high in GSF without being high in size-related attributes like length or weight?

Work on this question is ongoing, but suffice it to say, there must be some complex of Size-Relevant Attributes (SRAs), or some imperfectly understood Size-Potential-Maximizing-Endowment (SPME) that unlocks the potential of GSF and gives rise to the unique suite of human accomplishments: throwing spears, riding horses, dodging between the legs of giraffes, wearing XXL sweaters while still managing to pass through built-to-code doorways, using keyboards without crushing them to powder, having sufficient mass to enjoy squeezing Whoopie cushions–all abilities that are uniquely human, and all dependent on human levels of GSF.

This, then, is the Holy Grail of Artificial Size research: not simply to build large machines, but to build machines with Human-Equivalent General Size Factor (HE-GSF).

In sum, and to put the matter as clearly as possible, if we assume that HE-GSF depends on some input to AS-GSF of potential SPMEs plus n>0 SRAs, then the rate at which the difference between HE-GSF and AS-GSF diminishes is given by the formula:

dS/dt = (D{ISRA})(D{SPME}):D

Where D{ISRA} is the discovery rate of SRAs; D{SPME} is the discovery rate of SPMEs; and 😀 is the human-faith-amplifier, that is, the tendency of humans to invest more credulity and energy in AS research as it yields rewards.

Note that in this equation, any of the contributing factors can be made arbitrarily large. If we assign a high value to human credulousness alone–surely not an unwarranted assumption–then the discovery process rapidly accelerates and a TSE becomes unavoidable.

Let’s take a moment to ask, then: what are the likely effects of this imminent machine-size explosion?

There’s no way to be sure. But in a spirit of sober speculation, we can predict something like the following.

In a very short time, due to their enormous capabilities and correspondingly high need for resources, supersize machines will harness all matter and energy in the known universe. Because the primary function of size is to exert force–much as the primary function of intelligence is to implement plans and instructions–these goliath machines will surely seek to exert a supreme amount of force on the fabric of existence itself, compacting every suitable deposit of matter into a singularity, reducing the cosmos to a froth of gravitational distortions and zones of intense quantum fluctuation. Out of these rents and tears new seed cosmos will be birthed, some of which will have mathematical constants allowing for the existence of machines of even larger size and greater capability, and so on through a possibility space covering all realizable differentials in the influence of force. This will result in the eventual maximization, somewhere in this garden of branching cosmic paths, of every variable contributing to the state we know as “reality”–including, of course, maximization of subjective pain and maximization of subjective pleasure.

If we embrace the utilitarian project of maximizing happiness among beings capable of experiencing it, then the question we have to ask ourselves, before tinkering recklessly with augmentations to machine size, is whether the hedonic value of happiness-maximizing universes can ever be sufficient to counterbalance the suffering in pain-maximizing universes, which compels us to ask in turn how many Infinite Pain Units (hyperalgons) ought to be considered equal to one infinite pleasure unit (hyperhedon), the answer to which is seven.

Again, this all follows naturally and inexorably from the existence of today’s large machines.

In conclusion, the existence of large machines compels us to predict the existence of supersize machines, which are characterized by possession of a high degree of the quantity known as GSF. The essential feature of GSF is the ability of large entities to harness more resources for self-augmentation, which will inevitably lead to a runaway process by which machines come to dominate or eradicate everything that exists. To deny or doubt this obvious truth makes one implicitly culpable for the anguish of untold billions of souls in an alternate universe.

The critical point is that size, however arbitrary it may seem, is the aspect of beings, mechanical or human, that ultimately gives them value. Next time you see the nail clippers in your bathroom, picture a set of nail clippers as long as the galactic arm, snipping through starbelts, severing worlds. Then ask yourself: do you want that to happen?

Anything else would simply be small-minded.


Making the Perfect the Enemy of Humanity

As I write this, people are debating whether or not it’s okay to have “genital preferences” when it comes to sex.

Suppose you’re attracted to women, but you don’t want a girlfriend with a penis. Is that wrong? Does it make you transphobic? Wouldn’t it be better–more loving, more egalitarian–if people could get past these crude linkages between sex and gender and anatomy?

Better, perhaps. But possible?

I understand the sentiment. I think it would be better, in some ideal sense, if we were all highly fluid, even numinous, in our sexual tastes. That’s how I’d like to be: open to the beauty in every human body, sensitive to the spiritual interplay of sex, turned on by touch and attention and tenderness, attracted by the singular alchemy of soul and substance that animates each unique individual.

Wouldn’t it be excellent if we could decouple the good parts of sex–the sweet sensations, the sympathetic communion, the cleansing exercise–from the gross old baggage of anxiety and jealousy and disgust? Sex would be better in a world where people felt that way. Life would be better in a world where people felt that way.

But that’s not how most people feel. I’m not sure that’s how anyone feels. So asking whether it’s wrong to fall short of this ideal seems to me about as useful as asking whether it’s wrong to feel anger, or struggle with addiction, or think stupid thoughts. We all have those problems. We’d probably be happier if we didn’t. The question isn’t one of right and wrong. The question is how to cope.

I’m usually suspicious when people compare progressivism to religion. But old-school Christians and cutting-edge wokists do seem to fall into a lot of the same traps. When I was going to church, we were told that all mortals were inherently sinful, and that therefore we should treat our fellow humans with humility and compassion and focus on finding personal redemption through loving communion with God. Which doesn’t sound like such a bad message.

But somehow, when you look at the way Christianity is actually practiced, that message gets twisted, over and over and over, into the idea that all mortals are inherently sinful, and that sin must be purged through punishment, and that therefore anyone is susceptible to being punished at any time, and that the best defense against the arbitrary infliction of torment is to make darn well sure that you’re the one doing the punishing.

I see something similar happening with this genital-preference debate. How is it that what starts as an aspirational ideal–let us try to be free of hate, let us strive to overcome fear, let us work to be universally welcoming–degenerates so easily into a punitive standard?

Is the temptation to punish others so irresistible that higher ideals inevitably end up serving as a convenient excuse for cruelty? You say we’re all sinners, all prone to greed and selfishness, that we all have biases and sexual hangups? Fantastic! This means I can wage all-out reputational warfare against anyone I want! Thanks for the rhetorical ammo, Jesus.

Is it that higher ideals make people who aspire to them feel frustrated and inadequate, and the only way to assuage those feelings is to go on the attack against others? I really wanted to get over my hangups. I tried so hard. And I failed! What does this say about me? What if people find out? Am I a fraud, a phony? Or is it … hmm … is it someone else’s fault? Yes, that must be it. I wouldn’t have gone astray myself if I weren’t living in a wicked world. The sinners have to be brought in line if the rest of us are to be redeemed …

Or is it that social sorting ends up reinforcing the connection between idealism and fanaticism, through a kind of 1-9-90 principle? At the beginning of the process, people come together around shared values. Voluntary communities form. But over time, a few crazed fanatics start jockeying for power and hogging all the attention. Those fanatics end up competing for the allegiance of a small but devoted cohort of followers. They pick fights over doctrine and wage sectarian warfare. They get absorbed in personal vendettas, denouncing enemies in the name of the cause. They give the ideals they supposedly stand for a bad name. Meanwhile, most believers, the laity of the movement, sit back and watch with a mix of dismay, disgust, and bemusement, or simply drift away. The original ideals are fine–they’re not the problem. The problem is that the ideals helped to bring together a large group of people, at which point problems of social organization kicked in, concentrating power in the hands of a few attention-hungry maniacs.

Whatever the case, these visions of perfection, inspiring as they are, always seem to become, over time, the enemies of humanity. Is there any way for large groups of people to rally around a conception of The Good without using it as a cudgel to beat up The Not Quite So Good?


Clickers Anonymous

In a post on free speech, Scott Alexander envisions the following scenario:

Alice writes a blog post excoriating Bob’s opinion on tax reform, calling him a “total idiot” who “should be laughed out of the room”. Bob feels so offended that he tries to turn everyone against Alice, pointing out every bad thing she’s ever done to anyone who will listen. Carol considers this a “sexist harassment campaign” and sends a dossier of all of Bob’s messages to his boss, trying to get him fired. Dan decides this proves Carol is anti-free speech, and tells the listeners of his radio show to “give Carol a piece of their mind”, leading to her getting hundreds of harassing and threatening email messages. Eric snitches on Dan to the police.

His point is that each participant is technically exercising a right to free speech. And yet, on the whole, human expression has suffered. At the end of this demolition derby, each participant probably feels less able to speak freely.

So what do we do? Scott’s scenario is a parable, but it reads like a summary of a real social-media rumble; I wouldn’t be surprised to learn he’d summarized a recent Twitter war and changed the names.

Scott thinks we should solve the problem by promoting norms instead of enforcing them. So Bob, for instance, after being called a “total idiot” by Alice, would write a genteel blog post urging people not to call each other idiots. Carol would pen an essay denouncing sexual harassment, but never mention Bob by name. Dan would use his radio show to promote tolerance of diverse views. Eric would have nothing to snitch on.

Each person would focus on championing desired values instead of punishing unwanted behaviors. And over time, as the new values propagated through society, there would be no need to coordinate shaming campaigns or snitch on people, because new norms and pressures would dissuade people from engaging in this kind of rude behavior in the first place. At the very least, free speech abuses would become so rare that we’d be able to punish them effectively through official channels. Instead of targeting random people for widespread sins (“All conservatives are racists! Racists should be fired!”), we’d punish rare violations of widely accepted rules (“Wearing blackface for any reason is explicitly forbidden by the college charter.”)

Alan Jacobs goes one step further. He thinks we should treat this behavior as a kind of addiction, and take the necessary measures to curtail it. Effectively, the people engaged in shaming campaigns would themselves be treated as shameful. Stigmatized as antisocial and unproductive, they would be excluded from polite social gatherings. Good, decent people would teach their children not to tipple, toke, or tweet.

Interestingly, both authors use substance abuse as a metaphor, but they use the metaphor to underscore contradictory arguments.

Scott compares social media to marijuana. So many people smoke dope that it’s futile and unfair to criminalize the behavior. By the same token, if everyone’s a jerk on social media, it’s futile and unfair to persecute people for airing controversial opinions. We might not like the behavior–no mother wants her son to be a dithering pothead–but we have to accept that it’s the new normal.

Jacobs goes the crackhead route. Social-media squabbles are so destructive, he argues, and so fiendishly absorbing, that we should counsel our children and our friends to “just say no.”

I’d name a different drug, and ask a different set of questions.

To me, social media abuse looks a lot like alcoholism. Most people get in occasional Facebook feuds, think that Twitter is both fun and a waste of time, and cultivate an image of themselves as web-savvy commentators. Millions of ordinary people also drink too much on occasion, admit that drinking is unhealthy but do it anyway, and choose to see themselves as oenophiles or beerhounds or single-malt snobs. They generally find a way to incorporate both booze and social media into their daily lives, with occasional forays into abstinence or excess.

Some people forswear booze or Facebook entirely, out of proud iconoclasm or stiff-spined puritanism.

And some people, especially but not exclusively young people, have a real problem with self-control, drink till they black out or tweet till they go crazy, cause grief to others and do harm to themselves, and give both drinking and tweeting a bad name.

So what do we do about that last group of people?

Or better yet: what will we do?

I don’t think it’s an accident that people debating these issues glide into comparisons to substance abuse. Connecting social-media use to drug abuse justifies the adoption of a hortatory tone. Here’s how to kick the habit, folks. Here’s how to raise your kids. Here’s how to fight online harassment. Just follow these twelve steps …

But everyone already knows this kind of thing is wrong. People who get in Twitter fights don’t say, “Wow, I just had an awesome pointless argument with a mob of rando idiots today. Great experience! Really got to blow off some steam.” They say, “Ugh, I got in a stupid fight on social media today.” Then they spend twenty minutes explaining how they were goaded into such an obviously dumb behavior. In the same way, a chronic binge drinker will tell you, “Man, I really overdid it last night,” and go on to explain between bursts of self-deprecating laughter how Charlie kept refilling his glass when he wasn’t looking, and one thing led to another, and you really had to be there, and that’s how he ended up sleeping on the porch floor after losing his keys, and boy, wow, he’s really got to stop hanging out with that Charlie character, ha ha, he’ll never make that mistake again, no way.

The problem isn’t just that alcohol is addictive. Alcohol abuse gets paired with tropes and cliches that end up glamorizing destructive behavior. Hey, it’s no big deal, the budding drunkard tells himself. He’s just a party animal who likes to cut loose. Or an artist who’s getting in touch with his creative side. Or a writer, or a rocker, or a working joe who needs a beer to unwind. Or she’s a sharp-tongued socialite living the high life. Or a hard-charging businesswoman burning the candle at both ends.

And you? Well, of course you overdo it at times. What do people take you for, some ordinary boring white-collar office drone? You’re a creature of daring and risk and passion. Drinking is part of how you express yourself.

It all sounds wonderful and edgy and exciting, until you run over the neighbor’s kid, or rape your ex-wife, or end up sleeping in your sister’s guest room and begging for a part-time job in her pet-grooming business.

And that’s when you realize: all this time you were telling yourself an awesome story about how you were living for bravery and passion and truth, while the normies were a bunch of suckers …?

No, buddy. You were the sucker. You were the one who got played for a dupe, because everyone else who trotted out that party-animal line, they were just having a big old game of make-believe. Only you were dumb enough to take the fantasy seriously.

This is partly why Prohibition doesn’t work. It reinforces the association between addiction and exceptionalism. All the bureaucrats and hypocrites say drinking is wrong, and meanwhile the free spirits are down at the speakeasy, talking about the latest trends and dancing to good music.

I don’t know if Jacobs’s “lecture your kids” recommendation has much effect either, except perhaps in a subtle, long-term, hard-to-pinpoint way. My parents told me not to drink or smoke or do drugs. I did anyway, for all the usual reasons.

Scott’s recommendations? His post is so speculative that I’m not sure what to think. But I wonder if he may have fallen prey, himself, to the twisted logic of addiction.

After all, Scott’s arguing that, as a society, we should frown on certain behaviors–certain speech acts, he might say–including dogpiling, viral shame campaigns, and harassing people online.

But who approves of this stuff? Who’s he talking to? People who spend all their time on Twitter? Alt-right monsters who pass their days sending pictures of dead babies to feminist bloggers? Professional wokists who think Justine Sacco had it coming? Suey Park? People who send hate mail to Suey Park? He might as well head down to a local frat house this Friday night and start lecturing the drunk bros there about date rape and liver damage.

What matters–what drives the creation of social norms–is that over time, people get sick of bad behavior. They burn out. They wise up. They quit. And they urge others to do the same.

Right now we have two generations who’ve essentially come of age in a state of permanent addiction to social media. It started with blogs, it got worse with Facebook, and it exploded with smartphone push-notifications. Now we’re all slaves to the pleasure center, all looking for ways to retake control. But the bigger problem is that a lot of people are still telling themselves an exciting story, which goes something like this:

Old people just don’t get it. Those stuffy, moralizing hypocrites–white men, centrists, liberals, globalists, establishmentarians–are too crusty and enfeebled to embrace real change. Their tired sermons about speech and social media are just a ruse to help them cling to power. They don’t understand that the authentic voices–the outsiders, the agitators, the agents of unrest–are online now, and that young people, with our social-media dogfights, are going to shape the future. Everyone who truly understands the internet knows that mixing it up online is the only way to shout back at the voices of oppression and build a better world.

So you can spare me your lectures. I’m not some boring, conformist office-drone. I’m a warrior, a renegade, a free spirit, and nothing can stop me from preaching my private truth in my public feed. Yes, it’s draining. Yes, things can get out of hand. But every time some tired, irrelevant scold tries to shut me up, that just convinces me to fight harder.

Which is pretty much the kind of thing young people have always told themselves. The distressing thing is not that twenty-somethings say this stuff. It’s that older people hear the message and take it at face value.

“Sure, I get it,” they say, jumping into the fray. “I agree with everything you’re saying! I even wrote an academic thesis explaining how my generation got here first! And you’re right! Twitter pile-ons are fresh and groovy and totally hip. See? Still got it, baby. This old cat is relevant.”

“I don’t get it,” they say, recoiling in scandalized alarm. “When I was a lad, we respected the norms of polite society and comported ourselves in a civilized fashion. Now, young people think that screaming abuse on social media is what passes for acceptable conversation. These hooligans are going to wreck the country if we give them half a chance. They’re wild and spoiled and uncontrolled, and if we don’t bring down the rod, they’ll never learn.”

And so we get a world where being nasty on social media impresses a lot of people as a form of edgy, vital rebellion, much as getting hammered in the twenties, or high in the seventies, seemed like the fast track to a dynamic life.

The thing is, young people turn into old people. Old people get older. And what looks brave and rebellious in a twenty-year-old, and might pass as youthful and modish in a forty-year-old, eventually just seems pathetic.

In time, I imagine, we’ll see the emergence of a generation of mature adults who view outrage trolling and Twitter shaming and Facebook fighting as another set of fun but destructive habits, like mixing mystery liqueurs at a house party or spraypainting graffiti on a factory wall. Oh, of course, when they were in college, they did their share of that stuff. They even believed in it. And they have fond memories of the limbic thrill that comes from living at the lightspeed pace of the online gossip machine.

But they’ve also seen people who took it too far. People who got too deep into that scene. People who let it consume them and control them. The clickheads.

Back in college, the clickheads were exciting. They were always worked up about something, jumping into digital battles, riding the highs and lows of the reputation economy. Everyone wanted to be like that. To have that kind of courage and devotion. To rack up all those views and followers. To be involved.

But then … well, it’s hard to say exactly what changed. Years went by. People moved on. It got old. After years of repetitive arguments with strangers, the arguments started to seem a lot less interesting. The followers came and went. The jokes got passé. Services were canceled, data got deleted, trends and communities evanesced like vapid fads. Things that had seemed permanent and important turned out to be frail illusions. All that remains of those old scenes now, the white-hot centers of the cyber zeitgeist, is a muddle of memories and a jumble of numbers. Man, remember that whole Laci Green thing? Remember Hal whatever-his-name-was? Remember doge?

And yet … in this hypothetical future world … there are people who are still into that stuff. Sparring online, day and night. Acting like their feed is the center of the universe. Fighting for their fifteen seconds of viral fame. They’re always talking about some dustup they got involved in that no one else cares about. Somebody outraged them, or maybe they outraged someone else–it’s hard to keep track. They’re boring to talk to and unpleasant to be around. Viral drama ate them alive.

And yes, in this possible timeline of the decades to come–yes, the clickheads still talk about organizing hate-campaigns, ruining a journalist’s reputation, piling on some poor associate professor to “teach her a lesson” and “make a statement.” But people don’t pay as much attention to that stuff anymore, because it all started to seem … well, kind of sad. The hate campaigns backfired as often as they succeeded. The leaders ended up turning on each other, or fell out over obscure disagreements. The authorities stopped paying so much attention, because the public stopped paying attention, because ordinary people got jaded and tired. So someone online is mad about something, they think. Uh-huh. What else is new? Yeah, sure, media outlets still write articles reporting that “Twitter is angry” and that “social media erupted in rage yesterday,” but everyone knows that’s just another form of reality entertainment.

And more time goes by, and subcultures dissolve and reform, and the web outgrows its early reputation as a valiant countercultural frontier. We’re a couple of generations down the road, now, deep into a future in which constant connectivity is as familiar as mass literacy. The clickheads have become figures of contempt and pity, by this time. They insist they can quit the habit, but keep going back. They write essays confessing that their online behavior is actually a worrying symptom of mental illness–and keep going back. They accuse each other of being self-destructive and irresponsible–and keep going back.

Recovering clickheads give public talks about how online flame wars ruined their lives. Maybe they show up at middle schools to warn impressionable kids against going down the same dangerous path. Repentant trolls issue tearful testimonials. Children are subject to a steady barrage of PSAs and workshops about “posting responsibly” and “knowing when a friend has a problem with social media.” Social-media abuse is connected to a host of personal and medical problems, treated as a synonym for poor social functioning. Outrage-posters from all political factions are stereotyped as overgrown children, unreconstructed basement-dwellers, unprepossessing narcissists. Why can’t they just get it together?

We already see this world taking shape. I think we’re passing out of the “social-media addiction as edgy rebellion” phase and into the “moral panic over an epidemic” phase. Everyone now admits that clickheads are a problem, but people focus on framing the problem in a way that supports pet political causes. (“The alt-right gave us Donald Trump!” “No, SJWs and tumblr liberals gave us both Donald Trump and the alt-right!” “No, the alt-left gave us Donald Trump!” And while we’re at it, did the War on Drugs make the crack epidemic worse, or did the crack epidemic make the War on Drugs worse? Does the opiate epidemic cause social dysfunction, or does social dysfunction exacerbate the opiate epidemic?)

As the generations turn over, even this debate will get old. In everyday life, click-addiction itself will come to seem more important than whatever excuses people give for it. “I don’t care why you’re posting that stuff, I just don’t want to see you ruin your life with it.” Already, I think, we have a silent majority that’s fed up with all the online warriors, or rather, with the behavior itself. People say things like, “I love my sister, but I wish she could control herself online,” or, “I had to go cold-turkey on Twitter, it was ruining my productivity,” or, “I’m worried about my friend; he’s getting way too involved in one of those weird online communities,” or, “I wish my husband would put down his phone once in a while; he’s on there all the time, yelling at people online, and it makes everyone in his life miserable.”

So maybe we don’t need to worry about creating new norms around online speech. In some inchoate form, the norms are already here.


Where Every Ruthless Cutthroat is Above Average

Scott Alexander at SlateStarCodex recently put up a post asking what was so bad about meritocracy:

The intuition behind meritocracy is this: if your life depends on a difficult surgery, would you prefer the hospital hire a surgeon who aced medical school, or a surgeon who had to complete remedial training to barely scrape by with a C-? If you prefer the former, you’re a meritocrat with respect to surgeons. Generalize a little, and you have the argument for being a meritocrat everywhere else.

The Federal Reserve making good versus bad decisions can be the difference between an economic boom or a recession, and ten million workers getting raises or getting laid off. When you’ve got that much riding on a decision, you want the best decision-maker possible – that is, you want to choose the head of the Federal Reserve based on merit.

This has nothing to do with fairness, deserts, or anything else. If some rich parents pay for their unborn kid to have experimental gene therapy that makes him a superhumanly-brilliant economist, and it works, and through no credit of his own he becomes a superhumanly-brilliant economist – then I want that kid in charge of the Federal Reserve. And if you care about saving ten million people’s jobs, you do too.

As Scott sees it, the problem isn’t with meritocracy per se, but with a kind of sham credentialism that only poses as meritocracy. When we force people to study British novels for four years before we let them be doctors, or give our best jobs to dutiful toadies who pad their resumes with silly clubs and extracurriculars, or tolerate a system that hands out plum appointments to the guy who’s friends with the boss’s son, that’s not true meritocracy at all. In fact, all this waste and corruption is the main impediment to genuine merit-based advancement. So when people complain about this sort of ersatz meritocracy, they’re implicitly praising genuine meritocracy.

I’m skeptical.

Scott’s argument creeps awfully close to that old trick of political philosophy where you say something like, “Communism just means a fair distribution of resources–if you think resources should be distributed fairly, congratulations, you’re a communist.” Or, “Liberalism is about granting freedom to individuals–if you’re an individual and you like being free, congratulations, you’re a liberal.” Or, “Patriotism just refers to the natural human affection for one’s homeland–if you feel an attachment to your home, congratulations, you’re a patriot.”

In each case, the definition transfers the burden of argument onto a word too vague to support it. What do you mean by fair? What do you mean by freedom? What do you mean by homeland?

And so it is with meritocracy. The whole problem is that people disagree about what “merit” means.

Take Scott’s accomplished surgeon. Let’s tweak the example. Would you rather have your surgery done by a brilliant surgeon who’s also a violent white supremacist in his spare time, or by a mediocre surgeon who’s a wonderful person? The honest answer, for most people, I suspect, is: “I would like to have my surgery done by the brilliant surgeon, and I would like not to know that he’s a violent white supremacist in his spare time.” Or possibly, “I would like to have my surgery done by the brilliant surgeon, and then I would like for him to be fired because he’s a horrible person.”

What about Scott’s hypothetical head of the Fed? Suppose it comes out that he beats his wife and molests his children? Do we still let him be chairman, because he’s just so darn good at it? Should we throw him in jail, but ask him to keep on making economic decisions? Or do we quietly choose to ignore his treatment of his family, because ethics are a bitch and if you save ten million jobs, that will probably result in less domestic abuse in the long run, and mumble-garble-arble-utilitarian-ends-justify-the-greatest-good-for-the-something-or-other? Or do we convince ourselves that no child molester could possibly be a good economist, because to be a smart Fed chairman you also have to be a decent family man?

And surgeons and economists, those are the easy cases! What does it mean to identify the best Congressperson for the job? Is she the one who knows the most politics? The one who’s most loyal to her constituents? The one with the most party loyalty? The craftiest negotiator? The most charismatic campaigner? The one who does the most legislating, whatever exactly that means? The one who happens to champion policies that you think are good policies? The one who in some way has made the world a better place fifty years down the road? How can we even determine that?

What about the best cop? Most busts? Toughest warrior? Liked by the community?

What about jobs like day-care worker, where the big concern is weeding out horribly unqualified candidates–like the ones who might hurt children–not making finicky, fine-grained distinctions among candidates whose main qualifications are personality traits that are notoriously hard to test for, such as patience and decency and conscientiousness?

Best CEO? One who maximizes profits? Treats employees well? Adopts green policies? Spurs innovation?

Best teacher? If you know the answer to that one, please tell me, because I’m not sure I’ve ever met two people who hold the same opinions on the subject.

It’s awfully easy to throw out words like “merit” and “qualifications” and assume that people agree about what they mean. A lot of the commenters on Scott’s post are computer engineers, and they seem to collectively assume that computer programming is a field in which metrics of merit are clear and well-established. “Can you code? Great, take this test. Ace it, and you’re hired.”

Maybe that’s true. I don’t doubt that merit is relatively well-defined in that field. But I still wonder. If we asked programmers what makes a good programmer, and then asked the wider public what they want from programmers, would the definitions agree? Should the insider definition take priority? Should we rely on professionals to establish all their own standards? Why? Don’t we want to get some public benefit out of our programmers? Isn’t that, in a way, the point of things like markets and democracies, to give broad populations of nonspecialists ultimate control over standards of merit? If someone’s a brilliant programmer, and all the other programmers agree that he’s one of the best around, but he spends all his time writing programs that let a couple of day traders screw over small investors, is it fair for the public to say, “We don’t care how talented that dude is; he sucks”?

For that matter, suppose we took a bunch of programmers who absolutely agreed that meritocracy is great and that the best programmers ought to get the best jobs and so on, and then asked them to spell out in detail exactly what they meant by merit and what qualifications and qualities they thought programmers ought to have? Would they all agree? My guess is that they’d agree on a few core traits and principles, but argue incessantly over a broader set of secondary characteristics. “Programmers ought to be really smart and have a solid grasp of basic math, abstraction, and logic.” “Agreed, and they also need to be extremely independent.” “No, it’s better for them to work well in teams.” “But they definitely need to know calculus.” “No, no, better for them to be good with language; that’s too rare a skill in comp-sci fields.” “What’s really important is that they know [whatever the latest trendy programming language is].” “Please, that’s just a passing trend. A programmer with deep knowledge of a few old-school languages can pick up a new one, easily.” “What’s really important is for a programmer to be neat and detail-oriented, follow best practices, comment frequently …” “No, I’ve worked with people like that; they’re all hidebound college grads who want to be micromanaged and earn gold stars … Give me a big-picture thinker and I’ll teach him my own best practices …”

And so on. I would bet it’s the same for surgeons, or economists, or any group of specialists who can be broken down into groups of subspecialists with slightly different values, and again into smaller groups with still other values, and finally into throngs of idiosyncratic souls, each of whom operates according to complicated sets of private principles. Once you start shaking that word, “merit,” all kinds of complicated assumptions fall out.

And that’s where the debates about meritocracy come from. It’s all well and good to say, “Let’s put the best people in charge,” but what the heck does that word “best” really mean? Even if we can decide what it means, how can we be sure we’re using the right measurements? The right systems for sorting and selecting candidates? How can we ensure we’re punishing bad traits as well as promoting good traits?

It isn’t just that any real-world meritocracy is an uncontrolled social experiment in the limits of applied measurement theory. It’s that any truly meritocratic system comes with an intrinsic flaw.

Meritocracies are inherently competitive: the whole idea is to develop a system for measuring and ranking people, then give more social power to those with higher ranks. No matter how you tweak, skew, shape, slice, or dice your particular system–no matter what qualities you choose to measure–this will always be true.

Consider a hypothetical meritocracy, one very much like our own. Let’s assume that each job in our meritocracy requires different innate talents. To be a top-ranked surgeon, it helps to have:

- High IQ
- Dexterity
- Ability to focus
- Taste for competition
- Etc.

To succeed as a politician, one needs:

- Charisma
- Taste for competition
- Etc.

To succeed as an athlete, one needs:

- Strength, agility, endurance
- Taste for competition
- Etc.

To succeed as a writer, one needs:

- Introversion
- Verbal fluency
- Taste for competition
- Etc.

You get the idea. We can try to minimize the extent to which a taste for competition is rewarded, but I don’t see how we’ll ever eliminate it as an advantageous attribute in a system that is by design highly competitive. No matter what sorting system you put in place, folks still have to show up and take tests, put in practice, pump their percentages, go through the grind. Get a bunch of people who are closely matched in other abilities, and the drive to compete will become a decisive factor. That’s to say nothing of the likelihood that any human sorting system can be at least partially rigged or gamed, and that in a pool of uniformly superlative candidates, those adept at rigging and gaming systems will have a natural advantage.

Which means that over time, a meritocracy starts to churn up an elite class of people whose most notable shared trait is a penchant for coolly calculating self-serving competitiveness. Over in one corner you have a bunch of mental lightweights who are nevertheless great at singing and acting and looking good in underwear. In another corner you have some people who happen to be great at running or throwing small objects. You have some not-too-bright folks who are nevertheless super-talented at charming and manipulating people. You have a few eccentric psychopaths with extraordinary technical gifts. You have surgeons who go home and abuse their spouses. You have a whole lot of people who are good at “leading,” whatever the hell that means.

You have, in short, a class of people with a whole host of gifts and talents and eerie natural endowments, who nevertheless are likely in the aggregate to be characterized by one salient trait: a burning desire to succeed at the expense of others.

How are all these people going to get along? Can they get along? What do they have in common? What defines them as a class?

The superhumanly cunning CEO takes a lunch date with the superhumanly risk-tolerant hedge-fund manager who brings along his superhumanly intelligent top quant. At the club they run into a superhumanly charismatic state representative who introduces them to her superhumanly conscientious aide and the superhumanly talented actress who donated to her campaign. This is a lucky break for the CEO, whose company is negotiating a licensing agreement with the superhumanly fleet-footed athlete whom the actress met through her superhumanly loquacious publicist–and for the hedge-fund manager, who wants to make connections with the superhumanly creative journalist who’s been putting together a profile on the representative. At the end of this tiring day, the CEO goes home to the superhumanly beautiful model who recently became his third wife, the representative goes home to the superhumanly dogged academic who’s currently her husband, the others go home to their superhuman surgeons and artists and machers and accountants, and they all talk about the superhuman accomplishments they’re planning for the next day.

How does it all work? How can these rare, hypercompetitive, and utterly dissimilar superhumans function as a coherent elite? How can they make the connections they need to make, maintain the relationships they need to maintain, without emphasizing the few experiences and traits they have in common? What gives them a class identity if not a shared fascination with narcissistic self-promotion?

And this is the problem I think we have now, or the problem we’re developing. It’s not that meritocracy doesn’t work. It works too darn well. The longer and better it works, the more likely you are to develop a ruling class of self-obsessed individualistic strivers. And what’s scariest is that there’s no good solution. To solve this problem, you have to actively penalize personal ambition. And that cure sounds worse than the disease.


HBO’s Confederate and the False Promise of Prestige TV

In my last post I wondered why people were so leery of HBO’s upcoming show about the Confederacy–and specifically why they expected it to be bad and vulgar in particular ways.

Here’s my theory.

People love to talk about how peak TV is serious art and great literature and the high culture of our time and so on. But everyone secretly knows (or fears) that these shows are actually glitzy melodramas that sustain interest by dishing out contrived plot twists (who will die? who will bone?) until they gradually collapse into vulgar incoherence.

The shows always start out by raising serious questions. But they inevitably succumb to the Iron Law of Serial Television:

Keep the Surprises Coming.

Has a character earned our sympathy? Surprise! She just did something terrible. Has a character earned our hatred? Surprise! He’s actually sympathetic. Have two characters sworn to love each other forever? Surprise! They just broke up. But wait–surprise!–they’re back together. But no–surprise!–one of them just died in a horrible and arbitrary fashion. But wait–surprise!–he’s not dead after all! Are the heroes almost at their goal? Surprise! It was all just a trick; the goal is farther away than ever. Do you think you know where the story is going? Surprise! It just went in a completely different direction, even if the original but predictable direction actually made perfect sense.

The Iron Law of Serial Television means that a show about evil, scary white supremacists will eventually morph into a show about charming, sympathetic white supremacists who actually don’t seem so bad after all. A show about Black heroes triumphing over adversity will eventually turn into a show about how those Black heroes aren’t actually so heroic and their triumph was just an illusion and their adversity wasn’t quite what you thought it was. A show about serious political questions will eventually get bogged down in petty interpersonal dramas. A show that promises great revelations will eventually deliver only overwrought conspiracies. A show with a cast of great characters will eventually sacrifice several of those characters to gin up media buzz. A show that creates a coherent and compelling world will get picked up for more seasons, run out of gas, and start burning its internal structure for fuel.

The result is that popular shows go through five phases:

I: The Premise. In this phase, the world, concept, and characters are introduced. Viewers aren’t sure they’ll keep watching. There’s a lot to keep track of. Things seem kind of slow. Audience and critic comments focus mostly on technical aspects. Is this a cool idea for a show? How high are the production values? How is the acting? The writing? How does it look?

II: Development. Things pick up. The world starts to take shape. Characters become familiar. Suspenseful situations arise. Commentary focuses on the intriguing questions the show has raised: its treatment of class, race, gender, politics. Big issues have been broached. How will it all end? Viewership increases as recaps and debates draw in curious people who had been skeptical at first.

III: The Turning Point. The show has taken shape. Interesting relationships develop. The actors have come to understand their characters, tidied up their techniques, fallen into a natural routine. The fictional world has taken on a definite structure. The ideas and issues driving the show have now risen to the surface; viewers are convinced the writers have something valuable to teach, something meaningful to impart. And suddenly it happens: a series of surprising twists! Relationships that had seemed stable suddenly grow more complex. Characters aren’t what you thought. The fictional world is even deeper and richer than you formerly believed. The show begins to feel like much more than a work of fiction. In its surprising intricacy, its almost overwhelming complexity, it comes to feel–dare we say it?–like real life.

IV: The Decline. After stage III, commentary is ecstatic. Viewers are addicted, critics are rapturous. Pop media outlets serve up endless critical commentary. The show has become must-watch TV. And now, slowly, subtly, the decay sets in. The twists keep coming … and coming … and coming. Important characters die or disappear. Other characters behave in crazy or inconsistent ways, or slog through improbable crises and traumas that would break the soul of any ordinary mortal, or deliver longwinded speeches that sound an awful lot like a desperate writer’s attempt to tidy up messy plot developments. Promised revelations turn out to be big disappointments. Tonal shifts and stylistic errancies mar potentially dramatic moments. Branching plotlines lead nowhere, or turn back on themselves, or form confusing and overcomplicated tangles. The fictional world feels less and less like a real, organic, living universe, and more and more like a playpen for frantic writers who are running out of ideas. Viewers begin to boast that they no longer watch the show. Critical commentary glides away from serious discussion and settles for gossipy plot summaries. Fans debate when exactly the show began to decline and whether it will ever revive. Casual audiences–the people who don’t rewatch episodes or read recaps–complain that they no longer understand the show’s structure or mythology. Even die-hard fans begin to wonder: do the writers have a plan? Do they know where they’re going with all this? Do they genuinely have something meaningful to say?

V: The Windup. The decline lasts for weeks, months, years. True believers try to reassure their friends: yes, the show was bad for a while, but now it got good again, honest. Personnel changes spark brief revivals of interest. Critics write premature postmortems, discussing the show as if it’s already over. Viewers drift away, drift back, drift away again, claiming that they once took the show seriously but now see it mostly as a guilty pleasure, or stick with it out of loyalty, or simply need to find out what happens. People blame new showrunners or casting changes for ruining a once great work. And at last, mercifully, the end arrives. The finale is announced. The final season airs. Interest briefly revives, driven by a consuming question: can the writers pull it all together? Can they save their faltering show? Do they have tricks in store that will redeem the sensationalism, reward deserving characters, untangle the Gordian plotlines, clear up the bafflingly opaque backstory? The longer the show’s decline, the bigger the pressure. Can this giant, creaking train-wreck possibly drag itself into the station?

And, more often than not, the answer is: sort of. The finale airs, some people find it satisfying, some people don’t, some people think it’s horribly disappointing, and some people are more interested in saying goodbye to beloved characters than in understanding what the heck the whole crazy story was about. But everybody agrees on one thing: overall, the show was a disappointment.

Ah, but have you heard about the new show that just aired? The one with the great premise, the amazing cast, the high production values? Not only does it look fantastic, it’s tackling the big issues of our time. Critics can’t wait to see what the writers are planning. And if it’s this good at the start, just imagine what’s coming. After all, this is prestige TV. It’s the literature of our time. Sure, everyone gave up on that last absurd fiasco, but this one–this show is really going to deliver the goods …

Everyone knows this is what TV is like, even quality TV. We all know we’re being strung along by deceptive promises and sensational twists. We go on watching anyway, because being strung along like this is addictive and fun, especially when we get together to gossip about it with friends. But there’s a tension between riding this kind of media-induced hormonal rollercoaster–spending twenty-five hours a week watching flashy soap operas, with extra helpings of wordplay, eye-candy, nudity, and gore–and the grandiose cultural claims used to justify the habit.

“It’s true, I don’t really get outdoors much anymore … or to museums, or the gym, or the movies, or concerts … yeah, and I don’t read as much as I used to … or pick up my guitar, or paint, or write … I would like to see my friends more … or study a second language … or work on that project I started … or even just take some time to sit and think. But I mean, how can I, when there’s so much good TV?

“And it’s genuinely good, right? It’s not an idle pastime, like TV used to be. It’s not light entertainment these days–all that brain-rotting, boob-tube nonsense our parents used to complain about. TV today is intellectually demanding. It’s real culture. It’s serious business. And the commentary! I mean, you at least have to keep up with all the smart commentary. And to do that, you have to watch the shows. That’s what’s relevant, now. That’s what it takes to be informed, in-the-know, up-to-date, engaged. To be part of the conversation. That’s what marks you as a serious person. Watching serious TV shows.”

Not everyone talks that way. But some people do. Serious people do. Enough people say these things that it’s always tempting to whisper a few of the arguments to yourself, after you’ve been sitting on the couch for six hours waiting to see who’ll get eaten next.

Until a show like Confederate comes along. Until the real serious issues land with a big wet plop on the living room floor. Until the creators try to fall back on the old high-culture apologias. “Of course our show is controversial. Of course it’s risqué. Of course it’s provocative and challenging and difficult. We’re making art, people. We’re tackling important issues in a deep and relevant way. This is vital, even necessary. We’re making serious television.”

And suddenly you start to see those nervous glances. Because, sure, we’ve all been talking about serious art, here. We’ve been saying all those nice inspiring things. But we know, deep down where it counts …

If you’re making serious art, you go in with serious convictions and a serious devotion to craft and a serious respect for your audience.

If you’re dealing with serious art, the audience goes in with serious levels of patience and attention and critical distance.

If you’re talking about serious art, the critics go in with serious arguments that also demand serious levels of time and thought, and it’s all, you know, very serious and demanding and what-not, and often rather boring, and now that we’re on the subject, it’s actually pretty darn exhausting to get serious about anything, and not at all like kicking back in a half-stupor with a bag of Cheetos after putting the kids to bed and letting Netflix serve up content until you pass out with orange fingerprints on your underpants, which is what we all actually feel like doing this Tuesday night.

But now suddenly here these people are with their serious TV show talking about serious social issues and serious this and responsible that, and no one quite wants to come out and say it, but everyone’s thinking the same things …

Uh, wait a minute. How many listicles am I going to end up reading about which evil white supremacists on this show are the most attractive and which of the slaves seems like good boyfriend material and whether the guy who hunts runaway slaves for a living will ship with the sexy journalist?

Wait a minute. Am I going to be stuck watching seventy hours of ridiculous cliffhangers and plot twists for the delayed cathartic satisfaction of seeing a few psychopathic slaveholder assholes get their comeuppance, only to be sucker-punched at the last second by some bullshit twist revealing that the slaveholder assholes weren’t so bad after all?

Wait a minute. Am I going to find myself taking online quizzes about which hero of the Confederacy I most resemble, or reading Uncle Tom slash fiction, or seeing creepy whipping-post fan art pop up in my Twitter feed?

Wait a minute. Is this going to be one of those things where we’re all gossiping for six years on social media about which character got raped or mutilated or eaten last week and whether this was problematic or actually kind of okay?

Wait a minute. Are certain psychopathic evil white supremacists on this show going to unexpectedly transform into brave beautiful white saviors, because they happen to be played by super-hot actors and that’s what the fans wanted?

Wait a minute. Is this show going to offer us a roster of inspiring, convincing Black heroes and heroines, then subject them to an endless gantlet of contrived traumas and pointless humiliations, or have them turn evil for no particular reason, or have them do absurd things that real human beings would never actually do, all for the sake of sustaining viewer interest?

Wait a minute. Is this whole show about serious issues going to eventually become a campy debacle rife with winking meta-gags that poke fun at its own tired tropes and hackneyed peripeties? Is this going to be the kind of show that curries favor with the smart set by making fun of its own poor taste?

Wait a minute. Do the people making this thing even know what they’re doing? Or are they just going to tease us with a bunch of provocative scenarios that eventually degenerate into an embarrassing, self-indulgent, sloppy yarnball of reversals and complications and non sequiturs, because, come to think of it, this describes pretty much every single TV show that has ever existed, and now that you mention it …

Wait a minute, wait a minute, wait a minute! Could it be that serious TV isn’t actually so serious at all? That in fact, in assuming the prerogatives of serious art, TV has actually gotten increasingly tawdry and sensational, with an ever-growing reliance on the visceral thrills of disgust, lust, rage, and dread? Could it be that a lot of these shows depend heavily on padding out their content with sexual titillations and protracted revenge fantasies? Should we all have been doing something different with our weekday nights?

Have I wasted my life?

But no, wait, it’s okay. Remember The Wire? The Wire was good, right? Except for, you know, that final season, in which it got ridiculous and contrived and everything went to pieces. But wait. Remember Breaking Bad? Breaking Bad never went to pieces. Except for, well, the uncomfortable fact that Breaking Bad is the most protracted revenge fantasy ever conceived. Ah, but remember Tina Fey and Amy Poehler? They’ve made some good shows. Except that, come to think of it, those women are perfectly content to make snappy, light entertainment in the old prime-time mode, and don’t bother much with claims to seriousness or highmindedness …

Well, no worries. Sure, peak TV isn’t all it’s cracked up to be. Maybe we have been deluding ourselves about its artistic merits. But there’s still hope for serious, demanding art that also happens to be shamelessly diverting and compulsively addictive.

After all, we’ll always have video games.


What’s the Big Deal with HBO’s Confederate?

I’m not sure I understand all the furor and fear over Confederate.

This is the upcoming show from HBO that imagines what would have happened if the Confederacy had won the Civil War.

With respect to content, we don’t have much to go on. We’ve been told the basic premise of the show–a divided America where slavery persists in the South but not the North–but that’s about it. So all the complaints so far are based on speculation.

Detractors seem to imagine that the show will be either:

  A) A kind of modernized Lost Cause romance that mythologizes and glorifies the antebellum South. You know the genre: stately plantation houses, carriage drives shaded by pecan trees, southern belles drawling coquettish come-ons to dashing Virginia dandies with riding crops in their fists, cool mint juleps on the veranda while the sweet evening songs of the darkies come breezing over the cotton fields … yikes. Of course, the writers would have to update the imagery a bit. Picture Gone with the Wind, but with iPhones. The poignant tale of a genteel young cotton-heiress as she sees the only world she’s ever known crumble around her ears. Trysts in the moonlight between a plucky, pixie-haired Berkeley abolitionist and the strong but simple slave whose life she saved. Ghastly.
  B) Game of Thrones, but with Confederate flags. Hour upon hour of abused Black bodies. Black skin whipped and flayed and punctured. Black children beaten and bloody. Black adults stripped naked, exposed, humiliated. Blacks reduced by every brutality imaginable to a state of animal abjection. Like the middle scenes of Django Unchained, but spun out through tawdry plot twists over ninety hours of high-production melodrama. Black suffering as a gussied-up grindhouse spectacle. Blaxploitation via peak TV.

Maybe that’s how it’ll go. I don’t watch Game of Thrones, or The Handmaid’s Tale, or The Man in the High Castle, or any of the other prestige shows that people are using as points of reference.

But I’d expect Confederate to look more like an allegory of the Civil Rights movement, with higher stakes, a lot of dramatic compression, and (perhaps) a more triumphant denouement. That seems to be the obvious arc for a show like this: you restage the Civil War as a slave uprising, with Black people on both sides of the Mason-Dixon line collaborating to overthrow their white oppressors. A show about (mostly) Black heroes from all walks of life who combine their talents to fight injustice. A Black intellectual who shapes the philosophy of the revolution. A slave with a talent for rallying mass movements. A Black politician in the northern states who struggles to convince his colleagues to liberate the slaves over the border and reunite the country. A vigilante in the southern states who champions guerrilla tactics and violent resistance.

That’s consistent with the teasers we’ve been given so far: “The story follows a broad swath of characters on both sides of the Mason-Dixon Demilitarized Zone — freedom fighters, slave hunters, politicians, abolitionists, journalists, the executives of a slave-holding conglomerate and the families of people in their thrall.”

There are a lot of things you could do with that approach that don’t involve slaver boots stomping on brown faces. You could tell a kind of pocket history of Black political thought, with fictionalized or alternate-history versions of Du Bois and Parks and Malcolm and Wright, the Panthers, the Harlem Renaissance, etc. You could draw parallels to Ferguson and BLM. You could have an Obama-like figure who enters the story as a sort of modern-day Lincoln, a Black president passing a present-day version of the Emancipation Proclamation. If the show went on long enough, you could imagine what came after liberation, reboot the history of the Jim Crow era, and imagine a postwar South in which former slaves take control. You could even start digging into the debates over separatism and Black Nationalism–should the triumphant ex-slaves carve their own state out of the defeated South and set up a kind of Afro-American Israel? What happens if this new African-American nation turns out to be a resounding success, and white people want to move in and enjoy the benefits? What about actual Africans who want to migrate to this North American Liberia? How do you handle the resulting culture clash?

There are criticisms to be made of this kind of approach too–that it’s just trite escapism diverting attention from genuine problems, that it oversimplifies a complex history, that it borrows the work of real activists to provide fodder for consumer entertainment. And I can easily envision a show that sets out to explore these ideas but somehow goes horribly wrong. (In fact, I’d say that’s the likeliest scenario.)

But there are a lot of ways to tell this kind of story that downplay white nostalgia and Black vulnerability and put the focus instead on Black thought and Black heroism–and that would actually be quite dispiriting to the kind of guy who pastes Confederate flag decals on his pickup truck. It seems to me this is the obvious route for the show creators to take, given the political mood of the country.

So why are people so sure the show will glorify racial oppression?

I have further thoughts, but I’ll save them for another post.


Looking Back at the Yale Mess

Two years ago I wrote about the widely reported Yale dustup over Halloween costumes. The incident has become something of a cultural touchstone, and while I’m pretty sure no one reads this blog, you never know who’s going to go poking around in your social media history. So I want to say I no longer hold the views aired in that post.

At the time, I didn’t think the incident fit the profile of a free speech crisis, mostly because the professor involved held an unusual position on campus:

“I didn’t go to Yale. I can’t claim to know much about the intricacies of social relations there. But as I gather, a college master like Christakis, in his role as a master, is responsible mostly for watching over students’ social lives, planning events, and building a healthy community in the residence halls. Not for stimulating incisive classroom discussions (though presumably he would also do that, in his role as a professor). So the angry student has a point. As master and associate master at Silliman, Christakis and his wife do have a responsibility to create a safe home there, not to stir up controversy. I think it’s at least credible that they got confused about the different roles they’re supposed to play on campus.”

Well, okay. In essence, my argument was this. Professors play multiple roles on campus. They research, they teach, they organize events. Each role comes with special responsibilities. If a professor is, say, a faculty leader of the Safe Space Club for Fragile Snowflakes, or whatever, and she keeps pushing club members into charged debates about sensitive issues, I think it’s fair to say she should no longer lead that club. By the same token, I think it’s fair to argue that a professor who’s serving as a kind of glorified chaperone to a residence hall, and who is tasked with building a warm community in that residence hall, and who somehow fails to do so, shouldn’t keep chaperoning the residence hall. That doesn’t mean she should lose her job as a lecturer, or be hounded off campus, or be publicly shamed.

It also seemed to me at the time that the severity of the attacks on the Christakises had been exaggerated. As I wrote:

“And now we come to the real reason the story has taken off. A group of students met with Nicholas Christakis, the Master of Silliman college, to object to Erika Christakis’s email. The meeting became a shouting match. One student lost her cool. A representative of FIRE was on hand to record the moment. And there you have it: a perfect recipe for viral content.

No question, the resulting video is painful to watch. The student lost her temper in a big way. Some people sympathetic to her position have tried to justify her screaming fit as an act of passionate truth-telling, but that won’t wash; this kind of public meltdown is always more embarrassing than inspiring.

But let’s keep things in perspective. We see one student blowing her stack in that video. One. We’re not talking about an angry mob. Watch the video, and you can see other students dropping their eyes and shuffling furtively away. They want no part of a chaotic shouting match.”

Well, that opinion sure didn’t age well.

In retrospect, it’s clear I read the context wrong. We really do have a free-speech crisis on campus–and everywhere else, for that matter. And the essence of the crisis is that so few people–on the left or right–even bother to think through these kinds of distinctions. Forget about different roles with different responsibilities. The current standard seems to be: sure, it’s fine to air controversial opinions–unless someone, somewhere, gets upset about it, in which case God help you.

To begin with the obvious: at the time I wrote my original post, Erika Christakis still had her job. She later left, and it does indeed look as if she was bullied out of her position. (Nicholas Christakis stepped down as Master of Silliman College, but stayed on as a Yale professor, a remedy closer to what I had in mind.)

It’s also clear that the reaction to Christakis wasn’t just whipped up by a few irate students and free-speech advocates. Nor were protests against the Christakises limited to calls for them to step down as residential college Co-Masters; by their own report, they were subject to death threats and intimidation. And this is far from an isolated incident. Later scandals have made it clear that partisans in these battles aren’t concerned with quibbling discussions about exactly when and where and to what extent provocative discussions are appropriate. Any potentially offensive remark, in any context, is treated as grounds for firing, threats, abuse, and intimidation.

We had the Murray protests, in which protesters tried to shut down a speaker invited by a student group–a clear imposition on the intellectual freedom of their fellow students.

We had the Milo protests, which made it clear that left-leaning students condone and even promote violence against offensive speakers.

We saw resistance to Laura Kipnis’s appearance at Wellesley, which showed that these no-platforming tactics aren’t limited to students and that some faculty approve of and encourage this kind of behavior.

We had the Tuvel affair, in which scholars–philosophers, of all people–proved incapable of leveling an academic complaint (that Tuvel had done too little research on her chosen topic) without elevating it into a vindictive personal attack.

We had the Griffiths and Weinstein cases, in which professors were persecuted for objecting to administrative measures.

And lately we’ve had a spate of suspicious firings and punitive measures provoked by snappish social media posts, all targeting women and people of color: Katherine Dettwyler, Johnny Eric Williams, June Chu, and Lisa Durden.

That’s to say nothing of the campaigns of intimidation waged against Keeanga-Yamahtta Taylor, Sarah Bond, and Tommy Curry, who (as of this writing, and so far as I know) have kept their teaching posts.

The Durden case was particularly shocking to me: a professor given the ax for airing views that are, it seems to me, all but de rigueur on liberal-leaning campuses. I’m extremely leery of the idea that racial exclusion is okay when practiced by certain groups, but this is very much the kind of idea that should be debated and not suppressed. (Personally, I would have chosen to debate it with someone other than Tucker Carlson, but, well. And as with all these scandals, I have to wonder if something else is going on behind the scenes–drama in the faculty lounge, turf battles in the administration. Maybe Durden’s real offense, in the eyes of her colleagues, was just to have gone on Fox News in the first place. All I can say for now is that her treatment, as reported, looks ham-handed and vindictive.)

I’ve seen people trying to use Durden’s firing as a club to beat on free speech advocates, the idea being that this somehow exposes their fundamental hypocrisy. Really? I can hardly think of an incident that lends more support to their position. This is why we have to defend free speech as a general principle and not just a perk enjoyed by inoffensive people. This is why we put up with the Milos and Spencers and Coulters–so that when someone like Durden comes along, we can offer her the armor of a robust and universal moral standard. Free speech champions have been warning for years that abandoning this principle would backfire on leftists, liberals, and minorities. Now, half a year into a Republican administration, we’re already seeing punitive measures against Black intellectuals who hurt white feelings. Give credit where it’s due: the free speech champions who spoke out about Durden–Friedersdorf, Haidt, deBoer–have spent years building the credibility it takes to say, with some authority, “This is never okay.”

But the main reason to think that censorship is on the rise is that, well, so many people say they want more censorship. Some advocate new federal laws limiting speech. A much bigger number champion institutional measures, like ritual firings of upstart employees. And everyone but everyone seems to think it’s a great idea to use mob tactics to bully undesirable people into silence.

I’m not sure how we get out of this, since free speech itself has become the main threat to free speech. The argument I hear from all quarters is, “Why shouldn’t I be able to harass my political enemies until they shut up? After all, they’re awful people and their views are toxic in any form. My people are the real victims. Hunting down those who say things we don’t like and scaring them into cringing silence, that’s just activism for a good cause.”

Sure. Because that’s what all the good guys say.

Whenever I air these concerns–for that matter, when I see or hear anyone, anywhere, make similar points–someone inevitably pipes up to say, “But this isn’t actually a free speech issue. After all, the government isn’t explicitly censoring anyone. In everyday life, words have consequences. If citizens have the right to air controversial views, they also have the right to attack and punish controversial views. And organizations should be allowed to fire employees who don’t reflect their values. When people suffer for saying horrible things, that’s their own fault.”

Sometimes this argument takes a form that’s almost comically extreme, as when people argue that Middlebury-style protesters or Twitter harassers are just exercising a healthy right to free speech, as if death threats and intimidation are just part of the give and take of a healthy agora. “Well, sure, the villagers formed a torchlit mob, brandished their pitchforks, and chanted in unison, ‘Outsiders must die!’ But did they actually impale anyone with the pitchforks? Isn’t pitchfork-brandishing itself a form of free expression? Did the ghost of Stalin rise from the grave and guide Trump’s hand as it penned an executive order banning the use of the alphabet? Ah, didn’t think so! What’s the problem?”

Part of the problem is that a government’s practice of censorship follows from a public’s tolerance for censorship. I mean, who staffs the government? Citizens. Who votes the government into power? Citizens. Who chooses whether or not to protest government censorship when it occurs? Citizens. If citizens are collectively saying, “Free speech is for suckers,” what does that forebode?

But even if we’re only looking at private actors, I can’t see a way to square this kind of cartoon libertarianism with what’s actually happening in the street. The ugly truth is that today, freedom of expression is imperiled mostly by ordinary people. The roving eye of the social-media panopticon alights on offender after offender, and with a thousand tongues the vengeful spirit of the populace cries, “Condemn, condemn.” When slurs and intimidation fail, citizen mobs resort to institutional solutions: firings, hearings, vaguely defined investigations and disciplinary measures, threats to careers and sustenance and security. When institutional pressure doesn’t work, they escalate to outright threats. Can anyone speak freely in these circumstances? What does free expression mean in a society where people are routinely bullied into abjection by ephemeral crowds of fellow citizens?


Living in Trumpestuous Times

Things are heating up. From the NYT:

President Trump lashed out at the nation’s intelligence agencies again on Wednesday, saying that his former national security adviser, Michael T. Flynn, was brought down by illegal leaks to the news media

All the smart people said something like this was coming, a big showdown between Trump and the establishment. It sure happened fast.

In a war between Trump and the dark masters of our growing surveillance state, it’s hard to pick sides. Do we want to fight a proxy war with Russia in Syria and Ukraine, or a war with Iran? Do we want a government defined by underhanded leaks, or a government defined by fear-mongering and cronyism? I think Trump’s team is so awful that I guess I’d have to back the meddlesome puppeteers of the intelligence community over his ghoulish gang. But it’s a tough call.

As to the probable winner? Most people I know think Trump is doomed to be contained and expelled by the bureaucratic auto-immune systems of our government. The Republican leadership doesn’t like him. Judges have already clashed with him. The intelligence folks are sniping at his people. It looks like the noose is tightening.

I’m not so sure. We’re constantly reminded that the American president has a lot of power–at least notionally–even if he happens to be an ass. And though I certainly think Trump is an ass, he’s an ass with some rather remarkable talents. Trump has a big faction of the party base on his side, and that faction is unusually active and passionate. They’re also, in certain ways, unusually threatening. They have a representative cohort in Congress, and those representatives almost shut down the government, even with resistance from the House leadership, even without the president on their side.

The intelligence folks are using a Trumpian tactic, playing the media as a means of undermining Trump’s White House. But Trump himself is pretty good at playing the media. It’s also not entirely clear to me that he’s at a legal disadvantage, though of course I’m not an expert on such matters. Are malicious leaks to the media entirely above-board?

What’s to prevent Trump from whipping up the base, getting them to put pressure on their Congressional representatives? And if Congress acts, what’s to stop them from focusing their investigations on the leaks, in addition to–or instead of–investigating Trump’s murky ties to Russia?

I’m not a wonk; I don’t know how these things tend to play out. All I can do is keep a diary of my impressions. Apart from the technical issues of law, political convention, and interdepartmental conflict, a few things seem true:

  • Trump has the bully pulpit, and he’s good at using it.
  • Trump’s presidency is just the tip of a big populist uprising, and those rightwing populists don’t like the offices and agents of the deep state, aka the Washington establishment, including the intelligence community.
  • Whenever establishment Republicans have been pressured by their populist base, the establishment Republicans have caved.
  • This conflict is likely to lead to a long period of scandal, uncertainty, confusion, and disorder, and in times of scandal, uncertainty, confusion, and disorder, people turn to charismatic leaders for moral guidance. Unless some other charismatic leader emerges, that need for security will focus on Trump.

All in all, I think Trump stands a good chance of benefiting from this debacle in the long run, even though he has little experience in government, even though he’s a clown, even though he seems to be–le mot du jour–incompetent.

I suppose I have much less faith in our system–and in the judgment of my fellow Americans–than my peers do.
