bill & ted’s unconscious competence

There’s a difference between having a plan and changing it, and never having one at all.

6th Uncle

I was twenty-one years old when my uncle said that to me in Minnesota, and I’m still thinking about it now, more than three decades later. When he laid these supposed pearls of wisdom on me, I’d been driving aimlessly around the country right after graduating from college. Understandably, my father must have been concerned about whether I knew what I was doing, so I knew, even before the visit, that I was in for a whole lot of something from my uncle, who happened to be traveling through Minneapolis on business while I was there to visit a friend and pay homage to Dylan.

I enjoyed that wandering burst of my youth, but the only thing that I’ve been turning over in my head ever since is what the heck my uncle was really trying to say. For the purposes of this brief post, I’m going to skip three decades of contemplation, and just write down what I hope it means:

Early in my career, I heard about the four levels of competence – listed here from worst (1) to best (4):

  1. Unconscious Incompetence
  2. Conscious Incompetence
  3. Conscious Competence
  4. Unconscious Competence

I’m not going to describe these levels here; there’s plenty of material elsewhere that explains them better than I could. When I first heard about those levels, and for a long time afterwards, I simply could not believe in that fourth level. I thought it was just something that old people pretended existed, because they couldn’t remember how things worked. How is it possible to be unconsciously competent?

Now, however, I simply know that this level exists, because I understand the simplicity of the insight: if one has consciously ingrained competent practices and corresponding ethical behavior into one’s habits, the result will be as competent as both those practices and adherence to those ethics. You’ll be pleased with your competence, and no one else’s opinion really matters as much. That’s you plural: your teammates all need to be on the same page regarding your practices and ethics too, or the result will eventually become extremely unpleasant unless you just happen to be lucky enough to never need the awesome power that comes from Unconscious Competence.

I mean, there’s probably a better way to say all that, but I’m trying to be precise about it, rather than saying it more briefly. It took me too long to understand that this is what is meant by “Unconscious Competence,” and it would take too long for me to try to say this all more clearly.

But … I think we could come at this from another angle …

This kind of navel-gazing was invented, for the Western world, by our old friends So-crates and Plato.

Socrates is perhaps the most famous name in Western philosophy, and famously never bothered to set pen to paper when it came to his philosophy – he wasn’t illiterate; he simply believed that deep human meaning could not be transcribed. The only way to transmit any truly valuable human meaning was directly from one human being to another, without anything in between to mix the message, without any mediation. And that includes: without any mass media, not even our first mass media, writing.

Plato, on the other hand, was a helluva writer and a smart guy with his own thoughts to add to those of his most famous colleague. And there you have it: two of the biggest names in Western philosophy, fundamentally divided by an extremely important and current philosophical question about whether human meaning can be conveyed through mass media without losing everything important about being human.

I never really had a dog in that fight, but these days I’m leaning towards So-crates, insofar as how I’d ideally live my life. Sure I’m writing on this here personal mass media blog, but I’ve thought for years and years that writing’s not for me, other than as a tool to think. Now we can all see that truthful writing has lost so much of its power in today’s mass media, and Socrates had a great point about the importance of communicating truth from human to human.

Because of the internet and all it hath wrought? Well, yes – but don’t get me wrong, I still think technology can turn around its recent trend, and begin to work for humans again. I know it’s a good thing that Plato decided to write.

But in my personal musings, I’m with So-crates just because Unconscious Competence is something I’ve observed from time to time in others, if not often enough in myself. (I mean, sure I’d like to see it more in myself and others, but that seems unreasonable given that there are, after all, four levels.) And when I see it, when I see someone succeed just because of consciously designed practices and corresponding ethical behavior that become habits – it’s really funny to watch what happens next: Those people get asked, “How did you become such a success?”

And really, the person just can’t give an answer that seems to make sense to a lot of people, because the truth is that a whole lot of what they did was just Unconscious Competence, and there’s no good way to explain that. They just live it, and someone else writes it down if they happen to notice – but that someone else always adds their own humanity, and that’s a good thing too. Maybe we can all be Socrates and Plato; certainly neither could have become who they were without the other.

Be Excellent.

truly universal advice

I enjoy mentoring as a stress-relieving hobby. I don’t mind stress; I consider it a byproduct of pursuing my goals, and I’m still willing to suffer if required to achieve them. I’ll probably need to let go of that at some point. But I’m still trying to do my best, at my advanced age, to do something new & interesting in the startup world, for however long I can still have fun doing it. So I experience a normal amount of stress, and it’s fine because I have more than one way to relieve it – but my favorite way to relieve that stress is mentoring.

Giving people advice helps me, because it reminds me to continually relearn the same lessons I still need today – to keep doing the very same things that people want advice about. The process of giving advice is never one-way: I always learn and relearn lessons from the other person in the conversation.

Startups are great because every one is a new experience no matter how much experience you have. I can help someone else just by reminiscing about what I’ve already done, and at the same time, help myself to charge up those very same hills that I see in their experiences. It doesn’t matter how many times I’ve done it, we’re still both at the bottom of the hill today.

I’m always worried, though, that someone might remember the words that I said, rather than the fact of what we did together during the mentoring. See, the value of the mentoring isn’t in any words that were said. Instead, the value was created because two human beings tried to learn meaningful lessons from each other based on their own direct experiences in life. There is no small set of words that will capture all of the things of value that truly occurred in this human interaction. I might even go so far as to argue that trying to remember any small set of words puts you at risk of forgetting the whole value of the interaction. Human interaction is irreducibly complex, and incommensurably valuable.

Now, you probably think I’m making a point about artificial intelligence. And sure, but I think that point is obvious, so I’m not going to say it.

Instead the point I’m trying to make here is about advice. There are almost no words of advice that are brief enough to easily remember, while also being universally applicable. These are my favorite:

There’s more than one way to say these words. My friend and I were discussing this video while on vacation, and he made this T-shirt about it.

As far as I know, this advice is truly universal to all people, and is applicable in all situations that might cause worry. Of course, the key is in applying the central question: “Can you do anything about it?”

For example, unless there is something more directly useful to talk about in my mentoring sessions, I often just walk through that question:

CAN

What is possible in the world that you see?

YOU

Who are you? What are you capable of? What do you want?

DO

What would be the best outcome? What is the best way for you to serve that outcome?

ANYTHING

What exactly are you going to do, and when are you going to do it?

ABOUT IT?

Oops, remind me: Exactly what is the problem we are trying to address here?

And don’t worry, because there’s nothing to worry about now.

Yeah so anyway, I bought a couple hundred of those T-shirts. If you see me in person and want one, just let me know your size and you can have it if there are any left. I think it’s truly universal advice, and I like to be helpful – I’m fairly certain that if you wear this shirt, someone will benefit from it.

don’t be evil


“Don’t Be Evil.” 

If you recognize this as Google’s former corporate motto, you probably regard it as a broken promise. But arriving too quickly at this judgment misses the lesson of the journey. It may be true that we now live in a tech dystopia created at least in part by those who once proclaimed, “Don’t Be Evil.” But in the beginning, that motto contained a magnetic True North that once meant something, that still means something, something that is awaiting our rediscovery. 

So before memorializing “Don’t Be Evil” as a broken promise, we must remember what it once meant.

We have to remember the time before the first widespread criticism of this mantra, before the semantic noodlers complained that it is impossible to define what “evil” means. See, the thing is, before this criticism was widely shared, it wasn’t relevant. It wasn’t relevant because the real audience for “Don’t Be Evil” already knew what the phrase was supposed to mean.

The real audience was undoubtedly the employees of Google at the turn of the millennium, when either Buchheit or Patel (depending on the storyteller) first proposed this as the company’s motto. Google had fewer than 250 employees at the time. “Don’t Be Evil” was a phrase that was easily understood not only by those 250 employees, but also by all of the company’s potential employee base. Yep, I’m claiming that every person who had the qualifications to be hireable by Google at that time (1999-2001) would easily understand the basic meaning of Don’t Be Evil.

See, if you knew enough about computing in those days to be employable at Google, then you grew up in technology watching IBM lose to Microsoft, then watching Microsoft crush Apple, and then watching the government strangle Microsoft. And then you got to enjoy watching Google beat the crap out of Microsoft. It’s just human nature to watch all this and make it into a morality play, with extremely domain-specific notions of “good” and “evil.”

When the government hampered Microsoft in the ’90s, that was a fair comeuppance for an abusive player, just as had happened to IBM in the ’80s when Microsoft was coming up. Small new companies innovate into the spaces left by the decrepitude of large old companies. The cycle of life applies to all of us, businesses too. In business, as in life, that cycle plays out in predictable patterns. And as humans, we love telling ourselves a story about our patterns. And to be compelling, our stories must have good guys and bad guys, good and evil. “Don’t Be Evil” is a morality play, and it is just a fiction, but still, these notions of good and evil move us – especially when we’re deciding where to work and how to win competitive battles.

So IBM vs Microsoft, Microsoft vs Apple, Microsoft vs Google – that was the drama that played out in information technology at the time, and our notions of “good” and “evil” were aligned with the prevailing morality play that everyone knew as orthodoxy, even if they disagreed with it: Microsoft was the bad guy, Apple was awesome and cool before MSFT used monopolistic advantages to crush them (this was before the Second Coming of Jobs). Microsoft was Evil. Google was Good.

So in this morality play, “evil” means, basically: using “business techniques” instead of superior technology to win. Don’t Be Evil simply means: win with technology, not with business techniques. 

“Business techniques” include perfectly legitimate and absolutely necessary decisions and deals around pricing, packaging, and distribution. But that’s just the bare minimum. The expanded world of business techniques gets pretty gray pretty fast, and eventually you end up where we are today: dark patterns that manipulate users, platform rent-seeking, externalization of business costs into the community, lobbying and other political manipulation. I don’t really like calling these things “evil,” but it’s fair to say that these are the tactics and methods of mature businesses, and they are not what successful startups do.

I worry that the tech world has been so dominated by the usual BigTech suspects for so long now that entrepreneurs have forgotten the difference between Good and Evil. But no matter: the world doesn’t need to remember because the truth will out: for the first time in a long time, nearly all the BigTech companies are grappling with disruptive technologies that they do not understand. When there is this much disruption in the air, fancy business techniques become less valuable, and a True North for product development becomes far more valuable. For the first time in a long time, opportunity is everywhere, all incumbents are vulnerable, and all startups have this one incontestable upper hand: Don’t Be Evil is a winning strategy, not an empty corporate motto.

a lever and a place to stand

“Give me a lever and a place to stand, and I will move the world.”

Archimedes (apocryphal)

When I was a kid growing up in New Jersey, all I ever wanted was to get out, across the river to the bright lights big city. I assumed that New Yawk City was the place that moves the world, because what else would a Jersey kid think amirite? And I loved everything about living there: I loved the hard work and the harder play, the high stakes and the almost tangible power and raw human energy that buzzed through the canyons between the skyscrapers. But after starting my career in “high” finance, I was disappointed in the financial engineering that passed as “creation” in that industry, and by 1999 it was obvious that the future was really being created across the country, in Silicon Valley.

So I headed West, searching for an industry that builds levers to move the world, searching for my place to stand. About a decade in, people began to tell me that my career path looked a little weird. From leveraged buyout lawyer in NYC to startup counsel in Silicon Valley to Korean venture capitalist to Fortune 100 corporate deal maker. And then it just kept getting weirder: international marketing, developer relations, enterprise product development, startup founder, BigTech product manager, startup sales manager and more. Not content with the variety of roles, I also wandered across sectors and products: enterprise hardware, metaverse consumer software, adtech, content moderation systems, maps, devops SaaS ….

Oh, and then I got sick of tech and ran for political office … now that was a weird move. But not to me, still just trying to understand what, if anything, moves the world in a better direction. Campaigning was a deeply moving experience for me, as I’m sure it is for any child of immigrants. I learned a lot, but the long and the short of it here is just that the political industry isn’t a place I can stand.

When I look back on it all, I feel lucky to have started my career in tech during that first decade from 1999-2008, before the global financial crisis, before BigTech was a thing … and maybe the last time we could have avoided the consequences we’re living out today. The dreams were big, the schemes were fun, and the common ambition was to put a dent in the universe with technology so good it seems like magic. The “why” behind this sparkling ambition was often unspoken, but I never thought it was about the money. Most of my friends in tech thought it was wonderful to see explosions of wealth of course, but we weren’t in technology to play the lottery, we were in it because we loved technology. 

We were mostly dorky kids who were lucky enough to have access to an Apple ][ or Commodore 64 in middle school or high school, we played Atari and Intellivision, we wrote our first programs in BASIC and we fell in love with the future. And though we might have loved technology for different reasons, I think the common thread was that we loved what technology could do for humanity. We loved the spirit of innovation for its delight, not the dollars. We loved the fun that tech could add to our lives.

So the place where I stood in the best years of my first decade in tech was in San Francisco, at a company called Linden Lab, and we tried to move the world with Second Life. Enough has been written about Second Life; I don’t like to add to the noise. But I can’t say enough about the company we “Lindens” called “the Lab,” especially now that people are recognizing Second Life as an OG when they talk about “the metaverse” today …

Of course the product innovation was fascinating, but even more than that, I appreciate the workplace innovations we implemented at the Lab. Many of these are lost already in the sands of time, and frankly not all of our innovations were good ideas, but we had an authentic commitment to transparency, openness, and trying new ways to enable emergent bottom-up innovation rather than top-down command-and-control management. We had open floor offices because they flattened hierarchy, not to save costs on real estate. We had snacks and games because we genuinely liked to have fun with each other, not as a nefarious scheme to keep overgrown adolescents at work. We had peer bonuses as a bonding experience, not as a competition for brownie points in the next performance review. We experimented in democratic decisionmaking, as messy as any experiment in democracy. We had remote offices, work from home, chat and video collaboration before any of these things were regarded as rational costs for a startup.

The Lab was also fearless with new business models, defining and implementing product lines in ways that felt like feeling around in the dark back then, but now seem prescient. “Freemium” as an acquisition strategy, the power of subscription metrics, data-driven decisions, SaaS-like pricing and practices before SaaS was a thing, defining product management roles before the internet industry had standardized skills for the role. We didn’t invent any of these by ourselves, but they were all relatively new business practices in our context.

So we endlessly experimented and adopted internal management and business practices on the fly while also attempting a product so ridiculously difficult that the largest technology companies in the world continue to fail today in their modern attempts to replicate the possibilities we demonstrated fifteen years ago. Maybe the only way we were old-school was that we built a profitable business, even though many companies had already amply demonstrated that tech investors prefer a fanciful growth story to the reality of profitable results.

[I’m leaving out the best part about the Lab: I could write a book about the people, but to even begin that here would be to raise uncontrollable emotions that are not at all the point here. Suffice to say that to this day I feel a bond with every Linden, past and present.]

What I realize now is that rather than being ahead of its time, the Lab was at the end of an era, before technology became Big Tech. The people that first populated Silicon Valley with technology workers were geeky idealists. Many of them, especially those who entered the scene from San Francisco, descended from a local cultural heritage of hackers and pranksters, the kind of Merry Pranksters that gave rise to the Cacophony Society and Burning Man – a culture of anti-authoritarianism, a community of individualists, a spirit of creativity and freedom and fun.

After the global financial crisis, for a variety of reasons, that culture gave way to people who … well, let me not judge any person, because we all live in glass houses, but looking at where we are today … the legacy of my last decade or so in technology is not about any of that spirit from my first decade. Too many technologists began to insist that technology could lead humanity, going so far as to believe in the inevitability of technological progress as if it were some natural force more powerful than the needs of humanity. And so we got surveillance capitalism, walled gardens, dark patterns, monopolistic rent-seeking, more and more exploitative and community-destroying business models and practices, and ever bigger and bolder next-gen Ponzi plays. None of those are technology; they are instead the social and economic results of favoring technology over humanity.

I’m an old man now, perhaps just yelling at the clouds. Sure, sure, I understand that some kind souls will object that I’m not that old, that there’s plenty of life ahead, plenty to do, plenty to dream. But see, I don’t think there’s anything to object to, I don’t think there’s anything wrong with being old. There are a lot of things that I see and understand now that I simply could not have understood with less experience in life. That experience – not just the technology and business experience, but ALL of the experience of living – is the lever that I’ve sought all my life.

And now I’d like to share the leverage of experience with as many people as I can who might use it to move the world in the right direction.

And the place to stand? Well, it has to be San Francisco. There are places in the world that I love more, but there is no other place that I know with that particular spirit of love for humanity over technology. That spirit has been dominated of late, it has been beaten, it has been bruised … but it is not gone – I just know it because I have been around long enough to know it. San Francisco is currently in the worst shape that I’ve seen in my quarter-century in California, so bad that it almost reminds me of New York City in the ’70s and ’80s … a place that we Jersey kids regarded as a bankrupt disaster, only later to realize that we should have spent way more time trying to get into CBGB. What I’m saying here is that we’ll later remember now as the time when San Francisco was authentically cool.

So – this is all my way of saying that I’m going to be spending my time in San Francisco working on technology startups in generative AI, virtual currencies, and metaverse technologies. I have the idealism of my first decade in tech, the experience of my second decade, and the determination to put humanity over technology. Most importantly, I have a few like-minded friends figuring out how to work together, and we have room for more.

If you are looking for a lever and a place to stand, let’s talk 🙂 A ping on LinkedIn is best if you don’t already have other contact info.

nifty fifty

When I turned 40 years old, I wrote a short series of four posts to try to sum up the four most important lessons I’d learned to that point. For most of the past decade, I thought I’d do the same at 50. I certainly have learned a lot – far more than I expected – and I assumed that I’d have no problem churning out the “five-for-fifty” posts to sum up my life’s lessons. I even imagined myself getting to 6-for-60 and 7-for-70, as I feel confident that the older you get, the more you have to say about life.

But all those lessons started to feel overwhelming (to read, not to write), so I recently began to think that I should concentrate on the one most important lesson. And that would be about the one most important topic, which is of course love.

Someday I’ll write about that, but this isn’t the day for it, this isn’t the time for it. This is 2020, and a half-century in, I can finally see that despite anyone’s fondest dreams, the cynics and the bruised romantics were always right: Love is not enough.

My home is on fire. We are like bacteria in a bottle, blindly exhausting all the available resources in our ecosystem. More and more people believe that the end is nigh. And that’s just the obvious future. In the terrible present, we are battered black and blue by our failure to bring about a just society. Amoral tech leaders fail over and over again to actually build socially beneficial products that are worthy of their position of power. The ruin of the fourth estate has led to idiocracy. What is the lesson that I should try to deliver when my half-century on the planet has me wondering if any eventual grandchild of mine could reasonably hope to see the same age?

The lesson is this: You can be at peace while still fighting.

I am stunned to discover that I’m at peace in a way that I never believed was possible for me, or for anyone. I am not confused about my place in the world. I’m not angry all the time; no grievances torture my heart. I know what I want to make of the remaining time that I have. I know how to give and receive love, I know the power of kindness.

It remains true that I react in anger with some frequency. I’m not as kind as I’d like to be. I do still have a low opinion of people who I believe to have wronged me, and I’m quite sure that there are people with a similarly low opinion of me – and I agree with that assessment at times. I don’t know exactly whether or how I will accomplish the things I dream of today.

But still, I find that my dreams are bigger than they’ve ever been. I know that I’m going to have to fight for what I believe in, and I love that because I’ll never stop fighting.

Your mileage may vary, but the road is there if you want to take it. True peace in your heart is available for anyone. But the fight for a world worth living in will always be everyone’s to fight. I worried that peace and serenity in my heart would mean less fire in my belly, but now I realize that the fire doesn’t come from me.

Fighting Korea


I somehow just stumbled across a years-old interview with the actor John Cho, who, like me, is of Korean ethnic background. The Korean soccer team slogan, made famous in their run in the World Cup a while back, was “Fighting!” Somehow that came up during the interview, and John Cho explained:

This is our condition. Fighting.

Every once in a while, I idly consider getting a tattoo, but it never gets very far, because I can’t think of anything I’d want permanently imprinted on my body, other than a well-placed battle scar. But now I know that if I ever go through with it, I’m going to inscribe “This is Our Condition: Fighting!”

Now, I’m not some kind of Korean studies major – I’m as far from that as I could be. I don’t speak Korean, though I’m sure if I did I’d be aware of the subtleties lost in translation into the simple term “Fighting!” I wouldn’t be surprised if those subtleties are most of what I’m trying to explain here. I don’t even remember specifically being taught any of this. And yet still, I’m writing entirely from memory; I’m not going to look up any of it. That’s why there are no dates or numbers: my memory’s really not that good.

But I’ve always loved to fight, and I still do, even though I may not have what it takes anymore. I’ve been asked many times over the years what this is all about. For most of the time, I’ve really been unable to explain, mostly because I was too angry to explain. But for some reason, John Cho’s explanation was like a koan that opened up the doors of enlightenment as I pondered its meaning. (By the way, it’s not like I’m some sort of besotted fan. I mean, he seems plenty talented, but I haven’t really seen him in enough things. Yes, I’m aware that there’s a meme where he’s in movie posters for movies that no one will cast him in. I think I liked him in the first Harold & Kumar movie, but I never seem to finish it because I keep wandering off to grab something to eat, you know?)

So anyway, first I’ll explain “Fighting!” very quickly, then I’ll break it down. Here’s the quick version (just speed read it for now – it’s deliberately dense, we’ll come back to it later):

Yes, of course I love to fight, it’s part of my core, and there’s no foreign mystery to this at all, no false stereotype. It’s a natural outcome, as follows: my mother, burdened by PTSD and bipolar disorder, made her poor attempts to find shelter in her rigidly sexist world by instilling an absolutely indomitable ego in her only son, which ironically is exactly what the patriarchy insists upon. A child’s ego is thoroughly reinforced by its use as a shield against the relentless onslaught of physical and emotional rage from father to son, as father had inherited from his father before him, in a ruined landscape of the battlefields of actual and proxy wars among superpowers on the Korean peninsula. That sense of fighting spirit – fighting as not only necessary but tantamount to survival – it never goes away, not with age nor wisdom, so that any satiation is temporary and the fight is everlasting.

Now … that might sound like a uniquely specific and melodramatic personal story, but there’s hardly anything unusual in it for generations of Koreans. You may be vaguely aware of the history. I’ll keep the pace up through this breezy recital, since these are all things you probably heard about in bits and pieces before:

After decades of imperial rule under Japanese occupation, in which the Japanese routinely pursued policies of cultural eradication, the Koreans were briefly liberated with the Allied victory in World War II. This liberation was incomplete when the Korean War promptly broke out, greatly inflamed as a proxy war between the United States and China, with the looming specter of the Soviet Union in the background. (This actually was only the first of a series of bloody proxy wars against Communism which continued through Vietnam and much of Southeast Asia, and even today continues in the Middle East and Africa.) Korean families were divided and impoverished by war, such that it became very common to experience the early deaths of immediate family members, including an especially high proportion of children. Korea is a relatively small country for superpowers to stomp around on – the war affected everyone.

Of course, as this happened way back in the middle of the twentieth century, there was hardly any therapeutic understanding of the mental trauma involved in all of this; at least, not in the terms we would use to discuss the same conditions today. The prevalence of PTSD was undoubtedly very high, and bipolar disorder could be expected to be no less prevalent than at any other time or place – though even mild cases were highly likely to be exacerbated by the conditions of survival in the war-torn land.

Go back up to the short version, and see if it makes more sense now.

I’m not saying that every single Korean has experience with all of the implications of the description here, nor that all Koreans would agree with all of the implications of this description. And of course some of the effects of these common events are dissipated in time as well as diaspora, although some may be intensified by the common immigrant experience of dislocation, isolation, and racism.

I’m also not even going to attempt to explain whether or not any of this is related to the progress, within three generations, from a country that looks like background footage in M*A*S*H to a country that makes some of the best consumer electronics in the world while also producing entertainment that has somehow not only reached the heights of world mass culture, but also accrued international social media clout with actual political impact in the United States of America. I mean …

I’m just saying, I think I know what John Cho was talking about, and I just wanted to share it with you. Put him in some more goddamn movies.

ETA Jan 2023: This seems the right place to note my succinct definition of han: A deep-seated sense of injustice, which fuels a never-ending thirst for revenge.

the logic of “silence is compliance”

[Image: “Silence is compliance” – a protester with a message standing on a window ledge in Whitehall. Source: https://commons.wikimedia.org/wiki/File:Silence_is_compliance_-_A_protester_with_a_message_standing_on_a_window_ledge_in_Whitehall._(31903348794).jpg]

“Silence is compliance” is a phrase that many people toss off without thinking through how it works. People who use the phrase earnestly think that it’s obvious that silence in the face of injustice is equivalent to complicity in that injustice. But apparently, it’s not so obvious, because many people quote the phrase with a sense of irony, as though it is some kind of slogan for Orwellian thought control.

I have never seen the logic of “silence is compliance” thoroughly explained, so I’m going to attempt that here, just for kicks. I’m sure if I looked hard enough, I’d find a reasonably similar explanation, but the logic is straightforward enough that it’s probably easier to write it from scratch than it is to find an explanation as painfully dull as the one I’m going to give here.

First off, it’s important to discern that the phrase is only really meaningful in political contexts. People do sometimes use the phrase in other decisionmaking contexts, but in those it’s usually meant as a dumb joke. Somehow that dumbness is transferred through osmosis when some people see the phrase in political contexts. For example, when someone says, “Hey how about burritos for lunch? Silence is compliance!” – it’s obvious that this means nothing more than, “If you don’t say anything, I’ll move forward!” (And when you think about it, what even is illogical about that statement?) This is a completely different kind of claim than “Speak up about injustice! Silence is compliance!”

In a political context, it’s a reasonable moral claim, and deserves to be treated as such regardless of which side of the politics you’re on. We can demonstrate exactly why with an example of a controversial political issue … Hmmmm, so many to pick from, what to do, what to do … Well, I’m tempted to go with old statues, or Confederate flags, or kneeling at anthems, or virus names and nicknames, or “violent” protests – but no, these topics may be too hot right now; they could inflame consideration of the simple logic being offered. So I’m going to have to take down the temperature to … Islam vs the West. Truly extraordinary times we are in, that this qualifies as de-escalation!

Let’s start with a controversial statement about Islam, like “Islamic culture supports honor killings.” A “progressive” reaction to this might be something like, “that’s a horribly racist stereotype that is factually untrue.” A “conservative” reaction might be “we lose everything of value if we cannot acknowledge the truth of the harm done in the name of Islam.”

For comparison’s sake, let’s also present the caricatured responses from the land of social media:

“Social Justice Warrior”: Your harmful words deny our reality as a people! Until you come to terms with the racism in your soul, you will never know the truth of your injustice! You must bow down in fear to our coercive power to silence your reasonable objections to our moral superiority!

“Intellectual Dark Web”: You’ve lost sight of the true meaning of liberalism, for you lack the courage to grasp the freedom that is clearly within your reach. You can never outlast the real truth that you are too weak to see. Intellect über alles!

Now, neither of these responses has anything to do with Islam or Western culture, and no one worth your attention ever says exactly these words. Nevertheless, the entire discussion proceeds in social media as if the other side had actually said the words of its own caricature. It’s quite an amazing phenomenon.

Back here in the safe ol’ blogosphere, we have the space and the luxury of constructing arguments from steel rather than straw, and insisting that the only welcome comments are fires that temper the steel rather than burn the straw. Or something like that.

So, initial steelmen in this “Islam vs the West” example would be something like:

The “scholarly” view: An attentive reading of the Quran shows that honor killings are to be condemned, as an innocent life is lost and the perpetrators of this crime do not set a good example for society. Of course there are radicals – people with abhorrent beliefs and actions – but it is not fair to taint Islam with their distorted beliefs, just as it is not fair to taint all Christians with the beliefs and actions of the Crusades and many other wars and acts of genocide carried out in the name of a Christian God. It is unjust to impugn all of Islam by association with the horror of honor killings.

The “cultural” view: You can’t claim that a religion is just the words in a book. A religion is how people live it, and how it manifests in the world through the people who claim it, whatever the merits of their claim. I do absolutely condemn all of the wars and genocides of the Christian God, and I do also agree that a Christian culture led to those evil outcomes, for the same reasons I cite regarding Islam. So when I say that Islamic culture supports honor killings, I am only stating a fair interpretation of facts and a cultural understanding applied equally across all cultures.

These may have weaknesses, but they are not strawmen, and they can both be much improved. It might even be possible to improve both of these positions to the point that they are not in factual conflict, while they still remain in support of their political positions – but that would be a difficult discussion. It would be lengthy, it would be nuanced, it would be challenging and at times frustrating and possibly emotionally exhausting.

The fact is, all serious political controversies have steelman arguments (including any controversy over whether I should be saying “steelwomxn” instead). But it’s much easier to burn down the strawmen than do the hard work of discussion.

And further, it could be a reasonable moral choice to decline to do the work. In general, you are not obligated to provide anyone your intellectual or emotional labor, and you don’t even need to have a reason to decline, not even privately for yourself. You only have an obligation to engage with people that you’re already in a relationship with, like your partner, or your kids, or your neighbors, or your town, or your country … hey waitaminute …

Politics, of course, is an endeavor among people living in the same society, even if some of those people wish some of the others would leave. Any belief in a political solution raises the obligation of informed discourse. Maybe you don’t have to discuss every little political issue that the neighbors want to gossip about on Nextdoor. But you most certainly do have an obligation to participate in discussions of justice in your society, because if you are willingly living in an unjust society, then one way or another, you will eventually suffer the consequences if you aren’t already.

When a political issue raises questions of injustice, understanding that you have this basic civic obligation to participate is only the first step for making silence into compliance with the injustice, but let’s be clear: you can’t skip that step. To say, “I don’t owe anybody anything!” is simply to withdraw from political participation entirely. That may be your right in some circumstances, but if the current situation is indeed unjust, and you decline to consider yourself in the society at all – when it is in fact true that you are in the society – then your objection is based on a lie, and your silence is willing compliance with injustice.

But what if you do recognize the obvious fact that you’re in the society, but you just don’t want to say your opinion because you know that other people won’t like it? In this case, you are even worse, morally speaking, than in the prior case. There is a claim of injustice in your society, and you will not speak on it because you are afraid of what others will say? How is that a defense of your silence? What if you’re wrong, and your opponents are right about the claim – don’t you want to support justice even if you’re wrong? And even worse, what if you’re right, and your opponents are wrong about this claim of injustice – wouldn’t true justice be better served if you spoke up, regardless of what anyone says in response? In this case, silence is not only compliance, it is cowardly.

And what if pure intellectual freedom favors one outcome, while the demands of social justice favor another? Again: if either of these things actually matter to your society, and you remain silent, then you are compliant regarding the claim of injustice. Ok, one last shot: What if intellectual freedom allows anyone to favor either outcome, but only one of the outcomes supports injustice? Isn’t individual freedom the highest freedom of all? “I still want to pick the outcome that supports injustice, and the inviolable freedom of my mind gives me that right!” So … you’re saying that you could choose to believe either, and you consciously chose to believe the one that favors injustice, just because … you like it better? At this point, there is only one word for you, and I’m too polite to use it, motherfucker.

These intellectual gymnastics are unnecessary. Simply note that all claims of injustice perpetrated by the state are claims that the powerful committed injustice against the powerless. So the default outcome to a true claim of injustice by the state, if nothing is done, is for the injustice to continue. If the claim is false, and you don’t speak up about it, then you are contributing to the decline of a just state. Either way, the worst thing you can do is remain silent.

My point isn’t whether any of the stereotypes, caricatures, steelmen, strawmen, or painfully obvious statements above are bulletproof. My point is only that there is a reasonable and straightforward argument for why silence is compliance, and those who only view the statement mockingly are making a careless mistake. I’m not saying that everyone who utters the phrase has exactly this logic in mind, with this kind of specificity – good people usually don’t have to think it through in that much detail, because it doesn’t occur to them that anyone doesn’t see the clear logic: silence in the face of injustice is morally equivalent to compliance with that injustice.

police technology


In the future, the police as we know it today will not exist.

This is not a political statement; it’s simply a technological fact. Now, it’s essential to remember that all technological facts are endlessly contingent. For example, it’s a technological fact that if you click on a link, another webpage will open. But that’s contingent, usually on some very complicated and impressive infrastructure operating without fault (or rather, with sufficient fault-tolerance whose few exceptions did not affect the expected outcome, this time).

If you click a link, it will only do what technologists expect if you’re using a browser that doesn’t have the wrong kind of malicious software. And you have to be using a computing device that doesn’t have some other hardware or software flaw that will prevent expected actions. And you have to be connected to a network that has sufficient range and capacity. And then an entirely different set of computing devices needs to be connected and operating as expected. And then all of that has to work correctly, walking backwards, in high heels. During this entire time, every device involved needs to have electric power in the right amount and at the right time. That is a lot of contingencies.
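Here’s a minimal sketch, in Python, of what “click a link” actually asks of the world. This is hypothetical illustration (the URL and the function are mine, not any browser’s implementation), but each exception branch stands in for one of the contingencies above:

    import urllib.error
    import urllib.request

    def click_link(url: str, timeout: float = 5.0) -> bytes:
        # The "technological fact": fetch the page. Each exception below is
        # one of the contingencies that the fact silently depends on.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except urllib.error.HTTPError as e:
            # The remote infrastructure answered, but not as expected.
            raise RuntimeError(f"server contingency failed: HTTP {e.code}") from e
        except urllib.error.URLError as e:
            # DNS, routing, connectivity: the network contingency failed.
            raise RuntimeError(f"network contingency failed: {e.reason}") from e
        except TimeoutError as e:
            # Insufficient network range or capacity, this time.
            raise RuntimeError("the network was too slow, this time") from e

    page = click_link("https://example.com")  # usually, a webpage just opens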

But still: if you click on a link, a webpage will open (even if it’s not the one you expected). And with just as much certainty: in the future, the job of police will not exist as we understand it today. That is a technological fact, and it requires very little understanding of technology to see that. It merely requires obvious extrapolation from technologies you see around you every day.

Most people, including most police officers, may think the job of police is to stop crime. But all police officers know that it’s more an exception than a rule that they make an arrest on any given day. This is not an indictment or a criticism in any way, it is simply a pure accounting of time. Cops probably spend 50% of any given day in travel time, going from place to place. Maybe another 20% of the day is talking to people: talking to each other, to dispatchers, to citizens with a question or complaint, to witnesses, to victims, to prosecutors and lawyers and judges and juries. Then 30% of the day is administrative: paperwork, paperwork, paperwork, court time, occasionally some training. As a proportion of time spent, there is almost no time spent on a usual day in the active act of stopping crime. Stopping crime might be the reason for police, but that’s not how they spend their time on the job.
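Just to make the arithmetic painfully explicit – these are my rough guesses from the paragraph above, not measured data:

    # Back-of-the-envelope accounting: rough guesses, not measured data.
    day = {"travel": 0.50, "talking": 0.20, "administrative": 0.30}

    actively_stopping_crime = 1.0 - sum(day.values())
    print(f"share of the day spent actively stopping crime: {actively_stopping_crime:.0%}")
    # -> share of the day spent actively stopping crime: 0%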

Of course, there are occasions where crime is discovered during the travel time noted above, and during the talking time above. That happens a lot more on TV than in real life. More often, crime is discovered through other means: an alarm, a call to 911, while carrying out a search warrant, perhaps during a stakeout, or a successful search for a suspect. Police action in each one of these cases is planned beforehand; it doesn’t happen extemporaneously. There is forewarning, and police are specifically sent to a location where the crime may be discovered. None of this is the result of random discovery during the usual day at work.

Technologists hate inefficiency, and can’t help but think about designing for a more efficient police force. A perfect police force would do nothing but fight crime: they would only conduct the very few activities that are a result of planned actions expecting to find crime. The other activities would be done by people who were not police: all that traveling around, talking to people, filling out paperwork – people who are not police officers can do all those things. That is not to diminish the importance of any of those things, and many of them are essential to stopping crime – they are just not themselves the active act of stopping crime that requires the most prepared police action.

In a perfect world, anyone who might ever be involved in actively stopping crime would spend all their free time preparing for the most dangerous police actions, and they would have exactly the resources they need to stop the most deadly opposition that they are likely to encounter – no more and no less. Because some crimes are so inherently dangerous, perfect police would spend all their time on training when they weren’t actively in the act of stopping crime. And in a perfect world, their training would be perfect, so they would follow the best possible tactics to avoid escalation and the use of deadly force, including the elimination of any kind of bias whatsoever.

Obviously, we do not live in a perfect world. There are many, many social reasons why we cannot today operate a perfect police force. Many. But there are also many technological reasons: we cannot predict where crime will happen, we can’t be everywhere at once, we can’t travel fast enough or efficiently enough or safely enough. We might not have the data we need to identify everything that we need in order to make good use of technology, including data relevant to both crime and to training.

The thing about technology is, though, that all of the technological problems will be solved, so long as social barriers don’t prevent that from happening. To be clear: this is NOT an argument for the moral supremacy of technology. Morality is only to be found in society, not in technology – and there may be times when the development of a certain technology may be itself immoral. However, in the absence of social barriers (including moral barriers that we should respect), technology problems will be solved, because that’s the definition of technology: applied knowledge that solves problems. If a problem can’t be solved through technology, it’s not a technology problem: it’s a physics problem.

So in the future, cops will do absolutely nothing other than attempt to stop crime, and train to do that in the best possible way – unless social barriers prevent it.

Unless social barriers prevent it, predictive technology will show where crime is likely to occur, with very high accuracy. Some people might think that there’s no social barrier that should prevent such an obviously worthy goal. Some people will be more concerned about social harms that might come from errors and bias. Some people will be equally concerned, if not more, about the surveillance required to enable predictions. And yet others believe that citizen surveillance could be a safer alternative to state-operated surveillance – or maybe that some combination of the two, formally or informally, would work optimally. But in any case, if it becomes known that a crime may be stopped, regardless of how it might be known, the police should be sent to stop that crime. Few people could possibly disagree that this is what we want from a perfect police force – which, don’t forget, is perfectly trained.

As for the people who do all of the other things that police do today – some might argue that these are still police officers, that they are still as essential and honorable, if not even more so. And indeed, from a technology point of view, it is irrelevant whether or not they are called by the word “police,” irrelevant whether they wear a uniform, and irrelevant where their paycheck comes from. Social factors determine whether they are called “police” or social workers, whether they are public or private or nonprofit. Those kinds of things have nothing to do with technology – although technology could certainly help determine which social choice is most likely to be optimal.

Social factors also determine whether those other “police” (whether or not so named) are allowed to carry weapons of any kind. None of these people are performing any tasks that are particularly likely to discover a crime in progress, so they clearly don’t need a weapon most of the time. Crucial exception: tasks that routinely involve interactions with victims, actual or potential, will of course discover crimes in progress. But as this is discovered from a victim, no weapon is needed unless for some reason the perpetrator is nearby, as is usually the case with domestic violence. Even in this case, it is clear that the task of ensuring safety is different from the task of preventing ongoing violence, so these are obviously separate jobs, only one of which is likely to need a weapon.

Social factors determine whether or not people who spend so much time doing social work should be able to carry any particular kind of weapon. Whether a “police officer” actually needs to carry a weapon is a social question. For example: maybe a political reason requires all the people doing all this driving around, talking to people, and filling out paperwork to be called “police.” And maybe other social factors require all people that are called “police” to carry weapons that they don’t need, for example for recruiting purposes (assuming that some people join the police at least in part due to their affinity for weapons). As a counter-example: maybe for political reasons, only the people who are actually trained to stop crime will be called police, and all of the other people will be some category of social worker (whether public or private). In that case, it seems unlikely that anyone would want the social workers to carry weapons. But it’s very clear from a technology perspective that only some types of work that we call police work today require any kind of weapon.

So, in the future, the police as we know it will not exist, as a matter of technological fact – though this is endlessly contingent on social factors. In a perfect world, most people that we call “police” today would be doing the exact same thing that they do today, in terms of time, but they wouldn’t carry guns. No rational person would even want them, at least not for work, as they would know that they are unlikely to ever need to use them. (This is completely independent of any 2nd Amendment argument for or against carrying guns, as those arguments apply to all citizens, not just particularly to police.)

Like all technological predictions, the inevitable end of police as we know it is highly contingent on the expected operation of an extraordinarily complex and interrelated system of infrastructure and endpoints – but this is dependence on social infrastructure and people, not technology. Nevertheless, any good technologist should understand all relevant contingencies.

It’s very easy to imagine an attempt to reach this perfect world that inadvertently turns into a totalitarian police state enabled by technology – we’ve all seen those movies and shows many times now. It’s very tempting to imagine that enough social problems can be addressed so that technology has the social basis it needs to be successful – but there isn’t really much data that should give anyone optimism. So good technologists should spend most of their time finding data and implementing solutions that address the social infrastructure that is required for success.

I didn’t intend to include any moral suasion in this very dry essay, but I can’t help but end with it. Technologists: stop building weapons (anything that enables the police state), and do the social work (data and tools to solve the social problems that prevent us from working on more useful technology).

ETA: Someone suggested the perfect slogan for techies who want to reboot the police: CTRL-ALT-POLICE.

the missing links


The first scientific mnemonic I can remember is King Philip Came Over For Good Spaghetti: Kingdom, Phylum, Class, Order, Family, Genus, Species. That was a long time ago, and biological classification now seems very different (as far as I can tell from Wikipedia), though it’s unclear to me whether cladistics wasn’t the standard back then, or whether it wasn’t taught in the introductory material that came over with King Philip. Still, it remains true that binomial nomenclature is the standard for the most basic unit of biological classification: species.

When you are learning the basics, you usually don’t stop to question them, at least not very deeply, because if you don’t start by accepting most of what you hear, you won’t ever learn enough to really question everything you’re told. But even back then, the idea that Homo sapiens stood uniquely alone in the classification of all living things seemed very questionable. Humans are considered a monotypic species, which means that the species is the sole member of the rank above it, the genus. I remember thinking that it made sense to group together a lion (Panthera leo), tiger (Panthera tigris), jaguar (Panthera onca), and leopard (Panthera pardus). But why was Homo sapiens all alone? Does that really seem likely to be true, now and forever?

There are other monotypic species; some are even singular through multiple ranks. The aardvark (Orycteropus afer) is the only member of its genus, which is the only genus in its family, making it the loneliest mammal on the planet. There’s a monotypic species of fish (Ozichthys albimaculosus) and butterfly (Eucheira socialis), and several monotypic plants. The hyacinth macaw (Anodorhynchus hyacinthinus) was once thought to be monotypic, until a compatriot (Anodorhynchus leari) was correctly identified over a hundred years after it was discovered and originally misclassified. In all of these cases, it’s easy to understand the isolation of the species as a function of either specific adaptations to an available habitat, or as isolation imposed by a habitat that has become unavailable. That is, the aardvark’s habitat is not rare, but its evolutionary adaptations are so specific that its singularity doesn’t seem strange – what’s strange is eating termites, but termites can be found in a lot of places. On the other hand, the Madrone butterfly exists only where madrone trees exist, in high elevations in Mexico – the geographic specificity of the habitat explains the singularity of the species.
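The distinction is easy to see if you just group binomial names by genus. A toy sketch in Python, using only the handful of species mentioned above (nothing exhaustive):

    from collections import defaultdict

    # The handful of binomials mentioned above; the genus is the first word.
    species = [
        "Panthera leo", "Panthera tigris", "Panthera onca", "Panthera pardus",
        "Homo sapiens",        # alone in its genus
        "Orycteropus afer",    # the aardvark: alone in its genus and family
    ]

    genera = defaultdict(list)
    for binomial in species:
        genus, epithet = binomial.split()
        genera[genus].append(epithet)

    for genus, members in genera.items():
        label = "monotypic" if len(members) == 1 else f"{len(members)} species"
        print(f"{genus}: {label}")
    # -> Panthera: 4 species
    # -> Homo: monotypic
    # -> Orycteropus: monotypic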

The singularity of Homo sapiens is much harder to accept. Human adaptation is so generally applicable that we can thrive in any habitat. As we’ve expanded across the Earth, no isolation of habitat has yet cut us off from further access to evolutionary development; indeed, the opposite is true: we’ve expanded into every habitat and we live in increasingly interconnected ways. There’s no obvious explanation for the absence of other human species. The idea that we stand alone seems like a category error.

Prior to the modern understanding of evolution, biologists theorized that a missing link existed between humans and apes. At one time, humans were thought to be the only “hominids,” and all other apes were called “pongids” – many people sought a missing link between the two, but it was never found because we had misclassified the relationship between hominids and pongids. In the modern understanding, humans and apes are in the same family – turns out, we’re all hominids. So the search for a missing link is now thought to be a fool’s errand.

In this essay, I am that fool. I speculate that the links that are missing aren’t between humans and other apes, but among humans themselves. I propose that Homo sapiens has already evolved into separate species, and possibly we were never a monotypic species for very long.

This is a claim so outlandish that I can only compare it to Copernicus, who examined the complicated orbits of Ptolemaic star charts and realized the absurdity of putting humans in the center of the picture. If you insist that the Sun revolves around the Earth, you need ungainly mathematical gymnastics to work out the orbits of the other planets. It’s a lot simpler to adopt a frame of reference where the Earth revolves around the Sun.

[Image: apparent retrograde motion]

That simple shift in framing was the beginning of a scientific revolution that defines how we live to this day. What I can outline today lacks such impact only because of my inability to specify all that is required in a single essay. However, I aim to provide notes that might inspire a modern-day Aristarchus of Samos, who theorized that the Earth revolves around the Sun, while being utterly without the knowledge or data to prove it.

Aristarchus lived more than three centuries before Ptolemy, and Copernicus lived another millennium and a half after that. We’re only a century and a half after Darwin, but knowledge moves faster these days. Have we properly applied what we’ve learned about evolution to the case of Homo sapiens? A frame of reference that makes us apparently above the laws of evolution must be regarded as inherently suspect.

Are Humans Above Evolution?

One way to ease into this inquiry is to ask whether human evolution has stopped. We don’t have to ask this question, and it may seem outlandish even to try, but let’s ask anyway. The average duration of a species is 1.2 million years, as far as we can tell from the geological record, and Homo sapiens has been around for only about 200 thousand years, or maybe half a million at most. Isn’t it several hundred thousand years too early to consider this question?

We only estimate the lifespan of species that leave a geological record, some trace of their time on Earth buried deep within its layers. We have no way of knowing about the existence of species that left no record. The shorter the lifespan of a species, the less likely it is that there was enough time and development to leave a geological record. So it’s entirely possible that the average lifespan of species is much shorter than we have been able to measure. Nevertheless, we can sharpen this question by noting that humans are now nearly certain to leave a geological record that will be readable far into the future. Maybe we can assume that the average duration of any species that leaves a readable geological record is over a million years, and therefore that our species will last a lot longer than it has so far.
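
That hedge is a survivorship-bias argument, and it’s easy to watch it in action with a toy simulation – a minimal sketch, with every number invented, in which a species’ chance of leaving a readable record grows with its lifespan, so the recorded average overstates the true average:

```python
import random

def observed_vs_true_mean(n=100_000, record_scale=2.0):
    """Toy survivorship-bias model (all numbers invented): a species'
    chance of leaving a readable geological record rises with its
    lifespan, so the record over-samples long-lived species."""
    lifespans = [random.expovariate(1.0) for _ in range(n)]  # arbitrary units
    recorded = [t for t in lifespans
                if random.random() < min(1.0, t / record_scale)]
    true_mean = sum(lifespans) / len(lifespans)
    recorded_mean = sum(recorded) / len(recorded)
    return true_mean, recorded_mean

true_mean, recorded_mean = observed_vs_true_mean()
print(f"true average lifespan:     {true_mean:.2f}")      # about 1.0
print(f"recorded average lifespan: {recorded_mean:.2f}")  # noticeably higher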

But this math-based objection probably isn’t why many of us believe that human evolution isn’t occurring. A stronger reason is just that we find it difficult to conceive of our evolution, because we have dominated the Earth. Other species evolve due to changes in habitat and environment, but our species is defined by its mastery over habitat. We might not survive changes to our environment, but if we do, it will be either through mastery over the environment itself or through human-developed adaptations to environmental change. If the planet warms, the ice melts, the oceans rise, and the atmosphere allows in unprecedented radiation – in any scenario where we don’t all die, some will adapt. Will that adaptation be considered human speciation? If some of us learn to live underground, and subsequently develop improved ability to see in the dark, will that be speciation? If some of us are enhanced with bionic lungs, artificial gills, and metallic skin thick enough to repel radiation, will that be speciation?

The traditional definition of species is that members of a common species can generally produce fertile offspring through sexual reproduction. There may be exceptions within the group, but these exceptions are not prevalent characteristics of the group. For the sake of brevity, I’ll use the term “can mate” to mean “can generally produce fertile offspring through sexual reproduction.” The term mating is inexact and arguably overbroad, but it’s better than typing out the full phrase every time. So: members of the same species can mate. If different types of animals cannot mate, then they are not of the same species.

Consider again a scenario where the environment has changed so much that some humans choose to live underground, others remain above ground. Assume that after a very long period of time passes, the underground-dwellers can see much better in the dark than the aboveground folk. Are these two groups of humans now different species? The usual analysis would conclude that as long as members of the two groups could still mate, we should say no.

What about a situation where one set of humans develops a revolutionary treatment that inserts rare elements into their skin, at a molecular level involving genetic editing, so that no amount of sunlight will harm them? Are they now a different species? We might still say no, assuming they can still mate with people who don’t get the molecular treatment, but the editing of genetic material will give many of us pause. Is an integration of non-organic technology with human life enough to create a new species?

In either situation, we are probably inclined to wait before rendering judgment. The longer we wait after the change (i.e. living underground, artificial skin), the likelier it is that morphological changes will evolve that absolutely preclude the possibility of mating. In fact, the usual practice of professional taxonomists is to identify a species only after such changes have occurred and can be mapped to phylogenetic markers. The state of the art of biological classification today demands that we be able to see the boundaries between species in their DNA sequences.

Remember, though, that today’s state of the art is tomorrow’s obsolete mistake. In the case of humans, stopping the analysis at DNA sequences is a very strange thing to do, since we do not currently know the relationship between patterns of mind and any genetic marker, and yet many scientists suspect a relationship will eventually be proven. Why should we let our own ignorance be the boundary of speciation? How can we analyze differentiation within a species without looking at the most critical features that actually distinguish it as a unique species? It’s like trying to distinguish fish without looking at their gills.

What defines Homo sapiens as a unique species is the product of our minds. We can argue about exactly which products are crucial from an evolutionary standpoint – language or emotion or consciousness or whatever – but there is no question that the evolutionary prospects of our species have always been entirely dependent on the products of our minds. Once we have acknowledged this completely uncontroversial fact, why would we insist that human speciation must be defined by features that are entirely unrelated to the features that make us human? Human speciation must be defined by something that is happening in our minds.

It may seem unscientific to suggest that mere mental activity can be a boundary for species. Genetic material, whether or not you’ve ever seen a strand of DNA, seems more real than thoughts. We’ve seen pictures and diagrams; we know this is an actual object of science. Who has ever laid eyes on a thought? But remember how we got here: DNA sequencing replaced rougher methods of measurement as the preferred tool for biological classification; we updated our techniques because the science advanced. In earlier days, biologists made many mistakes in classification by relying only on visual features – in humans, this has had disastrous results in eugenics and racism. Phylogenetic analysis, for which DNA sequences are the key texts, is vastly superior to prior methods. But it’s not the end of the story.

Most biologists reject mind-body dualism, which is the idea that the sense of self constructed in the mind occurs entirely separately from all material aspects of the body. Almost no scientist believes that a human can have a thought without some observable activity in the brain. Broadly speaking, all of neuroscience is devoted to identifying the biological properties of mental activities. We aren’t very close to being able to match habitual patterns of thinking to heritable genetic markers, but closing that gap seems like a very realistic possibility. As we understand more about how the processes of the mind manifest in matter, we may end up discarding DNA sequencing altogether, just as we abandoned Darwin’s mistaken theory of pangenesis. Or we might better understand what the genetic markers tell us as we learn more about how patterns of thought are observable in biology.

Assume that one day, we will find the biological markers of thought patterns, and that these markers are heritable (i.e. transmissible from one generation to the next). The real question here is whether differences in minds are so profound that they can prevent mating, with such prevention being meaningful enough to describe separate species.

The Veil of Limited Perfection

Let’s consider this question with a thought experiment, where we consider the world as viewed through a “Veil of Limited Perfection” – assume all of the people in the thought experiment are exactly the same people as in our world, except that all problems of safety, health, and economics are solved. (Make no assumptions about how the problems were solved! This world is no more likely to be dominated by socialism than capitalism, for example.) No other problems are solved; in particular, sexual reproduction is still the only way for humans to procreate, and our mating rituals and characteristics remain largely the same. No one ever experiences unwanted fear, no one ever dies an unnatural death, and everyone has as many financial resources as they want – but you still have to figure out who to date. In such a world, can different species of humans ever evolve?

Would every single human being be capable of mating with any other? You could say “Yes, in theory.” You have to say “in theory” to account for the fact that you know that no human would happily mate with a completely random selection of any other human on the planet, even though in this thought experiment, that would not affect their safety, health, or wealth. Every single human would continue to have mating preferences, and these preferences would of course not be based entirely on physical features. Instead, preferences would be largely if not entirely about the products of the minds of prospective mates. Are they happy, kind, generous? Are they courageous, resilient, or honest? Do they like the same music, movies, books? Do they like sex the way you like it? All of these questions, and their answers, exist in the minds of prospective mates. Beyond the Veil of Limited Perfection, it’s clear that survival of the fittest is a process determined entirely by products of our minds.

Note also that this world would not be more perfect if we could wish away our preferences. That would mean removing all differences in opinion, temperament, and intellect. It would mean a world without variation in art, or music, or drama or comedy. People do not enjoy all expressions of these equally, and we tend to enjoy other people who enjoy things that are complementary to what we enjoy.

So beyond the Veil of Limited Perfection, each person has a set of people that they would willingly mate with. Looking at the preferences of all people, you can construct sets that include only people who would all be willing to mate with any member of the same set. You could call that a “mutual intra-mating preference group” – but this is a cumbersome name, so for now let’s use “phyloculture” instead. This term risks considerable confusion, since it implies that culture evolves through evolutionary processes, and it’s not yet clear that culture is what we’re talking about here. But if we need a term to describe what is shared between people who enjoy a related set of opinions, temperament, art and music – what better term is there than culture?
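
If you want to make that definition mechanical, here’s one way to do it – a minimal sketch, with entirely hypothetical names and data, that treats mutual willingness as an undirected graph and reads each maximal clique (a set whose members are all mutually willing) as a candidate phyloculture:

```python
# Toy formalization (hypothetical names and data): model "willing to mate
# with" as a preference set per person; an undirected edge exists only when
# the preference is mutual; a candidate phyloculture is a maximal clique.

def mutual_graph(prefers):
    """prefers[a] is the set of people a would willingly mate with."""
    return {a: {b for b in prefers[a]
                if a in prefers.get(b, set()) and b != a}
            for a in prefers}

def phylocultures(adj):
    """Enumerate maximal cliques (basic Bron-Kerbosch, no pivoting)."""
    found = []
    def expand(clique, candidates, excluded):
        if not candidates and not excluded:
            found.append(clique)
        for v in list(candidates):
            expand(clique | {v}, candidates & adj[v], excluded & adj[v])
            candidates = candidates - {v}
            excluded = excluded | {v}
    expand(set(), set(adj), set())
    return found

prefers = {
    "ana": {"bo", "cy"}, "bo": {"ana", "cy"},
    "cy": {"ana", "bo", "di"}, "di": {"cy"},
}
for group in phylocultures(mutual_graph(prefers)):
    print(sorted(group))  # ['ana', 'bo', 'cy'] and ['cy', 'di'], order may vary
```

One design note on this sketch: cliques can overlap (cy appears in both groups above), so reading phylocultures as cleanly disjoint species would take an additional rule beyond the bare definition.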

Since a phyloculture is defined as “mutual intra-mating preference group,” can we say that different phylocultures are in fact different species? Why not, if by definition no human would choose to mate outside their own phyloculture?

A simple objection is: “But people can still choose to mate outside their phyloculture, can’t they?” No: if they are willing to mate with each other, then by definition they are in the same phyloculture.

The harder form of this objection asks how consent can possibly be considered a barrier to whether animals can generally produce fertile offspring through sexual reproduction. But this objection has already been addressed: in humans, we must look for speciation in the features that define us as humans; as these features are within our minds, we must look at the products of our minds to identify the distinguishing barriers between species (since we do not yet have the capability of genetically identifying the material processes within our minds). The reason that we do not consider consent as a question in the mating of animals is not that consent is irrelevant, but that animals are not capable of consent. (Of course, some argue that animals are capable of consent, but that has no bearing on whether humans have speciated.)

Now, take off the Veil of Limited Perfection. Do you have a set of people that you would willingly mate with? Of course you do. If you knew the same kind of information about everyone on the planet that you know about your set, would your set include everyone? Of course not. The fact is, you already have a mutual intra-mating preference group. You just can’t see it, because it’s distorted by considerations of safety, health, and wealth.

Phylocultures exist today, but they are hidden by social phenomena. Remarkably, many of those phenomena have decreasing importance to species survival over time. In the early days of Homo sapiens, the species could not survive simple threats to safety or health. As we developed increasingly sophisticated social structures, economic considerations also greatly affected human survival. But nearly all humans alive today have considerably better prospects for matters of basic survival than humans of a thousand years ago. Another way of saying this is: Human speciation has already occurred, you just didn’t notice because it was hidden by earlier survival needs.

Finally, I can reveal that I decided to use the term “culture” despite possible confusion because I’m adding a dubious corollary to this theory of human speciation: As cultures evolve, they will tend to evolve into phylocultures, or they will disappear. In the future, there will be no cultures other than phylocultures.

Cultural Evolution in an Interconnected World

Cultural evolution is a field with an ugly history of controversy, as it is closely aligned with repugnant ideas about race and nationalism, and eugenics and genocide. We should take seriously the possibility that the ideas here can similarly be distorted by supporters of repugnant ideologies. These matters may deserve a separate follow-up essay to address all concerns in detail, but for now, suffice it to say that I categorically reject racism, nationalism, eugenics, and genocide. I’m a crazy amateur political philosopher, but I’m not a monster; monsters may overlap significantly with crazy amateur political philosophers, but that obviously doesn’t mean that all of us sympathize with them.

The study of cultural evolution routinely assumes that culture is transmitted through social means. The newer subfield of biocultural evolution posits that an interplay of genetic and social factors results in the evolution of cultures. One of the more common objections to the idea of cultural evolution is the assertion that evolution only acts on an individual level, sometimes even going so far as to say that only genes evolve, not people. Biocultural evolution has a great answer to this: individual genetic evolution has emergent properties that are only interpretable at a group level. As an over-simplified example: if some set of genes contributes to musical talent, and if a particular culture values musical talent (including in mating), then cultural reinforcement of the value of music will favor the continued advantage of those genes in a virtuous cycle.
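
To see that virtuous cycle in motion, here is a toy simulation – every number in it is invented – in which carriers of a hypothetical “musical talent” allele get a mating advantage, and the size of that advantage grows with how common the talent already is in the culture:

```python
import random

def virtuous_cycle(generations=200, pop_size=1000, start_freq=0.1,
                   cultural_weight=0.5):
    """Toy model (all parameters invented): carriers of a 'musical
    talent' allele get a mating advantage, and that advantage grows
    with how common musical talent already is -- the cultural
    reinforcement described above."""
    freq = start_freq
    history = [freq]
    for _ in range(generations):
        bonus = cultural_weight * freq          # culture prizes what it has
        w_carrier = freq * (1.0 + bonus)
        w_plain = 1.0 - freq
        p = w_carrier / (w_carrier + w_plain)   # chance a parent is a carrier
        carriers = sum(random.random() < p for _ in range(pop_size))
        freq = carriers / pop_size              # haploid inheritance, one parent
        history.append(freq)
    return history

print(f"start: 10% carriers -> after 200 generations: "
      f"{virtuous_cycle()[-1]:.0%} carriers")
```

With the feedback term in place, the allele’s rise accelerates as the culture comes to prize what it already has; set cultural_weight to zero and the frequency merely drifts.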

There are of course cultural traits that have value between groups, not just within them. Put a warlike culture next to one that is not, in circumstances where war is common and resources are scarce, and the warlike culture has a group advantage that has evolutionary impact on the other group. However, cultural advantages rise and fall much more rapidly than natural habitats. If warlike culture were always an advantage, presumably we would all be Spartans.

And this is the first key to understanding how cultures will evolve into more visible phylocultures. When considering the advantages of traits that are expressed by the body, the background timetable is provided by changes in habitat, which occur over epochs. For advantages of traits expressed by the mind, the background timetable is provided by changes in culture, which evolve much faster than habitat, and faster still as time goes on. As far as we can tell, the culture of every type of early human was relatively static for millennia. In the Common Era, cultural change usually occurred over centuries. But in the last century, cultures changed by the decade, and in this decade, many of us have experienced cultural change just in the past year. So human speciation, properly understood, is happening faster than ever. That doesn’t mean the acceleration will continue, but it does mean that there might be more to analyze about human speciation from the last few decades than there has been in all the human history prior to that.

In prehistoric times, Homo sapiens coexisted with other hominins (including interbreeding, by the way, and yet we still view these as separate species). We may have had similar cultures, but we had very separate geographies. Then as the human population grew to cover the Earth, and finally we developed the technology for a very high degree of interconnection, there was a point in the 20th Century where we talked about a “monoculture” because we were so many and so connected that it seemed like a concentration of media power would drive a single dominant culture.

And then the Internet happened. In the glory days after the turn of the millennium, we crowed about the disaggregation of media and the disintermediation of corporate gatekeepers. Microcontent and microtargeting at first seemed to mean thousands of different cultures were possible. But that was an illusion. The reality is, concentration of media power has reassembled, in only a slightly different configuration. You can see it if you look for it: reconfiguration and consolidation of online cultures is happening now, very rapidly. And online culture increasingly forms and reflects offline culture. The importance of geography, nationality, race, and even religion in forming cultural boundaries has diminished. People are more united now by thoughts, opinions, and tastes that are relatively free of those old boundaries, and getting more free all the time. As this process continues, the observable features of phylocultures will become more and more prominent.

Where Does This End?

It never ends. Evolution never ends … or does it? (Or were you just asking, when is this incredibly long essay going to end?)

I don’t know. I have a theory, and since that theory builds upon this one, it is even crazier than the notions here. But I’ve already obliquely revealed the beginnings of an outline in a prior essay, which is actually my first statement of three phylocultures that I believe exist today. I believe that properly structured research would show credible material supporting the existence of at least three human species today. It would take a lot of work over many years to properly design and conduct this research, and frankly I’m not qualified for the task. However, as a closing note and a stake in the ground, I’ll assert the first proposed binomial nomenclature for these ostensible species: Homo fidelus (The Culture of Belief), Homo humanitas (The Culture of Humanity), Homo cognitio (The Culture of Knowledge).

I’m no Linnaeus, I’m certainly no Copernicus, and I only hope to inspire an Aristarchus. But if you think I’m crazy, you’re probably a different species than I am.

the tragic triangle of the three cultures

In 1959, C.P. Snow delivered a lecture called The Two Cultures, about a vexing divide that he saw in the academic circles of Oxford and Cambridge in the middle of the 20th Century. Snow was a rare bird, as a professional scientist who was also an esteemed novelist. Although the two cultures he describes are often cast as the “sciences” against the “humanities,” Snow noted that both regarded themselves as “intellectuals.” One set of his colleagues explored the mysteries of human nature through literature, visual arts, music, politics, and economics. The other set explored the most fundamental aspects of the world in physics, biology, chemistry, and mathematics. Neither was superior to the other, though both harbored the belief that they were. Both were engaged in the deepest exploration of creativity and experience, yet each was abjectly illiterate across the cultural divide. The two cultures would sit at the same dinner table, and their mutual incomprehension was so extreme that they could not understand each other even when they were agreeing.

This recent “Mundia & Modia” essay reads as a distillation of The Two Cultures, giving the name “Mundia” to the culture that reasons from immutable facts about the world, and “Modia” to the culture that centers on relationships between people. The message is so similar to that of The Two Cultures that I can almost recommend this short essay as a replacement for the longer lecture. The essay lacks the evocative detail of the lecture, but it is also free of the lecture’s jargon, less bound to a particular place in time, and unburdened by mid-century nationalistic baggage, which was nearly unavoidable in Snow’s time. I like the descriptions of Mundia and Modia because the boundaries are formed by how members of a culture make meaning about life. Your culture isn’t where you live or where you’re from, it’s not what you wear or eat, it’s not who you admire or hate. Culture is how you make sense of your place in the world.

Snow’s world was the precious context of intellectual elites, and was distorted by fashionable stereotypes. But he does muse about another culture that seems outside of both Mundia and Modia. For lack of a better term, he calls it a culture of “technology.” This was near the dawn of the Information Age, at a time when the transistor had recently been invented and hardly commercialized. Yet he was prescient in identifying a culture that seemed based in something more than immutable facts about the natural world, and something beyond the relationships between humans. A generation ago, a futurist organization described its view of this third culture – but that came only one generation after the Snow lecture, and another entire generation has passed since then. We now have much more information with which to understand and define the boundaries between cultures.

[Image: the three cultures diagram]

Belief, Humanity, and Knowledge – these are all highly loaded terms, and prefixes like “pro-” and “anti-” are bullets in our rhetorical guns. The very first thing to understand about this Three Cultures Thesis is that none of the cultures is superior to the others. Instead, we recognize that each culture is certain of its own superiority, while completely unable to demonstrate that superiority across cultural boundaries because of the way each of them reasons against or with the others.

In each culture, there is a first value (“pro-“) that is at the center of all reasoning within the culture, and there is a second value (“anti-“) that is the most important challenge to that first value. Although the term “anti-” denotes that second value, this doesn’t mean that the culture is always against the second value – only (and always) when the second value comes in conflict with the first. The triangle is completed by noting that the rationale for the first value is always rooted in a third value (“rationale”), which is the way a culture justifies its first value.

Each culture is interlocked with the other two cultures in relationships that are at once antagonistic and attractive. It’s tragic, really: Your biggest enemies want to be friends with you, while you want to be friends with others whose central motivation is inimical to yours.

THE CULTURE OF BELIEF

Culture of Belief: pro-belief

The Culture of Belief includes all people whose central reasoning is based on believing in something above all else. God and Country are obvious examples, but there are many other subcultures of belief that are not about religion or nationalism. People can believe in capitalism with similar fervor, or socialism, or art, or love or sex. Some of the greatest accomplishments in human history have come from Belief subcultures, as well as some of the greatest atrocities. Nothing is stronger than belief, in that it really cannot be defeated until the believer stops believing, and nothing can stop a true believer from believing.

Culture of Belief: anti-knowledge

The most important challenges to Belief come from knowledge. But knowledge is ultimately irrelevant to belief: When you believe in something, you know it is true in a way that requires no further proof, in part because knowledge is inherently unreliable and limited. Anything that is honestly presented as fact must also be open to re-examination, because an openness to new information is a hallmark of knowledge. The few facts that can be established as immutable and universal can easily be dismissed as limited, since the only way to achieve an unchanging, universal fact is to define the universe in a static and bounded way. If you believe that the universe is infinite and that everything changes, you know that no knowledge can endure forever. Beliefs, however, have endured as long as humanity, and always will.

Culture of Belief: human rationale

Why should anyone bother believing in anything? Most adherents to any belief will insist that their devotion serves the dual purpose of advancing the belief as well as the prospects for humanity. People don’t kill in the name of God – or capitalism, or communism, or any ideology – because they hate people. Instead, adherents to a Belief culture will insist that the belief is in the best interests of humanity. They believe that humanity cannot prosper without Belief, so anyone who would be human must adopt this belief. Conversely, anyone who doesn’t adopt the belief is missing what it takes to fully achieve the best of humanity. When push comes to shove, Believers will choose the eternal interests of the Belief over the short-term interests of humans – because those interests must necessarily only be short term if they are not in service of the Belief.

THE CULTURE OF HUMANITY

Culture of Humanity – pro-human

The Culture of Humanity is centered in the common interests of all humanity. Humans by their own definition are the source of compassion, kindness, and really all of the things that are truly worthy, which is to say worthy of being human. That either sounds lovely to you or comes across as fatuous tautology, which is a fancy way of calling it hippie-dippie bullshit. Again, the Culture of Humanity is no guarantee of good works or bad. Some of our greatest leaders have been centered in humanity, and some who espouse human values have become terrorists in the eyes of the world. As is the case with Belief, no one is always right or always wrong just from the fact of membership in Humanity.

Culture of Humanity – anti-belief

No matter how much any Belief appears to be grounded in a rationale to benefit humanity, Believers must always make a dividing line between themselves and non-believers. And non-believers are, by definition in the eyes of Believers, not fully realizing their humanity. It’s a very short step from there to regarding non-believers as less human, and less worthy. In this way, Belief is the most important challenge to Humanity, which recognizes no boundaries between humans that can justify differing valuations of essential human worth. Humanity sees Belief as inevitably leading to bloodshed because Belief fails to put humanity first.

Culture of Humanity – knowledge rationale

Isn’t the Culture of Humanity merely a belief that humans are the most important value? No, because Humanists reason from knowledge to arrive at their culture. (The term “humanist” has been used in various ways for centuries. In that tradition, I’m using “Humanist” here in a way that aligns with some but not all of the history of the word.) Knowledge is impermanent and limited, except for this one fact: we are all humans. That fact is irrefutable, though the chain of reasoning from there to the requirement that we all be treated as humans has many weak links. Nevertheless, Humanists use the techniques of Knowledge, not Belief, to make the case for compassion, kindness, and mindfulness.

THE CULTURE OF KNOWLEDGE

Culture of Knowledge – pro-knowledge

The greatest accomplishments of our species have come through the accumulation, examination, curation, distribution, and application of knowledge. The Culture of Knowledge values knowledge above all other values because without knowledge, we would be little more than vulnerable and rather pathetic animals. So knowledge is not just instrumental to our flourishing, it is itself the most important thing to flourish. Everything else is ephemeral or retrograde. Beliefs become superstition and ignorance. Humanity is largely violent, brutal, and selfish. Knowledge is the path to a better life.

Culture of Knowledge – anti-human

Knowledge is the highest value in part because it’s the greatest expression of human ability. Knowledge rises above the temporary concerns of humans in their current form. Given a choice to advance humanity or advance knowledge, there can be no acceptable choice other than to pick knowledge, because what makes us human is our knowledge, so advancing knowledge is advancing the best of our humanity. If other aspects of humanity must be shed in order to continue to advance knowledge, then that’s a small price to pay for the prize of keeping the best of what being human is about. Beliefs that oppose knowledge are not a true threat to knowledge. Only a definition of “human” that doesn’t put knowledge first is a threat – in this way, Knowledge is anti-human.

Culture of Knowledge – belief rationale

Ultimately, choosing Knowledge as the highest of all values is a kind of belief, though it is not within the culture of Belief. In a sense, the value of Humanity is a belief, and of course so is any Belief. But Knowledge is the most demonstrably powerful belief because by definition, applied knowledge always has observable results in the physical world. Proof of the existence of Knowledge is evident everywhere; proof of the existence of God is not only lacking, but unnecessary according to the very belief in God. Belief in the value of Knowledge does not require faith, as the rules of evidence are stated within the belief. This makes Knowledge stand outside of all other beliefs, but nevertheless the choice of Knowledge as the highest value is rooted in belief.
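
Before moving on, it’s worth making the triangle’s symmetry explicit. Here’s a toy encoding – just an illustration in code, with the pro/anti/rationale assignments taken from the sections above – that mechanically checks the tragic structure: the culture each one justifies itself through is exactly the culture defined against it, and each culture’s declared enemy justifies itself through that culture:

```python
# Toy encoding of the three cultures, only to make the triangle's symmetry
# explicit. The pro/anti/rationale assignments come from the sections above.
cultures = {
    "Belief":    {"pro": "belief",    "anti": "knowledge", "rationale": "human"},
    "Humanity":  {"pro": "human",     "anti": "belief",    "rationale": "knowledge"},
    "Knowledge": {"pro": "knowledge", "anti": "human",     "rationale": "belief"},
}

pro_to_culture = {v["pro"]: name for name, v in cultures.items()}

for name, v in cultures.items():
    friend = pro_to_culture[v["rationale"]]  # the culture it reasons toward
    enemy = pro_to_culture[v["anti"]]        # the culture it reasons against
    # The tragedy, verified: the culture you justify yourself through is
    # exactly the one defined *against* your central value...
    assert cultures[friend]["anti"] == v["pro"]
    # ...and your declared enemy justifies itself through you.
    assert cultures[enemy]["rationale"] == v["pro"]
    print(f"{name} opposes {enemy} but justifies itself through {friend}")
```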

CONSEQUENCES AND CONTEXT

This Three Cultures Thesis explains many otherwise curious contradictions: divisions in progressive politics, Austrian vs Keynesian economics, environmentalists vs ecofascists, American patriotism vs exceptionalism, and differing reactions to pandemic plans. These will have to be the subjects of other essays, which I may attempt depending on how long the current pandemic lasts.

At least one future essay will cover the context of these cultural divisions, as I believe that this thesis is important in a much larger context. In fact, the only reason I wrote this post is so that I could write a later post explaining why this all matters. All of the above is really just a prelude to pick up from the very last lines of Snow’s lecture:

“The danger is, we have been brought up to think as though we had all the time in the world. We have very little time. So little that I dare not guess at it.”

That was over sixty years ago. We have so little time left that we have no choice but to try to guess at it.