real time

At Second Life, we occasionally debated the merits of virtual reality vs augmented reality. In caricature:

Virtual reality was the core dream of SL, same as the core proposition of Snow Crash, the Holodeck, the Matrix – the idea that a computer-simulated world could have all of the sensory and intellectual stimulation, all of the emotion and vitality, all of the commerce and society, of the “real” world (quotation marks necessary because virtual reality would be so real that non-simulated reality has no better claim on the term).

Augmented reality said that the virtual realists dropped too much acid in their youth. A fully simulated environment might be escapist pleasure for the overcommitted few, but computers would show their real power by adding a layer to our existing lives, not creating entirely new ones. Computers would sink themselves into our phones, our clothes, eventually our fingers and eyeballs and brains, not in the service of making another world, but of enhancing the world we live in.

If that debate sounded ridiculously theoretical to you, then I hope that was yesterday because today it’s as real as it gets.

Google Glass is the vanguard of augmented reality, and obviously important to the company.* Google’s mission has always been to organize the world’s information – not to create a fantasy world but to organize our world.

Second Life had its heyday after Google established itself as the new tech titan, but before any serious challenger had risen up behind it. We spent a lot of time trying to convince people that SL could be the next big thing … trying to explain that people wanted to have an online identity, instantiations of themselves that would interact with other online personalities, creating tiny bits of content that might not have individual value, but would have enormous value as a whole fabric of an online world where people would go and interact every day …

I was laughed out of a lot of buildings after explaining SL. Who wants to live online? Who wants friends that they see only in a computer? Who wants to spend their leisure hours pecking away at a keyboard and looking at the cascades of dreck that other non-professional users create?

Second Life missed the mark for a lot of reasons, but not because we were wrong about online life. Facebook came along, and gave us all of the virtual life that the Web could really handle – only 2D, status updates instead of atomic 3D content, kitten pictures instead of furries – but Facebook succeeded in creating a virtual world.

And now they’ve acquired Oculus VR. If it wasn’t clear before – and perhaps it wasn’t clear even to them – they have now taken a side in that old debate, the same side that they’ve been on since the beginning. Facebook is going to go more and more towards virtual reality, while Google expands further and further into augmented reality.

 

*I don’t work on Glass, have no special knowledge of the product or strategy, and actually have never even tried it.

like a boss

“Zappos says goodbye to bosses” is a recent entry in a long string of articles about decentralized management practices. In the popular press, the implicit message is that decentralization is a nonstandard practice compared to strict hierarchy (if it were standard, why report on it at all?) – and if there is a comment section, it is often filled with bitter vitriol about the dumbass management hippies who would rather chant kumbaya than actually do the hard work of telling employees what to do.

Almost 10 years ago, Thomas Malone wrote a book called The Future of Work that summarized twenty years of research on organizational structure, concluding that decentralized management was, well, the future of work. This is no longer a controversial theory, and many different kinds of companies have instituted varying degrees of decentralization with great success. So why are there still so many critics, and why are some of them so bitterly opposed?

One reason is that decentralization isn’t always the right choice. Most employees probably work in enterprises for which a strong degree of hierarchy is a better choice, or at least not an obviously worse choice. This is because the majority of employees in many countries work in SMBs (small-to-medium sized businesses), where there is often little difference in outcome between decentralized and hierarchical management. When you have, say, 5 equally committed people working in the same room together, the information they receive is so similar, and the communication between them so frequent and unmediated, that the employees would probably make the same decisions with or without formal management. In addition, the single largest employer in many countries is the government, where hierarchy is highly beneficial or required due to the nature of the service or because of laws and regulations.

So most people work in SMBs that don’t need decentralization even if they have it, or in large organizations that benefit from a lot of hierarchy. This leads to the common misconception that decentralized management doesn’t scale. “Oh sure, rinky-dink startups and mom-and-pop shops can get by without managers, but when you get to the really big efforts, you gotta have hierarchy to be a great company.”

That is not just wrong, it is perversely wrong. Decentralized management is, for certain kinds of enterprises, actually required in order to scale. The right way to decide whether your company needs decentralized management is to ask yourself these two questions:

How many people are required for my company to achieve our vision?

You have to have a pretty strong idea of your vision to answer this, which is harder than it seems, but let’s assume you know your vision. If you need fewer than about 150 people (because that’s Dunbar’s number), then decentralized management isn’t required. It might be more fun, more engaging for everyone involved, but it’s not required – unless you’re on the extreme side of the next question …

How well-known and stable is the path to achieving our vision?

If you know exactly how to get to the mountaintop, and that path is set in stone, then you have no need for decentralization. A single leader can just tell everyone what to do. A lot of decentralization could also work, so long as everyone is aware of the well-known and stable path – and this would probably be more fun for everyone involved, but it’s not required. However, if the path is unknown, or even if it’s known but subject to change before the full vision is achieved, then decentralized management is required. Without it, failure is guaranteed, thanks to the Innovator’s Dilemma – in large organizations, strict hierarchy will inevitably serve the needs of the current business model, leaving the company open to disruptive innovators that eat the large company’s future. The only hope to avoid the dilemma is to have decentralized management: employees with enough freedom to ignore the dictates of management might – with the right resources and a lot of luck – find the disruptive innovation within the company before it’s found outside.

So, to summarize in the obligatory 2×2:

[Figure: decentralized management 2×2]
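If you prefer pseudocode to pictures, here is one way to encode that 2×2 as a tiny decision function. It is only a sketch of the heuristic above, using Dunbar’s number (roughly 150) as the size threshold; the names and thresholds are illustrative, not a real framework.

```python
DUNBAR = 150  # rough upper bound on a group that can coordinate informally

def decentralization_required(people_needed: int, path_known_and_stable: bool) -> str:
    """Apply the two questions above to a single company (illustrative sketch)."""
    if path_known_and_stable:
        # A single leader can simply tell everyone what to do; decentralization
        # is optional (and possibly more fun), whatever the size of the effort.
        return "not required"
    if people_needed <= DUNBAR:
        # Small enough that everyone shares roughly the same information anyway,
        # unless the path is extremely uncertain.
        return "not required, but probably helpful"
    # Large organization plus an unknown or unstable path: strict hierarchy will
    # serve the current business model and invite the Innovator's Dilemma.
    return "required"

# Example: a company that needs 1,000 people and has no settled path to its vision.
print(decentralization_required(1000, path_known_and_stable=False))  # "required"
```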

I’ve noted the fun factor because it’s an important driver of employee criticism of distributed management. It’s not hard to find people who worked in places with “no bosses” and absolutely hated it, comparing the experience to high school and worse. And the truth is, in a large organization with an unknown and unstable path to a big vision, distributed management is definitely not fun for the employees, because:

  1. It is intellectually and emotionally draining. If everyone is supposed to make their own decisions, a lot of information and communication is required, and there is no way of getting around the time demands that this imposes, especially compared to the job you would be doing in a hierarchical company. Worse, making so many decisions is very stressful for most people, especially when you believe in the vision and you are close to your colleagues. You don’t want to let down your dreams and your friends, and it is very hard to face the possibility that every day may be the day you screw it all up for everyone.
  2. It is unrewarded by compensation. People start to think, “Hey waitaminute – I thought managers were supposed to make these decisions. If I’m making them now, why aren’t I being paid like a manager?” Most companies do not adjust their compensation schemes to account for this additional responsibility, because doing so would likely require a complex mechanism for collecting all possible projects, allowing everyone in the company to contribute to decisions on which they are knowledgeable, and rewarding both successes and noble failures with monetary compensation commensurate to the effort of the people who implemented the project as well as those who contributed decisionmaking weight to the project. An attempt to build this kind of compensation scheme would be regarded as insane, both inside and outside the company. So most companies don’t try.
  3. The rewards for this kind of system extend beyond the likely employment period, possibly even beyond the lifetime of the employee. The Innovator’s Dilemma takes a long time to become a real threat. A small company first has to grow to a market leader and have such dominance that it is blinded to the threat of disruptive innovation – that can take years, possibly better measured as generations. So people are doing hard, uncompensated work for the benefit of preventing a problem that might not happen during the lifetime of anyone who works at the company. That is a tough, tough ask of anyone. Even employees who understand the problem wish that the company could be hierarchical until the problem is apparent, and then switch over to this distributed bullshite. But the problem of course is that at that point, it’s too late.

So … should you like a boss or be a boss? Should you like your boss, and should that even be a question when your boss is you?

Bit flip

I was wrong about the PC. As a kid I played with the TRS-80, Apple ][ and C64 – I was engrossed in them all, I thought they were the future. But I didn’t predict the sweeping changes the PC would bring to society and the economy. I didn’t devote my hobbies and education to learning more about computer science.

I was wrong about the Internet. I was introduced to UNIX as an intern at Bell Labs, I read BBSes, I was on CompuServe and Prodigy and AOL, I used Mosaic. I enjoyed them all, I understood how these were the future. But I didn’t anticipate how all-encompassing this future could become. I didn’t devote my early career plans to working in Internet companies.

I was wrong about Google. As soon as I started using it in 1999, I saw that this combination of simplicity and power was the future of search, and that search was the key to the Web. But I didn’t see the enormous economic engine that search intent could generate. I didn’t want to work at Google while it was still a relatively small company.

So I’m probably wrong about Bitcoin. For reasons I’ll go into towards the end of this post, I feel it’s very important to state this at the beginning. If you already know I’m wrong, your time is much better spent reading and re-reading this wonderful piece by Marc Andreessen, the finest articulation of the potential power of Bitcoin yet written. (Incidentally, I’ve concluded that I was wrong when I said that Andreessen is probably the best living tech entrepreneur, but would be a mediocre VC. He’s already proven he’s a great VC.)

Again: please stop reading if you already know I’m wrong.

I don’t believe in Bitcoin, I don’t believe that it’s the foundation of a new age, a wave to follow the PC, the Internet, the Web. My resistance to the judgment of my betters is broad and deep, logical and emotional, based on fact and conjecture. So clearly, I’m not trying to win an argument here. I just want to someday look back on this and laugh. Or cry, as the case may be.

The roots of my skepticism about Bitcoin grow from three areas, which I’ll call What’s Missing, What I Know From Experience, and What’s Distasteful.

What’s Missing

As I humblebragged above, I knew about some of the key life-changing technologies of our time before most people. I may have been wrong about just how far they would go, but I was right to be curious about them, right to try them before they were popular, and right to enjoy their early incarnations. I had that curiosity and enjoyment from the minute I heard about them, and that enjoyment was sustained and nourished through each and every use.

I’m not curious about Bitcoin, at least, not curious enough to try it. As a consumer (not as a technologist, futurist, or business person), I don’t see why I might enjoy using it. I can understand why it has speculative value, but the joy of a good return from a speculative investment is nonspecific to Bitcoin. As a consumer, what’s in it for me?

The shortest description of the most obvious consumer proposition for Bitcoin is that it’s digital cash. But I’m not actually having a problem with the features of non-digital cash. Making digital payment behave exactly like cash would introduce giant problems into my life without solving any.

The first problem is the fear of seller fraud, i.e. how to address the problem that the person selling the goods might not actually deliver the goods. Bitcoin could, in theory, help quite a lot with buyer fraud, since once Bitcoins are transferred it’s just like receiving cash. But I’m mostly a consumer, not a seller, and as a consumer I don’t like to hand cash over to anyone unless I receive the goods at the same time or before I give the cash. Under what circumstance besides anonymity could I possibly want to use digital cash rather than a credit card? A credit card gives me the assurance that if I’m truly defrauded by the seller, I can always call the credit card company and demand a chargeback. Bitcoin advocates talk about chargebacks as a merchant’s curse (which it is), without addressing how the same thing is an honest consumer’s blessing.

Another big problem is the fear of loss and theft. I have this problem with real cash already, I don’t want to keep an excessive amount on my person or in my home or business. I don’t want to forget where I put it, I don’t want someone to steal it. Digital cash makes this an enormous problem, since I can now have a very large amount of cash, which becomes a very attractive target for theft, and a very sad potential case for loss. Sure, I can protect my digital cash with all manner of digital locks and keys, but this makes my cash security problems worse, not better. Banking has lots and lots of problems, but one of them is not that if I forget my key, I lose all my money.

I understand that these are problems of privilege, first world problems, and I’m not addressing the benefits that Bitcoin’s success would have for problems particular to the developing world. But I’m also not aware of any mass consumer technology that became successful due to features that benefitted developing economies without solving first world problems first. That may be sad, but it’s true.

What I Know From Experience

How many people have managed the growth of a new currency from its early days through its use in hundreds of millions of dollars worth of transactions per year? I don’t know, but I suspect that the number is only in the dozens, and I know that I’m one of them. So I cannot help but view the prospects for Bitcoin through the lens of what I learned from developing the Linden Dollar as a product for Second Life. This experience might provide some special insight, but it also almost certainly comes with bias, false equivalencies, the color of regret and the specter of envy. Nevertheless, I can’t talk about Bitcoin without thinking of the Linden Dollar.

Since memories are short, let me try to explain the Linden Dollar very briefly. Second Life was once a thing that had the same level of interest as Bitcoin does today, actually a bit more judging by search queries:

[Figure: search query volume over time, Second Life vs. Bitcoin]

The Linden Dollar is a virtual currency, the primary medium of exchange for transactions in the virtual world of Second Life. At its peak, people using Second Life used the Linden Dollar to buy and sell virtual goods worth more than half a billion dollars per year. Although there are many other digital worlds featuring the ability to get goods in exchange for some virtual token, the Linden Dollar had some unusual features that didn’t exist in, or weren’t allowed by, similar services. The L$ could be transferred from user to user, and could be exchanged for a price in US dollars (and Euros and other currencies). Linden Lab, the company making Second Life, could issue new Linden Dollars in any amount and at any price, without any guarantee of redemption for any value, making the L$ a true fiat currency (i.e. having value by declaration rather than by guarantee of exchange for something of value, like gold).

It’s fair to say that the Linden Dollar was inferior to Bitcoin in every possible aspect of technical implementation, particularly the cryptographic security measures. And it was not only centrally managed, but subject to the inflationary risks inherent to management of a money supply by an unstable government (i.e. a startup). Bitcoin advocates would have no problem listing dozens of feature inadequacies and design mistakes for the Linden Dollar. But I don’t think that the absence of any of Bitcoin’s vaunted features is the reason that the Linden Dollar didn’t reach mass success.

The Linden Dollar failed to reach a mass audience because Second Life failed to reach a mass audience. Even with SL’s shortcomings, the L$ might still have reached a broad audience if it had also become an accepted medium of exchange on another successful platform. The features and design of a currency can preclude certain types of failure (e.g. widespread fraud), but with one possible exception* they cannot be the driving reason for success. A currency, or any payment method, succeeds not because of its features, but because of the adoption of the platform on which the currency is the primary medium of exchange. As I have argued elsewhere, the value of the platform is the dominant factor in determining whether the medium of exchange for that platform will be successful. Consider the US dollar, which is after all Bitcoin’s true competition. The “platform” for the US dollar is the United States economy. The US$ has many feature deficiencies, and has undergone many design changes over the years. Someday the US dollar will fail to be the world’s dominant currency. That day will come after the United States is no longer the world’s largest economy, and not a day before.

Now, it’s arguable that the platform for Bitcoin is the Internet, and that economic transactions running through the Internet could exceed the US GDP (minus the portion running through the Internet). So perhaps we are on the cusp of seeing Bitcoin take the place of the US$, not because the features of the currency make it better than the US$, but because the US GDP is smaller than Internet GDP, and no rising country GDP (i.e. China) grows fast enough to fill the vacuum. But that’s not Bitcoin winning through superior features or technology, that’s the US economy failing and the world not wanting to rely on China’s economy.

What’s Distasteful

If it’s not clear enough already, this post is driven by personal taste, experience and bias as much as it is by fact and logic. So I may as well conclude with the least logical portion. I started this post by admitting that I’ve been wrong about pretty much every important technology trend in my lifetime, and practically begged many readers to read something else. Now I’ll admit that I don’t actually think I’m a moron. As I pointed out, I enjoyed and was excited about the PC, the Internet, the Web as soon as I saw them. I was right, I just didn’t make many important personal decisions based on that belief. (As an aside, I don’t actually regret the decisions I made instead. Life is full of wonderful choices.)

But I wanted to give Bitcoin fanatics every reason to dismiss this post without comment, because I’ve observed that Bitcoin skepticism is often attacked with an onslaught of vituperative insult. Now, this is true of the current sad state of Internet commentary generally, but here I’m excluding the routine trolls and bitter ignoramuses, and thinking of people who are clearly capable of intelligent, reasoned discussion. Some very smart and often nice people are Bitcoin fanatics, but in the eyes of many intelligent true believers on this topic, skeptics aren’t just wrong but idiotic, not just shortsighted but malicious. That reaction is of course distasteful, but the point here isn’t just that I have delicate sensibilities. The point is pattern recognition: I have seen this kind of fanaticism many times, and it is usually a sign that the merit of the proposition cannot speak for itself.

*The possible exception to all of my skepticism for Bitcoin is micropayments. I think this could be a compelling use case, though not in digital content payments, because the problem with many digital content models is not that people don’t have a good means to pay, but that they would rather receive inferior free content than superior paid content at any price. But micropayments in antispam implementations, or for microtransactions in data transmission generally, are very interesting. This is the one area where I’ll continue to think about what Bitcoin could mean. After all, I’ve been wrong before.

classless techies

Is there a class war brewing in San Francisco, where the underclass rise up against the oppressive masters of technology, those sinister “techies” who would create a world without the unsightly poor?

The notion is flatly ridiculous – which is irrelevant to whether it will actually take root in the minds of ironically named progressive advocates. The ridiculousness is not in the descriptive or predictive power (i.e. whether class conflict is here or will happen), but in the conceptual assumptions about what technology is and who makes up its class. Why have a class war against a class whose triumph would achieve all of your goals?

What is the end state of the triumph of technology? Consider the question in the context of the ultimate vision of, say, capitalism or socialism, democracy or theocracy, or any -ism or -acy you care to caricature: In a capitalist dream, only money matters; in a socialist one, the government provides for us all; we are all equal in a democracy and a perfect deity makes our lives perfect in a theocracy. So what does the world look like if all the wild dreams of technology come true?

It is a world where energy is endless, food is bountiful, transportation is instant, humanity is connected, information is universally accessible, longevity is unprecedented – and all of these things are so inexpensive that it’s easier to call them “free.” Technology in its greatest ambition aims to make class irrelevant. Not nonexistent, but irrelevant – there may still be luxury, there might still be money, but the meaning of these things is entirely different when the poorest person on earth lives for a hundred years in full health. The poets and philosophers tell us that money isn’t everything, but action on that insight is available to only a very few who reach enlightenment. Technology’s implicit ideal is that everyone will have no concern higher than pondering the meaning of life.

So how can you have a class war against a class that aims to end class? Hating techies may be like hating lawyers – the stated goals of the profession are inarguably noble, its highest practitioners are indispensable to a just society, but the system is complex, widely misunderstood, rife with perverse outcomes, and plagued by bottom-feeders. The would-be techies who make trivial applications with inconsequential goals, who despoil the conversation with their attention-seeking rants – these are the ambulance chasers of the tech world. They are classless in a more literal sense.

burning questions

Since coming back from my first trip to Burning Man, I’ve been turning over some questions in my mind. Well, not some questions, just one question, or rather multiple angles on a single question, trying to get to the heart of the matter like an artist’s chisel biting through stone to find the sculpture within. The question I started with was, What do you do with the problem of Burning Man?

And what’s the problem of Burning Man? That you have to come back, is the facile answer. Re-entry into “default world” is a problem. You spend a week in the desert without a lot of structure or societal expectations, immersed in humanity with good will and open spirits. There is some kind of magic in the desert, there is every kind of magic in the desert. And then you have to return “home” and face a world with a lot of structure and without a lot of humanity. This is such a problem that a common reaction is to reverse the concept of home, so that a return to the desert every year results in a resounding call of “Welcome home!”

Home is where the heart is, so I can understand why people would call a place home if it’s where their hearts are most alive. But your heart is always with you – if it’s not, you’re not alive, and I mean this figuratively although it’s literally true – so if you’re not at home wherever you are, this is literally (and now I mean figuratively) a mortal problem.

I would restate the Problem of Burning Man as a problem of dust. In the desert, the dust is everywhere, it covers everything, seeps into every crevice, covers you, envelops you, surrounds you and blinds you. The dust is everywhere and it doesn’t matter. What matters is the experience you’re having, the openness of your heart and largeness of your spirit, the humanity around you and within you.

And in the real world, you don’t see the dust. But the humanity that you now know exists is completely buried under vast layers of societal structure, expectations, obligations, fears and neuroses. These are layers of dust that matter more than anything else, for they prevent you from reaching the things that truly should matter most of all. In the desert, the dust is everywhere, highly visible and yet it doesn’t matter, while in the real world, the dust is everywhere, invisible and seems like the only thing that does matter.

So the Problem of Burning Man is not that you have to go home, or that you have to figure out what’s worth calling “home,” but that wherever you go, the humanity that you discovered in the desert is actually there as well, but covered in dust that you can’t see and so can’t ignore.

What do you do with this Problem? Some people opt out of the life they were living, quit their jobs, leave their lovers, hit the road. Some devote their lives to digging through the dust, joining non-profits, humanitarian missions, trying to dig through the vast invisible desert to find the humanity underneath. Some descend into cynicism and escapism, hating the world they live in while waiting only for a return to the desert.

For me, I’ve enjoyed carving out this problem with my dull chisel and a few blows of a heavy hammer. I’ll probably continue to work on it, wielding increasingly sharper tools and refined motions. Once it’s carved into a delicate figurine, I’ll have to decide whether to toss it into a dark closet, or put it up on the shelf, or carry it around in my pocket, or swallow it whole and make it forever a part of me.

the good gatsby

I had fervently hoped that Baz Luhrmann’s signature brand of loopy romanticism would be exactly the antidote for staid, sullen efforts to bring literary classics to the screen, like the plodding 1974 Redford version. He delivered a movie good enough to be worth watching, at least for completist Gatsby fans, but far from Great enough to be worthy of the title.

Surprisingly, Luhrmann makes the most fundamental error of all page-to-screen translation: overuse of narration. Words make a novel great, so it’s understandable that directors want to capture those words onscreen. But each artistic medium can only be great in its own form – narration and words flashed across a screen are unnecessary concessions to the inability of the visual medium to fire the imagination as great writing can. Pounding the screen with Courier font only screams, “this movie doesn’t know how to convey the depth of immortal prose!”

It’s really too bad, because the movie does deliver great visuals when the director doesn’t feel overwhelmed by the classic novel. The filmmakers clearly did their homework and were in love with the gorgeous writing of the book. The irony is that Fitzgerald’s cinematic descriptions translate perfectly to the screen, in scene after scene after scene. The expansive Buchanan lawn “jumping over sun-dials and brick walks and burning gardens,” the breeze in a room that “blew curtains in one end and out the other like pale flags, twisting them up toward the frosted wedding cake of the ceiling,” the limousine “driven by a white chauffeur, in which sat three modish Negroes” – these are not famous passages, and the film brings the unspoken words to life beautifully. But when the film is burdened by the great prose, we hear Tobey Maguire as Nick Carraway intoning about boats and currents and the past, with that silly Courier font and no memorable visuals at all.

The two scenes that worked the best were critical ones in the novel, where the film did manage to let go of the crutch of narration. The reunion of Daisy and Gatsby in Nick’s cottage was wonderful from the cutting of the lawn, through the overstuffed house of flowers and pathetic tea cakes, leading to a rain-drenched Gatsby and frightened Daisy finally reconnecting in wonder. Every moment of that was pitch perfect, down to the clock that didn’t break though everyone acted as if it did. And the scene where Myrtle runs into the street, thinking that Tom is driving the yellow car, was perfect in its operatic brutality. Although purists might hate it, I loved the use of modern music in the film – a good example of Luhrmann trusting his own emotional tuning fork rather than giving stultifying respect to the source material.

Leonardo DiCaprio was an excellent Gatsby, never perfectly comfortable in his sheen of refinement, his insecurity and obsession poking out through the surface in increasingly desperate displays. I thought Daisy Buchanan was an unplayable role, but Carey Mulligan has the talent to make the most of it, her voice freighted with emotion and eyes conveying the love, fear and weakness of the classic Fitzgerald girl. Isla Fisher brought the sensuous vitality of Myrtle Wilson to life in her few scenes. Joel Edgerton was a passable but unexceptional Tom Buchanan. Elizabeth Debicki was disappointing as Jordan Baker, without the athletic bearing and liar’s charm that should define the famous sportswoman. I’m never fully convinced by Tobey Maguire, and he didn’t break that streak here as Nick Carraway. Nevertheless, I wish the film had tried to develop the relationship between Jordan and Nick more fully – it’s a small but important sideline in the book that reminds us that Nick is more than a literary device.

As far as devices go, the framing of the narration by Nick in a sanatorium was well designed. Although it’s not in the book, it’s a clever connection to the Fitzgeralds – both Scott and Zelda spent time in sanatoriums – and is used well to turn Nick into the author and not just the narrator of the story. It’s just too bad that the filmmakers thought the device was necessary, that they held the words so holy that they couldn’t rely solely on their own bountiful skills in cinema.

my goodness

The other day I met with my “periodic friend” – my term for a friend that you meet only once in a long while, but those meetings instantly become deeply personal conversations. This is different from the longtime friend who has been separated by circumstances of geography, family or career – many of us have the childhood or old school friend whom we rarely see, but who always provides a joyful reunion at every occasional reconnection. The periodic friend is much rarer – in a sense, the friendship depends on the periodic nature of the contact. You might not even enjoy more regular social engagement with this friend. Friendship is about recognizing your kinship with another, finding yourself in someone else. So a periodic friendship provides a unique opportunity to visit with yourself after long intervals of being apart – sort of like living out the “Up” documentary series, where the filmmaker chronicles the lives of the same set of people in portraits composed every seven years.

My periodic friend and I tend to fall into comically introspective conversation at every meeting. One thing that binds us is the anguished belief that there is something broken, dark and irreparable within us, some character flaw, an absence of humanity that no worldly bounty can fix. Yes, it’s dramatic and self-involved in an embarrassing, adolescent fashion. But it’s perversely fun to talk about.

My friend brought up the Platonic proposition of the ring of invisibility. This ancient concept was dramatized most popularly in The Lord of the Rings. The question goes something like, “What is the first thing you would do if you acquired a ring that made you invisible when you wore it?” Think on this for a moment but answer as quickly as you can.

He had barely finished the question before I gave my honest response: I’d go visit a women’s locker room. Hey, I grew up in the ’80s, when Porky’s was a major movie franchise. My friend had a different, though equally amoral, response. The point of the question, as with all interesting questions, was really in the follow-on question: “Do you think there is anyone who would honestly give a response that wasn’t bad?”

Even the most saintly figure wouldn’t answer anything like “I would go secretly leave a needed gift for a desperate stranger.” Just wouldn’t happen. Pretty much everybody would give an answer involving transgression of some moral code. So the question of the ring is an argument that people are, at their base, bad or amoral creatures. Human nature is bad, fundamentally self-interested and greedy.

We talked about this for a while, but ultimately I said, “If absolutely everybody would behave the same way under certain conditions, then that can’t be the test of badness.” Even as I said it, I felt the raft of implication carried by those words, and at the same time felt astonished by the conviction I felt in saying it. I was expressing a core belief that I didn’t really know that I held. My friend said, “Whoa, you could write a book unpacking what’s behind that sentence.” Well, maybe not a book, but I’d like to try to understand what I meant – sometimes I write to find out what I think.

A straightforward interpretation is the logical truism that a test is not a test if there is only one outcome. But that’s not all that I meant. It’s closer to restate the proposition as “What is universal to humanity cannot be bad.” But that is what I found astonishing, as this is pretty close to saying “All humans are inherently good.” And I have a hard time saying that about all people, not least because I’d have a hard time saying that about myself.

People are pretty shitty sometimes – I guess most people are shitty some of the time, and some people are shitty most of the time, and no one has never been shitty any time. Given this generally misanthropic view, I am surprised when I’m described as magnanimous, which has happened from time to time. A magnanimous person is supposed to believe the best of people, and my views of people are, well, mostly shitty. But now I realize that I actually do believe that everyone can be good, no matter what they’ve done. Unfortunately, this ersatz optimism only makes me continually disappointed to be let down by people, including myself. True misanthropes must be happy most of the time, as their beliefs are validated by constant evidence of human failing. Optimists about human nature can only be miserable at the pervasive failure of people to live up to their own best conception. But we can hang on to one thread of hope, missing from that catalogue of shittiness above: No one is shitty all of the time, which means that everyone can be good.