magnificent seven

Tomorrow Oculus starts taking consumer pre-orders for its heralded Rift VR headset, and many people are wondering whether 2016 will be the year of virtual reality … just as they wondered in 2015, and 2014 … as they wondered in 2009 when I left Second Life. At that time, I remained certain that virtual reality was the future of online interaction, but that it would be at least 10 years before the field could achieve mass consumer success. Virtual reality has many decades of history as a technical project, as well as a rich history in fiction that demonstrates its enduring attraction.

People keep mistaking “THE year” for virtual reality because they fail to properly assess the progress in each field required to make a truly compelling VR experience. Observers see great progress in just one field, and they assume that it’s enough to break open mass consumer interest. But in fact there are SEVEN fields required for VR success – “the year of virtual reality” won’t happen until every one of these fields has progressed past the minimum development threshold. Here’s a brief rundown of each field, WHAT it is and WHY it’s important, and WHEN it’ll be ready for a truly compelling VR experience.

1. GRAPHICS COMPUTING POWER

WHAT: The most obvious requirement is that a computer needs to be powerful enough to produce a compelling simulation of reality. Now, what counts as “compelling” is open to argument, and I would argue that even relatively primitive figures can make up a compelling environment if they move, interact with each other, and react to you in an engaging way.

WHY: I guess you could have virtual reality without computers, just as you can have it without compelling graphics. I mean, that’s called “storytelling” and it’s pretty cool. But that’s not what we’re talking about here. Some minimum level of simulated graphics is required.

WHEN: Sufficient power exists now, and has existed for at least seven years. But if your requirement for visual fidelity is very high, then you might think that even today’s computers aren’t powerful enough.

The technical measurement discussion isn’t too interesting, so please skip this paragraph and the graph below if you’re not inclined to pick over this kind of detail. There’s no single measure of computing power, but as a rough analogue I’d pick FLOPS, and to simplify further I’ll talk only about GPU FLOPS, noting that there’s an equivalent story for CPU performance. Because I believe that an experience composed of rough primitives can be compelling, I’d say that even one GPU GFLOPS is sufficient to support a compelling experience, and we’ve had that in home computers since 1999. But giving room for argument, I can raise the requirement to 500 times that, and still we’re talking about 2007-8 as the time when consumer-level computers had enough power to make virtual reality.

[cpu-vs-gpu graph]

2. PERSONAL COMPUTING DEVICES

WHAT: Unlike the first field, this is less about predicting the power that computers have than it is about predicting what type of computer people will use. “Personal computing” used to mean desktop computers, but now people actually carry computers on their persons. Today, the type of computer that is most commonly in use is the mobile smartphone.

WHY: Philip Rosedale frequently said that when he started SL, he underestimated the time that it would take to get to mass market use of virtual reality, because he was only looking at the increasing power of desktop computers. He didn’t predict the shift to laptops, which happened in the early 2000s. Using smaller computers generally means using less powerful computers, so although desktop computing power was sufficient to simulate reality by the mid-2000s, the computers that people actually used were laptops, which were not powerful enough. Today, the computer that most people use is a mobile phone, which is even less powerful.

WHEN: Using the same standards above, smartphones will be able to simulate a compelling VR experience in 2017.

(Boring, skippable paragraph follows.) As above, this assumes the requirement is 500 GPU GFLOPS, without arguing too much about what that number really means. A high-end smartphone today can do about 180 GPU GFLOPS, with more power coming soon. (For comparison, a PS4 game console can do over 1800 GPU GFLOPS.) Taking Moore’s Law narrowly and literally, it will be 2017 before smartphones will get over 500 GFLOPS.
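(Equally skippable: the same projection as code.) Here’s a minimal sketch of the arithmetic, where the 180 and 500 GFLOPS figures come from the paragraph above and the doubling period is the assumption that determines the answer. The 2017 estimate works out if mobile GPU throughput doubles roughly every year, which is a bit faster than the classic 18-month reading of Moore’s Law.

```python
import math

def crossing_year(start_gflops, target_gflops, start_year, doubling_years):
    """Year when performance crosses the target, assuming exponential
    growth with a fixed doubling period."""
    doublings = math.log2(target_gflops / start_gflops)  # ~1.47 here
    return start_year + doublings * doubling_years

print(crossing_year(180, 500, 2016, 1.0))  # ~2017.5 with annual doubling
print(crossing_year(180, 500, 2016, 1.5))  # ~2018.2 with 18-month doubling
```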

But should we even be talking about smartphones here? Forget about “PC” meaning desktop – the truly personal computer has moved from your desk to your lap to your pocket. Where is it going next? On your wrist, on your face? This is a question about the intersection of device use and power, not either one alone. The precise question is, “When is the type of computer that most people use every day going to be capable of 500 GFLOPS?” I still think this is a question about smartphones, but who knows?

3. VISUAL DISPLAY

WHAT: A computer only simulates the environment; you need to be able to see what it is simulating. For many years, the way most people have seen a computer’s output is through a monitor. Now, Oculus Rift and other goggles are coming into the mass consumer market, and these are so good that they’ve ushered in the current wave of excitement about VR.

WHY: Sight is the most important sense in giving people a feeling that they are somewhere other than where they’re sitting. It’s not the only required sense, but without seeing a virtual environment, most people cannot begin to immerse themselves in the experience. I used to think that a flat monitor of sufficient size and resolution could provide a compelling enough VR experience, but using the most advanced VR goggles today simply blows away the monitor experience.

WHEN: The major unsolved problem with VR goggles is that using them for too long induces nausea. Although Oculus and others have made a lot of progress on this, it’s only to the point where nausea is delayed rather than eliminated. A product that makes you puke is never going to be mass market. Based on nothing more than a rough guess informed by many years of observing consumer hardware cycles, I’m going to say that it will take three years to sufficiently refine VR goggles to smooth away the nausea and other early problems, so it will be late 2018 before this field is really ready for mass consumption.

4. AUDIO FIDELITY

WHAT: Properly spatial audio means that sound should be directional: you should be able to hear where a sound is coming from. And beyond direction, you should be able to distinguish sounds from each other even when they are coming from the same direction or are obscured by ambient noise. This latter goal is called the “cocktail party problem” – even in a noisy cocktail party, you can focus on and hear a single speaker who isn’t necessarily louder than the party noise.
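(Another boring, skippable aside.) Directionality starts with surprisingly small physical cues. Here’s a back-of-the-envelope sketch of one of them, the interaural time difference, using Woodworth’s spherical-head approximation; the head radius and speed of sound are standard textbook values, and the function name is mine.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C
HEAD_RADIUS = 0.0875    # m, a common spherical-head approximation

def interaural_time_difference(azimuth_deg):
    """Woodworth's approximation: ITD = (a / c) * (sin(theta) + theta),
    for a source azimuth_deg off center (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(theta) + theta)

# A source 90 degrees to one side reaches the near ear only ~0.66 ms
# before the far one -- a cue this small has to be reproduced faithfully
# for a sound to feel like it's coming from somewhere in particular.
print(f"{interaural_time_difference(90) * 1e6:.0f} microseconds")
```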

WHY: Seeing may be believing, but hearing is confirmation. The audio experience is often overlooked and undervalued, but the sound of being in a space is crucial confirmation that your brain can believe what you see. It’s possible that the nausea of the VR goggles experience is due to insufficient confirmation from other senses, and hearing is probably the most important and easiest sense to add to the virtual environment, though some might advocate for smell or taste (uh, yuck).

WHEN: “3D audio” has been around for many years, but the cocktail party problem remains unsolved despite recent advances. Still, the current state of the art in spatial audio is very good, and probably good enough without fully solving the cocktail party problem. I think we’ll see really excellent integration of audio fidelity with VR goggles even before the VR goggles are fully nausea-free, so let’s say that the audio component will be ready by 2017.

5. 3D INPUT PERIPHERALS

WHAT: This is the most important area that not enough people are talking about. Virtual reality requires a host of new technologies for allowing a whole body to interact in a 3D space: hand and finger movement, body position, eye-tracking, multidirectional treadmills. Every single one of these is a new Oculus-size opportunity in the making.

WHY: A keyboard and mouse or trackpad are not designed, and not sufficiently adaptable, for a person to move easily in a three-dimensional computed environment. The only innovation in input for mass computing devices that we’ve seen in the last 20 years has been multitouch on a smartphone or tablet, and that doesn’t help much for 3D.

WHEN: We have yet to see a breakthrough product here, despite many promising efforts. The field is extremely varied and diverse, and it could take many years to sort out the winners. Somewhat arbitrarily, I guess it will be at least 2019 before we have mass consumer products that enable all of the tactile, visual and auditory input needed for compelling VR.

6. BANDWIDTH

WHAT: Though it’s easy to imagine a VR experience that is entirely created by a single computer in a single place (somewhat like watching a movie at a theater), it is much more likely that many computers will need to talk with each other over distance, and that requires bandwidth to communicate (like the Web).

WHY: The design of the particular VR experience that defines success really comes into play here. For example, this post assumes “mass market VR” will be enabled by personal computing devices and that multiple people can share a VR experience from different locations. That means that larger computers will perform important tasks and coordinate communication with the smaller computers that people have. The amount of bandwidth required can vary greatly depending on what demands the system is making on the computers involved.

WHEN: If you think that VR is going to get to mass market through smartphones or whatever successor computer we carry around with us, then you’re bottlenecked by the state of wireless cell networks. Although high-speed data connections are broadly available in major metropolitan areas, they are unreliable even there and unavailable outside of the most densely populated areas. Given the slow rate of evolution of cell networks, it would be at least 2022 before bandwidth is sufficient for VR everywhere.

Many VR enthusiasts picture mass-market adoption through desktop computers, gaming consoles, or other specialized hardware that has yet to penetrate the mass market – all of which would use wired connections up to the wireless access point. For that camp, we could say that bandwidth is already sufficient.

7. LATENCY DESIGN

WHAT: The delay in computers communicating with each other is sometimes related to bandwidth, but this field is included as a separate factor to encompass other network quality issues as well as the sheer physics of data traveling across large terrestrial distances.

WHY: Some amount of perceptible latency is unavoidable as a matter of physics if we are talking about communication across the world. So to the extent that the VR experience relies on real-time interaction with a global population, acceptable latency must be designed into the experience, mediated somehow to make the perception of latency acceptable.
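(Another boring, skippable bit of arithmetic.) The physical floor is easy to estimate: light in optical fiber travels at roughly two-thirds of its vacuum speed, so no amount of engineering can beat the figure below. These are standard approximations, not measurements of any real network.

```python
SPEED_OF_LIGHT_KM_S = 300_000
FIBER_FRACTION = 2 / 3  # light in glass fiber moves at ~2/3 of c

def min_round_trip_ms(distance_km):
    """Best-case round trip over fiber, ignoring routing and processing."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FRACTION)
    return 2 * one_way_s * 1000

# Half the Earth's circumference, roughly the worst case between two users:
print(min_round_trip_ms(20_000))  # ~200 ms, easily perceptible in real time
```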

WHEN: Arguably, this is a problem that is solved now for many types of VR experiences, but I include it here just because I’ve seen many VR proposals that don’t consider how latency must be designed into the experience. We’ll see some common design patterns in the first year of release of the current crop, and they’ll formalize into best practices by next year, so we’ll call this solved by 2017.

So, when will “The Year of VR” really be? My rough guess is 2019 at earliest, 2022 for the more ambitious visions of mass-market VR that include mobile computing.

My point isn’t to be right on the prediction though; here I just wanted to give a more rigorous framework for making predictions of mass market success. When people claim that this or any year will be the year of VR, they should be clear on what the state of progress needs to be on each of these seven pieces. Considering progress in just one field alone has led to many, many mistakenly optimistic predictions.

dear prudence

When I joined Google in December 2010, my friends didn’t think I’d last six months. I’d been working in startups for over a decade, and my experience and predilections had given me an enormous appetite for chaos, joyful appreciation of uncertainty, and incorrigible disdain for authority. Joining the world’s largest Internet company didn’t seem like a long-term move.

I lasted five years. It’s still a bit of a wonder to me how I stayed so long, but the attractions are undeniable. Google is routinely ranked as the best place to work, and it’s all true: market-leading products, smart colleagues, admirable leaders, outstanding perks and outsized pay. The list of reasons to work at Google is long and enviable.

Usually “great culture” is on that list, but it’s not on mine because no culture is great for every person. Only insane zealots would seek to impose a monoculture on the world, and to claim there’s just one way to have a great workplace culture is similarly indefensible. If chaos makes you hungry, if uncertainty brings you joy, if authority makes you want to punch up – you probably don’t want to work in a culture of extremely refined processes, luxurious reaction times, and deference to position. None of these are bad qualities in the abstract; it’s not inherently disadvantageous to be wild or deliberate, only the context makes it so. The context can vary from company to company, and even within companies.

I was in the right context, even at Google, for the first couple of years. Then I spent three years learning valuable things that nevertheless weren’t skills I wanted to have. Despite all the benefits, I feared becoming dependent on the enormous generosity of the leviathan, reduced to a remora suctioned to a whale for so long that it forgets how to swim. Unfortunately, I’m constitutionally incapable of adopting the prudence required to enjoy stability and luxury. I don’t think I’m irrational, I just value the parts of my personality that strain against these bounds. Prudence is expensive, unbearably dear, when it comes at the cost of your hunger, your joy, even the double-edged sword of your pride.

So finally, I’m out of the longest and most comfortable work relationship I’ve ever had, finally a fish without a host in the ocean, flapping the atrophy out of my fins. The water is deep and wide, filled with fearsome predators and cold currents, and the friendly coves are as yet hidden to me, but still it feels like home.

dan the man

The last time I saw Dan Fredinburg, he was heads-down in a tray of food at the cafeteria. I tapped him on the back as I passed by and mumbled some routine hello. A reflexive “Hey we should catch up” caught in my throat when I saw his haggard stare and the robotic shoveling of food into his mouth. He wasn’t really there, and that was very unlike Dan, who was usually so present, so effervescent with pleasure at seeing people and connecting with them in the moment.

I thought I understood: he was about to leave on his second attempt to summit Everest. The first attempt had ended in what was then the deadliest accident in the mountain’s history, when sixteen Sherpas died in an avalanche that struck a commercial expedition in April 2014. Dan was acutely aware of the difference in risks for Sherpas and expedition customers, and I think he’d been haunted by his contribution to the burden carried by the men who had died trying to help him achieve a dream. I saw the difference in his training this time around, when I’d occasionally spot him in the gym – he moved the heavy weights with a serious sense of purpose, dedicated to raising himself to an even higher level of fitness, without the jokey repartee that we had shared during his training the previous year. This time the journey was about more than just getting to the top because it’s there, more than making the world’s highest Street View.

Dan died in an avalanche on Everest last Saturday, triggered by the powerful earthquake whose death toll has now passed 4,000. The cynical will ask why anyone should remark on just one death among these thousands, just the death of a rich, powerful, famous playboy.

Dan wasn’t rich in money. Of course anyone with a good job in Silicon Valley may have wealth in comparison to much of the less fortunate world, but Dan wasn’t a jackpot entrepreneur flaunting his success with expensive hobbies. Instead he was rich in spirit, a wealth far beyond the norm even though it’s accessible to all. He was rich in vision, seeing a way to make his job into his passion, pursuing personal enrichment that’s not about money at all.

Dan wasn’t powerful in the org chart. A talent like Dan could never be a mere cog in a giant machine, but he wasn’t an executive commanding thousands of peons to do his bidding. Instead he was powerful in his presence, in his sheer joy at living, in the force of his will to live deep and suck out all the marrow of life.

Dan wasn’t famous in the media. He happened to date an actress, but he never saw people as what they did for a living; he responded only to who they were inside. The memory of Dan will live like a star in all who knew him, surviving well beyond the transitory and dull illumination of the names and faces of the merely famous.

Pablo Neruda often told an anecdote about a hole in the fence of his childhood backyard. It was just a hole in a fence, a tiny view into the landscape beyond, until one day a boy’s hand suddenly appeared through it. By the time he got closer to the fence the hand had disappeared, but in its place was the gift of a marvelous little toy, and this toy touched his heart so much that he left a treasure of his own in return. The chance view, the momentary and partial encounter with another emerging spirit, the exchange of common but magical gifts – the great poet marks this as the beginning of his understanding that there is a bond between strangers that is greater in its way than the bond between intimates.

I have been a lucky man. To feel the intimacy of brothers is a marvellous thing in life. To feel the love of people whom we love is a fire that feeds our life. But to feel the affection that comes from those whom we do not know, from those unknown to us, who are watching over our sleep and solitude, over our dangers and our weaknesses—that is something still greater and more beautiful because it widens out the boundaries of our being, and unites all living things.

That exchange brought home to me for the first time a precious idea: that all humanity is somehow together… This is the great lesson I learned in my childhood, in the backyard of a lonely house. Maybe it was nothing but a game two boys played who didn’t know each other and wanted to pass to the other some good things of life. Yet maybe this small and mysterious exchange of gifts remained inside me also, deep and indestructible, giving my poetry light.

People say “I’m sorry for your loss” when they hear that someone you know has died. It was really something to know Dan, but I’m not among his closest friends, family and loved ones, so I cannot truly grieve as they do, I have not lost as they have. For me, Dan was a gift spotted through a small hole in the fence that separates us from each other as we wander through our own life paths. I came close enough to see the joy he made of life, and to understand that we are united by something deep and indestructible inside of all of us. I’m grateful for the gift, lucky to have it, and determined to give it to all who pass by and see that these fences are truly no barrier at all.

the flitcraft parable

The “Flitcraft Parable” is in Chapter 7 of The Maltese Falcon, titled “G in the Air” with unsettling personal aptness. In condensed form:

Flitcraft lived a comfortable life of routine. Married with two kids, financially secure, without secrets or unruly inner demons. One day he disappeared, without warning, without trace, like a fist when you open your hand. His wife hired a private detective to find him, and find him he did. Confronted in the detective’s hotel room, Flitcraft had no feeling of guilt, as he felt his disappearance was utterly reasonable. His only bother was knowing that he couldn’t make that reasonableness clear to the detective, so he tried to explain.

As he walked to lunch on the day he disappeared, a giant construction beam fell right beside him. He was uninjured, but after he recovered from the initial shock, he knew that the falling beam had shown him that life was fundamentally not the clean, orderly, sane, responsible affair he’d been living. Good citizenship, stable family, fortunate business – none of these things changed the fact that people lived only while blind chance spared them.

He left that day with only the clothes on his back. After a couple of years of wandering, he settled in another suburb not far from the one he had left. He married another woman who didn’t look much like his first wife, but they were more alike than they were different. He wasn’t sorry for what he’d done, it seemed reasonable enough to him. He wasn’t even aware that he’d settled back into the same groove that he had left. He adjusted himself to beams falling, and then no more of them fell, and he adjusted himself to them not falling.

Since my last post, I left the desiccated shell of my marriage, leaving the house and most of my possessions, reassembling my life without the structure and goals that had been the foundation of my adult life. Among the many material things I left behind, I regretted nothing except that I did not bring the book that had been my reference for the Gatsby project. I had pored over each page of that book, carefully underlining the one phrase or sentence that would form the kernel of each post in the project, which was just under halfway done. I asked for the book’s return, to no avail. I thought I couldn’t start again without that exact book; the edition was not a common one, and even if I could find another copy, of course it wouldn’t have my underlined markings, the toil of years.

For a long while I was furious that I could not get my book back, and that no one seemed to care how much it mattered to me. I stopped writing, because the Gatsby project had become my favorite warmup to writing, and my reliable fallback when words would not flow on other efforts. I was blocked – emotionally, creatively, spiritually blocked.

Finally I picked up another edition. Although I knew the pages wouldn’t match, that the project wouldn’t be the seamless tapestry I’d once imagined, I was ready to embrace the wisdom that you can start any new path you want, so long as you don’t require that it proceed on a straight line from the one you’ve been on. New edition in hand, I loaded the Gatsby project from the first page, freshly underlining the sentences I’d already posted about, expecting to run into the page that didn’t match. It didn’t come in the first chapter, or the second … or any that I’d done. I reached the final page of the project to date, and each sentence that I had selected in the old book landed neatly on the same page of the new book.

He adjusted himself to beams falling, and then no more of them fell, and he adjusted himself to them not falling.

I’m ready to write again.

real time

At Second Life, we occasionally debated the merits of virtual reality vs augmented reality. In caricature:

Virtual reality was the core dream of SL, same as the core proposition of Snow Crash, the Holodeck, the Matrix – the idea that a computer-simulated world could have all of the sensory and intellectual stimulus, all of the emotion and vitality, all of the commerce and society, of the “real” world (quotation marks necessary because virtual reality would be so real that non-simulated reality has no better claim on the term).

Augmented reality said that the virtual realists dropped too much acid in their youth. A fully simulated environment might be escapist pleasure for the overcommitted few, but computers would show their real power by adding a layer to our existing lives, not creating entirely new ones. Computers would sink themselves into our phones, our clothes, eventually our fingers and eyeballs and brains, not in the service of making another world, but enhancing the world we live in.

If that debate sounded ridiculously theoretical to you, then I hope that was yesterday because today it’s as real as it gets.

Google Glass is the vanguard of augmented reality, and obviously important to the company.* Google’s mission has always been to organize the world’s information – not to create a fantasy world but to organize our world.

Second Life had its heyday after Google established itself as the new tech titan, but before any serious challenger had risen up behind it. We spent a lot of time trying to convince people that SL could be the next big thing … trying to explain that people wanted to have an online identity, instantiations of themselves that would interact with other online personalities, creating tiny bits of content that might not have individual value, but would have enormous value as a whole fabric of an online world where people would go and interact every day …

I was laughed out of a lot of buildings after explaining SL. Who wants to live online? Who wants friends that they see only in a computer? Who wants to spend their leisure hours pecking away at a keyboard and looking at the cascades of dreck that other non-professional users create?

Second Life missed the mark for a lot of reasons, but not because we were wrong about online life. Facebook came along, and gave us all of the virtual life that the Web could really handle – only 2D, status updates instead of atomic 3D content, kitten pictures instead of furries – but Facebook succeeded in creating a virtual world.

And now they’ve acquired Oculus VR. If it wasn’t clear before – and perhaps it wasn’t clear even to them – they have now taken a side in that old debate, the same side that they’ve been on since the beginning. Facebook is going to go more and more towards virtual reality, while Google expands further and further into augmented reality.

 

*I don’t work on Glass, have no special knowledge of the product or strategy, and actually have never even tried it.

like a boss

“Zappos says goodbye to bosses” is a recent entry in a long string of articles about decentralized management practices. In the popular press, the implicit message is that decentralization is a nonstandard practice compared to strict hierarchy (if it were standard, why report on it at all?) – and if there is a comment section, it is often filled with bitter vitriol about the dumbass management hippies who would rather chant kumbaya than actually do the hard work of telling employees what to do.

Almost 10 years ago, Thomas Malone wrote a book called The Future of Work that summarized twenty years of research on organizational structure, concluding that decentralized management was, well, the future of work. This is no longer a controversial theory, and many different kinds of companies have instituted varying degrees of decentralization with great success. So why are there still so many critics, and why are some of them so bitterly opposed?

One reason is that decentralization isn’t always the right choice. Most employees probably work in enterprises for which a strong degree of hierarchy is a better choice, or at least not an obviously worse choice. This is because the majority of employees in many countries work in SMBs (small-to-medium sized businesses), where there is often little difference in outcome between decentralized and hierarchical management. When you have, say, 5 equally committed people working in the same room together, the information they receive is so similar, and the communication between them so frequent and unmediated, that the employees would probably make the same decisions with or without formal management. In addition, the single largest employer in many countries is the government, where hierarchy is highly beneficial or required due to the nature of the service or because of laws and regulations.

So most people work in SMBs that don’t need decentralization even if they have it, or in large organizations that benefit from a lot of hierarchy. This leads to the common misconception that decentralized management doesn’t scale. “Oh sure, rinky-dink startups and mom-and-pop shops can get by without managers, but when you get to the really big efforts, you gotta have hierarchy to be a great company.”

That is not just wrong, it is perversely wrong. Decentralized management is, for certain kinds of enterprises, actually required in order to scale. The right way to decide whether your company needs decentralized management is to ask yourself these two questions:

How many people are required for my company to achieve our vision?

You have to have a pretty strong idea of your vision to answer this, which is harder than it seems, but let’s assume you know your vision. If you need fewer than about 150 people (because that’s Dunbar’s number), then decentralized management isn’t required. It might be more fun, more engaging for everyone involved, but it’s not required – unless you’re on the extreme side of the next question …

How well-known and stable is the path to achieving our vision?

If you know exactly how to get to the mountaintop, and that path is set in stone, then you have no need for decentralization. A single leader can just tell everyone what to do. A lot of decentralization could also work, so long as everyone is aware of the well-known and stable path – and this would probably be more fun for everyone involved, but it’s not required. However, if the path is unknown, or even if it’s known but subject to change before the full vision is achieved, then decentralized management is required. Failure is guaranteed under these circumstances due to the Innovator’s Dilemma – in large organizations, strict hierarchy will inevitably serve the needs of the current business model, leaving the company open to disruptive innovators that eat the large company’s future. The only hope to avoid the dilemma is to have decentralized management: employees with enough freedom to ignore the dictates of management might – with the right resources and a lot of luck – find the disruptive innovation within the company before it’s found outside.

So, to summarize in the obligatory 2×2:

[decentralized management 2×2]
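And for those who would rather read the 2×2 as code, here’s a toy sketch of the same decision rule; the outcome strings are my paraphrases of the argument above, not official labels from anywhere.

```python
DUNBAR = 150  # rough ceiling for everyone knowing everyone

def decentralization(headcount, path_known_and_stable):
    """A toy decision rule for the 2x2 above (a sketch, not a doctrine)."""
    if not path_known_and_stable:
        # The Innovator's Dilemma case: required, though not much fun.
        return "required"
    if headcount <= DUNBAR:
        return "optional (probably more fun, but not required)"
    return "optional (a single leader can direct everyone down the known path)"

print(decentralization(5, True))        # the small-team case: optional
print(decentralization(10_000, False))  # big vision, unknown path: required
```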

I’ve noted the fun factor because it’s an important driver of employee criticism of distributed management. It’s not hard to find people who worked in places with “no bosses” and absolutely hated it, comparing the experience to high school and worse. And the truth is, in a large organization with an unknown and unstable path to a big vision, distributed management is definitely not fun for the employees, because:

  1. It is intellectually and emotionally draining. If everyone is supposed to make their own decisions, a lot of information and communication is required, and there is no way of getting around the time demands that this imposes, especially compared to the job you would be doing in a hierarchical company. Worse, making so many decisions is very stressful for most people, especially when you believe in the vision and you are close to your colleagues. You don’t want to let down your dreams and your friends, and it is very hard to face the possibility that every day may be the day you screw it all up for everyone.
  2. It is unrewarded by compensation. People start to think, “Hey waitaminute – I thought managers were supposed to make these decisions. If I’m making them now, why aren’t I being paid like a manager?” Most companies do not adjust their compensation schemes to account for this additional responsibility, because doing so would likely require a complex mechanism for collecting all possible projects, allowing everyone in the company to contribute to decisions on which they are knowledgeable, and rewarding both successes and noble failures with monetary compensation commensurate to the effort of the people who implemented the project as well as those who contributed decisionmaking weight to the project. An attempt to build this kind of compensation scheme would be regarded as insane, both internally and externally to the company. So most companies don’t try.
  3. The rewards for this kind of system extend beyond the likely employment period, possibly even beyond the lifetime of the employee. The Innovator’s Dilemma takes a long time to become a real threat. A small company first has to grow to a market leader and have such dominance that it is blinded to the threat of disruptive innovation – that can take years, possibly better measured as generations. So people are doing hard, uncompensated work, for the benefit of preventing a problem that might not happen during the lifetime of anyone that works at the company. That is a tough, tough ask of anyone. Even employees who understand the problem wish that the company could be hierarchical until the problem is apparent, and then switch over to this distributed bullshite. But the problem of course is that at that point, it’s too late.

So … should you like a boss or be a boss? Should you like your boss, and should that even be a question when your boss is you?

bit flip

I was wrong about the PC. As a kid I played with the TRS-80, Apple ][ and C64 – I was engrossed in them all, I thought they were the future. But I didn’t predict the sweeping change the PC would have on society and the economy. I didn’t devote my hobbies and education to learning more about computer science.

I was wrong about the Internet. I was introduced to UNIX as an intern at Bell Labs, I read BBSes, I was on CompuServe and Prodigy and AOL, I used Mosaic. I enjoyed them all, I understood how these were the future. But I didn’t anticipate how all-encompassing this future could become. I didn’t devote my early career plans to working in Internet companies.

I was wrong about Google. As soon as I started using it in 1999, I saw that this combination of simplicity and power was the future of search, and that search was the key to the Web. But I didn’t see the enormous economic engine that search intent could generate. I didn’t want to work at Google while it was still a relatively small company.

So I’m probably wrong about Bitcoin. For reasons I’ll go into towards the end of this post, I feel it’s very important to state this at the beginning. If you already know I’m wrong, your time is much better spent reading and re-reading this wonderful piece by Marc Andreessen, the finest articulation of the potential power of Bitcoin yet written. (Incidentally, I’ve concluded that I was wrong when I said that Andreessen is probably the best living tech entrepreneur, but would be a mediocre VC. He’s already proven he’s a great VC.)

Again: please stop reading if you already know I’m wrong.

I don’t believe in Bitcoin, I don’t believe that it’s the foundation of a new age, a wave to follow the PC, the Internet, the Web. My resistance to the judgment of my betters is broad and deep, logical and emotional, based on fact and conjecture. So clearly, I’m not trying to win an argument here. I just want to someday look back on this and laugh. Or cry, as the case may be.

The roots of my skepticism about Bitcoin grow from three areas, which I’ll call What’s Missing, What I Know From Experience, and What’s Distasteful.

What’s Missing

As I humblebragged above, I knew about some of the key life-changing technologies of our time before most people. I may have been wrong about just how far they would go, but I was right to be curious about them, right to try them before they were popular, and right to enjoy their early incarnations. I had that curiosity and enjoyment from the minute I heard about them, and that enjoyment was sustained and nourished through each and every use.

I’m not curious about Bitcoin, at least, not curious enough to try it. As a consumer (not as a technologist, futurist, or business person), I don’t see why I might enjoy using it. I can understand why it has speculative value, but the joy of a good return from a speculative investment is nonspecific to Bitcoin. As a consumer, what’s in it for me?

The shortest description of the most obvious consumer proposition for Bitcoin is that it’s digital cash. But I’m not actually having a problem with the features of non-digital cash. Making digital payment behave exactly like cash would introduce giant problems into my life without solving any.

The first problem is the fear of seller fraud, i.e. how to address the problem that the person selling the goods might not actually deliver the goods. Bitcoin could, in theory, help quite a lot with buyer fraud, since once Bitcoins are transferred it’s just like receiving cash. But I’m mostly a consumer, not a seller, and as a consumer I don’t like to hand cash over to anyone unless I receive the goods at the same time or before I give the cash. Under what circumstance besides anonymity could I possibly want to use digital cash rather than a credit card? A credit card gives me the assurance that if I’m truly defrauded by the seller, I can always call the credit card company and demand a chargeback. Bitcoin advocates talk about chargebacks as a merchant’s curse (which it is), without addressing how the same thing is an honest consumer’s blessing.

Another big problem is the fear of loss and theft. I have this problem with real cash already, I don’t want to keep an excessive amount on my person or in my home or business. I don’t want to forget where I put it, I don’t want someone to steal it. Digital cash makes this an enormous problem, since I can now have a very large amount of cash, which becomes a very attractive target for theft, and a very sad potential case for loss. Sure, I can protect my digital cash with all manner of digital locks and keys, but this makes my cash security problems worse, not better. Banking has lots and lots of problems, but one of them is not that if I forget my key, I lose all my money.

I understand that these are problems of privilege, first world problems, and I’m not addressing the benefits that Bitcoin’s success would have for problems particular to the developing world. But I’m also not aware of any mass consumer technology that became successful due to features that benefitted developing economies without solving first world problems first. That may be sad, but it’s true.

What I Know From Experience

How many people have managed the growth of a new currency from its early days through its use in hundreds of millions of dollars’ worth of transactions per year? I don’t know, but I suspect that the number is only in the dozens, and I know that I’m one of them. So I cannot help but view the prospects for Bitcoin through the lens of what I learned from developing the Linden Dollar as a product for Second Life. This experience might provide some special insight, but it also almost certainly comes with bias, false equivalencies, the color of regret and the specter of envy. Nevertheless, I can’t talk about Bitcoin without thinking of the Linden Dollar.

Since memories are short, let me try to explain the Linden Dollar very briefly. Second Life was once a thing that had the same level of interest as Bitcoin does today, actually a bit more judging by search queries:

[Second Life vs. Bitcoin search-query graph]

The Linden Dollar is a virtual currency, the primary medium of exchange for transactions in the virtual world of Second Life. At its peak, people using Second Life used the Linden Dollar to buy and sell virtual goods worth more than half a billion dollars per year. Although there are many other digital worlds featuring the ability to get goods in exchange for some virtual token, the Linden Dollar had some unusual features that didn’t exist or weren’t allowed by similar services. The L$ could be transferred from user to user, and could be exchanged for a price in US dollars (and Euros and other currencies). Linden Lab, the company making Second Life, could issue new Linden Dollars in any amount and at any price, without any guarantee of redemption for any value, making the L$ a true fiat currency (i.e. having value by declaration rather than by guarantee of exchange for something of value, like gold).

It’s fair to say that the Linden Dollar was inferior to Bitcoin in every possible aspect of technical implementation, particularly the cryptological security measures. And it was not only centrally managed, but subject to the inflationary risks inherent in management of a money supply by an unstable government (i.e. a startup). Bitcoin advocates would have no problem listing dozens of feature inadequacies and design mistakes for the Linden Dollar. But I don’t think that the absence of any of Bitcoin’s vaunted features is the reason that the Linden Dollar didn’t reach mass success.

The Linden Dollar failed to reach a mass audience because Second Life failed to reach a mass audience. Even with SL’s shortcomings, the L$ might still have reached a broad audience if it had also become an accepted medium of exchange on another successful platform. The features and design of a currency can preclude certain types of failure (e.g. widespread fraud), but with one possible exception* they cannot be the driving reason for success. A currency, or any payment method, succeeds not because of its features, but because of the adoption of the platform on which the currency is the primary medium of exchange. As I have argued elsewhere, the value of the platform is the dominant factor in determining whether the medium of exchange for that platform will be successful.

Consider the US dollar, which is after all Bitcoin’s true competition. The “platform” for the US dollar is the United States economy. The US$ has many feature deficiencies, and has undergone many design changes over the years. Someday the US dollar will fail to be the world’s dominant currency. That day will come after the United States is no longer the world’s largest economy, and not a day before.

Now, it’s arguable that the platform for Bitcoin is the Internet, and that economic transactions running through the Internet could exceed the US GDP (minus the portion running through the Internet). So perhaps we are on the cusp of seeing Bitcoin take the place of the US$, not because the features of the currency make it better than the US$, but because the US GDP is smaller than Internet GDP, and no rising country GDP (i.e. China) grows fast enough to fill the vacuum. But that’s not Bitcoin winning through superior features or technology, that’s the US economy failing and the world not wanting to rely on China’s economy.

What’s Distasteful

If it’s not clear enough already, this post is driven by personal taste, experience and bias as much as it is by fact and logic. So I may as well conclude with the least logical portion. I started this post by admitting that I’ve been wrong about pretty much every important technology trend in my lifetime, and practically begged many readers to read something else. Now I’ll admit that I don’t actually think I’m a moron. As I pointed out, I enjoyed and was excited about the PC, the Internet, the Web as soon as I saw them. I was right, I just didn’t make many important personal decisions based on that belief. (As an aside, I don’t actually regret the decisions I made instead. Life is full of wonderful choices.)

But I wanted to give Bitcoin fanatics every reason to dismiss this post without comment, because I’ve observed that Bitcoin skepticism is often attacked with an onslaught of vituperative insult. Now, this is true of the current sad state of Internet commentary generally, but here I’m excluding the routine trolls and bitter ignoramuses, and thinking of people who are clearly capable of intelligent, reasoned discussion. Some very smart and often nice people are Bitcoin fanatics, but in the eyes of many intelligent true believers on this topic, skeptics aren’t just wrong but idiotic, not just shortsighted but malicious. That reaction is of course distasteful, but the point here isn’t just that I have delicate sensibilities. The point is pattern recognition: I have seen this kind of fanaticism many times, and it is usually a sign that the merit of the proposition cannot speak for itself.

*The possible exception to all of my skepticism about Bitcoin is micropayments. I think this could be a compelling use case, though not for digital content payments, because the problem with many digital content models is not that people don’t have a good means to pay, but that they would rather receive inferior free content than superior paid content at any price. But micropayments for antispam implementations, or for microtransactions in data transmission generally, are very interesting. This is the one area where I’ll continue to think about what Bitcoin could mean. After all, I’ve been wrong before.