magnificent seven

Tomorrow Oculus begins taking consumer pre-orders for its heralded Rift VR headset, and many people are wondering whether 2016 will be the year of virtual reality … just as they wondered in 2015, and in 2014 … just as they wondered in 2009, when I left Second Life. At that time, I remained certain that virtual reality was the future of online interaction, but also that it would be at least 10 years before the field could achieve mass consumer success. Virtual reality actually has many decades of history as a technical project, as well as a rich history in fiction that demonstrates its enduring attraction.

People keep getting “THE year” of virtual reality wrong because they fail to properly assess the progress required in each field to make a truly compelling VR experience. Observers see great progress in just one field and assume that it’s enough to break open mass consumer interest. But in fact there are SEVEN fields required for VR success – “the year of virtual reality” won’t happen until every one of these fields has progressed past a minimum development threshold. Here’s a brief rundown of each field: WHAT it is, WHY it’s important, and WHEN it will be ready for a truly compelling VR experience.

1. GRAPHICS COMPUTING POWER

WHAT: The most obvious requirement is that a computer needs to be powerful enough to make a compelling simulation of reality. Now, what’s “compelling” is open to argument, and I would argue that even relatively primitive figures can make up a compelling environment if they move, interact with each other, and react to you in an engaging way.

WHY: I guess you could have virtual reality without computers, just as you can have it without compelling graphics. I mean, that’s called “storytelling” and it’s pretty cool. But that’s not what we’re talking about here. Some minimum level of simulated graphics is required.

WHEN: Sufficient power exists now, and has existed for at least seven years. But if your requirement for visual fidelity is very high, then you might think that even today’s computers aren’t powerful enough.

The technical measurement discussion isn’t too interesting, so please skip this paragraph and the graph below if you’re not inclined to pick over this kind of detail. There’s no single measure of computing power, but as a rough proxy I’d pick FLOPS, and to simplify further I’ll talk only about GPU FLOPS, noting that there is an equivalent measure of CPU performance. Because I believe that an experience composed of rough primitives can be compelling, I’d say that even one GPU GFLOPS is sufficient to support a compelling experience, and we’ve had that in home computers since 1999. But giving room for argument, I can raise the requirement to 500 times that, and we’re still talking about 2007-8 as the time when consumer-level computers had enough power to make virtual reality.
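A back-of-the-envelope sketch of that timeline (the 1 GFLOPS in 1999 and the 500× concession are from this paragraph; the annual doubling rate is my own simplifying assumption, not a measured growth curve):

```python
import math

# From the paragraph above: ~1 GPU GFLOPS in a 1999 home computer,
# and a conceded requirement of 500 times that.
baseline_year = 1999
baseline_gflops = 1
required_gflops = 500

# Assumption for illustration only: consumer GPU performance roughly doubles every year.
doublings_needed = math.log2(required_gflops / baseline_gflops)  # ~9 doublings
year_reached = baseline_year + doublings_needed

print(f"~{doublings_needed:.0f} doublings -> around {year_reached:.0f}")  # around 2008
```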

[Graph: CPU vs. GPU computing power]

2. PERSONAL COMPUTING DEVICES

WHAT: Unlike the first field, this is less about predicting the power that computers have than it is about predicting what type of computer people will use. “Personal computing” used to mean desktop computers, but now people actually carry computers on their persons. Today, the type of computer that is most commonly in use is the mobile smartphone.

WHY: Philip Rosedale frequently said that when he started SL, he underestimated the time that it would take to get to mass market use of virtual reality, because he was only looking at the increasing power of desktop computers. He didn’t predict the shift to laptops, which happened in the early 2000s. Using smaller computers generally means using less powerful computers, so although desktop computing power was sufficient to simulate reality by the mid-2000s, the computers that people actually used were laptops, which were not powerful enough. Today, the computer that most people use is a mobile phone, which is even less powerful.

WHEN: Using the same standards as above, smartphones will be able to simulate a compelling VR experience in 2017.

(Boring, skippable paragraph follows.) As above, this assumes the requirement is 500 GPU GFLOPS, without arguing too much about what that number really means. A high-end smartphone today can do about 180 GPU GFLOPS, with more power coming soon. (For comparison, a PS4 game console can do over 1800 GPU GFLOPS.) Taking Moore’s Law narrowly and literally, it will be 2017 before smartphones get over 500 GFLOPS.
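A minimal sketch of that extrapolation (the 180 GFLOPS figure is from above; reading Moore’s Law as an annual doubling is my own assumption about taking it “narrowly and literally”):

```python
import math

current_gflops = 180   # high-end smartphone GPU today, per the paragraph above
target_gflops = 500    # the threshold used throughout this post

# Assumption for illustration: smartphone GPU performance doubles every year.
years_needed = math.log2(target_gflops / current_gflops)

print(f"{years_needed:.1f} years until smartphones cross {target_gflops} GFLOPS")
# ~1.5 years from the time of writing, i.e. sometime in 2017.
```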

But should we even be talking about smartphones here? Forget about “PC” meaning desktop – the truly personal computer has moved from your desk to your lap to your pocket. Where is it going next? On your wrist, on your face? This is a question about the intersection of device use and power, not either one alone. The precise question is, “When is the type of computer that most people use every day going to be capable of 500 GFLOPS?” I still think this is a question about smartphones, but who knows?

3. VISUAL DISPLAY

WHAT: A computer just simulates the environment; you need to be able to see what it is simulating. For many years, most people have seen a computer’s output through a monitor. Now, Oculus Rift and other goggles are coming into the mass consumer market, and they are so good that they’ve ushered in the current wave of excitement about VR.

WHY: Sight is the most important sense in giving people a feeling that they are somewhere other than where they’re sitting. It’s not the only required sense, but without seeing a virtual environment, most people cannot begin to immerse themselves in the experience. I used to think that a flat monitor of sufficient size and resolution could provide a compelling enough VR experience, but using the most advanced VR goggles today simply blows away the monitor experience.

WHEN: The major unsolved problem with VR goggles is that using them for too long induces nausea. Although Oculus and others have made a lot of progress on this, it’s only to the point where nausea is delayed rather than eliminated. A product that makes you puke is never going to be mass market. Based on nothing more than a rough guess informed by many years of observing consumer hardware cycles, I’m going to say that it will take three years to refine VR goggles enough to smooth away the nausea and other early problems, so it will be late 2018 before this field is really ready for mass consumption.

4. AUDIO FIDELITY

WHAT: Properly spatial audio means that sound should be directional: you should be able to hear where a sound is coming from. More than just direction, you should be able to distinguish sounds from each other even when they come from the same direction or are obscured by ambient sounds. This latter goal is called the “cocktail party problem” – even at a noisy cocktail party, you can focus on and hear a single speaker who isn’t necessarily louder than the party noise.

WHY: Seeing may be believing, but hearing is confirmation. The audio experience is often overlooked and undervalued, but the sound of being in a space is crucial confirmation that your brain can believe what you see. It’s possible that the nausea of the VR goggles experience is due to insufficient confirmation from other senses, and hearing is probably the most important and easiest sense to add to the virtual environment, though some might advocate for smell or taste (uh, yuck).

WHEN: “3D audio” has been around for many years, but the cocktail party problem remains unsolved despite recent advances. Still, the current state of the art in spatial audio is very good, and probably good enough without fully solving the cocktail party problem. I think we’ll see really excellent integration of audio fidelity with VR goggles even before the VR goggles are fully nausea-free, so let’s say that the audio component will be ready by 2017.

5. 3D INPUT PERIPHERALS

WHAT: This is the most important area that not enough people are talking about. Virtual reality requires a host of new technologies for allowing a whole body to interact in a 3D space: hand and finger movement, body position, eye-tracking, multidirectional treadmills. Every single one of these is a new Oculus-size opportunity in the making.

WHY: A keyboard and mouse or trackpad are not designed, and not sufficiently adaptable, for a person to move easily in a three-dimensional computed environment. The only innovation in input for mass computing devices that we’ve seen in the last 20 years has been multitouch on a smartphone or tablet, and that doesn’t help much for 3D.

WHEN: We have yet to see a breakthrough product here, despite many promising efforts. The field is extremely varied and diverse, and it could take many years to sort out the winners. Somewhat arbitrarily, I guess it will be at least 2019 before we have mass consumer products that enable all of the tactile, visual and auditory input needed for compelling VR.

6. BANDWIDTH

WHAT: Though it’s easy to imagine a VR experience that is entirely created by a single computer at a single place (somewhat like watching a movie at a theater), it is much more likely that many computers will need to talk with each other over distance, and that requires access bandwidth to communicate (like the Web).

WHY: The design of the particular VR experience that defines success really comes into play here. For example, this post assumes “mass market VR” will be enabled by personal computing devices and that multiple people can share a VR experience from different locations. That means that larger computers will perform important tasks and coordinate communication with the smaller computers that people have. The amount of bandwidth required can vary greatly depending on what demands the system is making on the computers involved.

WHEN: If you think that VR is going to get to mass market through smartphones or whatever successor computer we carry around with us, then you’re bottlenecked by the state of wireless cell networks. Although high-speed data connections are broadly available in major metropolitan areas, they are unreliable even there and unavailable outside of the most densely populated areas. Given the slow rate of evolution of cell networks, it would be at least 2022 before bandwidth is sufficient for VR everywhere.

Many VR enthusiasts picture mass-market adoption through desktop computers, gaming consoles, or other specialized hardware yet to penetrate the mass market – but all of these would use wired connections up to the wireless access point, so for that camp we could say that bandwidth is already sufficient.

7. LATENCY DESIGN

WHAT: The delay in computers communicating with each other is sometimes related to bandwidth, but this field is included as a separate factor to encompass other network quality issues as well as the sheer physics of data traveling across large terrestrial distances.

WHY: Some amount of perceptible latency is unavoidable as a matter of physics if we are talking about communication across the world. So to the extent that the VR experience relies on real-time interaction with a global population, acceptable latency must be designed into the experience, mediated somehow to make the perception of latency acceptable.
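A minimal illustration of that physical floor (the distance and fiber speed are round-number assumptions for the arithmetic, not measurements of any real network path):

```python
# Rough sketch: best-case round-trip time between two points on opposite
# sides of the world, assuming the signal travels through optical fiber
# at roughly two-thirds the speed of light. Both figures are assumptions
# for illustration only.
fiber_speed_km_s = 200_000      # ~2/3 of the speed of light in a vacuum
one_way_distance_km = 20_000    # roughly half the Earth's circumference

round_trip_ms = 2 * one_way_distance_km / fiber_speed_km_s * 1000
print(f"Best-case round trip: {round_trip_ms:.0f} ms")  # ~200 ms, easily perceptible
```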

WHEN: Arguably, this is a problem that is solved now for many types of VR experiences, but I include it here just because I’ve seen many VR proposals that don’t consider how latency must be designed into the experience. We’ll see some common design patterns in the first year of release of the current crop, and they’ll formalize into best practices by next year, so we’ll call this solved by 2017.

So, when will “The Year of VR” really be? My rough guess is 2019 at earliest, 2022 for the more ambitious visions of mass-market VR that include mobile computing.

My point isn’t to be right on the prediction though; here I just wanted to give a more rigorous framework for making predictions of mass market success. When people claim that this or any year will be the year of VR, they should be clear on what the state of progress needs to be on each of these seven pieces. Considering progress in just one field alone has led to many, many mistakenly optimistic predictions.

real time

At Second Life, we occasionally debated the merits of virtual reality vs augmented reality. In caricature:

Virtual reality was the core dream of SL, the same as the core proposition of Snow Crash, the Holodeck, the Matrix – the idea that a computer-simulated world could have all of the sensory and intellectual stimulus, all of the emotion and vitality, all of the commerce and society, of the “real” world (quotation marks necessary because virtual reality would be so real that non-simulated reality would have no better claim on the term).

Augmented reality said that the virtual realists dropped too much acid in their youth. A fully simulated environment might be escapist pleasure for the overcommitted few, but computers would show their real power by adding a layer to our existing lives, not creating entirely new ones. Computers would sink themselves into our phones, our clothes, eventually our fingers and eyeballs and brains, not in the service of making another world, but enhancing the world we live in.

If that debate sounded ridiculously theoretical to you, then I hope that was yesterday because today it’s as real as it gets.

Google Glass is the vanguard of augmented reality, and obviously important to the company.* Google’s mission has always been to organize the world’s information – not to create a fantasy world but to organize our world.

Second Life had its heyday after Google established itself as the new tech titan, but before any serious challenger had risen up behind it. We spent a lot of time trying to convince people that SL could be the next big thing … trying to explain that people wanted to have an online identity, instantiations of themselves that would interact with other online personalities, creating tiny bits of content that might not have individual value, but would have enormous value as a whole fabric of an online world where people would go and interact every day …

I was laughed out of a lot of buildings after explaining SL. Who wants to live online? Who wants friends that they see only in a computer? Who wants to spend their leisure hours pecking away at a keyboard and looking at the cascades of dreck that other non-professional users create?

Second Life missed the mark for a lot of reasons, but not because we were wrong about online life. Facebook came along, and gave us all of the virtual life that the Web could really handle – only 2D, status updates instead of atomic 3D content, kitten pictures instead of furries – but Facebook succeeded in creating a virtual world.

And now they’ve acquired Oculus VR. If it wasn’t clear before – and perhaps it wasn’t clear even to them – they have now taken a side in that old debate, the same side that they’ve been on since the beginning. Facebook is going to go more and more towards virtual reality, while Google expands further and further into augmented reality.

 

*I don’t work on Glass, have no special knowledge of the product or strategy, and actually have never even tried it.

like a boss

“Zappos says goodbye to bosses” is a recent entry in a long string of articles about decentralized management practices. In the popular press, the implicit message is that decentralization is a nonstandard practice compared to strict hierarchy (if it were standard, why report on it at all?) – and if there is a comment section, it is often filled with bitter vitriol about the dumbass management hippies who would rather chant kumbaya than actually do the hard work of telling employees what to do.

Almost 10 years ago, Thomas Malone wrote a book called The Future of Work that summarized twenty years of research on organizational structure, concluding that decentralized management was, well, the future of work. This is no longer a controversial theory, and many different kinds of companies have instituted varying degrees of decentralization with great success. So why are there still so many critics, and why are some of them so bitterly opposed?

One reason is that decentralization isn’t always the right choice. Most employees probably work in enterprises for which a strong degree of hierarchy is a better choice, or at least not an obviously worse choice. This is because the majority of employees in many countries work in SMBs (small-to-medium sized businesses), where there is often little difference in outcome between decentralized and hierarchical management. When you have, say, 5 equally committed people working in the same room together, the information they receive is so similar, and the communication between them so frequent and unmediated, that the employees would probably make the same decisions with or without formal management. In addition, the single largest employer in many countries is the government, where hierarchy is highly beneficial or required due to the nature of the service or because of laws and regulations.

So most people work in SMBs that don’t need decentralization even if they have it, or in large organizations that benefit from a lot of hierarchy. This leads to the common misconception that decentralized management doesn’t scale. “Oh sure, rinky-dink startups and mom-and-pop shops can get by without managers, but when you get to the really big efforts, you gotta have hierarchy to be a great company.”

That is not just wrong, it is perversely wrong. Decentralized management is, for certain kinds of enterprises, actually required in order to scale. The right way to decide whether your company needs decentralized management is to ask yourself these two questions:

How many people are required for my company to achieve our vision?

You have to have a pretty strong idea of your vision to answer this, which is harder than it seems, but let’s assume you know your vision. If you need fewer than about 150 people (because that’s Dunbar’s number), then decentralized management isn’t required. It might be more fun, more engaging for everyone involved, but it’s not required – unless you’re on the extreme side of the next question …

How well-known and stable is the path to achieving our vision?

If you know exactly how to get to the mountaintop, and that path is set in stone, then you have no need for decentralization. A single leader can just tell everyone what to do. A lot of decentralization could also work, so long as everyone is aware of the well-known and stable path – and it would probably be more fun for everyone involved, but it’s not required. However, if the path is unknown, or even if it’s known but subject to change before the full vision is achieved, then decentralized management is required. Without it, failure is guaranteed under these circumstances, thanks to the Innovator’s Dilemma: in large organizations, strict hierarchy will inevitably serve the needs of the current business model, leaving the company open to disruptive innovators that eat the large company’s future. The only hope of avoiding the dilemma is decentralized management: employees with enough freedom to ignore the dictates of management might – with the right resources and a lot of luck – find the disruptive innovation within the company before it’s found outside.

So, to summarize in the obligatory 2×2:

[Figure: decentralized management 2×2 – headcount required vs. how well-known and stable the path is]

I’ve noted the fun factor because it’s an important driver of employee criticism of distributed management. It’s not hard to find people who worked in places with “no bosses” and absolutely hated it, comparing the experience to high school and worse. And the truth is, in a large organization with an unknown and unstable path to a big vision, distributed management is definitely not fun for the employees, because:

  1. It is intellectually and emotionally draining. If everyone is supposed to make their own decisions, a lot of information and communication is required, and there is no way of getting around the time demands that this imposes, especially compared to the job you would be doing in a hierarchical company. Worse, making so many decisions is very stressful for most people, especially when you believe in the vision and you are close to your colleagues. You don’t want to let down your dreams and your friends, and it is very hard to face the possibility that every day may be the day you screw it all up for everyone.
  2. It is unrewarded by compensation. People start to think, “Hey waitaminute – I thought managers were supposed to make these decisions. If I’m making them now, why aren’t I being paid like a manager?” Most companies do not adjust their compensation schemes to account for this additional responsibility, because doing so would likely require a complex mechanism for collecting all possible projects, allowing everyone in the company to contribute to decisions on which they are knowledgeable, and rewarding both successes and noble failures with monetary compensation commensurate with the effort of the people who implemented the project as well as those who contributed decisionmaking weight to the project. An attempt to build this kind of compensation scheme would be regarded as insane, both inside and outside the company. So most companies don’t try.
  3. The rewards for this kind of system extend beyond the likely employment period, possibly even beyond the lifetime of the employee. The Innovator’s Dilemma takes a long time to become a real threat. A small company first has to grow into a market leader and achieve such dominance that it is blinded to the threat of disruptive innovation – that can take years, possibly better measured in generations. So people are doing hard, uncompensated work for the benefit of preventing a problem that might not happen during the lifetime of anyone who works at the company. That is a tough, tough ask of anyone. Even employees who understand the problem wish that the company could be hierarchical until the problem is apparent, and then switch over to this distributed bullshite. But the problem, of course, is that at that point it’s too late.

So … should you like a boss or be a boss? Should you like your boss, and should that even be a question when your boss is you?

Bit flip

I was wrong about the PC. As a kid I played with the TRS-80, Apple ][ and C64 – I was engrossed in them all, I thought they were the future. But I didn’t predict the sweeping change the PC would have on society and the economy. I didn’t devote my hobbies and education to learning more about computer science.

I was wrong about the Internet. I was introduced to UNIX as an intern at Bell Labs, I read BBSes, I was on CompuServe and Prodigy and AOL, I used Mosaic. I enjoyed them all, I understood how these were the future. But I didn’t anticipate how all-encompassing this future could become. I didn’t devote my early career plans to working in Internet companies.

I was wrong about Google. As soon as I started using it in 1999, I saw that this combination of simplicity and power was the future of search, and that search was the key to the Web. But I didn’t see the enormous economic engine that search intent could generate. I didn’t want to work at Google while it was still a relatively small company.

So I’m probably wrong about Bitcoin. For reasons I’ll go into towards the end of this post, I feel it’s very important to state this at the beginning. If you already know I’m wrong, your time is much better spent reading and re-reading this wonderful piece by Marc Andreessen, the finest articulation of the potential power of Bitcoin yet written. (Incidentally, I’ve concluded that I was wrong when I said that Andreessen is probably the best living tech entrepreneur, but would be a mediocre VC. He’s already proven he’s a great VC.)

Again: please stop reading if you already know I’m wrong.

I don’t believe in Bitcoin, I don’t believe that it’s the foundation of a new age, a wave to follow the PC, the Internet, the Web. My resistance to the judgment of my betters is broad and deep, logical and emotional, based on fact and conjecture. So clearly, I’m not trying to win an argument here. I just want to someday look back on this and laugh. Or cry, as the case may be.

The roots of my skepticism about Bitcoin grow from three areas, which I’ll call What’s Missing, What I Know From Experience, and What’s Distasteful.

What’s Missing

As I humblebragged above, I knew about some of the key life-changing technologies of our time before most people. I may have been wrong about just how far they would go, but I was right to be curious about them, right to try them before they were popular, and right to enjoy their early incarnations. I had that curiosity and enjoyment from the minute I heard about them, and that enjoyment was sustained and nourished through each and every use.

I’m not curious about Bitcoin, at least, not curious enough to try it. As a consumer (not as a technologist, futurist, or business person), I don’t see why I might enjoy using it. I can understand why it has speculative value, but the joy of a good return from a speculative investment is nonspecific to Bitcoin. As a consumer, what’s in it for me?

The shortest description of the most obvious consumer proposition for Bitcoin is that it’s digital cash. But I’m not actually having a problem with the features of non-digital cash. Making digital payment behave exactly like cash would introduce giant problems into my life without solving any.

The first problem is the fear of seller fraud, i.e. how to address the problem that the person selling the goods might not actually deliver the goods. Bitcoin could, in theory, help quite a lot with buyer fraud, since once Bitcoins are transferred it’s just like receiving cash. But I’m mostly a consumer, not a seller, and as a consumer I don’t like to hand cash over to anyone unless I receive the goods at the same time or before I give the cash. Under what circumstance besides anonymity could I possibly want to use digital cash rather than a credit card? A credit card gives me the assurance that if I’m truly defrauded by the seller, I can always call the credit card company and demand a chargeback. Bitcoin advocates talk about chargebacks as a merchant’s curse (which it is), without addressing how the same thing is an honest consumer’s blessing.

Another big problem is the fear of loss and theft. I have this problem with real cash already: I don’t want to keep an excessive amount on my person or in my home or business, I don’t want to forget where I put it, and I don’t want someone to steal it. Digital cash makes this an enormous problem, since I can now have a very large amount of cash, which becomes a very attractive target for theft, and a very sad potential case for loss. Sure, I can protect my digital cash with all manner of digital locks and keys, but this makes my cash security problems worse, not better. Banking has lots and lots of problems, but one of them is not that if I forget my key, I lose all my money.

I understand that these are problems of privilege, first world problems, and I’m not addressing the benefits that Bitcoin’s success would have for problems particular to the developing world. But I’m also not aware of any mass consumer technology that became successful due to features that benefitted developing economies without solving first world problems first. That may be sad, but it’s true.

What I Know From Experience

How many people have managed the growth of a new currency from its early days through its use in hundreds of millions of dollars’ worth of transactions per year? I don’t know, but I suspect that the number is only in the dozens, and I know that I’m one of them. So I cannot help but view the prospects for Bitcoin through the lens of what I learned from developing the Linden Dollar as a product for Second Life. This experience might provide some special insight, but it also almost certainly comes with bias, false equivalencies, the color of regret and the specter of envy. Nevertheless, I can’t talk about Bitcoin without thinking of the Linden Dollar.

Since memories are short, let me try to explain the Linden Dollar very briefly. Second Life was once a thing that had the same level of interest as Bitcoin does today, actually a bit more judging by search queries:

[Chart: search interest over time, Second Life vs. Bitcoin]

The Linden Dollar is a virtual currency, the primary medium of exchange for transactions in the virtual world of Second Life. At its peak, people using Second Life used the Linden Dollar to buy and sell virtual goods worth more than half a billion dollars per year. Although there are many other digital worlds featuring the ability to get goods in exchange for some virtual token, the Linden Dollar had some unusual features that didn’t exist or weren’t allowed by similar services. The L$ could be transferred from user to user, and could be exchanged for a price in US dollars (and Euros and other currencies). Linden Lab, the company making Second Life, could issue new Linden Dollars in any amount and at any price, without any guarantee of redemption for any value, making the L$ a true fiat currency (i.e. having value by declaration rather than by guarantee of exchange for something of value, like gold).

It’s fair to say that the Linden Dollar was inferior to Bitcoin in every possible aspect of technical implementation, particularly the cryptographic security measures. And it was not only centrally managed, but subject to the inflationary risks inherent in management of a money supply by an unstable government (i.e. a startup). Bitcoin advocates would have no problem listing dozens of feature inadequacies and design mistakes for the Linden Dollar. But I don’t think that the absence of any of Bitcoin’s vaunted features is the reason the Linden Dollar didn’t reach mass success.

The Linden Dollar failed to reach a mass audience because Second Life failed to reach a mass audience. Even with SL’s shortcomings, the L$ might still have reached a broad audience if it had also become an accepted medium of exchange on another successful platform. The features and design of a currency can preclude certain types of failure (e.g. widespread fraud), but with one possible exception* they cannot be the driving reason for success. A currency, or any payment method, succeeds not because of its features, but because of the adoption of the platform on which the currency is the primary medium of exchange. As I have argued elsewhere, the value of the platform is the dominant factor in determining whether the medium of exchange for that platform will be successful. Consider the US dollar, which is after all Bitcoin’s true competition. The “platform” for the US dollar is the United States economy. The US$ has many feature deficiencies, and has undergone many design changes over the years. Someday the US dollar will fail to be the world’s dominant currency. That day will come after the United States is no longer the world’s largest economy, and not a day before.

Now, it’s arguable that the platform for Bitcoin is the Internet, and that economic transactions running through the Internet could exceed the US GDP (minus the portion running through the Internet). So perhaps we are on the cusp of seeing Bitcoin take the place of the US$, not because the features of the currency make it better than the US$, but because the US GDP is smaller than Internet GDP, and no rising country GDP (i.e. China) grows fast enough to fill the vacuum. But that’s not Bitcoin winning through superior features or technology, that’s the US economy failing and the world not wanting to rely on China’s economy.

What’s Distasteful

If it’s not clear enough already, this post is driven by personal taste, experience and bias as much as it is by fact and logic. So I may as well conclude with the least logical portion. I started this post by admitting that I’ve been wrong about pretty much every important technology trend in my lifetime, and practically begged many readers to read something else. Now I’ll admit that I don’t actually think I’m a moron. As I pointed out, I enjoyed and was excited about the PC, the Internet, the Web as soon as I saw them. I was right, I just didn’t make many important personal decisions based on that belief. (As an aside, I don’t actually regret the decisions I made instead. Life is full of wonderful choices.)

But I wanted to give Bitcoin fanatics every reason to dismiss this post without comment, because I’ve observed that Bitcoin skepticism is often attacked with an onslaught of vituperative insult. Now, this is true of the current sad state of Internet commentary generally, but here I’m excluding the routine trolls and bitter ignoramuses, and thinking of people who are clearly capable of intelligent, reasoned discussion. Some very smart and often nice people are Bitcoin fanatics, but in the eyes of many intelligent true believers on this topic, skeptics aren’t just wrong but idiotic, not just shortsighted but malicious. That reaction is of course distasteful, but the point here isn’t just that I have delicate sensibilities. The point is pattern recognition: I have seen this kind of fanaticism many times, and it is usually a sign that the merit of the proposition cannot speak for itself.

*The possible exception to all of my skepticism about Bitcoin is micropayments. I think this could be a compelling use case, though not in digital content payments, because the problem with many digital content models is not that people don’t have a good means to pay, but that they would rather receive inferior free content than superior paid content at any price. But micropayments for antispam implementations, or for microtransactions in data transmission generally, are very interesting. This is the one area where I’ll continue to think about what Bitcoin could mean. After all, I’ve been wrong before.

the conflict question

Recent events remind me of one of my favorite hiring tales. I used to ask prospective hires an annoying interview question, one of those open-ended travelogues that journeyed through odd pathways and byways but always ended up in the same cramped room, where two colleagues were locked in irreconcilable conflict, and the proposal of only one could proceed. Depending on the mindset and tenacity of the candidate, this question could take 3 minutes or closer to 30.

Once I was the last interviewer in an extended, multi-day interview slate. Around twenty people had already interviewed the candidate – this was for a small, tightly-knit company, and in such circumstances it’s not unusual (though it may be inadvisable) for nearly all of the company to interview new members of the tribe. So by the time she came to me, she’d already been run a bit ragged.

I proceeded to launch into what I called The Conflict Question. Due to some combination of my mood, her mindset, the weather and wicked chance, this version unfurled into an inquisition taking the better part of an hour. I felt like I learned a lot about her, and was happy to recommend a hire.

I had barely made it back to my desk when one of the company stalwarts stormed up to me and dragged me into an empty conference room. Although I held a senior position, I hadn’t been with the company very long, and this guy carried considerably greater history and moral authority (quite correctly and deservedly, in my view).

“What the hell is wrong with you?” he hissed.

“What?”

“We have a critical hire to fill here, we found this amazing candidate. She’s been interviewing for days and everybody loves her. Every single person gave her rave reviews. She gets to you, the very last interview – more of a formality than anything – and now she doesn’t want to work here. She says if assholes like you are in senior management, this can’t be the kind of place she’d like to work!”

“What?! Uh ….” My mind raced through the interview, playing it back and forth in my head like the Zapruder film. What did I do? What did I say? Admittedly the interview did get a little strained, spending the bulk of time on a question whose very essence is about conflict. Not infrequently, the course of the question presses conflict so extensively that it can be said to generate real conflict as well. I didn’t think I’d crossed any kind of line, but then again if I had it wouldn’t be the first (or the last) time I’d done so without realizing it.

I asked frantic questions about what she said about the interview and whether there was any chance of saving the hire. My colleague furiously insisted she was only reacting appropriately to my unforgivable boorishness. His red-faced anger thrashed me like an invisible whip … and then I saw that this was the strangest thing of all. I’d always admired his grace under pressure – I had watched him work with grim calm through disastrous crises and deplorable failures. He was, as far as I knew then, an obdurate, utterly reliable, improbably emotionless rock of a man.

“Whoa, whoa, waitaminute,” I said. “Are you fucking her? Or trying to?”

The passion drained from his face. It wasn’t the passion of anger, as I thought, but passion itself. He took a moment to gather himself, drew in a deep breath, and finally hung his head as he answered, simply and plainly: “Yes. Yes I am.”

“Ok. Now get the fuck out of my office,” I said, which was a weird thing to say as we were standing in a conference room and I had no office. “Now wait – just so you know – I’ll fix this, I’ll go make nice with her and we’ll bring her on. But I don’t think my interview was wrong, and I don’t think what I found is wrong: she doesn’t like conflict, especially when she thinks she should be walking into a friendly situation. I might be an asshole, but her job might sometimes require making assholes do good work, so that’s something you’ll have to deal with – as her boyfriend, perhaps, but not in any role that could possibly be supervisory. So thanks for letting me know.”

We hired her. She was great. She wasn’t so bad at handling conflict (and assholes), but I’m pretty sure she didn’t like it. And she certainly was never put into a conflict of interest with her romantic relationship, because everyone was transparent from the start. Their relationship with each other has long outlasted any of ours with the company, and is certain to outlast the company itself. At the end, there was nothing left of real or imagined conflict, nothing but a funny story.

It’s all so easy when the truth is out at the beginning.

welcome to SV, DBs

‘That’s why,’ said Azaz, ‘there was one very important thing about your quest that we couldn’t discuss until you returned.’

‘I remember,’ said Milo eagerly, ‘Tell me now.’

‘It was impossible,’ said the king, looking at the Mathemagician.

‘Completely impossible,’ said the Mathemagician, looking at the king.

‘Do you mean –’ stammered the bug, who suddenly felt a bit faint.

‘Yes indeed,’ they repeated together; ‘but if we’d told you then, you might not have gone — and, as you’ve discovered, so many things are possible just as long as you don’t know they’re impossible.’

— The Phantom Tollbooth

In the startup blogosphere, you’ll regularly see posts about how hard startups are, how hard it is to be an entrepreneur. Mark Suster has an excellent recent entry into the genre, coining the very excellent term Entrepreneurshit. Earlier this summer, Ben Horowitz brought his rapper’s flair to describing The Struggle, a cold and merciless beatdown about a place where nothing is easy and nothing feels right. A few years ago, Paul Graham posted what should have been the definitive piece about What Startups Are Really Like, covering all the high-low points of cofounder conflict, total life immersion, emotional roller coasters, endless persistence, unpredictable customers, clueless investors and heartless luck. But it wasn’t the final word, and it won’t be – why is that?

Dave McClure bends the pattern by noting (blaring, really, in inimitable McClure style) that the passion should be about product, not about being an entrepreneur. What all the other posts were saying is, Don’t come and try this shite because you think being an entrepreneur is fun, because it’s not. Dave completes the sentence by saying what the passion should really be about: product and customers. It’s a nice continuation of the message to whoever needed to read all the previous entreaties about the pain, the passion, and the not-very-likely glory.

Who exactly are all these posts talking to? To the inexperienced, of course – the battle-scarred veterans already know what’s what. But those young tyros, those fresh-off-the-presses CS majors, the hackers, the “design guys,” the would-be world conquerors – all those startup sages want to send them a message: think twice before you dive into the deep end of the pool, kiddos. There’s a bit of a concern that an endless horde of former Wall Street DBs will descend upon Silicon Valley, as they have been doing ever since the late ’90s, with their uninformed dreams of being “a startup guy.”

I say, let ’em come. I have no problem with anyone who wants to take the plunge. If you’re even thinking you might want to do it someday, do it now, do it today. I’d rather have you here, facing down those odds, in the Entrepreneurshit, deep in The Struggle, finding out What Startups Are Really Like – rather have you here than constructing a new derivative, grinding it out for the man, toiling away while wondering if this is really all there is to life. Never mind the fact that it’s completely impossible; that’s only true for those who listen to the misguided wisdom of their elders.

what is technology?

Since the 2008 financial crisis, the world economy has been in the doldrums, and every time we think we’re out of the storm, we find that we are still at sea, struggling to stay afloat. Europe appears on the verge of disastrous devolution, and the world economy as a whole is roughly at the same level as it was in 2000. Will we ever feel confident that we are returning to sustainable growth?

I think the answer lies in technology investing. I think we just have to ask, Have we been investing in technology? But in order to answer that, we have to know what “technology” is.

Ask a random stranger “What is technology?” and you’ll likely hear something about “computers” or “the Internet.” Most people assume that investing in anything having to do with computers or the Internet is automatically “technology investing.”

This can’t be right, of course. The term “technology” has been around since the 1600s, before anything like today’s computers existed. I suppose a compass was considered technology back then, and when the sextant was invented it was “high-tech.” More recently, though still generations ago, the technologies of the railroad transformed the world economy from the mid-1800s, and automobiles shortly afterwards extended that transformation further into our economy and culture. Broadcast radio, and then television and movies, became important technologies as mass media came into our lives in the early-to-mid 20th century. The cycles got shorter and faster with computers in the ’70s and ’80s, and the Internet from the ’90s.

When we say “technology” today we no longer think of trains or cars or even radio or TV. All of those things still have technology in them, but none of them represents what we mean by “technology.” So it only makes sense that someday soon “technology” will bring no reflexive association with computers or the Internet. So what then will it mean when we say “technology”?

My nutshell roadmap of technology from the compass to the Internet stopped in the 1990s. The right investments in seafaring, shipping, autos, broadcast networks, computers and Internet resulted in personal fortunes and worldwide economic growth. I think in any of those eras, you could ask “Are we investing in technology?” and the answer would be a clear yes, and you would have been able to point clearly to the technology. But ask yourself today, “Over the last decade, have we been investing in technology?” and I’m not comfortable with the answer.

“Technology investors” have made personal fortunes and huge companies have been birthed since 2002, but what is the technology? Should social media and games be considered technology? Should mobile phones and tablet computers? If so (or if not), why (not)? What is the definition? What is the test? What is technology?

Here is the simplest definition of technology:

Technology promises a better life.

This raises a question with almost every word. Why a promise? Better by what standard? Whose life? Before trying to clarify, let me propose a test:

Technology delivers what you need while breaking the boundaries of the Speed/Quality/Price triangle.

When technology works, you get what you need at a higher quality, at a lower price, and faster than you could have gotten it before. At introduction, “high-tech” may not deliver all three right away, but it’s apparent even early on that the speed, quality and price will inevitably improve. This is why technology is a promise – early iterations may give you what you want with clear improvement in only one of the three aspects, but even early on there is an explicit assumption that the other two will follow. It’s also inherently assumed that although only an exclusive few might access and benefit from the early technology, someday everyone will. The definition of “better” is just that “quality” is delivered, in whatever sense of quality is being used at the time, but that the quality comes faster and cheaper. So: Technology promises a better life.

How well do this definition and test fit the waves of important technology advances of the past? If, say, the prime years of your life were from 1930 to 1970, did television give you what you needed, better and faster and cheaper? At a time when we went from worldwide depression, to broad-scale war, to peace and increasing interdependency and complexity and societal change – yes, I’d say that the ability to deliver news, entertainment and culture viscerally and quickly gave us what we needed. How about the personal computer, the Internet, and search engines? I think positive answers are similarly easy to construct, and negative answers are mostly dyspeptic dystopianism.

Now how about social media? Well, everyone needs friends. Everyone needs a way to connect with friends, close and distant. Everyone needs to be a part of a community. But are social networking companies truly satisfying these needs? Is that even what they are trying to give us? Do your multiple social networks, hundreds or thousands of “friends” you have on them, their messages and status updates and pictures and quotes of the day – are these giving you what you need? Is this a promise of a better life for you?

I have no problem if your answer is “yes” to these questions. But I can’t answer yes, and I fear that most people wouldn’t answer yes, and this makes me uncomfortable because when I return to the question, Have we been investing in technology over the past decade? – I also cannot answer yes. And that means I cannot see how we will emerge from this worldwide economic slump.

I’m sure there is active investment in technology that really does promise a better life, but that’s not the mainstream of what’s called technology investing today. When autos and radios and TV and computers and the Internet were coming up, there was plenty of investment fervor around these industries. Today, the fervor is around companies that promise all sorts of interesting things, but I wouldn’t call most of these things a promise of a better life. They may be great companies, they are certainly filled with great people, they definitely have smart investors – but they are not making technology. And if we fail to invest in technology – real technology – then the economy will not return to robust health, and life will not get better.