real time

At Second Life, we occasionally debated the merits of virtual reality vs augmented reality. In caricature:

Virtual reality was the core dream of SL, same as the core proposition of Snow Crash, the Holodeck, the Matrix – the idea that a computer-simulated world could have all of the sensory and intellectual stimulus, all of the emotion and vitality, all of the commerce and society, of the “real” world (quotation marks necessary because virtual reality would be so real that non-simulated reality would have no better claim on the term).

Augmented reality said that the virtual realists dropped too much acid in their youth. A fully simulated environment might be escapist pleasure for the overcommitted few, but computers would show their real power by adding a layer to our existing lives, not creating entirely new ones. Computers would sink themselves into our phones, our clothes, eventually our fingers and eyeballs and brains, not in the service of making another world, but of enhancing the world we live in.

If that debate sounded ridiculously theoretical to you, then I hope that was yesterday because today it’s as real as it gets.

Google Glass is the vanguard of augmented reality, and obviously important to the company.* Google’s mission has always been to organize the world’s information – not to create a fantasy world but to organize our world.

Second Life had its heyday after Google established itself as the new tech titan, but before any serious challenger had risen up behind it. We spent a lot of time trying to convince people that SL could be the next big thing … trying to explain that people wanted to have an online identity, instantiations of themselves that would interact with other online personalities, creating tiny bits of content that might not have individual value, but would have enormous value as a whole fabric of an online world where people would go and interact every day …

I was laughed out of a lot of buildings after explaining SL. Who wants to live online? Who wants friends that they see only in a computer? Who wants to spend their leisure hours pecking away at a keyboard and looking at the cascades of dreck that other non-professional users create?

Second Life missed the mark for a lot of reasons, but not because we were wrong about online life. Facebook came along, and gave us all of the virtual life that the Web could really handle – only 2D, status updates instead of atomic 3D content, kitten pictures instead of furries – but Facebook succeeded in creating a virtual world.

And now they’ve acquired Oculus VR. If it wasn’t clear before – and perhaps it wasn’t clear even to them – they have now taken a side in that old debate, the same side that they’ve been on since the beginning. Facebook is going to go more and more towards virtual reality, while Google expands further and further into augmented reality.


*I don’t work on Glass, have no special knowledge of the product or strategy, and actually have never even tried it.

like a boss

“Zappos says goodbye to bosses” is a recent entry in a long string of articles about decentralized management practices. In the popular press, the implicit message is that decentralization is a nonstandard practice compared to strict hierarchy (if it were standard, why report on it at all?) – and if there is a comment section, it is often filled with bitter vitriol about the dumbass management hippies who would rather chant kumbaya than actually do the hard work of telling employees what to do.

Almost 10 years ago, Thomas Malone wrote a book called The Future of Work that summarized twenty years of research on organizational structure, concluding that decentralized management was, well, the future of work. This is no longer a controversial theory, and many different kinds of companies have instituted varying degrees of decentralization with great success. So why are there still so many critics, and why are some of them so bitterly opposed?

One reason is that decentralization isn’t always the right choice. Most employees probably work in enterprises for which a strong degree of hierarchy is a better choice, or at least not an obviously worse choice. This is because the majority of employees in many countries work in SMBs (small-to-medium sized businesses), where there is often little difference in outcome between decentralized and hierarchical management. When you have, say, 5 equally committed people working in the same room together, the information they receive is so similar, and the communication between them so frequent and unmediated, that the employees would probably make the same decisions with or without formal management. In addition, the single largest employer in many countries is the government, where hierarchy is highly beneficial or required due to the nature of the service or because of laws and regulations.

So most people work in SMBs that don’t need decentralization even if they have it, or in large organizations that benefit from a lot of hierarchy. This leads to the common misconception that decentralized management doesn’t scale. “Oh sure, rinky-dink startups and mom-and-pop shops can get by without managers, but when you get to the really big efforts, you gotta have hierarchy to be a great company.”

That is not just wrong, it is perversely wrong. Decentralized management is, for certain kinds of enterprises, actually required in order to scale. The right way to decide whether your company needs decentralized management is to ask yourself these two questions:

How many people are required for my company to achieve our vision?

You have to have a pretty strong idea of your vision to answer this, which is harder than it seems, but let’s assume you know your vision. If you need fewer than about 150 people (because that’s Dunbar’s number), then decentralized management isn’t required. It might be more fun, more engaging for everyone involved, but it’s not required – unless you’re on the extreme side of the next question …

How well-known and stable is the path to achieving our vision?

If you know exactly how to get to the mountaintop, and that path is set in stone, then you have no need for decentralization. A single leader can just tell everyone what to do. A lot of decentralization could also work, so long as everyone is aware of the well-known and stable path – and this would probably be more fun for everyone involved, but it’s not required. However, if the path is unknown, or even if it’s known but subject to change before the full vision is achieved, then decentralized management is required. Without it, failure is guaranteed under these circumstances, due to the Innovator’s Dilemma – in large organizations, strict hierarchy will inevitably serve the needs of the current business model, leaving the company open to disruptive innovators that eat the large company’s future. The only hope of avoiding the dilemma is decentralized management: employees with enough freedom to ignore the dictates of management might – with the right resources and a lot of luck – find the disruptive innovation within the company before it’s found outside.

So, to summarize in the obligatory 2×2:

[Figure: decentralized management 2×2 – headcount needed for the vision vs. stability of the path]
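
Since the 2×2 is really just a tiny decision procedure, here is the same summary as a code sketch. This is my own toy encoding of the two questions above, not anything official – the function name, the string labels and the hard ~150 cutoff (Dunbar’s number) are all assumptions for illustration:

```python
def decentralization_required(people_needed: int, path_known_and_stable: bool) -> str:
    """Toy encoding of the 2x2 above (names and thresholds are illustrative)."""
    DUNBAR = 150  # rough limit on stable social relationships

    if path_known_and_stable:
        # Everyone can follow a well-known, stable path; a single leader
        # telling people what to do works fine at any size.
        return "not required (though decentralization might be more fun)"

    if people_needed <= DUNBAR:
        # Below Dunbar's number, communication is frequent and unmediated
        # enough that people converge on the same decisions either way --
        # decentralization only becomes necessary at the extreme end of
        # path instability.
        return "not required (except in extreme cases)"

    # Big vision, unknown or unstable path: strict hierarchy will serve the
    # current business model and walk straight into the Innovator's Dilemma.
    return "required (and, for the employees, definitely not fun)"
```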

I’ve noted the fun factor because it’s an important driver of employee criticism of distributed management. It’s not hard to find people who worked in places with “no bosses” and absolutely hated it, comparing the experience to high school and worse. And the truth is, in a large organization with an unknown and unstable path to a big vision, distributed management is definitely not fun for the employees, because:

  1. It is intellectually and emotionally draining. If everyone is supposed to make their own decisions, a lot of information and communication is required, and there is no way of getting around the time demands that this imposes, especially compared to the job you would be doing in a hierarchical company. Worse, making so many decisions is very stressful for most people, especially when you believe in the vision and you are close to your colleagues. You don’t want to let down your dreams and your friends, and it is very hard to face the possibility that every day may be the day you screw it all up for everyone.
  2. It is unrewarded by compensation. People start to think, “Hey waitaminute – I thought managers were supposed to make these decisions. If I’m making them now, why aren’t I being paid like a manager?” Most companies do not adjust their compensation schemes to account for this additional responsibility, because doing so would likely require a complex mechanism for collecting all possible projects, allowing everyone in the company to contribute to decisions on which they are knowledgeable, and rewarding both successes and noble failures with monetary compensation commensurate to the effort of the people who implemented the project as well as those who contributed decision-making weight to the project. An attempt to build this kind of compensation scheme would be regarded as insane, both inside and outside the company. So most companies don’t try.
  3. The rewards for this kind of system extend beyond the likely employment period, possibly even beyond the lifetime of the employee. The Innovator’s Dilemma takes a long time to become a real threat. A small company first has to grow into a market leader and achieve such dominance that it is blinded to the threat of disruptive innovation – that can take years, possibly better measured as generations. So people are doing hard, uncompensated work for the benefit of preventing a problem that might not happen during the lifetime of anyone who works at the company. That is a tough, tough ask of anyone. Even employees who understand the problem wish that the company could be hierarchical until the problem is apparent, and then switch over to this distributed bullshite. But the problem, of course, is that at that point, it’s too late.

So … should you like a boss or be a boss? Should you like your boss, and should that even be a question when your boss is you?

Bit flip

I was wrong about the PC. As a kid I played with the TRS-80, Apple ][ and C64 – I was engrossed in them all, I thought they were the future. But I didn’t predict the sweeping changes the PC would bring to society and the economy. I didn’t devote my hobbies and education to learning more about computer science.

I was wrong about the Internet. I was introduced to UNIX as an intern at Bell Labs, I read BBSes, I was on CompuServe and Prodigy and AOL, I used Mosaic. I enjoyed them all, I understood how these were the future. But I didn’t anticipate how all-encompassing this future could become. I didn’t devote my early career plans to working in Internet companies.

I was wrong about Google. As soon as I started using it in 1999, I saw that this combination of simplicity and power was the future of search, and that search was the key to the Web. But I didn’t see the enormous economic engine that search intent could generate. I didn’t want to work at Google while it was still a relatively small company.

So I’m probably wrong about Bitcoin. For reasons I’ll go into towards the end of this post, I feel it’s very important to state this at the beginning. If you already know I’m wrong, your time is much better spent reading and re-reading this wonderful piece by Marc Andreessen, the finest articulation of the potential power of Bitcoin yet written. (Incidentally, I’ve concluded that I was wrong when I said that Andreessen is probably the best living tech entrepreneur, but would be a mediocre VC. He’s already proven he’s a great VC.)

Again: please stop reading if you already know I’m wrong.

I don’t believe in Bitcoin, I don’t believe that it’s the foundation of a new age, a wave to follow the PC, the Internet, the Web. My resistance to the judgment of my betters is broad and deep, logical and emotional, based on fact and conjecture. So clearly, I’m not trying to win an argument here. I just want to someday look back on this and laugh. Or cry, as the case may be.

The roots of my skepticism about Bitcoin grow from three areas, which I’ll call What’s Missing, What I Know From Experience, and What’s Distasteful.

What’s Missing

As I humblebragged above, I knew about some of the key life-changing technologies of our time before most people. I may have been wrong about just how far they would go, but I was right to be curious about them, right to try them before they were popular, and right to enjoy their early incarnations. I had that curiosity and enjoyment from the minute I heard about them, and that enjoyment was sustained and nourished through each and every use.

I’m not curious about Bitcoin, at least, not curious enough to try it. As a consumer (not as a technologist, futurist, or business person), I don’t see why I might enjoy using it. I can understand why it has speculative value, but the joy of a good return from a speculative investment is nonspecific to Bitcoin. As a consumer, what’s in it for me?

The shortest description of the most obvious consumer proposition for Bitcoin is that it’s digital cash. But I’m not actually having a problem with the features of non-digital cash. Making digital payment behave exactly like cash would introduce giant problems into my life without solving any.

The first problem is the fear of seller fraud, i.e. how to address the problem that the person selling the goods might not actually deliver the goods. Bitcoin could, in theory, help quite a lot with buyer fraud, since once Bitcoins are transferred it’s just like receiving cash. But I’m mostly a consumer, not a seller, and as a consumer I don’t like to hand cash over to anyone unless I receive the goods at the same time or before I give the cash. Under what circumstance besides anonymity could I possibly want to use digital cash rather than a credit card? A credit card gives me the assurance that if I’m truly defrauded by the seller, I can always call the credit card company and demand a chargeback. Bitcoin advocates talk about chargebacks as a merchant’s curse (which they are), without addressing how the same thing is an honest consumer’s blessing.

Another big problem is the fear of loss and theft. I have this problem with real cash already, I don’t want to keep an excessive amount on my person or in my home or business. I don’t want to forget where I put it, I don’t want someone to steal it. Digital cash makes this an enormous problem, since I can now have a very large amount of cash, which becomes a very attractive target for theft, and a very sad potential case for loss. Sure, I can protect my digital cash with all manner of digital locks and keys, but this makes my cash security problems worse, not better. Banking has lots and lots of problems, but one of them is not that if I forget my key, I lose all my money.

I understand that these are problems of privilege, first world problems, and I’m not addressing the benefits that Bitcoin’s success would have for problems particular to the developing world. But I’m also not aware of any mass consumer technology that became successful due to features that benefitted developing economies without solving first world problems first. That may be sad, but it’s true.

What I Know From Experience

How many people have managed the growth of a new currency from its early days through its use in hundreds of millions of dollars’ worth of transactions per year? I don’t know, but I suspect that the number is only in the dozens, and I know that I’m one of them. So I cannot help but view the prospects for Bitcoin through the lens of what I learned from developing the Linden Dollar as a product for Second Life. This experience might provide some special insight, but it also almost certainly comes with bias, false equivalencies, the color of regret and the specter of envy. Nevertheless, I can’t talk about Bitcoin without thinking of the Linden Dollar.

Since memories are short, let me try to explain the Linden Dollar very briefly. Second Life was once a thing that had the same level of interest as Bitcoin does today – actually a bit more, judging by search queries:

[Chart: search interest over time, Second Life vs. Bitcoin]

The Linden Dollar is a virtual currency, the primary medium of exchange for transactions in the virtual world of Second Life. At its peak, Second Life users bought and sold virtual goods worth more than half a billion dollars per year using the Linden Dollar. Although there are many other digital worlds featuring the ability to get goods in exchange for some virtual token, the Linden Dollar had some unusual features that similar services lacked or didn’t allow. The L$ could be transferred from user to user, and could be exchanged for a price in US dollars (and Euros and other currencies). Linden Lab, the company making Second Life, could issue new Linden Dollars in any amount and at any price, without any guarantee of redemption for any value, making the L$ a true fiat currency (i.e. having value by declaration rather than by guarantee of exchange for something of value, like gold).
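
To make “true fiat currency” concrete, here is a toy model of the mechanics just described: user-to-user transfer, exchange at a floating US dollar price, and unlimited issuance by the operator. This is purely illustrative – the class, the method names and the sample price are my own assumptions, with no relation to Linden Lab’s actual implementation:

```python
class ToyFiatCurrency:
    """Toy model of a centrally issued virtual currency (illustrative only)."""

    def __init__(self, usd_price):
        self.balances = {}          # user -> units held
        self.usd_price = usd_price  # floating exchange price, USD per unit

    def issue(self, user, amount):
        # The operator can mint any amount at any time, with no guarantee
        # of redemption -- this is what makes the currency "fiat".
        self.balances[user] = self.balances.get(user, 0) + amount

    def transfer(self, sender, receiver, amount):
        # User-to-user transfer, like handing over cash.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

    def sell_for_usd(self, user, amount):
        # Sell units back on the exchange at the current price.
        self.transfer(user, "exchange", amount)
        return amount * self.usd_price
```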

It’s fair to say that the Linden Dollar was inferior to Bitcoin in every possible aspect of technical implementation, particularly the cryptographic security measures. And it was not only centrally managed, but subject to the inflationary risks inherent in management of a money supply by an unstable government (i.e. a startup). Bitcoin advocates would have no problem listing dozens of feature inadequacies and design mistakes for the Linden Dollar. But I don’t think that the absence of any of Bitcoin’s vaunted features is the reason that the Linden Dollar didn’t reach mass success.

The Linden Dollar failed to reach a mass audience because Second Life failed to reach a mass audience. Even with SL’s shortcomings, the L$ might still have reached a broad audience if it had also become an accepted medium of exchange on another successful platform. The features and design of a currency can preclude certain types of failure (e.g. widespread fraud), but with one possible exception* they cannot be the driving reason for success. A currency, or any payment method, succeeds not because of its features, but because of the adoption of the platform on which the currency is the primary medium of exchange. As I have argued elsewhere, the value of the platform is the dominant factor in determining whether the medium of exchange for that platform will be successful. Consider the US dollar, which is after all Bitcoin’s true competition. The “platform” for the US dollar is the United States economy. The US$ has many feature deficiencies, and has undergone many design changes over the years. Someday the US dollar will fail to be the world’s dominant currency. That day will come after the United States is no longer the world’s largest economy, and not a day before.

Now, it’s arguable that the platform for Bitcoin is the Internet, and that economic transactions running through the Internet could exceed the US GDP (minus the portion running through the Internet). So perhaps we are on the cusp of seeing Bitcoin take the place of the US$, not because the features of the currency make it better than the US$, but because the US GDP is smaller than the Internet GDP, and no rising country’s GDP (i.e. China’s) grows fast enough to fill the vacuum. But that’s not Bitcoin winning through superior features or technology; that’s the US economy failing and the world not wanting to rely on China’s economy.

What’s Distasteful

If it’s not clear enough already, this post is driven by personal taste, experience and bias as much as it is by fact and logic. So I may as well conclude with the least logical portion. I started this post by admitting that I’ve been wrong about pretty much every important technology trend in my lifetime, and practically begged many readers to read something else. Now I’ll admit that I don’t actually think I’m a moron. As I pointed out, I enjoyed and was excited about the PC, the Internet, the Web as soon as I saw them. I was right, I just didn’t make many important personal decisions based on that belief. (As an aside, I don’t actually regret the decisions I made instead. Life is full of wonderful choices.)

But I wanted to give Bitcoin fanatics every reason to dismiss this post without comment, because I’ve observed that Bitcoin skepticism is often attacked with an onslaught of vituperative insult. Now, this is true of the current sad state of Internet commentary generally, but here I’m excluding the routine trolls and bitter ignoramuses, and thinking of people who are clearly capable of intelligent, reasoned discussion. Some very smart and often nice people are Bitcoin fanatics, but in the eyes of many intelligent true believers on this topic, skeptics aren’t just wrong but idiotic, not just shortsighted but malicious. That reaction is of course distasteful, but the point here isn’t just that I have delicate sensibilities. The point is pattern recognition: I have seen this kind of fanaticism many times, and it is usually a sign that the merit of the proposition cannot speak for itself.

*The possible exception to all of my skepticism about Bitcoin is micropayments. I think this could be a compelling use case, though not for digital content payments, because the problem with many digital content models is not that people don’t have a good means to pay, but that they would rather receive inferior free content than superior paid content at any price. But micropayments in antispam implementations, or for microtransactions in data transmission generally, are very interesting. This is the one area where I’ll continue to think about what Bitcoin could mean. After all, I’ve been wrong before.

the conflict question

Recent events remind me of one of my favorite hiring tales. I used to ask prospective hires an annoying interview question, one of those open-ended travelogues that journeyed through odd pathways and byways but always ended up in the same cramped room, where two colleagues were locked in irreconcilable conflict, and the proposal of only one could proceed. Depending on the mindset and tenacity of the candidate, this question could take 3 minutes or closer to 30.

Once I was the last interviewer in an extended, multi-day interview slate. Around twenty people had already interviewed the candidate – this was for a small, tightly-knit company, and in such circumstances it’s not unusual (though it may be inadvisable) for nearly all of the company to interview new members of the tribe. So by the time she came to me, she’d already been run a bit ragged.

I proceeded to launch into what I called The Conflict Question. Due to some combination of my mood, her mindset, the weather and wicked chance, this version unfurled into an inquisition taking the better part of an hour. I felt like I learned a lot about her, and was happy to recommend a hire.

I had barely made it back to my desk when one of the company stalwarts stormed up to me and dragged me into an empty conference room. Although I held a senior position, I hadn’t been with the company very long, and this guy held considerably greater history and moral authority (quite correctly and deservedly, in my view).

“What the hell is wrong with you?” he hissed.

“What?”

“We have a critical hire to fill here, we found this amazing candidate. She’s been interviewing for days and everybody loves her. Every single person gave her rave reviews. She gets to you, the very last interview – more of a formality than anything – and now she doesn’t want to work here. She says if assholes like you are in senior management, this can’t be the kind of place she’d like to work!”

“What?! Uh ….” My mind raced through the interview, playing it back and forth in my head like the Zapruder film. What did I do? What did I say? Admittedly the interview did get a little strained, spending the bulk of time on a question whose very essence is about conflict. Not infrequently, the course of the question presses conflict so extensively that it can be said to generate real conflict as well. I didn’t think I’d crossed any kind of line, but then again if I had it wouldn’t be the first (or the last) time I’d done so without realizing it.

I asked frantic questions about what she said about the interview and whether there was any chance of saving the hire. My colleague furiously insisted she was only reacting appropriately to my unforgivable boorishness. His red-faced anger thrashed me like an invisible whip … and then I saw that this was the strangest thing of all. I’d always admired his grace under pressure – I had watched him work with grim calm through disastrous crises and deplorable failures. He was, as far as I knew then, an obdurate, utterly reliable, improbably emotionless rock of a man.

“Whoa, whoa, waitaminute,” I said. “Are you fucking her? Or trying to?”

The passion drained from his face. It wasn’t the passion of anger, as I thought, but passion itself. He took a moment to gather himself, drew in a deep breath, and finally hung his head as he answered, simply and plainly: “Yes. Yes I am.”

“Ok. Now get the fuck out of my office,” I said, which was a weird thing to say as we were standing in a conference room and I had no office. “Now wait – just so you know – I’ll fix this, I’ll go make nice with her and we’ll bring her on. But I don’t think my interview was wrong, and I don’t think what I found is wrong: she doesn’t like conflict, especially when she thinks she should be walking into a friendly situation. I might be an asshole, but her job might sometimes require making assholes do good work, so that’s something you’ll have to deal with – as her boyfriend, perhaps, but not in any role that could possibly be supervisory. So thanks for letting me know.”

We hired her. She was great. She wasn’t so bad at handling conflict (and assholes), but I’m pretty sure she didn’t like it. And she certainly was never put into a conflict of interest with her romantic relationship, because everyone was transparent from the start. Their relationship with each other has long outlasted any of ours with the company, and is certain to outlast the company itself. At the end, there was nothing left of real or imagined conflict, nothing but a funny story.

It’s all so easy when the truth is out at the beginning.

welcome to SV, DBs

‘That’s why,’ said Azaz, ‘there was one very important thing about your quest that we couldn’t discuss until you returned.’

‘I remember,’ said Milo eagerly. ‘Tell me now.’

‘It was impossible,’ said the king, looking at the Mathemagician.

‘Completely impossible,’ said the Mathemagician, looking at the king.

‘Do you mean –‘ stammered the bug, who suddenly felt a bit faint.

‘Yes indeed,’ they repeated together; ‘but if we’d told you then, you might not have gone — and, as you’ve discovered, so many things are possible just as long as you don’t know they’re impossible.’

— The Phantom Tollbooth

In the startup blogosphere, you’ll regularly see posts about how hard startups are, how hard it is to be an entrepreneur. Mark Suster has an excellent recent entry into the genre, coining the very excellent term Entrepreneurshit. Earlier this summer, Ben Horowitz brought his rapper’s flair to describing The Struggle, a cold and merciless beatdown about a place where nothing is easy and nothing feels right. A few years ago, Paul Graham posted what should have been the definitive piece about What Startups Are Really Like, covering all the high-low points of cofounder conflict, total life immersion, emotional roller coasters, endless persistence, unpredictable customers, clueless investors and heartless luck. But it wasn’t the final word, and it won’t be – why is that?

Dave McClure bends the pattern by noting (blaring, really, in inimitable McClure style) that the passion should be about product, not entrepreneurs. What all the other posts were saying is, Don’t come and try this shite because you think being an entrepreneur is fun, because it’s not. Dave completes the sentence by saying what the passion should really be about: product and customers. It’s a nice continuation of the message to whoever needed to read all the previous entreaties about the pain, the passion, and the not-very-likely glory.

Who exactly are all these posts talking to? To the inexperienced, of course – the battle-scarred veterans already know what’s what. But those young tyros, those fresh-off-the-presses CS majors, the hackers, the “design guys,” the would-be world conquerors – all those startup sages want to send a message: think twice before you dive into the deep end of the pool, kiddos. There’s a bit of a concern that an endless horde of former Wall Street DBs will descend upon Silicon Valley, as they have been doing ever since the late ’90s, with their uninformed dreams of being “a startup guy.”

I say, let ’em come. I have no problem with anyone who wants to take the plunge. If you’re even thinking you might want to do it someday, do it now, do it today. I’d rather have you here, facing down those odds, in the Entrepreneurshit, deep in The Struggle, finding out What Startups Are Really Like – rather have you here than constructing a new derivative, grinding it out for the man, toiling away while wondering if this is really all there is to life. Never mind the fact that it’s completely impossible; that’s only true for those who listen to the misguided wisdom of their elders.

what is technology?

Since the 2008 financial crisis, the world economy has been in the doldrums, and every time we think we’re out of the storm, we find that we are still at sea, struggling to stay afloat. Europe appears on the verge of disastrous devolution, and the world economy as a whole is at roughly the same level as in 2000. Will we ever feel confident that we are returning to sustainable growth?

I think the answer lies in technology investing. I think we just have to ask, Have we been investing in technology? But in order to answer that, we have to know what “technology” is.

Ask a random stranger “What is technology?” and you’ll likely hear something about “computers” or “the Internet.” Most people assume that investing in anything having to do with computers or the Internet is automatically “technology investing.”

This can’t be right, of course. The term “technology” has been around since the 1600s, before anything like today’s computers existed. I suppose a compass was considered technology back then, and when the sextant was invented it was “high-tech.” More recently, though still generations ago, the technologies of the railroad transformed the world economy from the mid-1800s, and automobiles shortly afterwards extended that transformation further into our economy and culture. Broadcast radio and then television and movies became important technologies as mass media came into our lives from the early-to-mid 20th century. The cycles got shorter and faster with computers in the ’70s and ’80s, and the Internet from the ’90s.

When we say “technology” today we no longer think of trains or cars or even radio or TV. All of those things still have technology in them, but none of them represents what we mean by “technology.” So it only makes sense that someday soon “technology” will bring no reflexive association with computers or the Internet. So what then will it mean when we say “technology”?

My nutshell roadmap of technology from the compass to the Internet stopped in the 1990s. The right investments in seafaring, shipping, autos, broadcast networks, computers and Internet resulted in personal fortunes and worldwide economic growth. I think in any of those eras, you could ask “Are we investing in technology?” and the answer would be a clear yes, and you would have been able to point clearly to the technology. But ask yourself today, “Over the last decade, have we been investing in technology?” and I’m not comfortable with the answer.

“Technology investors” have made personal fortunes and huge companies have been birthed since 2002, but what is the technology? Should social media and games be considered technology? Should mobile phones and tablet computers? If so (or if not), why (not)? What is the definition? What is the test? What is technology?

Here is the simplest definition of technology:

Technology promises a better life.

This raises a question with almost every word. Why a promise? Better by what standard? Whose life? Before trying to clarify, let me propose a test:

Technology delivers what you need while breaking the boundaries of the Speed/Quality/Price triangle.

When technology works, you get what you need at a higher quality, at a lower price and faster than you could have gotten it before. At introduction, “high-tech” may not include all three right away, but it’s apparent even early on that the speed, quality and price will inevitably improve. This is why technology is a promise – early iterations may give you what you want with clear improvement in only one of the three aspects, but even early on there is an explicit assumption that the other two will follow. It’s also inherently assumed that although only an exclusive few might access and benefit from the early technology, someday everyone will. The definition of “better” is just that “quality” is delivered, under whatever definition of quality is in use at the time, but that it comes faster and cheaper. So: Technology promises a better life.

How well do this definition and test fit the waves of important technology advances of the past? If, say, the prime years of your life were from 1930 to 1970, did television give you what you needed, better and faster and cheaper? At a time when we went from worldwide depression, through broad-scale war, to peace and increasing interdependency and complexity and societal change – yes, I’d say that the ability to viscerally and quickly deliver news, entertainment and culture gave us what we needed. How about the personal computer, the Internet, and search engines? I think positive answers are similarly easy to construct, and negative answers are mostly dyspeptic dystopianism.

Now how about social media? Well, everyone needs friends. Everyone needs a way to connect with friends, close and distant. Everyone needs to be a part of a community. But are social networking companies truly satisfying these needs? Is that even what they are trying to give us? Your multiple social networks, the hundreds or thousands of “friends” you have on them, their messages and status updates and pictures and quotes of the day – are these giving you what you need? Is this a promise of a better life for you?

I have no problem if your answer is “yes” to these questions. But I can’t answer yes, and I fear that most people wouldn’t answer yes, and this makes me uncomfortable because when I return to the question, Have we been investing in technology over the past decade? – I also cannot answer yes. And that means I cannot see how we will emerge from this worldwide economic slump.

I’m sure there is active investment in technology that really does promise a better life, but that’s not the mainstream of what’s called technology investing today. When autos and radios and TV and computers and the Internet were coming up, there was plenty of investment fervor around these industries. Today, the fervor is around companies that promise all sorts of interesting things, but I wouldn’t call most of these things a promise of a better life. They may be great companies, they are certainly filled with great people, they definitely have smart investors – but they are not making technology. And if we fail to invest in technology – real technology – then the economy will not return to robust health, and life will not get better.

steal this book

[Image: Steal This Book by Abbie Hoffman]

Why don’t people steal books?

I mean, I’m sure people do steal books, but it doesn’t seem to happen in any extraordinary volume, as compared to, say, music. It’s not unusual to know someone who has downloaded a copyrighted music file without paying for it (aka “stealing”) – you might have even done it yourself, no? – but do you know a single person who has ever downloaded a copyrighted book without paying for it?

The music industry has been famously apoplectic for years about the problem of illegal music file sharing. The movie industry watched the music guys disintegrate, and is aggressively riding the Big Hollywood effort to stop the evil Internet so that what happened to music doesn’t happen to movies.

Now the book industry is also undergoing seismic shifts due to new technology, which raises the question: why didn’t books, the older and easier medium to steal, come first – why doesn’t anyone steal books?

Is it the medium?

Smaller things are usually easier to steal, and this goes for the digital world as well as the physical world. Constraints on bandwidth, storage and processing power are one reason that music files are more broadly shared or stolen than movie files – a typical movie file is well over 100 times larger than a typical music file. But a book file can easily be less than a tenth the size of a file for a 3-minute song, so again, it seems strange that these little book files don’t get the five-finger discount.
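
The arithmetic is worth making concrete. The numbers below are rough, assumed file sizes chosen for illustration, not measurements:

```python
# Rough, assumed file sizes (illustrative only).
song_mb  = 4.0    # ~3-minute MP3 at ~192 kbps
movie_mb = 700.0  # a compressed feature film; rips commonly run 700 MB and up
book_mb  = 0.3    # a plain-text or EPUB novel

print(f"movie vs. song: ~{movie_mb / song_mb:.0f}x larger")        # ~175x
print(f"book vs. song:  ~{book_mb / song_mb:.2f} of a song file")  # ~0.08, under a tenth
```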

Maybe music and movies are different because they require electronics to play a recording. As electronics have gone from analog tape recordings to digital media files, music and movies got swept up in waves of theft because those files played on devices that could be connected to a vast file sharing network. Meanwhile, books did not have a common electronic reading device until the Kindle and Nook.

I’m not sure I buy this narrative – recordings of audiobooks have been around for just as long as music files – do you know anyone who has ever stolen an audiobook? Now that the Kindle and Nook have been around for a while, have you ever heard of anyone using these devices to read troves of stolen books?

Is it possible that the difference is not in the technological trappings of the media, but in its emotional impact? Do music and movies move something in the soul that causes people to steal, because the enjoyment of the media is so irresistible? I doubt it, because there are emotionally gripping books as well as dull songs – I don’t think there’s a category of books that get stolen more often than others, other than the category where the title is a command to steal.

Maybe the reverse is true – perhaps movies and especially music are trivial fluff, not valuable enough to fear stealing, while books are weighty, too precious to steal. Price may provide a clue here: a hit song is now around a dollar, a movie around ten dollars, and a new digital book is ten to fifteen dollars. Perhaps the market is validating the theory that books are more valuable, more emotionally compelling, and therefore harder to steal casually. But I doubt this too – there are a lot of crap books out there, and you can learn more from a three-minute record, baby, than you ever did in school.

Is it the audience?

Maybe the people who enjoy books are different from the people who enjoy music and movies; or at least, they’re different when they’re enjoying books, even if they’re the same people.

Viewing the unauthorized download of copyrighted files as “theft” or “stealing” requires a certain conception of a moral universe. Many books, especially novels, convey some sense of moral order, and even when they convey moral disorder, the implicit contrast with a typical moral universe is always there. Maybe the people who enjoy reading books are people who believe in a particular kind of moral universe, one in which unauthorized downloading of copyrighted material is rightfully considered stealing. In short, maybe book lovers are better people, and don’t steal. Presumably under this theory, music lovers are dirty techno-hippies with no sense of right and wrong.

Or … maybe book lovers are just weird. The urge to possess books as physical objects is common enough that even obsessive collection is considered only a gentle madness. Possibly the act of stealing a digital file is simply unsatisfying, as it doesn’t sate this need to possess the object – shoplifting a file just isn’t the same. While music lovers do have some notable examples of vinyl obsessives, that obsession doesn’t seem as common among music lovers as book geekery is among bookworms.

Is it possible that book lovers simply have more to lose, being a smaller and almost by definition more educated (i.e. literate) class of people? Maybe music and movie lovers that are of the same social and economic class as book lovers actually steal music and movies just as infrequently as book lovers steal books?

Is it the industry?

The music industry was famously hostile and arguably stupid in its stance toward file sharing, and Big Hollywood seems determined to replicate that stance regarding all the evils of the Internet. In contrast, the book industry seems scared, but oddly accepting of its fate, almost savoring the last days of its bygone ways, lounging on the beach languorously watching the tsunami roll in.

Or maybe the book industry is simply smaller than the music and movie industries, and so hasn’t spent the time and money to raise the hullabaloo that other media industries have raised. And being a smaller industry, maybe it’s simply more accepting of change.

Is it possible that the book industry isn’t in utter panic because it’s aware of the history of media cries of wolf, the howls of inevitable doom that accompany each technological change, each of which results in more money and more opportunity? Maybe book publishers are relatively sanguine in the knowledge that they’re making higher profits than before the Internet ruined their industry.

This is all just semi-coherent rambling, but it’s a ramble that’s been rattling around my skull for a while now. I don’t really have a clue why people don’t steal books, or at least don’t seem to steal books in comparison to music and movies. I’m hoping one of the handful of readers who stumble across this post can point me to a better answer.

too early in the game

Last month, I wrote about why Second Life failed so I didn’t have to write about why Second Life failed. I mean, that post wasn’t about reasons for failure, it was about the fact of failure. My thought was that there are many people who simply assume Second Life failed, and they’re wrong, and there are many who will passionately argue that Second Life has succeeded … and they’re wrong too. Failure can only be judged by the ones who were trying to succeed.

It would be safer for me to say that failure is a matter of perspective, for surely failure passes through the same lens as beauty in the eye of the beholder. I do understand that many SL Residents were on their own journeys, and so of course they are their own best judges of the success of those journeys. But it would be an artful evasion to claim that any of those journeys, or even all of them together, constitute the sum total equation for the success of Second Life. We were trying to do something more – or at least, something else – and we failed. (Of course, I’m talking about the team and the company that I knew, years ago. The team there today is on their own journey, which I know next to nothing about.)

So if I’m willing to be this myopic and insular about judging failure, you can bet I’d be just as parochial in reviewing the reasons. I’ve seen and heard a lot of speculation that I don’t agree with: poor strategy, worse execution; lack of focus, misplaced focus; poor technology, doomed architecture; dumb marketing, uncontrollable PR; niche market, bizarre customers; crazy culture, undisciplined development; bad hiring, bad management; feckless board, dominating board, ignorant board. I’ve heard it all, and while there may be a grain of something like truth here and there, none of these things holds real explanatory power as a reason for why Second Life failed.

We failed as people. We failed as a team. Our failure was intensely personal, particular to each person involved, and ruinous to the overall team.

I’m going to switch now from “we” to “I” but I want to be really clear about why. We Lindens were all in it together, and there is a broad sense in which all credit and blame goes to all of us … but not in this post. Here, I’m talking about maybe half a dozen people, and so it would be too much of a personal attack for me to try to describe the failures of anyone other than myself. I’m willing to attack myself in this forum, but not my former colleagues, all of whom I still respect and a few of whom I love like my own family. But I want you to remember the “we” because otherwise the rest of this post is going to seem incredibly egocentric: there’s a certain kind of self-blame that’s really self-aggrandizement, and though I regard my own failures as critical, even the most deluded version of the story couldn’t claim it was all about me.

So. I failed as a person. I failed the team. I was responsible for many elements of our strategy, execution, culture and management, and those decisions aren’t the ones I regret. What I regret, to the extent that I’m capable of regretting such a rich learning experience for me, is giving up. I don’t mean at the end, when I was tired and disillusioned and looking around at a company I didn’t recognize and a future I didn’t want to live. A lot earlier than that, I gave up on people that we needed, people who were flawed and fragile but necessary. I let people fail, I let people go, I let people hide in their illusions and fears, I let them give up because I’d already given up.

The irony was, when I joined the company, I was supposed to be an experienced hand that would bring some sanity to a crazy world. But I indulged my own worst instincts – throughout the craziest times, when I could’ve done the most good, I just brought more crazy. I was having fun, but I chose my own twisted growth over a higher goal, and at times I was just plain mean or selfish or drunk. I really wasn’t ready for the opportunity that Linden Lab presented to me. I really wasn’t the guy I should’ve been when I got there; I didn’t know what I needed to know until I left.

Too many of the key leaders at the Lab were working through similarly damaging personal limitations. You might ask whether this really points to a failure in culture or hiring or leadership, and that would be a fair question. It’s true that Linden had a way of hiring certain kinds of people and forcing them to confront their own deepest flaws – but I think that’s beautiful, a feature not a bug. What we needed was one or more or all of us to conquer our flaws, to enable the entire team to rise above the limitations of each of us. But none of us defeated our own demons, and so all of us perished.

I’ve been gone from Linden Lab for over two and a half years, and still my failure haunts me. The last day of the year is always a good moment to come to terms with the passage of time, and this New Year’s Eve I’ve decided I should finally accept the fact that I’m never going to let it go. I’ll try to reach peace through the zen realization that peace is unattainable.

why second life failed

This post is about why Second Life failed – but not in the sense of, “here are the reasons why Second Life failed,” but instead, “here is why it is true that Second Life failed.”

Slate published an article titled “Why Second Life Failed” that also, like this post, is not an elucidation of reasons why SL failed – but unlike this post, it is not an authentic attempt to support the proposition that SL indeed failed. It is simply an effort to market a new book by posting an article with a catchy headline. There is an unavoidable paradox in that any marketable headline with the structure “Why [X] Failed” must use for X something that has first achieved at least some significant success, otherwise the title would be too obscure to attract readers. I started a company called Bynamite that folded after less than two years – no one writes articles titled “Why Bynamite Failed” because no one’s ever heard of Bynamite.

This mild paradox isn’t a sufficient defense in the eyes of SL’s ardent users and thoughtful critics. As is often the case with posts about SL’s demise, the comments to the Slate article are full of well-informed, intelligent and passionate conversation that puts the original article to shame. At Terra Nova, Greg Lastowka suggests that SL remains fertile ground for study, with the pointed rejoinder that “Second Life never failed – the media reporting on Second Life failed.”

As a former Linden, I appreciate the desire to insist that Second Life hasn’t failed. I joined Linden Lab in 2005, at a time when we had a few dozen employees and registered users in the tens of thousands. By the time I left four years later, we had around 7 times the number of employees, several hundred times as many users, and almost a hundred times the revenue. It certainly felt like success to me. I left sated with a feeling of accomplishment, and great hope for the future of Second Life.

But I also left feeling depleted. We had stumbled our way from obscurity to something like prominence, but I didn’t know how to take it to the next level. We weren’t making progress despite having bountiful talent, desire and resources. We had a beautiful company, a real culture of beauty and love, genuine emotion for each other and for the world we were helping to build. And it wasn’t working, not well enough and not fast enough and not big enough.

Perhaps there never was a next level. Perhaps it was always the destiny of Second Life to be an innovative niche product for a select group of people, a worthy subject of serious study, a constantly evolving emporium of edge cases. Maybe we should have just hunkered down, and focused on maintaining an elaborate playground for only a select audience of passionate and creative people. We could eke out a fine living, and damn the rest of the world who just didn’t get it.

But I couldn’t damn the rest of the world, because dammit, I’m from that rest of the world. I was never a true Resident of Second Life; I was a visitor, an outsider with the good fortune to see the incredible things that people can do in a truly free environment. I was inspired, amazed and delighted by Second Life – as well as occasionally revolted, offended and demoralized – and the diversity and depth of this experience was a revelation to me, one that I believed everyone could appreciate.

And I still believe that, which is why I have to accept that Second Life has failed (so far, we must always say so far). The reality is that Second Life is still a niche product, and to deny that I wanted it to be something more would dishonor the heartbreaking glory of our ambition. It’s fair to say that Facebook became our second life, but it’s also shortsighted. Not so long ago, people laughed at the proposition that anyone wanted to maintain a virtual presence online that could form the basis of social interaction. Facebook did put an end to the dismissive chuckles on that topic.

But it’s equally laughable to say that this is where we’ll stop, that the final destination of online interaction consists of wall posts and text messages in two dimensions. I still believe that there’s no sensible way to define an impassable boundary between where we are today and a time when people “live” in a three-dimensional virtual environment. I’m still a true believer, an old true Linden in that way. So I have to admit that Second Life has failed.

So far.

great jobs

The death of Steve Jobs raises and answers the question that haunts the psyches of ambitious entrepreneurs everywhere: “Was it worth it?”

Praise follows death like the glowing debris that trails a comet, and the writing in the sky says that Jobs was the greatest CEO ever. A few muted voices remember that he was famously harsh to work with, but this is universally regarded as an entirely justified mania for perfection. Considering his accomplishments, it seems almost irrelevant that he denied the obligations of paternity for one child, and consciously decided that his children should know him through biography rather than time spent with him, even – or especially – in the final stretch towards death, when the remaining time must be remorselessly allotted like oxygen in a sealed room.

This isn’t criticism of a great man. It’s a reminder that many of us would willingly make the same choices, were such greatness within our reach.

We say it’s not so, and try to believe it. We encourage each other to remember family, remember health, remember that a life of striving includes the quest to achieve a full and humane life through our work. But the life of Jobs is the story of his jobs, of his one true job: making a dent in the universe through the creation of products that become a part of our lives. For his success in that, we forgive and excuse his personality defects. We cannot blame a man for failing to uphold principles that we would throw aside ourselves if only we could be assured that the universe was malleable to our touch.

Saying that “you are not your job” is a comfort; it alleviates the cognitive dissonance between your self-image and the productive economic output you contribute to the world. The lessons of Steve Jobs deny that comfort; his strongest exhortations insist that you are all about the things you make for the world – not for yourself, not for your hobbies or leisure, not even for your family and certainly not your friends if you have any. You have to do great work, never settle, remember that each day could be your last, don’t waste time living someone else’s life.

There is no obligation to community, family or friendship in these words – though strangely, there is an overwhelming commitment to society in the desire to dent the universe, for this is not a universe of cold cosmological phenomena, it’s a universe of people, and his ambition is all about changing how people live. For Jobs, if this ambition involved sacrifices of a more universal personal nature, there is no question that it was worth it. It was worth it for him, and his efforts were certainly worth it for us.

It’s touching to see the determination with which Jobs’ sayings are repeated in the wake of his death. But the message of his most appealing words isn’t quite the message of his life. He told us to follow our hearts, to trust our intuitions, to ask ourselves if our plan for this day is how we’d want to spend our last. But those are not goals, they are only beautiful means to an uncompromising end. The goal of Jobs was to be insanely great in a world-changing way. That’s the hard part of the message to understand. All of us can hope to understand what is in our own hearts, and can hope to have the courage to follow it. Almost no one alive has a realistic ambition to change the world – what many of us think of as world-changing is merely interesting, hopefully entertaining, and possibly enriching.