the force awakens

Yep, it’s an end-of-the-year technology prediction post …

We’re at a special place in the consumer technology cycle. I’ve seen this movie before. Consumer technology trends are often described as waves, but I like a movie metaphor better, because it captures the notion that I actually saw these events when they were first released in the theater, and that we keep seeing the same plot points, themes and character types. I’ve lived through three really big waves of consumer technology. The third wave – the third movie – is finally coming to an end, which is a relief, because it kinda sucked. I’m really looking forward to the next show.

I’m a fan of the franchise generally, despite the repetitive plots. Each movie starts with the introduction of products that clearly show the possibility of what’s to come, although these are not the products that actually survive the revolution. Those products depend on a crucial underlying technology trend, which is not itself the consumer-facing technology. There is a spectacular platform war that decides the big winners and losers. The story ends, until next time, when the business patterns in the field have matured, and outsized returns for investing in those businesses have therefore disappeared.

The Origin Story: Personal Computers

[image: Pirates of Silicon Valley]

Like the first movie in a series, this one defined many of the patterns, tropes and heroic character types of the sequels to come. In a digital desert, a lone gunslinger appeared on the horizon, known only by the mysterious name Altair. The story really picks up when the Commodore PET, the TRS-80, and the Apple II appear on the scene. That trio of bandits opened up the Wild West, only to be dominated by the strongman IBM PC. But IBM only won a hollow victory, as it turned out that they’d unwittingly given the keys to the kingdom to Microsoft, the ambitious vassal that became the overlord. The story of the rise of the PC is the classic foundation of everything that came after in consumer technology.

But it would be a mistake to only pay attention to the foreground. In the backstory, the silicon chip is the key enabling technology that’s powering the other players. Moore’s Law is the inexorable force of progress, and Intel was the master who kept on top of the industry despite laudable challenges by AMD, Motorola, Texas Instruments, and a host of international competitors. This global tale of intrigue and ambition is a worthy accompaniment to the marquee narrative. In fact, the invention of Silicon Valley can be considered the prequel to this series.

The Worthy Sequel: World Wide Web

[image: The Matrix]

Many people say The Empire Strikes Back was a better movie than Star Wars. The Godfather was in many ways outclassed by Part II. The explosive success of the World Wide Web was at the very least a worthy sequel to the PC story. A knight in shining armor, Tim Berners-Lee, led a devoted band of heroes on a worthy quest to unite all of the world’s information. Early services like Prodigy and CompuServe leapt on the ensuing opportunity, but latecomer AOL won the day by sending a CD to every mailbox it could find. That was only the first act, as Netscape and Yahoo emerged as the real heroes … until the third act, when eBay and Amazon and Google trampled the field.

It’s usually not worth the effort to make a distinction between the Web and the Internet, but it makes sense to do so here because “World Wide Web” is the story with a beginning and an ending, while the technologies of the Internet are the more enduring enablers of that story. As protocols, the details of TCP/IP, DNS, HTTP and the like are not exactly gripping narrative. But just as silicon chips powered the PC revolution and proved the more enduring story, the Internet will live on long after the Web sinks into irrelevance.

The Failed Trilogy: Smartphones

[image: Phone Booth]

Return Of The Jedi was a very successful movie. And it did have some awesome special effects for the time. But it was all of the same characters, and pretty much the same plot, soiled by dominant commercial motives and treacly pandering to a younger audience. By which I mean, fuck Ewoks. And Godfather Part III? The less said about that, the better.

The story of the last dozen years or so has been the move of personal computing and the Internet to smartphones. There’s some compelling pathos in the storyline of the death of the Web, overrun by mobile apps. But it was mostly dull to watch the Treo and Blackberry reprise the role played in prior movies by the Altair, Prodigy and CompuServe. I’ll admit it was great fan service to see the Apple character repurposed, and maybe there hasn’t been a more colorful personality than Steve Jobs, so that part of the story was pretty entertaining. You could say that the return of Jobs was as momentous as finding out about Luke’s father.

Let’s face it, it just wasn’t that exciting to watch Google and Amazon continue to grow. Facebook is a great new character as a flawed hero, and that whole subplot with Twitter and the rest of social media was a very strong B story. Other new characters like Uber and AirBnB have their minuses and pluses, but I don’t believe they’re going to be big characters in the next movie. (“Uber for X” companies are the goddamn Ewoks.) The overall experience has been like coming in to watch a huge blockbuster mega-sequel: you can really see the dollars up there on the screen, and there’s a certain amount of entertainment value that comes through, but the whole exercise just lacks the originality, joy and passion of the earlier entries.

Not a bad backstory though, and as in the other movies, this one will continue to be meaningful in all future sequels. Cloud computing, software as a service, the evolution to microservices – these things fundamentally changed the way that new businesses start and grow. They reduced the capital costs of starting a new information technology company by orders of magnitude, letting in many more characters. Unfortunately, most of those new characters are Ewoks.

The Force Awakens

So what’s the next movie going to be about? Will it reinvigorate the franchise? Or will it be a terrible prequel (or worse, prequel trilogy) that we’ll all have to agree to pretend never happened?

I think we don’t know all of the elements, but we do know some of them. Let’s first recap what we saw in the first three installments:

[chart: recap of the first three installments]

And here’s what I think we know about the chart today:

[chart: the same recap, updated for the next installment]

Main Story: There is a flood of products that don’t have an agreed category name yet – Siri, Google Assistant, Amazon Alexa, Microsoft Cortana, chatbots, chatbots and more chatbots. Some industry terms that are cropping up are intelligent personal assistants, virtual assistants, conversational search. Or chatbots, fer chrissake.

The point is, you will have things in your house (your car, your pocket, etc) that you talk with, and these things will talk back to you in a way that makes sense. You’ll regard your interaction as a conversation rather than button punching or screen swiping. Until people converge on another name for all of these things, I’ll call them “conversational devices” – this captures that you have a productive back-and-forth with a physical object. Yes, you can already do something like this on your smartphone, but those implementations are only a hint of where this will go.

As early as it is, there are plenty of curmudgeons who don’t see the point. Smarter people have said we’ll never need more than five computers, no one wants a computer in their home, the Internet is a fad, the iPhone is going to be a flop. Predictions are hard. But screw it, here’s mine: within 3 years, it will be apparent that the adoption curve of conversational devices is in the same category as PCs, the Web, and smartphones.

Conversational devices will be the story of the next decade in consumer technology. Not that there won’t be other stories, it’s just that this one will be the lens by which we understand the era. I still love virtual reality, but it’s still not time yet. The blockchain isn’t consumer-facing, and I don’t believe in Bitcoin. Not Internet of Things, not 3D printing, not self-driving cars, not wearable devices (unless they are also conversational devices) – some of these will be big stories, but not the biggest story of the next dozen years.

Backstory: Conversational devices rely on this chain of technologies: Machine Learning -> Natural Language Processing -> Speech Synthesis. These technologies are complex and interrelated, and rather than explain why this is their moment (the foregoing links give that explanation), I’ll just skip to the punchline: People will be able to speak to machines, machines will understand and speak back. Most people already have experience with primitive versions of these technologies, and find those experiences frustrating and unsatisfying. (“Press 9 to go back to the main menu.”) But the rate of improvement here is at an inflection point, and this is about to become undeniably apparent on a mass consumer level.
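To make that chain concrete, here is a minimal sketch of the loop a conversational device runs, in Python. Every function below is a hypothetical stand-in – no real speech or NLP library is invoked – the point is only how the pieces compose:

    def recognize_speech(audio):
        # Machine learning: speech-to-text. Hypothetical stand-in.
        return "what's the weather tomorrow"

    def parse_intent(text):
        # Natural language processing: text to structured meaning. Stand-in.
        return {"intent": "weather_forecast", "when": "tomorrow"}

    def fulfill(intent):
        # The device's actual skill or service behind the conversation.
        return "Tomorrow looks sunny, with a high of 70."

    def synthesize_speech(text):
        # Speech synthesis: text back to audio. Hypothetical stand-in.
        return b"<audio bytes>"

    def conversation_turn(audio_in, play_audio):
        # One full turn: hear, understand, act, speak back.
        text = recognize_speech(audio_in)
        reply = fulfill(parse_intent(text))
        play_audio(synthesize_speech(reply))

    conversation_turn(b"<mic input>", play_audio=print)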

Platform War: The most successful conversational devices will be on a common platform of delivery. Amazon Echo and Google Home are devices that sit in your home and listen to everything you say, and respond back to help you. Facebook Messenger has bots that will have a conversation with you. Each of these currently displays only the limited strengths available in its existing business (Amazon:Shopping, Google:Search, Facebook:Brands), but they are all trying to expand to become a delivery platform for third-party conversational devices. Amazon and Facebook already offer developer platforms; Google is focusing on partnerships.

This platform war will have elements of past wars, in hardware vs software, apps vs operating system, open vs closed. That complexity makes it very interesting, but remember, this is theme rather than story. The platform war is the Empire vs the Rebellion, the Mob vs America; it’s the thematic texture that gives the story meaning. You shouldn’t mistake it for the main narrative though. In Mac vs PC, Microsoft won, not Apple or IBM. In open vs closed web, Google won, not Tim Berners-Lee or AOL. Ok, the winners in iOS vs Android were also the platform owners, but that’s yet another reason that movie sucked; maybe it’s the fundamental reason that movie sucked. I hope everyone involved is smart enough not to let that happen again.

Pioneers and Winners: We are far enough into the story that we can guess at pioneers, but we can’t be sure until the extinction event happens: in all previous movies, the early pioneers proved the market, and then died, crushed by an onslaught that included the eventual winners. I’m convinced that this plot point will repeat in the new movie. Look in the chatbot space for potential pioneers – it’s certain that one of these will become historically important. And then it will die.

I’m hoping the platform war victors aren’t also the heroic winners of the main story, as happened in the smartphone movie, because it’s boring and tends to result in Ewoks. Facebook is the pivotal character to watch, as it has a platform opportunity with Messenger, but has huge weaknesses relative to Google, Amazon, Apple and even Microsoft in hardware production and delivery, and hardware will be key to platform ownership. So it will be interesting to watch whether Facebook dives into hardware, or partners with one or more of the other platform players, in the hopes that there’s a bigger opportunity in the main story than the theme.

Well, that’s all I have to say about that. Enjoy the show!

WWGD?

Six months ago, I said that Trump would win the election in part because the rise of new media destroyed the historic function of the media as our Fourth Estate. I was upset that product managers at our most important Internet companies seem to refuse to own the problem that is so clearly theirs.

Now that the chickens have come home to roost in a big orange nest of hair, others are saying that the election was, in a sense, rigged by Facebook. They say fake news has defeated Facebook. Facebook denies responsibility, while people are literally begging them to address the problem.

Product managers at Facebook are surely listening now. If any happen to be listening here, let me say: I’m sorry I called you cowards. I realize that today’s state was hard to foresee, and that the connection to your product even still seems tenuous. I am awed at the great product you’ve built, and I understand that no one knows the data better than you do, and that it is tough to take criticism that comes from sources completely ignorant of your key metrics. It’s not easy to regard something so successful as having deep flaws that are hurting many people. I think it is a very human choice to ignore the criticism, and continue to develop the product on the same principles that you have in the past, with the same goals.

I have faith that you are taking at least some of the criticism to heart. I imagine that you know that you can apply machine learning to identify more truthful content. I am sure that you will experiment with labels that identify fact-checked content, as Google News is doing. Once you reliably separate facts from fiction, I’m sure you’ll do great things with it.

I’m still concerned that facts aren’t enough. I think we’re in a post-fact politics, where people no longer (if they ever did) make their political choices based on facts. I have read many analyses of the election results, many theories about why people voted as they did. There are many fingers pointing blame at the DNC and the Electoral College; at racism, sexism, bigotry; at high finance, globalism, neoliberalism; at wealth inequality, the hollowing out of the middle class, the desperation that comes with loss of privilege. I am not convinced that giving people more correct facts actually will address any of this.

The most incisive theory that I’ve seen about today’s voters says that the divide in our country isn’t about racism or class alone, but about a more comprehensive tribalism, for which facts are irrelevant:

There is definitely some misinformation, some misunderstandings. But we all do that thing of encountering information and interpreting it in a way that supports our own predispositions. Recent studies in political science have shown that it’s actually those of us who think of ourselves as the most politically sophisticated, the most educated, who do it more than others.

So I really resist this characterization of Trump supporters as ignorant.

There’s just more and more of a recognition that politics for people is not — and this is going to sound awful, but — it’s not about facts and policies. It’s so much about identities, people forming ideas about the kind of person they are and the kind of people others are. Who am I for, and who am I against?

Policy is part of that, but policy is not the driver of these judgments. There are assessments of, is this someone like me? Is this someone who gets someone like me?

Under this theory, what is needed isn’t more facts, but more empathy. I have no doubt that Facebook can spread more facts, but I don’t think it will help. The great question for Facebook product managers is, Can this product spread more empathy?

The rest of this might be a little abstruse, but here I’m speaking directly to product managers of Facebook News Feed, who know exactly what I mean. You have an amazing opportunity to apply deep learning to this question. One problem is that the feedback loop is long, so it will be difficult to retrain the production model to identify the best models for empathetic behavior, but I think you can still try to do something. There is some interesting academic research about short-term empathy training that can provide some food for thought.

I am convinced that you, and only you, have the data to tackle this problem. It is all but certain that there are Facebook users who have become more empathetic during the last five years. It is likely that you can develop a model of these users, and from there you can recreate the signals that they experienced, and see if those signals foster empathy in other users. I don’t think I need to lay it out for you, but the process looks something like this:

  1. Interview 1000 5-year Facebook users to identify which ones have gained in empathy over the last five years, which have reduced their empathy, and which are unchanged.
  2. Provide those three user cohorts to your machine learning system to develop three models of user behavior, Empathy Gaining, Empathy Losing, Empathy Neutral.
  3. Use each of those 3 models to identify 1000 more users in each of those categories. Interview those 3000 people, feed their profiles back into the system as training data.
  4. See if the models have improved by again using them to identify 1000 more users in each category.
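A hedged sketch of that loop in Python, with synthetic features and scikit-learn standing in for the real pipeline (the interview() function is a hypothetical placeholder for human labeling):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    COHORTS = ["empathy_gaining", "empathy_losing", "empathy_neutral"]

    def interview(user_features):
        # Hypothetical stand-in for a human interview labeling one user.
        return COHORTS[int(abs(user_features.sum()) * 10) % 3]

    # Step 1: interview 1,000 five-year users to seed the training set.
    X = rng.normal(size=(1000, 20))            # synthetic behavioral features
    y = np.array([interview(x) for x in X])

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    for _ in range(3):                         # steps 2-4, repeated
        model.fit(X, y)                        # learn the three cohort models
        pool = rng.normal(size=(100_000, 20))  # uninterviewed user pool
        proba = model.predict_proba(pool)
        for i, cohort in enumerate(model.classes_):
            top = np.argsort(proba[:, i])[-1000:]  # 1,000 most confident per cohort
            labels = np.array([interview(x) for x in pool[top]])  # ground truth
            print(cohort, "precision:", (labels == cohort).mean())
            X = np.vstack([X, pool[top]])      # feed interviews back as training data
            y = np.concatenate([y, labels])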

At this point (or maybe a few more cycles), you will know whether Facebook has a model of Empathy Gaining user behavior. If it turns out that you do have a successful model, of course the next thing to do would be to expose Empathy Losing and Empathy Neutral users to the common elements in the Empathy Gaining cohort that were not in the other two cohorts.

But now you are at a point where the regression cycle is very long. Is it too long? Only you will know. How amazing would it be to find out that there’s a model of short-term empathy training that is only a week or two long? People use Facebook for hours a day, way more than they would ever attend empathy training classes. This seems to me to be an amazing opportunity. Why wouldn’t you try to find out whether there’s something to this theory?

One reason might be a risk to revenue models. Here I’d encourage you to see what Matt Cutts said to Tim O’Reilly about Google’s decision to reduce the prominence of content farms in search results, even though that meant losing revenue:

Google took a big enough revenue hit via some partners that Google actually needed to disclose Panda as a material impact on an earnings call. But I believe it was the right decision to launch Panda, both for the long-term trust of our users and for a better ecosystem for publishers.

I understand this mindset personally because I was there too. At the same time Matt was dealing with Google’s organic search results, I was dealing with bad actors in Google’s ads systems. So I was even more directly in the business of losing revenue – every time we found bad ads, Google lost money. Nevertheless, we had the support of the entire organization in reducing bad ads, because we knew that allowing our system to be a toxic cesspool was bad for business in the long run, even if there were short-term benefits. In fact, we knew that killing bad ads would be great for business in the longer run.

News Feed product managers, I’m not writing this from a position of blaming you. I was in a situation very much like yours and I know it’s hard. I can also tell you, it feels really really good to solve this type of problem. I am convinced that an empathy-fostering Facebook would create enormous business opportunities far exceeding your current path. It is also entirely consistent with the company mission of making the world more open and connected. You can make a great product, advance your company’s mission, and do great good in the world all at the same time. You are so fortunate to be in the position you’re in, and I hope you make the best of it.

an indecent proposal

For over 20 years, Internet businesses have grown under the protection of a special law that provides extraordinary privileges. This law has properly been hailed as a boon to innovation, and has become enshrined in some quarters as an indispensable pillar of free speech. However, no law regarding technology can survive the merciless rule of unintended consequences; what was once a necessary sanctuary has become a virtual menace to society. If you wonder how the United States has reached the brink of electing a deplorable villain as its leader, at least part of the answer rests with the Internet’s most generous law.

This law is Section 230 of the Communications Decency Act of 1996. The bulk of the act was a misguided attempt to regulate “indecent” content on the Internet, most of which was rightfully struck down by the Supreme Court in the name of the First Amendment. But Section 230 was a special provision inserted late in the legislative process, out of concern that nascent Internet businesses would drown in legal liability for statements made by others. Section 230 states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

This is a shield from libel and defamation suits, an amazing advantage in the rise of the new media of the Internet. The impetus for this law came from a 1995 case where an Internet service provider was found liable for defamatory statements made by a user of its message board. The court’s reasoning included the fact that the Internet service had exercised some editorial control over some of the message board content; therefore the service could be treated as the publisher of all the content, just as a newspaper would be.

In 1995, this was a horrific decision made by technology-illiterate judges who had no understanding of the power and potential of the Internet. It would be nice to think that the Congressmen who inserted Section 230 into the CDA were blessed with extraordinary foresight into the future of technology. But no – actually they just wanted to be sure that Internet companies would be willing to help hide boobies.

Remember, the bulk of the CDA was an insane Sisyphean effort to stop the spread of pornography on the Internet. Internet providers were rightly concerned that they would never be able to stop all the boobies. They argued that the 1995 case showed that any failed attempt to censor boobies would be interpreted as editorial control, holding them liable for all the boobies that did get through. So these Congressmen inserted Section 230 as a way of saying to companies, “Hey, just try your best to censor boobies, you won’t be held liable as a publisher of the boobies that did get through.” Internet companies, even in 1995, were smarter than Congressmen. Although the CDA was about as effective at reducing pornography on the Internet as a cocktail umbrella in a hailstorm, Section 230 emerged from this fragile legislation as an enduring and invaluable shield against liability. Now you can’t sue Facebook for publishing information that is verifiably false and harmful. Lives can be destroyed on the sites we live on, and those sites will never be held responsible.

The EFF says Section 230 is “one of the most valuable tools for protecting freedom of expression and innovation on the Internet” and ACLU says that this law “defines Internet culture as we know it.” These eminent bastions of free speech have been tremendous warriors for a lot of good in our society, but like anyone else, they could not predict the future and they may cling too long to brittle ideas that are past their expiration date. When Section 230 was adopted, the Internet was the Wild West, the new American frontier for development. There were no dominant Internet companies. The law was written with Prodigy and CompuServe in mind; AOL was the up-and-comer, Yahoo was barely a year old. The media lifeblood of the nation was the three broadcast networks, the New York Times and the Washington Post, and the many local newspapers throughout the country. People who understood the Internet then were rightly concerned about legal liability crushing the industry in its infancy.

We live in a very different world today. Network effects make some large portions of the Internet into a winner-take-all game where the behemoths can quickly grow into billion-dollar enterprises, affecting billions of lives daily. Traditional media is dead and dying, a boon to experimentation and diversity, but a blow to authority and truth. Technologists were proud to disintermediate and destroy the old gatekeepers, but we engaged in this merry destruction without any thought to the vital purpose that the Fourth Estate served in our politics. And now we live in a nation where most days it seems like the only people who don’t believe the next president could be a racist, misogynist, fascist despot are the ones who believe she could be an acceptably corrupt continuation of a broken political system.

The gatekeepers are dead and most people only get their news from their friends and others in the same echo chamber on Facebook. Our public discourse is conducted on Twitter, where online harassment by anonymous, cowardly sexists and racists is treated as an acceptable form of free speech. And we are still, as we always are in technology, only at the beginning of our problems. I don’t know where this is going any better than lawmakers did in 1996; I don’t have a solution – but I do think we should take the thumb off the scales that favor Internet businesses.

A similar situation occurred with respect to state sales taxes. In a 1992 case, the Supreme Court ruled that businesses with no physical presence in a state did not have to collect sales tax in that state. Amazon exploited this ruling, carefully building its business to avoid having to impose state sales taxes, giving it an advantage over local businesses. By 2012, Amazon saw the writing on the wall, and began “voluntarily” collecting sales tax in many more states than it had previously done. But by that time, the West had been won: Amazon was the dominant online retailer, and Main Street businesses had been all but destroyed. Amazon had the foresight to act ahead of the change in the laws, which is coming anyway. I fear our dominant Internet services lack the moral courage to act in the interests of our country.

Facebook and Twitter are our new public square, and although they are private businesses they should not be exempt from the laws and social requirements of other businesses that regularly gather large groups of people together. No shopping mall, for example, would allow the public posting of verifiably untrue, insane ramblings, not without damage to its business as well as legal liability. No sporting venue would let women be spat on or minorities be subjected to vile racist invective without losing business and facing lawsuits. And yet we allow our most significant public gatherings online to be completely free of the obligations of being a publisher, obligations that supported the kind of media that have been vital to our properly functioning politics.

The internet destroyed vast portions of traditional media that depended on fact, truth and integrity. This hasn’t been solely a triumph of progress and free market principles, it has been a creative destruction assisted by a sweetheart deal with the government. Under this mantle of government protection, technology companies replaced essential elements of democracy with endless misinformation, lies and insanity. Free speech should allow much of this to be possible, but those who would build a business on irresponsible dissemination of speech should be subject to the same laws as the businesses that they destroyed. It’s time to take the training wheels off of Internet culture. Section 230 of the CDA should be repealed.

69

Wired UK just published a pair of articles that are a great explication of the potential of Virtual Reality to become as powerful as the Web. They fairly report the vision that Philip Rosedale has been pursuing for most of his professional lifetime. My one-sentence summaries:

Second Life was just the beginning – Philip wanted to connect the world in a seamless 3D environment, but was greatly limited by technology of the time; today many of these limitations are lifting.

VR and the CD-ROM – People are most excited about closed VR experiences today, but this is like being excited about Encarta on CD-ROM before people understood how powerful Wikipedia would become.

Good articles; read them if you are interested in VR. I have just one, entirely personal, embarrassingly picayune, totally irrelevant problem …

The first article says: “Then, in 2006, Second Life stopped growing.”

I know this to be untrue. I ran finance for SL from 2005-2006, and remained on the exec team until I left the company in 2009. We raised money in 2006, and I personally prepared the financial projections that predicted our growth through 2008. Financial projections for startups are notoriously optimistic, which is to say they are mostly composed of fairy dust and bullshit. I was as surprised as anyone to notice, in 2008, that my projections of fast growth held up, quarter over quarter, with a margin of error of no more than 10% (and even at that, the projection was usually lower than actual growth). So I know that SL was still growing quite well in 2006, in every meaningful aspect of usage and business metrics. The growth rate slowed in 2008, but absolute growth was still positive in 2009 when I left. Yes, SL did stop growing eventually. But not on my watch.

Ok, that’s prideful, and it’s petty. But it’s fair to say that I’m the single most authoritative source in the world on this topic. So when I read the article, I sent a note to the reporter with a correction. He replied that he’d “check it out.” A day later, he said that he had followed up and that he seemed to be right, citing an article by another reporter.

That is seriously annoying. The other reporter has no better access to the facts than the original reporter. That other reporter is just another source of rumor and speculation. In this case, I am the actual source of truth, and the reporter with access to the truth chose to ignore it!

Obviously, this is trivial. Who cares? No one but me and my wounded pride. But it’s frightening to consider how easily reporters will ignore the truth when it gets in the way of their own goals.

greatness and lateness

The late, great Bill Campbell passed away this week, and there is no shortage of encomia from the technorati about him. He was the greatest coach in Silicon Valley, and the list of leaders that have paid tributes is appropriately star-studded. Some of the most successful people in the business world have benefitted from his wise counsel and friendship. It’s not hard to find stories of some pearl of advice that Bill gave to change the direction of a company, or even a life. I’d like to share a story that’s different, though no less illustrative of his greatness, because it’s a story of what happens when you don’t listen to Bill Campbell.

Back when Linden Lab was one of the most hyped companies in the world, in the interregnum between Google and Facebook, we had typical growing pains that were no less painful for being typical. Through the extraordinary pleading of one of our board members, we had the good fortune to receive some time from Bill Campbell. It was a tough time to get his time. He’d recently found out that his close friend was suffering from a terminal disease, and he knew that supporting his friend and his friend’s family would soon become an all-consuming task. He could not agree to a team-wide mentoring relationship. But even in the face of this tragedy and his many other commitments, he agreed to spend some one-on-one time with our CEO in several sessions, and just one round of discussions with the rest of the exec team.

I was very excited when my turn came, having known not only of The Coach’s legendary reputation, but having heard and seen his sharp advice to our CEO implemented on a few occasions in our company already. We sat down in a fishbowl conference room, centrally located on the company’s main floor, with a view out across the desks on an otherwise normal day. As I began responding to his initial questions about my background and context, I saw his attention drawn sharply away to the window.

In just a few seconds of observation, he saw something he didn’t like outside the conference room. “Do you see that?” he asked me. Yes I did, I responded, I knew exactly what he was talking about. “What’s it about?” he probed. I gave my best explanation, no doubt biased, certainly incomplete, filled with my caveats and allowances for things that I perhaps did not understand completely. “Nonsense,” he said, “Your job is to take care of that situation. Do you think this company is going to make or break on the new markets you’re after, on the business deals you’re trying to swing? No. You are here for that, no one else on the team is going to do it. Fix it. That is your most important job.”

I’m sorry I’m being vague about the details of the problem that Bill saw. The details don’t matter in this particular telling of the story. What matters is how quickly Bill could see a critical problem in barely more than a glance, how few questions he had to ask to understand the nature of the problem, how firmly he could direct action where it was needed, how incisively he could assess character and roles on a team. That he could do all this in seconds was simply stunning.

The sad, though hopefully instructive, remainder of the story is how poorly I executed on his insight. Fixing the problem immediately would require an extreme action that would disrupt the company in a sudden and unwelcome manner. I thought that the safer course of action was to confine the problem to a tight but explosive space, allowing it to self-destruct in a formidable container, like a bomb going off under a fortified blast dome. In retrospect, of course this was the wrong choice. The problem lingered longer than it should have, was not completely isolated or contained, and rather than have an explosion in the air that the winds could blow away, I had poison in the ground that was now part and parcel with the soil on which the company was built.

I wish we’d had more time than we got with Bill; I don’t think I would have handled things the same way with just a little bit more counsel. His advice was not the difference in our company’s success or failure, but it was the best advice for the moment and for the team in place. The lesson, I suppose, if there must be a lesson here, is that when you are fortunate enough to access the wisdom of the great, act on it decisively before it’s too late.

magnificent seven

Tomorrow Oculus Rift is taking consumer pre-orders for its heralded VR headset, and many people are wondering whether 2016 will be the year of virtual reality … just as they wondered in 2015, and 2014 … as they wondered in 2009 when I left Second Life. At that time, I remained certain that virtual reality was the future of online interaction, but that it would be at least 10 years before the field could achieve mass consumer success. Virtual reality actually has many decades of history as a technical project, as well as a rich history in fiction that demonstrates the enduring attraction.

People keep mistaking “THE year” for virtual reality because they fail to properly assess the progress in each field required to make a truly compelling VR experience. Observers see great progress in just one field, and they assume that it’s enough to break open mass consumer interest. But in fact there are SEVEN fields required for VR success – “the year of virtual reality” won’t happen until every one of these fields has progressed past the minimum development threshold. Here’s a brief rundown of each field, WHAT it is and WHY it’s important, and WHEN it’ll be ready for a truly compelling VR experience.

1. GRAPHICS COMPUTING POWER

WHAT: The most obvious requirement is that a computer needs to be powerful enough to make a compelling simulation of reality. Now, what’s “compelling” is open to argument, and I would argue that some relatively primitive figures can comprise a compelling environment if they move, interact with each other, and react to you in an engaging way.

WHY: I guess you could have virtual reality without computers, just as you can have it without compelling graphics. I mean, that’s called “storytelling” and it’s pretty cool. But that’s not what we’re talking about here. Some minimum level of simulated graphics is required.

WHEN: Sufficient power exists now, and has existed for at least seven years. But if your requirement for visual fidelity is very high, then you might think that even today’s computers aren’t powerful enough.

The technical measurement discussion isn’t too interesting, so please skip this paragraph and the graph below if you’re not inclined to pick over this kind of detail. There’s no single measure of computing power, but as a rough analogue I’d pick FLOPS, and to simplify further we should talk only about GPU FLOPS, noting that CPU performance can be translated into a rough equivalent. Because I believe that an experience comprised of rough primitives can be compelling, I’d say that even one GPU GFLOPS is sufficient to support a compelling experience, and we’ve had that in home computers since 1999. But giving room for argument, I can raise the requirement to 500 times that, and still we’re talking about 2007-8 as the time when consumer-level computers had enough power to make virtual reality.

[chart: CPU vs GPU computing power over time]

2. PERSONAL COMPUTING DEVICES

WHAT: Unlike the first field, this is less about predicting the power that computers have than it is about predicting what type of computer people will use. “Personal computing” used to mean desktop computers, but now people actually carry computers on their persons. Today, the type of computer that is most commonly in use is the mobile smartphone.

WHY: Philip Rosedale frequently said that when he started SL, he underestimated the time that it would take to get to mass market use of virtual reality, because he was only looking at the increasing power of desktop computers. He didn’t predict the shift to laptops, which happened in the early 2000s. Using smaller computers generally means using less powerful computers, so although desktop computing power was sufficient to simulate reality by the mid-2000s, the computers that people actually used were laptops, which were not powerful enough. Today, the computer that most people use is a mobile phone, which is even less powerful.

WHEN: Using the same standards above, smartphones will be able to simulate a compelling VR experience in 2017.

(Boring, skippable paragraph follows.) As above, this assumes the requirement is 500 GPU GFLOPS, without arguing too much about what that number really means. A high-end smartphone today can do about 180 GPU GFLOPS, with more power coming soon. (For comparison, a PS4 game console can do over 1800 GPU GFLOPS.) Taking Moore’s Law narrowly and literally, it will be 2017 before smartphones will get over 500 GFLOPS.
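As a sanity check on that arithmetic, a few lines of Python (the 180 and 500 GFLOPS figures are the ones used in this post; the doubling period is the assumption worth arguing about):

    from math import log2

    current, target = 180.0, 500.0      # smartphone GPU GFLOPS today vs. the VR threshold
    doublings = log2(target / current)  # ~1.5 doublings needed
    for months in (12, 18, 24):         # pick your Moore's Law variant
        years = doublings * months / 12
        print(f"doubling every {months} months: ~{years:.1f} years out")

Depending on the doubling period you believe in, that lands roughly 1.5 to 3 years out, which brackets the 2017 estimate above.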

But should we even be talking about smartphones here? Forget about “PC” meaning desktop – the truly personal computer has moved from your desk to your lap to your pocket. Where is it going next? On your wrist, on your face? This is a question about the intersection of device use and power, not either one alone. The precise question is, “When is the type of computer that most people use every day going to be capable of 500 GFLOPS?” I still think this is a question about smartphones, but who knows?

3. VISUAL DISPLAY

WHAT: A computer just simulates the environment, you need to be able to see what the computer is simulating. For many years, the way most people see a computer’s output has been through a monitor. Now, Oculus Rift and other goggles are coming into the mass consumer market, and these are so good that they’ve ushered in the current wave of excitement about VR.

WHY: Sight is the most important sense in giving people a feeling that they are somewhere other than where they’re sitting. It’s not the only required sense, but without seeing a virtual environment, most people cannot begin to immerse themselves in the experience. I used to think that a flat monitor of sufficient size and resolution could provide a compelling enough VR experience, but using the most advanced VR goggles today simply blows away the monitor experience.

WHEN: The major unsolved problem with VR goggles is that using them for too long induces nausea. Although Oculus and others have made a lot of progress on this, it’s only to the point where nausea is delayed rather than eliminated. A product that makes you puke is never going to be mass market. Based on nothing more than a rough guess informed by many years of observing consumer hardware cycles, I’m going to say that it will take three years to sufficiently refine VR goggles to smooth away the nausea and other early problems, so it will be late 2018 before this field is really ready for mass consumption.

4. AUDIO FIDELITY

WHAT: Properly spatial audio means that sound is directional: you should be able to hear where a sound is coming from. And beyond direction, you should be able to distinguish sounds from each other even when they are coming from the same direction or are obscured by ambient sounds. This latter goal is called the “cocktail party problem” – even in a noisy cocktail party, you can focus on and hear a single speaker who isn’t necessarily louder than the party noise.

WHY: Seeing may be believing, but hearing is confirmation. The audio experience is often overlooked and undervalued, but the sound of being in a space is crucial confirmation that your brain can believe what you see. It’s possible that the nausea of the VR goggles experience is due to insufficient confirmation from other senses, and hearing is probably the most important and easiest sense to add to the virtual environment, though some might advocate for smell or taste (uh, yuck).

WHEN: “3D audio” has been around for many years, but the cocktail party problem remains unsolved despite recent advances. Still, the current state of the art in spatial audio is very good, and probably good enough without fully solving the cocktail party problem. I think we’ll see really excellent integration of audio fidelity with VR goggles even before the VR goggles are fully nausea-free, so let’s say that the audio component will be ready by 2017.

5. 3D INPUT PERIPHERALS

WHAT: This is the most important area that not enough people are talking about. Virtual reality requires a host of new technologies for allowing a whole body to interact in a 3D space: hand and finger movement, body position, eye-tracking, multidirectional treadmills. Every single one of these is a new Oculus-size opportunity in the making.

WHY: A keyboard and mouse or trackpad are not designed, and not sufficiently adaptable, for a person to move easily in a three-dimensional computed environment. The only innovation in input in mass computing devices that we’ve seen in the last 20 years has been multitouch on a smartphone or tablet, and that doesn’t help much for 3D.

WHEN: We have yet to see a breakthrough product here, despite many promising efforts. The field is extremely varied and diverse, and it could take many years to sort out the winners. Somewhat arbitrarily, I guess it will be at least 2019 before we have mass consumer products that enable all of the tactile, visual and auditory input needed for compelling VR.

6. BANDWIDTH

WHAT: Though it’s easy to imagine a VR experience that is entirely created by a single computer at a single place (somewhat like watching a movie at a theater), it is much more likely that many computers will need to talk with each other over distance, and that requires access bandwidth to communicate (like the Web).

WHY: The design of the particular VR experience that defines success really comes into play here. For example, this post assumes “mass market VR” will be enabled by personal computing devices and that multiple people can share a VR experience from different locations. That means that larger computers will perform important tasks and coordinate communication with the smaller computers that people have. The amount of bandwidth required can vary greatly depending on what demands the system is making on the computers involved.

WHEN: If you think that VR is going to get to mass market through smartphones or whatever successor computer that we carry around with us, then you’re bottlenecked by the state of wireless cell networks. Although high-speed data connection is broadly available in major metropolitan areas, it is unreliable even there and unavailable outside of the most densely populated areas. Given the slow rate of evolution of cell networks, it would be at least 2022 before bandwidth is sufficient for VR everywhere.

Many VR enthusiasts picture mass-market adoption through desktop computers, gaming consoles, or other specialized hardware yet to penetrate mass market, but all of which would use wired connections up until the wireless access point, so for that camp, we could say that bandwidth is already sufficient.

7. LATENCY DESIGN

WHAT: The delay in computers communicating with each other is sometimes related to bandwidth, but this field is included as a separate factor to encompass other network quality issues as well as the sheer physics of data traveling across large terrestrial distances.

WHY: Some amount of perceptible latency is unavoidable as a matter of physics if we are talking about communication across the world. So to the extent that the VR experience relies on real-time interaction with a global population, acceptable latency must be designed into the experience, mediated somehow to make the perception of latency acceptable.
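For a sense of the hard floor that physics sets, assuming light in optical fiber travels at roughly two-thirds of its speed in a vacuum:

    C_FIBER_KM_PER_S = 200_000           # ~speed of light in fiber, km/s
    for km in (1_000, 10_000, 20_000):   # regional, transcontinental, antipodal
        rtt_ms = 2 * km / C_FIBER_KM_PER_S * 1000
        print(f"{km:>6} km -> best-case round trip ~{rtt_ms:.0f} ms")

An antipodal round trip costs ~200 ms before any routing or processing at all, far above the ~20 ms motion-to-photon budget often cited for comfortable VR – hence designing the experience around latency, rather than waiting for it to disappear.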

WHEN: Arguably, this is a problem that is solved now for many types of VR experiences, but I include it here just because I’ve seen many VR proposals that don’t consider how latency must be designed into the experience. We’ll see some common design patterns in the first year of release of the current crop, and they’ll formalize into best practices by next year, so we’ll call this solved by 2017.

So, when will “The Year of VR” really be? My rough guess is 2019 at earliest, 2022 for the more ambitious visions of mass-market VR that include mobile computing.

My point isn’t to be right on the prediction though; here I just wanted to give a more rigorous framework for making predictions of mass market success. When people claim that this or any year will be the year of VR, they should be clear on what the state of progress needs to be on each of these seven pieces. Considering progress in just one field alone has led to many, many mistakenly optimistic predictions.

real time

At Second Life, we occasionally debated the merits of virtual reality vs augmented reality. In caricature:

Virtual reality was the core dream of SL, same as the core proposition of Snow Crash, the Holodeck, the Matrix – the idea that a computer simulated world could have all of the sensory and intellectual stimulation, all of the emotion and vitality, all of the commerce and society, of the “real” world (quotation marks necessary because virtual reality would be so real that non-simulated reality has no better claim on the term).

Augmented reality said that the virtual realists dropped too much acid in their youth. A fully simulated environment might be escapist pleasure for the overcommitted few, but computers would show their real power by adding a layer to our existing lives, not creating entirely new ones. Computers would sink themselves into our phones, our clothes, eventually our fingers and eyeballs and brains, not in the service of making another world, but enhancing the world we live in.

If that debate sounded ridiculously theoretical to you, then I hope that was yesterday because today it’s as real as it gets.

Google Glass is the vanguard of augmented reality, and obviously important to the company.* Google’s mission has always been to organize the world’s information – not to create a fantasy world but to organize our world.

Second Life had its heyday after Google established itself as the new tech titan, but before any serious challenger had risen up behind it. We spent a lot of time trying to convince people that SL could be the next big thing … trying to explain that people wanted to have an online identity, instantiations of themselves that would interact with other online personalities, creating tiny bits of content that might not have individual value, but would have enormous value as a whole fabric of an online world where people would go and interact every day …

I was laughed out of a lot of buildings after explaining SL. Who wants to live online? Who wants friends that they see only in a computer? Who wants to spend their leisure hours pecking away at a keyboard and looking at the cascades of dreck that other non-professional users create?

Second Life missed the mark for a lot of reasons, but not because we were wrong about online life. Facebook came along, and gave us all of the virtual life that the Web could really handle – only 2D, status updates instead of atomic 3D content, kitten pictures instead of furries – but Facebook succeeded in creating a virtual world.

And now they’ve acquired Oculus VR. If it wasn’t clear before – and perhaps it wasn’t clear even to them – they have now taken a side in that old debate, the same side that they’ve been on since the beginning. Facebook is going to go more and more towards virtual reality, while Google expands further and further into augmented reality.

 

*I don’t work on Glass, have no special knowledge of the product or strategy, and actually have never even tried it.