ch-ch-ch-changes

This has been a watershed week for sexism and Silicon Valley. The New York Times published a searing article implicating well known VCs in harassing behavior. It feels like the culmination of a years-long effort spearheaded by Sarah Lacy, whose relentless reporting helped lead to the resignation of the CEO of the most valuable private company in tech as well as the dismantling of a VC firm.

For men in tech, it’s been a good week to reflect on the injustices done to women, to think about the women in these stories and the women in our own lives. A focus on the women’s perspectives is clearly the most necessary, just and safest line of introspection. This post is not for people who haven’t undertaken that line of thought. This post is about the men.

Chris Sacca and Dave McClure are two of the men highlighted (lowlighted?) in the Times. Each responded with a well-written admission of guilt. Sacca said “I am sorry” five times in a single post. McClure admitted “I’m a creep.” I’ve seen two kinds of responses to these mea culpas:

Group 1: “This is a transparent PR move. These guys are only interested in saving their own skins. They don’t deserve praise for coming clean after being exposed, and the actions they’ve taken in their ‘woke’ stage will never be enough to clean their record. People don’t change, they are what they did.”

Group 2: “Kudos to these guys for coming clean. It takes some bravery to face the crowd, to admit what you did, to make a public statement about your efforts to do better. Everyone makes mistakes, it’s the rare few who can improve upon their past. People can change, there’s no hope for any of us if that’s not true.”

Group 1 is right … and so is Group 2. The day I write my admission of guilt, even if only to myself, it will be driven by this truth: You can’t change who you are, you can only change your reaction to it.

You are what you’ve done, full stop. You might think that there’s more to it, that your own private thoughts count for something, that the high opinion of your loving friends and family mean something, that the dollars and ratings and likes and tweets show the true score. But no. You are what you’ve done, that’s it. And you can’t change what you’ve already done.

Everyone has done bad things. When we do bad things, we often want to believe that they’re not so bad, that they’re not consistent with our “true” character, that we somehow can make up for it in other ways. This kind of self-denial, of course, allows us to continue doing bad things. I’d argue further that this self-denial leaves us with little choice other than to continue doing bad things.

Being a good person is about choice, for most of us. If you are someone who has just always been a good person, who’s never done wrong, who’s always been on the side of the angels – well, I think you’ve probably just been lucky in this regard, if unlucky in others. You had good parents, good friends, good influences. You’ve never been tempted by sex or power or money or fame. But you’ve lived a life outside of the more typical human condition.

Once you’ve done something bad, your options typically diminish: you can only feel guilt and shame, or denial. You would think that a “good” person would choose guilt and shame – but that’s just as dangerous as denial! Guilt and shame lead to self-flagellation, often self-medication, and ultimately to an amplification and repetition of the behaviors that led to the bad actions.

It may seem perverse, but accepting your faults gives you more options for how to react in any situation. If you can accept what you’ve done, accept that it’s who you are, you are more free to choose how to react to it. You don’t have to choose the cover-up, you don’t have to choose to deny it, you don’t have to choose to ignore it. You are much more free to address it, and to make a different choice in the future.

I think that’s what Sacca and McClure are doing in their posts; they are publicly accepting who they are, and trying to make choices in the harsh light of that reality. Is it self-interested? Yes. Is it brave? Yes. I know that some people reading this are going to think I’m going all Stuart Smalley, and I get it. That’s their choice. You can’t change who you are.

the jungle

Upton Sinclair was a novelist, but the impact of his work was more akin to today’s investigative journalism. He went undercover to expose the harsh labor and unsanitary conditions in the meat-packing industry, he exposed the sensationalist “fake news” of his day, he pilloried Wall Street as well as the coal, oil and auto companies that drove the American economy. Industrialists hated him; the mainstream press only begrudgingly acknowledged his accomplishments. President Theodore Roosevelt called him “a crackpot,” and said further, “He is hysterical, unbalanced, and untruthful. Three-fourths of the things he said were absolute falsehoods. For some of the remainder there was only a basis of truth.”

I doubt Sinclair cared at all what the President thought, and he probably did have less concern for truthful details than he had for the larger cause of social justice. We should be glad for that. Today we still enjoy the fruits of Sinclair’s relentless fervor and incendiary writing: food safety standards, journalistic ethics, and heightened scrutiny of the giants of business on matters of fair and safe labor practices.

I regard Sarah Lacy as the Upton Sinclair of the tech industry, especially with regard to fair treatment of women in Silicon Valley workplaces. She has been a consistent and powerful advocate of social justice, and her impassioned writing has contributed to highly visible changes in highly visible businesses. She might occasionally trample on a smaller truth in pursuit of a larger justice – and if she can be anywhere near as successful as Sinclair in bringing about social change, I don’t really blame her. But nevertheless, we hold our idols to higher standards than our enemies.

Pando recently published an article about a VC firm, with one partner who had recently resigned due to highly credible allegations of sexual misconduct, one about-to-join partner deciding not to join after all, and the remaining partner dealing with the fallout. Since the firm stridently denied the initial claims of the victims, this remaining partner continues to receive Sarah’s scrutiny. In criticizing the firm’s promotion of “baller bro culture,” Pando published pictures of this remaining partner at parties with women.

This is where she lost me, a bit. These are just pictures of a guy at a party, he does not appear to be doing anything inappropriate. The women in the pictures are not doing anything inappropriate. How is this “baller bro culture”? I felt bad for the women, as it seemed that publication of the pictures was a sort of “party-shaming” implying that these women could have no possible role other than objectification. It seems oddly Puritan and retrograde in a way that doesn’t fit Sarah’s other writing.

But I get her point. If it turns out that these pictures were used in promotion of the VC firm’s activities, then they are illustrative of a “baller” image that the firm wanted to convey. Even if these particular pictures were never used that way, Sarah claims to have enough off-the-record information to be sure that the firm consciously promoted such an image, and I believe her. And more importantly, I believe the women who have come forward to claim that they were victimized by the VC firm. So at the end of the day, we are aligned on the larger cause, even though I am very sure that Sarah has done an injustice to the innocent women in the pictures she used. (She’s taken one of the pictures down, but not because she admits she was wrong, but because the copyright owner objected to its use.)

So I wondered, how can I help the larger cause? If I knew of any investors or company managers that abuse their positions for sexual advantage, I’d speak of it openly. But I don’t, so I asked myself whether I knew of anyone who was promoting a “baller bro image” that supports an environment that disadvantages women …

And here’s where I got stuck. I know one investor who frequently posts images of himself in glamorous locations, often with attractive women (and attractive men, to be fair). I know him pretty well, in fact. Everybody knows this guy, I’m sure Sarah knows him too. And everyone knows he’s a good, honest and fundamentally decent human. He’s known so well and spends so much time around attractive people, with such a sparkling reputation, that it’s basically impossible he could have done anything inappropriate without everyone finding out. Is he baller? Hellz yeah, he baller. Does that mean he uses a “baller bro culture” to promote his business and take advantage of women? I don’t think so …

I feel very confident that he’s never taken advantage of the inherently unfair investor power dynamic to pursue sex. But now we have to consider the question of whether his distribution of the images of innocent fun supports an overall culture in tech that’s bad for women. Again, I don’t think so – but I also don’t think I’m the best judge of this question. So let’s say for the moment that yes, these images contribute to an overall culture that objectifies women. What then? Do I reach out to this guy and insist that he stop posting pictures of his fabulous life? That seems oddly Puritan and retrograde.

WWSD? (What Would Sarah Do?)

I don’t know. Despite how unfairly she treated women in those other pictures, I have a hard time believing she’d engage in a crusade against this kind of aspirational Insta-journaling. I don’t think I can ask her, as she regards my concerns as absurd. So I’m left with few options … other than that impotent cry into the ether known as blogging, aka the last resort of a scoundrel.

the force awakens

Yep, it’s an end-of-the-year technology prediction post …

We’re at a special place in the consumer technology cycle. I’ve seen this movie before. Consumer technology trends are often described as waves, but I like a movie metaphor better, because it captures the notion that I actually saw these events when they were first released in the theater, and that we keep seeing the same plot points, themes and character types. I’ve lived through three really big waves of consumer technology. The third wave – the third movie – is finally coming to an end, which is a relief, because it kinda sucked. I’m really looking forward to the next show.

I’m a fan of the franchise generally, despite the repetitive plots. Each movie starts with the introduction of products that clearly show the possibility of what’s to come, although these are not the products that actually survive the revolution. Those products depend on a crucial underlying technology trend, which is not itself the consumer-facing technology. There is a spectacular platform war that decides the big winners and losers. The story ends, until next time, when the business patterns in the field have matured, and outsized returns for investing in those businesses have therefore disappeared.

The Origin Story: Personal Computers

[image: pirates-of-silicon-valley]

Like the first movie in a series, this one defined many of the patterns, tropes and heroic character types of the sequels to come. In a digital desert, a lone gunslinger appeared on the horizon, known only by the mysterious name Altair. The story really picks up when the Commodore PET, the TRS-80, and the Apple II appear on the scene. That trio of bandits opened up the Wild West, only to be dominated by the strongman IBM PC. But IBM only won a hollow victory, as it turned out that they’d unwittingly given the keys to the kingdom to Microsoft, the ambitious vassal that became the overlord. The story of the rise of the PC is the classic foundation of everything that came after in consumer technology.

But it would be a mistake to only pay attention to the foreground. In the backstory, the silicon chip is the key enabling technology that’s powering the other players. Moore’s Law is the inexorable force of progress, and Intel was the master who kept on top of the industry despite laudable challenges by AMD, Motorola, Texas Instruments, and a host of international competitors. This global tale of intrigue and ambition is a worthy accompaniment to the marquee narrative. In fact, the invention of Silicon Valley can be considered the prequel to this series.

The Worthy Sequel: World Wide Web

[image: the-matrix]

Many people say The Empire Strikes Back was a better movie than Star Wars. The Godfather was in many ways outclassed by Part II. The explosive success of the World Wide Web was at the very least a worthy sequel to the PC story. A knight in shining armor, Tim Berners-Lee, led a devoted band of heroes on a worthy quest to unite all of the world’s information. Early services like Prodigy and CompuServe leapt on the ensuing opportunity, but latecomer AOL won the day by sending a CD to every mailbox it could find. That was only the first act, as Netscape and Yahoo emerged as the real heroes … until the third act, when eBay and Amazon and Google trampled the field.

It’s usually not worth the effort to make a distinction between the Web and the Internet, but it makes sense to do so here because “World Wide Web” is the story with a beginning and an ending, while the technologies of the Internet are the more enduring enablers of that story. As protocols, the details of TCP/IP, DNS, HTTP and the like are not exactly gripping narrative. But just as silicon chips powered the PC revolution and could be considered its more enduring story, the Internet will live on long after the Web sinks into irrelevance.

The Failed Trilogy: Smartphones

[image: phone-booth]

Return Of The Jedi was a very successful movie. And it did have some awesome special effects for the time. But it was all of the same characters, and pretty much the same plot, soiled by dominant commercial motives and treacly pandering to a younger audience. By which I mean, fuck Ewoks. And Godfather Part III? The less said about that, the better.

The story of the last dozen years or so has been the move of personal computing and the Internet to smartphones. There’s some compelling pathos in the storyline of the death of the Web, overrun by mobile apps. But it was mostly dull to watch the Treo and Blackberry reprise the role played in prior movies by the Altair, Prodigy and CompuServe. I’ll admit it was great fan service to see the Apple character repurposed, and maybe there hasn’t been a more colorful personality than Steve Jobs, so that part of the story was pretty entertaining. You could say that the return of Jobs was as momentous as finding out about Luke’s father.

Let’s face it, it just wasn’t that exciting to watch Google and Amazon continue to grow. Facebook is a great new character as a flawed hero, and that whole subplot with Twitter and the rest of social media was a very strong B story. Other new characters like Uber and AirBnB have their minuses and pluses, but I don’t believe they’re going to be big characters in the next movie. (“Uber for X” companies are the goddamn Ewoks.) The overall experience has been like coming in to watch a huge blockbuster mega-sequel: you can really see the dollars up there on the screen, and there’s a certain amount of entertainment value that comes through, but the whole exercise just lacks the originality, joy and passion of the earlier entries.

Not a bad backstory though, and as in the other movies, this one will continue to be meaningful in all future sequels. Cloud computing, software as a service, the evolution to microservices – these things fundamentally changed the way that new businesses start and grow. They reduced the capital costs in starting a new information technology company by orders of magnitudes, letting in many more characters. Unfortunately, most of those new characters are Ewoks.

The Force Awakens

So what’s the next movie going to be about? Will it reinvigorate the franchise? Or will it be a terrible prequel (or worse, prequel trilogy) that we’ll all have to agree to pretend never happened?

I think we don’t know all of the elements, but we do know some of them. Let’s first recap what we saw in the first three installments:

[image: tfa-chart]

And here’s what I think we know about the chart today:

[image: tfa-chart-f]

Main Story: There is a flood of products that don’t have an agreed category name yet – Siri, Google Assistant, Amazon Alexa, Microsoft Cortana, chatbots, chatbots and more chatbots. Some industry terms that are cropping up are intelligent personal assistants, virtual assistants, conversational search. Or chatbots, fer chrissake.

The point is, you will have things in your house (your car, your pocket, etc) that you talk with, and these things will talk back to you in a way that makes sense. You’ll regard your interaction as a conversation rather than button punching or screen swiping. Until people converge on another name for all of these things, I’ll call them “conversational devices” – this captures that you have a productive back-and-forth with a physical object. Yes, you can already do something like this on your smartphone, but those implementations are only a hint of where this will go.

As early as it is, there are plenty of curmudgeons who don’t see the point. Smarter people have said we’ll never need more than five computers, no one wants a computer in their home, the Internet is a fad, the iPhone is going to be a flop. Predictions are hard. But screw it, here’s mine: within 3 years, it will be apparent that the adoption curve of conversational devices is in the same category as PCs, the Web, and smartphones.

Conversational devices will be the story of the next decade in consumer technology. Not that there won’t be other stories, it’s just that this one will be the lens by which we understand the era. I still love virtual reality, but it’s still not time yet. The blockchain isn’t consumer-facing, and I don’t believe in Bitcoin. Not Internet of Things, not 3D printing, not self-driving cars, not wearable devices (unless they are also conversational devices) – some of these will be big stories, but not the biggest story of the next dozen years.

Backstory: Conversational devices rely on this chain of technologies: Machine Learning -> Natural Language Processing -> Speech Synthesis. These technologies are complex and interrelated, and rather than explain why this is their moment (the foregoing links give that explanation), I’ll just skip to the punchline: People will be able to speak to machines, machines will understand and speak back. Most people already have experience with primitive versions of these technologies, and find those experiences frustrating and unsatisfying. (“Press 9 to go back to the main menu.”) But the rate of improvement here is at an inflection point, and this is about to become undeniably apparent on a mass consumer level.
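To make that chain concrete, here’s a minimal toy sketch of the pipeline in Python. Everything here is a hypothetical stand-in: real systems use trained models at each stage, while this sketch fakes speech recognition by treating audio as text, fakes NLP with keyword matching, and fakes speech synthesis by just returning the response string.

```python
# Toy sketch of the conversational-device pipeline:
# speech in -> language understanding -> response -> speech out.
# All three stages are hypothetical stubs, not real ML models.

def recognize_speech(audio: str) -> str:
    """Stand-in for speech recognition: pretend audio is already text."""
    return audio.lower().strip()

def understand(text: str) -> str:
    """Stand-in for natural language processing: crude keyword matching."""
    intents = {
        "weather": "It's 72 and sunny.",
        "time": "It's 3 o'clock.",
        "music": "Playing your favorites.",
    }
    for keyword, response in intents.items():
        if keyword in text:
            return response
    return "Sorry, I didn't catch that."

def synthesize(response: str) -> str:
    """Stand-in for speech synthesis: return the text to 'speak'."""
    return response

def converse(audio: str) -> str:
    """One turn of the back-and-forth: chain the three stages."""
    return synthesize(understand(recognize_speech(audio)))
```

A turn then looks like `converse("What's the weather like?")`, which flows through all three stages. The real versions of these stages are exactly the technologies whose rate of improvement is at an inflection point.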

Platform War: The most successful conversational devices will be on a common platform of delivery. Amazon Echo and Google Home are devices that sit in your home and listen to everything you say, and respond back to help you. Facebook Messenger has bots that will have a conversation with you. Each of these is currently displaying only the limited strengths available in their existing businesses (Amazon:Shopping, Google:Search, Facebook:Brands), but they are all trying to expand to become a delivery platform for third-party conversational devices. Amazon and Facebook already offer developer platforms, Google is focusing on partnerships.

This platform war will have elements of past wars, in hardware vs software, apps vs operating system, open vs closed. That complexity makes it very interesting, but remember, this is theme rather than story. The platform war is the Empire vs the Rebellion, the Mob vs America, it’s the thematic texture that gives the story meaning. You shouldn’t mistake it for the main narrative though. In Mac vs PC, Microsoft won, not Apple or IBM. In open vs closed web, Google won, not Tim Berners-Lee or AOL. Ok, the winners in iOS vs Android were also the platform owners, but that’s yet another reason that movie sucked, maybe it’s the fundamental reason that movie sucked. I hope everyone involved is smart enough not to let that happen again.

Pioneers and Winners: We are far enough into the story that we can guess at pioneers, but we can’t be sure until the extinction event happens: in all previous movies, the early pioneers proved the market, and then died, crushed by an onslaught that included the eventual winners. I’m convinced that this plot point will repeat in the new movie. Look in the chatbot space for potential pioneers – it’s certain that one of these will become historically important. And then it will die.

I’m hoping the platform war victors aren’t also the heroic winners of the main story, as happened in the smartphone movie, because it’s boring and tends to result in Ewoks. Facebook is the pivotal character to watch, as it has a platform opportunity with Messenger, but has huge weaknesses relative to Google, Amazon, Apple and even Microsoft in hardware production and delivery, and hardware will be key to platform ownership. So it will be interesting to watch whether Facebook dives into hardware, or partners with one or more of the other platform players, in the hopes that there’s a bigger opportunity in the main story than the theme.

Well, that’s all I have to say about that. Enjoy the show!

WWGD?

Six months ago, I said that Trump would win the election in part because the rise of new media destroyed the historic function of the media as our Fourth Estate. I was upset that product managers at our most important Internet companies seemed to refuse to own the problem that is so clearly theirs.

Now that the chickens have come home to roost in a big orange nest of hair, others are saying that the election was, in a sense, rigged by Facebook. They say fake news has defeated Facebook. Facebook denies responsibility, while people are literally begging them to address the problem.

Product managers at Facebook are surely listening now. If any happen to be listening here, let me say: I’m sorry I called you cowards. I realize that today’s state was hard to foresee, and that the connection to your product even still seems tenuous. I am awed at the great product you’ve built, and I understand that no one knows the data better than you do, and that it is tough to take criticism that comes from sources completely ignorant of your key metrics. It’s not easy to regard something so successful as having deep flaws that are hurting many people. I think it is a very human choice to ignore the criticism, and continue to develop the product on the same principles that you have in the past, with the same goals.

I have faith that you are taking at least some of the criticism to heart. I imagine that you know that you can apply machine learning to identify more truthful content. I am sure that you will experiment with labels that identify fact-checked content, as Google News is doing. Once you reliably separate facts from fiction, I’m sure you’ll do great things with it.

I’m still concerned that facts aren’t enough. I think we’re in a post-fact politics, where people no longer (if they ever did) make their political choices based on facts. I have read many analyses of the election results, many theories about why people voted as they did. There are many fingers pointing blame at the DNC and the Electoral College; at racism, sexism, bigotry; at high finance, globalism, neoliberalism; at wealth inequality, the hollowing out of the middle class, the desperation that comes with loss of privilege. I am not convinced that giving people more correct facts actually will address any of this.

The most incisive theory that I’ve seen about today’s voters says that the divide in our country isn’t about racism or class alone, but about a more comprehensive tribalism, for which facts are irrelevant:

There is definitely some misinformation, some misunderstandings. But we all do that thing of encountering information and interpreting it in a way that supports our own predispositions. Recent studies in political science have shown that it’s actually those of us who think of ourselves as the most politically sophisticated, the most educated, who do it more than others.

So I really resist this characterization of Trump supporters as ignorant.

There’s just more and more of a recognition that politics for people is not — and this is going to sound awful, but — it’s not about facts and policies. It’s so much about identities, people forming ideas about the kind of person they are and the kind of people others are. Who am I for, and who am I against?

Policy is part of that, but policy is not the driver of these judgments. There are assessments of, is this someone like me? Is this someone who gets someone like me?

Under this theory, what is needed isn’t more facts, but more empathy. I have no doubt that Facebook can spread more facts, but I don’t think it will help. The great question for Facebook product managers is, Can this product spread more empathy?

The rest of this might be a little abstruse, but here I’m speaking directly to product managers of Facebook News Feed, who know exactly what I mean. You have an amazing opportunity to apply deep learning to this question. There is a problem that the feedback loop is long, so it will be difficult to retrain the production model to identify the best models for empathetic behavior, but I think you can still try to do something. There is some interesting academic research about short-term empathy training that can provide some food for thought.

I am convinced that you, and only you, have the data to tackle this problem. It is beyond doubt that there are Facebook users who have become more empathetic during the last five years. It is likely that you can develop a model of these users, and from there you can recreate the signals that they experienced, and see if those signals foster empathy in other users. I don’t think I need to lay it out for you, but the process looks something like this:

  1. Interview 1000 5-year Facebook users to identify which ones have gained in empathy over the last five years, which have reduced their empathy, and which are unchanged.
  2. Provide those three user cohorts to your machine learning system to develop three models of user behavior, Empathy Gaining, Empathy Losing, Empathy Neutral.
  3. Use each of those three models to identify 1000 more users in each of those categories. Interview those 3000 people, and feed their profiles back into the system as training data.
  4. See if the models have improved by again using them to identify 1000 more users in each category.
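The label-train-expand loop above can be sketched in a few lines. To be clear, nothing here is Facebook’s actual system: `make_user`, `interview`, and `train_model` are hypothetical stand-ins, the “profiles” are synthetic two-feature vectors, the classifier is a simple nearest-centroid model, and the cohort sizes are shrunk from 1000 to 300 just to keep the toy fast. The point is the shape of the loop: interview a seed set, train, use the model to pick more users, interview them to get ground truth, retrain, and check whether the model holds up.

```python
import random

COHORTS = ["gaining", "losing", "neutral"]

def make_user(cohort):
    """Toy user: a noisy 2-feature profile whose center depends on the
    hidden true cohort. Real profiles would be rich behavioral data."""
    center = {"gaining": (1.0, 0.0), "losing": (0.0, 1.0), "neutral": (0.5, 0.5)}[cohort]
    return {"features": tuple(c + random.gauss(0, 0.15) for c in center),
            "true_cohort": cohort}

def interview(users):
    """Stand-in for steps 1 and 3: expensive human labeling that
    reveals each user's true cohort."""
    return [(u["features"], u["true_cohort"]) for u in users]

def train_model(labeled):
    """Stand-in for the ML system (step 2): nearest-centroid classifier."""
    centroids = {}
    for cohort in COHORTS:
        points = [f for f, c in labeled if c == cohort]
        centroids[cohort] = tuple(sum(dim) / len(points) for dim in zip(*points))
    return centroids

def classify(model, user):
    """Assign a user to the cohort with the nearest centroid."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(COHORTS, key=lambda c: sq_dist(user["features"], model[c]))

random.seed(0)
population = [make_user(random.choice(COHORTS)) for _ in range(3000)]
seed_users, rest = population[:300], population[300:]

model = train_model(interview(seed_users))            # steps 1-2: label, train
round2 = rest[:300]                                   # step 3: label more users...
model = train_model(interview(seed_users + round2))   # ...and retrain
holdout = rest[300:]                                  # step 4: check the model
accuracy = sum(classify(model, u) == u["true_cohort"] for u in holdout) / len(holdout)
```

On this synthetic data the retrained model agrees with the held-out interviews almost all of the time; the real question is whether any model trained on genuine behavioral signals would hold up the same way.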

At this point (or maybe after a few more cycles), you will know whether Facebook has a model of Empathy Gaining user behavior. If it turns out that you do have a successful model, of course the next thing to do would be to expose Empathy Losing and Empathy Neutral users to the common elements in the Empathy Gaining cohort that were not in the other two cohorts.

But now at this point you are in a place where the regression cycle is very long. Is it too long? Only you will know. How amazing would it be to find out that there’s a model of short-term empathy training that is only a week or two long? People use Facebook for hours a day, way more than they would ever attend empathy training classes. This seems to me to be an amazing opportunity. Why wouldn’t you try to find out whether there’s something to this theory?

One reason might be a risk to revenue models. Here I’d encourage you to see what Matt Cutts said to Tim O’Reilly about Google’s decision to reduce the prominence of content farms in search results, even though that meant losing revenue:

Google took a big enough revenue hit via some partners that Google actually needed to disclose Panda as a material impact on an earnings call. But I believe it was the right decision to launch Panda, both for the long-term trust of our users and for a better ecosystem for publishers.

I understand this mindset personally because I was there too. At the same time Matt was dealing with Google’s organic search results, I was dealing with bad actors in Google’s ads systems. So I was even more directly in the business of losing revenue – every time we found bad ads, Google lost money. Nevertheless, we had the support of the entire organization in reducing bad ads, because we knew that allowing our system to be a toxic cesspool was bad for business in the long run, even if there were short-term benefits. In fact, we knew that killing bad ads would be great for business in the longer run.

News Feed product managers, I’m not writing this from a position of blaming you. I was in a situation very much like yours and I know it’s hard. I can also tell you, it feels really really good to solve this type of problem. I am convinced that an empathy-fostering Facebook would create enormous business opportunities far exceeding your current path. It is also entirely consistent with the company mission of making the world more open and connected. You can make a great product, advance your company’s mission, and do great good in the world all at the same time. You are so fortunate to be in the position you’re in, and I hope you make the best of it.

the right stuff

I’ve been reading post-election analysis all day. It’s exhausting, infuriating, debilitating … and necessary. When an outcome is wildly off expectations, it’s important to understand what really happened; otherwise you could expend a lot of effort “fixing” the wrong things. There sure are a lot of people who are confident that they know what happened; I’ve been reading them all day. Then I suddenly realized that these are precisely the same people who turned out to be so spectacularly wrong in predicting the outcome. Hey, maybe I should pay attention to people who were right all along …

There were some early, correct predictions, by a prescient pollster (March 2016), a hack blogger (May 2016), a partisan policy expert (August 2016), and a politics professor (Sept 2016), but two sources stand out for their entertaining writing and strong post-election advice.

Michael Moore predicted a Trump victory in July. He precisely identified the four critical “Rust Belt” states that were vulnerable to a Republican flip. Everything he said in his analysis turned out to be exactly correct. So we should probably take his post-election advice pretty seriously. You should read it on his page, but I’ll paraphrase here:

  1. Reform the Democratic Party nomination process.
  2. Ignore the media sources that got stuck on their false narrative before the election.
  3. Get active in telling Democratic Congresspeople to obstruct the Republican administration.
  4. Get over your surprise and stop treating Trump like a joke.
  5. Remember that the popular vote went to Clinton. The populace wants liberal policies.

Solid advice, pretty straightforward. The tone overall is combative and spirited; I’m a little unhappy about how it implies we need 4 more years of complete legislative gridlock, but I suppose that’s the fight we must sign up for if we fear that Trump will try to fulfill his campaign promises.

But … why would we assume that? Trump never fulfills promises that aren’t to his advantage; why would he start now? This is the perspective of the other guy who got it right in a really interesting way, Scott Adams. Famous as the creator of Dilbert, he’s become an oddly narcissistic but really entertaining blogger. He predicted a Trump victory in August 2015, by far the earliest correct prediction. He says that President Trump will preside over the most direct democracy in the history of our republic. In Adams’ view, campaign promises mean nothing to Trump. He says what he says to get what he wants. He got elected by one set of people, now he’ll govern another set of people. He’ll say whatever is needed to placate the largest set of those people that he can, regardless of whether they elected him, and placating that set will generally lead to kind outcomes. Trump is a con man, which actually makes him a safe choice for President, because he has no intention of hurting his real-estate interests around the world, or his self-centered media business.

Reading Adams is going to be infuriating for many, but the practical advice there is actually pretty similar to Moore’s: be active with your neighbors and representatives, especially on social media; ignore the pundits who were wrong all along; remember that the majority of the country wants good outcomes for as many people as possible. I kind of want to slap him upside the head, but I can’t say he’s wrong.

an indecent proposal

For over 20 years, Internet businesses have grown under the protection of a special law that provides extraordinary privileges. This law has properly been hailed as a boon to innovation, and has become enshrined in some quarters as an indispensable pillar of free speech. However, no law regarding technology can survive the merciless rule of unintended consequences; what was once a necessary sanctuary has become a virtual menace to society. If you wonder how the United States has reached the brink of electing a deplorable villain as its leader, at least part of the answer rests with the Internet’s most generous law.

This law is Section 230 of the Communications Decency Act of 1996. The bulk of the act was a misguided attempt to regulate “indecent” content on the Internet, most of which was rightfully struck down by the Supreme Court in the name of the First Amendment. But Section 230 was a special provision inserted late in the legislative process, out of concern that nascent Internet businesses would drown in legal liability for statements made by others. Section 230 states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

This is a shield from libel and defamation suits, an amazing advantage in the rise of the new media of the Internet. The impetus for this law came from a 1995 case, Stratton Oakmont v. Prodigy, in which an Internet service provider was found liable for defamatory statements made by a user of its message board. The court’s reasoning rested partly on the fact that the service had exercised some editorial control over some of the message board content; therefore the service could be treated as the publisher of all the content, just as a newspaper would be.

In 1995, this was a horrific decision made by technology-illiterate judges who had no understanding of the power and potential of the Internet. It would be nice to think that the Congressmen who inserted Section 230 into the CDA were blessed with extraordinary foresight into the future of technology. But no – actually they just wanted to be sure that Internet companies would be willing to help hide boobies.

Remember, the bulk of the CDA was an insane Sisyphean effort to stop the spread of pornography on the Internet. Internet providers were rightly concerned that they would never be able to stop all the boobies. They argued that the 1995 case showed that any failed attempt to censor boobies would be interpreted as editorial control, holding them liable for all the boobies that did get through. So these Congressmen inserted Section 230 as a way of saying to companies “Hey, just try your best to censor boobies, and you won’t be held liable as a publisher of the boobies that did get through.” Internet companies, even in 1995, were smarter than Congressmen. Although the CDA was about as effective at reducing pornography on the Internet as a cocktail umbrella in a hailstorm, Section 230 emerged from this fragile legislation as an enduring and invaluable shield against liability. Now you can’t sue Facebook for publishing information that is verifiably false and harmful. Lives can be destroyed on the sites we live on, and those sites will never be held responsible.

The EFF says Section 230 is “one of the most valuable tools for protecting freedom of expression and innovation on the Internet” and the ACLU says that this law “defines Internet culture as we know it.” These eminent bastions of free speech have been tremendous warriors for a lot of good in our society, but like anyone else, they could not predict the future and they may cling too long to brittle ideas that are past their expiration date. When Section 230 was adopted, the Internet was the Wild West, the new American frontier for development. There were no dominant Internet companies. The law was written with Prodigy and CompuServe in mind; AOL was the up-and-comer, Yahoo was barely a year old. The media lifeblood of the nation was the three broadcast networks, the New York Times and the Washington Post, and the many local newspapers throughout the country. People who understood the Internet then were rightly concerned about legal liability crushing the industry in its infancy.

We live in a very different world today. Network effects make some large portions of the Internet into a winner-take-all game where the behemoths can quickly grow into billion-dollar enterprises, affecting billions of lives daily. Traditional media is dead and dying, a boon to experimentation and diversity, but a blow to authority and truth. Technologists were proud to disintermediate and destroy the old gatekeepers, but we engaged in this merry destruction without any thought to the vital purpose that the Fourth Estate served in our politics. And now we live in a nation where most days it seems like the only people who don’t believe the next president could be a racist, misogynist, fascist despot are the ones who believe she could be an acceptably corrupt continuation of a broken political system.

The gatekeepers are dead and most people only get their news from their friends and others in the same echo chamber on Facebook. Our public discourse is conducted on Twitter, where online harassment by anonymous, cowardly sexists and racists is treated as an acceptable form of free speech. And we are still, as we always are in technology, only at the beginning of our problems. I don’t know where this is going any better than lawmakers did in 1996; I don’t have a solution – but I do think we should take the thumb off the scales that favor Internet businesses.

A similar situation occurred with respect to state sales taxes. In a 1992 case, Quill v. North Dakota, the Supreme Court ruled that businesses with no physical presence in a state did not have to collect sales tax in that state. Amazon exploited this ruling, carefully building its business to avoid having to impose state sales taxes, giving it an advantage over local businesses. By 2012, Amazon saw the writing on the wall, and began “voluntarily” collecting sales tax in many more states than it had previously done. But by that time, the West had been won: Amazon was the dominant online retailer, and Main Street businesses had been all but destroyed. Amazon had the foresight to act ahead of the change in the laws, which is coming anyway. I fear our dominant Internet services lack the moral courage to act in the interests of our country.

Facebook and Twitter are our new public square, and although they are private businesses they should not be exempt from the laws and social requirements of other businesses that regularly gather large groups of people together. No shopping mall, for example, would allow the public posting of verifiably untrue, insane ramblings, not without damage to its business as well as legal liability. No sporting venue would allow women to be spat on, or minorities to be subjected to vile racist invective, without losing business and facing lawsuits. And yet we allow our most significant public gatherings online to be completely free of the obligations of being a publisher, obligations that supported the kind of media that has been vital to our properly functioning politics.

The Internet destroyed vast portions of traditional media that depended on fact, truth and integrity. This hasn’t been solely a triumph of progress and free market principles; it has been a creative destruction assisted by a sweetheart deal with the government. Under this mantle of government protection, technology companies replaced essential elements of democracy with endless misinformation, lies and insanity. Free speech should allow much of this to be possible, but those who would build a business on irresponsible dissemination of speech should be subject to the same laws as the businesses that they destroyed. It’s time to take the training wheels off Internet culture. Section 230 of the CDA should be repealed.


Wired UK just published a pair of articles that are a great explication of the potential of Virtual Reality to become as powerful as the Web. They fairly report the vision that Philip Rosedale has been pursuing for most of his professional lifetime. My one-sentence summaries:

Second Life was just the beginning – Philip wanted to connect the world in a seamless 3D environment, but was greatly limited by technology of the time; today many of these limitations are lifting.

VR and the CD-ROM – People are most excited about closed VR experiences today, but this is like being excited about Encarta on CD-ROM before people understood how powerful Wikipedia would become.

Good articles; read them if you are interested in VR. I have just one, entirely personal, embarrassingly picayune, totally irrelevant problem …

The first article says: “Then, in 2006, Second Life stopped growing.”

I know this to be untrue. I ran finance for SL from 2005 to 2006, and remained on the exec team until I left the company in 2009. We raised money in 2006, and I personally prepared the financial projections that predicted our growth through 2008. Financial projections for startups are notoriously optimistic, which is to say they are mostly composed of fairy dust and bullshit. I was as surprised as anyone to notice, in 2008, that my projections of fast growth held up, quarter over quarter, with a margin of error of no more than 10% (and even at that, the projection was usually lower than actual growth). So I know that SL was still growing quite well in 2006, in every meaningful aspect of usage and business metrics. The growth rate slowed in 2008, but absolute growth was still positive in 2009 when I left. Yes, SL did stop growing eventually. But not on my watch.
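For the curious, here’s what that “margin of error of no more than 10%” claim means in concrete terms, as a minimal sketch. The numbers below are entirely made up for illustration; they are not the actual SL figures, which remain private.

```python
# Hypothetical quarterly revenue figures (illustrative only -- not real SL data).
projected = [100, 112, 125, 140]
actual = [103, 118, 130, 152]

# Relative error of each projection, measured against the actual figure.
errors = [abs(p - a) / a for p, a in zip(projected, actual)]

# "Margin of error of no more than 10%" means every relative error is <= 0.10;
# "projection usually lower than actual" means p < a in most quarters.
within_margin = all(e <= 0.10 for e in errors)
usually_conservative = sum(p < a for p, a in zip(projected, actual)) > len(projected) / 2
```

With these invented figures, `within_margin` and `usually_conservative` both come out true, which is the shape of the claim being made about the 2006–2008 projections.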

Ok, that’s prideful, and it’s petty. But it’s fair to say that I’m the single most authoritative source in the world on this topic. So when I read the article, I sent a note to the reporter with a correction. He replied that he’d “check it out.” A day later, he said he had followed up and believed he was right, citing an article by another reporter.

That is seriously annoying. The other reporter has no better access to the facts than the original reporter. That other reporter is just another source of rumor and speculation. In this case, I am the actual source of truth, and the reporter with access to the truth chose to ignore it!

Obviously, this is trivial. Who cares? No one but me and my wounded pride. But it’s frightening to consider how easily reporters will ignore the truth when it gets in the way of their own goals.