don’t be evil


“Don’t Be Evil.” 

If you recognize this as Google’s former corporate motto, you probably regard it as a broken promise. But arriving too quickly at this judgment misses the lesson of the journey. It may be true that we now live in a tech dystopia created at least in part by those who once proclaimed, “Don’t Be Evil.” But in the beginning, that motto contained a magnetic True North that once meant something, that still means something, something that is awaiting our rediscovery. 

So before memorializing “Don’t Be Evil” as a broken promise, we must remember what it once meant.

We have to remember the time before the first widespread criticism of this mantra, before the semantic noodlers complained that it is impossible to define what “evil” means. See, the thing is, before this criticism was widely shared, it wasn’t relevant. It wasn’t relevant because the real audience for “Don’t Be Evil” already knew what the phrase was supposed to mean.

The real audience was undoubtedly the employees of Google at the turn of the millennium, when either Buchheit or Patel (depending on the storyteller) first proposed this as the company’s motto. Google had fewer than 250 employees at the time. “Don’t Be Evil” was a phrase that was easily understood not only by those 250 employees, but also by all of the company’s potential employee base. Yep, I’m claiming that every person who had the qualifications to be hireable by Google at that time (1999-2001) would easily understand the basic meaning of Don’t Be Evil.

See, if you knew enough about computing in those days to be employable at Google, then you grew up in technology watching IBM lose to Microsoft, then watching Microsoft crush Apple, and then watching the government strangle Microsoft. And then you got to enjoy watching Google beat the crap out of Microsoft. It’s just human nature to watch all this and make it into a morality play, with extremely domain-specific notions of “good” and “evil.”

When the government hampered Microsoft in the ’90s, that was a fair comeuppance for an abusive player, just as had happened to IBM in the ’80s when Microsoft was coming up. Small new companies innovate into the spaces left by the decrepitude of large old companies. The cycle of life applies to all of us, businesses too. In business, as in life, that cycle plays out in predictable patterns. And as humans, we love telling ourselves a story about our patterns. And to be compelling, our stories must have good guys and bad guys, good and evil. “Don’t Be Evil” is a morality play, and it is just a fiction, but still, these notions of good and evil move us – especially when we’re deciding where to work and how to win competitive battles.

So IBM vs Microsoft, Microsoft vs Apple, Microsoft vs Google – that was the drama that played out in information technology at the time, and our notions of “good” and “evil” were aligned with the prevailing morality play that everyone knew as orthodoxy, even if they disagreed with it: Microsoft was the bad guy, Apple was awesome and cool before MSFT used monopolistic advantages to crush them (this was before the Second Coming of Jobs). Microsoft was Evil. Google was Good.

So in this morality play, “evil” means, basically: using “business techniques” instead of superior technology to win. Don’t Be Evil simply means: win with technology, not with business techniques. 

“Business techniques” include perfectly legitimate and absolutely necessary decisions and deals around pricing, packaging, and distribution. But that’s just the bare minimum. The expanded world of business techniques gets pretty gray pretty fast, and eventually you end up where we are today: dark patterns that manipulate users, platform rent-seeking, externalization of business costs into the community, lobbying and other political manipulation. I don’t really like calling these things “evil,” but it’s fair to say that these are the tactics and methods of mature businesses, and they are not what successful startups do.

I worry that the tech world has been so dominated by the usual BigTech suspects for so long now that entrepreneurs have forgotten the difference between Good and Evil. But no matter: the world doesn’t need to remember, because the truth will out. For the first time in a long time, nearly all the BigTech companies are grappling with disruptive technologies that they do not understand. When there is this much disruption in the air, fancy business techniques become less valuable, and a True North for product development becomes far more valuable. For the first time in a long time, opportunity is everywhere, all incumbents are vulnerable, and all startups have this one incontestable upper hand: Don’t Be Evil is a winning strategy, not an empty corporate motto.

a lever and a place to stand

“Give me a lever and a place to stand, and I will move the world.”

Archimedes (apocryphal)

When I was a kid growing up in New Jersey, all I ever wanted was to get out, across the river to the bright lights big city. I assumed that New Yawk City was the place that moves the world, because what else would a Jersey kid think amirite? And I loved everything about living there: I loved the hard work and the harder play, the high stakes and the almost tangible power and raw human energy that buzzed through the canyons between the skyscrapers. But after starting my career in “high” finance, I was disappointed in the financial engineering that passed as “creation” in that industry, and by 1999 it was obvious that the future was really being created across the country, in Silicon Valley.

So I headed West, searching for an industry that builds levers to move the world, searching for my place to stand. About a decade in, people began to tell me that my career path looked a little weird. From leveraged buyout lawyer in NYC to startup counsel in Silicon Valley to Korean venture capitalist to Fortune 100 corporate deal maker. And then it just kept getting weirder: international marketing, developer relations, enterprise product development, startup founder, BigTech product manager, startup sales manager and more. Not content with the variety of roles, I also wandered across sectors and products: enterprise hardware, metaverse consumer software, adtech, content moderation systems, maps, devops SaaS ….

Oh, and then I got sick of tech and ran for political office … now that was a weird move. But not to me, still just trying to understand what, if anything, moves the world in a better direction. Campaigning was a deeply moving experience for me, as I’m sure it is for any child of immigrants. I learned a lot, but the long and the short of it here is just that the political industry isn’t a place I can stand.

When I look back on it all, I feel lucky to have started my career in tech during that first decade, from 1999 to 2008 – before the global financial crisis, before BigTech was a thing … and maybe the last time we could have avoided the consequences we’re living out today. The dreams were big, the schemes were fun, and the common ambition was to put a dent in the universe with technology so good it seems like magic. The “why” behind this sparkling ambition was often unspoken, but I never thought it was about the money. Most of my friends in tech thought it was wonderful to see explosions of wealth of course, but we weren’t in technology to play the lottery, we were in it because we loved technology.

We were mostly dorky kids who were lucky enough to have access to an Apple ][ or Commodore 64 in middle school or high school, we played Atari and Intellivision, we wrote our first programs in BASIC and we fell in love with the future. And though we might have loved technology for different reasons, I think the common thread was that we loved what technology could do for humanity. We loved the spirit of innovation for its delight, not the dollars. We loved the fun that tech could add to our lives.

So the place where I stood in the best years of my first decade in tech was in San Francisco, at a company called Linden Lab, and we tried to move the world with Second Life. Enough has been written about Second Life; I don’t like to add to the noise. But I can’t say enough about the company we “Lindens” called “the Lab,” especially now that people are recognizing Second Life as an OG when they talk about “the metaverse” today …

Of course the product innovation was fascinating, but even more than that, I appreciate the workplace innovations we implemented at the Lab. Many of these are lost already in the sands of time, and frankly not all of our innovations were good ideas, but we had an authentic commitment to transparency, openness, and trying new ways to enable emergent bottom-up innovation rather than top-down command-and-control management. We had open floor offices because they flattened hierarchy, not to save costs on real estate. We had snacks and games because we genuinely liked to have fun with each other, not as a nefarious scheme to keep overgrown adolescents at work. We had peer bonuses as a bonding experience, not as a competition for brownie points in the next performance review. We experimented with democratic decision-making, as messy as any experiment in democracy. We had remote offices, work from home, chat and video collaboration before any of these things were regarded as rational costs for a startup.

The Lab was also fearless with new business models, defining and implementing product lines in a way that felt like feeling around in the dark back then, but now seems prescient. “Freemium” as an acquisition strategy, the power of subscription metrics, data-driven decisions, SaaS-like pricing and practices before SaaS was a thing, defining product management roles before the internet industry had standardized skills for the role. We didn’t invent any of these by ourselves, but they were all relatively new business practices in our context.

So we endlessly experimented and adopted internal management and business practices on the fly while also attempting a product so ridiculously difficult that the largest technology companies in the world continue to fail today in their modern attempts to replicate the possibilities we demonstrated fifteen years ago. Maybe the only way we were old-school was that we built a profitable business, even though many companies had already amply demonstrated that tech investors prefer a fanciful growth story to the reality of profitable results.

[I’m leaving out the best part about the Lab: I could write a book about the people, but to even begin that here would be to raise uncontrollable emotions that are not at all the point here. Suffice to say that to this day I feel a bond with every Linden, past and present.]

What I realize now is that rather than being ahead of its time, the Lab was at the end of an era, before technology became Big Tech. The people who first populated Silicon Valley with technology workers were geeky idealists. Many of them, especially those who entered the scene from San Francisco, descended from a local cultural heritage of hackers and pranksters, the kind of Merry Pranksters that gave rise to the Cacophony Society and Burning Man – a culture of anti-authoritarianism, a community of individualists, a spirit of creativity and freedom and fun.

After the global financial crisis, for a variety of reasons, that culture gave way to people who … well, let me not judge any person, because we all live in glass houses, but looking at where we are today … the legacy of my last decade or so in technology is not about any of that spirit from my first decade. Too many technologists began to insist that technology could lead humanity, going so far as to believe in the inevitability of technological progress as if it were some natural force more powerful than the needs of humanity. And so we got surveillance capitalism, walled gardens, dark patterns, monopolistic rent-seeking, more and more exploitative and community-destroying business models and practices, and ever bigger and bolder next-gen Ponzi plays. None of those are technology; they are instead the social and economic results of favoring technology over humanity.

I’m an old man now, perhaps just yelling at the clouds. Sure, sure, I understand that some kind souls will object that I’m not that old, that there’s plenty of life ahead, plenty to do, plenty to dream. But see, I don’t think there’s anything to object to, I don’t think there’s anything wrong with being old. There are a lot of things that I see and understand now that I simply could not have understood with less experience in life. That experience – not just the technology and business experience, but ALL of the experience of living – is the lever that I’ve sought all my life.

And now I’d like to share the leverage of experience with as many people as I can who might use it to move the world in the right direction.

And the place to stand? Well, it has to be San Francisco. There are places in the world that I love more, but there is no other place that I know with that particular spirit of love for humanity over technology. That spirit has been dominated of late, it has been beaten, it has been bruised … but it is not gone – I just know it because I have been around long enough to know it. San Francisco is currently in the worst shape that I’ve seen in my quarter-century in California, so bad that it almost reminds me of New York City in the ’70s and ’80s … a place that we Jersey kids regarded as a bankrupt disaster, only later to realize that we should have spent way more time trying to get into CBGB. What I’m saying here is that we’ll later remember now as the time when San Francisco was authentically cool.

So – this is all my way of saying that I’m going to be spending my time in San Francisco working on technology startups in generative AI, virtual currencies, and metaverse technologies. I have the idealism of my first decade in tech, the experience of my second decade, and the determination to put humanity over technology. Most importantly, I have a few like-minded friends figuring out how to work together, and we have room for more.

If you are looking for a lever and a place to stand, let’s talk 🙂 A ping on LinkedIn is best if you don’t already have other contact info.

Why The Next Financial Crash Will Be The Last

or, an Outline of Everything I’ve Read on Twitter in 2020 so far

  1. The most basic criticism of capitalism is that it is inexorably tied to growth.
    a. Capitalism is the most efficient way to allocate resources.
    b. Efficiency always favors scale.
    c. Scale favors inequality, because greater extraction of resources is enabled by a larger underclass, so resources are effectively allocated to create a larger and larger underclass, with an elite class almost as a byproduct.
  2. We are now at a scale where resource extraction of some form will break some kind of infrastructure required to maintain growth.
    a. The key types of infrastructure that enable large societies are: finance, energy, water, food, housing, military and policing, politics, and environment.
  3. Of infrastructure types, finance is the most fragile and environment is the most vital. So finance will likely break first, and the environment will probably break last – reserving a healthy respect for the combined odds of an unlikely explosive event in any of the other types.
  4. What we are seeing in 2020 is a large scale demonstration that money isn’t an undeniable law of the universe – it is a social convention that is strong enough to call fact, but weak enough to deny as real. In shorthand, we can label people who both see this and feel this as “radicalized.”
    a. We can call people who act on this “extremist” – while acknowledging that there are many instances where what is now considered just was first considered extreme.
  5. In the current demonstration of financial fragility, we can already see that the owners of the capitalist system will succeed in maintaining the most efficient allocation of resources to reward scale without destroying the system.
    a. The underclass will receive the minimum concessions required to continue to reach greater scale for continued extraction of resources.
  6. Scale is now at a level that enables the largest and fastest dissemination of information in all of recorded history. The percentage of radicalized people is a minority, but it is larger both in size and in proportion than it has ever been.
    a. There are many very cool things about the dissemination of information and associated technologies, but these are a side show.
  7. Since information feeds radicalization, there is no way to stop the growth of radicalization other than increasing authoritarianism, which is the only way to decrease the flow of information at this point.
  8. Capitalism will therefore allocate resources to authoritarianism because that maximizes the scale of the underclass required to extract maximum resources.
    a. The underclass will receive only the amount of goods and services required for them to accept the devil’s bargain of surviving for further exploitation under authoritarianism.
  9. The size of the radicalized population is large enough to foretell a kind of civil war in the United States. Capitalism is efficient enough to allocate resources to preventing this war from becoming one of blood, though with a “blood and soil” culture as a potential byproduct of the authoritarianism required to slow radicalization.
  10. The most peaceful outcome to hope for is a slow balkanization of the United States. It’s hard to see that trajectory ending in anything other than states not united – not culturally, politically, economically, or legally.
  11. Hopefully a bloodless war is coming first, even if a bloody war might follow later. Since the financial system is the most fragile of infrastructures required to support capitalism, it should not be surprising to see it collapse first.
  12. The fragility of our financial system has now been demonstrated in periodic shocks going back almost five decades.
    a. This started with the initial petrodollar shock of the ’70s – finance is intertwined with energy extraction, which of course drives environmental exploitation.
  13. The observable cracks in the financial system are so large now that it’s hard to believe that an even larger financial system could possibly survive the next crash, which would be due to come in about another decade.
  14. The real limit is not the size of the cracks but that the efficient allocation of resources required for the masses to accept authoritarianism is becoming increasingly indistinguishable from what authoritarians want people to call socialism.
    a. In other words, the next blowup will only be repairable by allocating even more resources to the underclass, which increases the ability of the underclass to communicate and understand radicalism.
  15. The capitalist system will stop short of allowing socialism to end capitalism. As a last gasp, it will empower authoritarians who would kill people if helpful to maintain capitalism.
    a. These people would be largely but not exclusively radicalized. There will be plenty of collateral damage.
    b. These people would be almost exclusively underclass.
  16. The peaceful Hail Mary to hope for is the rapid advancement of technology that would replace human labor with robots and artificial intelligence.
    a. The excess humans would be placated with limitless entertainment, legalized drugs, and universal basic income.
  17. Despite our best hopes, the most likely outcome is that the next financial shock will be the last, either through failure to scale or a war that necessarily includes the destruction of the financial system, which can theoretically be rebuilt under an authoritarian regime. 
  18. There are really funny jokes about each and every one of the points and subpoints above.
    a. The ones about the subpoints are the funniest ones.
    b. A plurality of the jokes are about sex, which means they are the most obvious ones, but doesn’t mean they’re not the funniest ones.

FAQ

WTF is this?

A few months ago, I decided to radically increase my consumption of Twitter. Reading a constant stream of information, I never really took the time to try to assemble everything I’ve learned into a coherent narrative. Now that we are in Covid-19 shelter-in-place, the combination of a huge amount of free time and a rapid amplification and culmination of every message I’ve read previously compels me to write an outline of everything I’ve read recently.

Do you want to argue about this?

No. This is not an argument. This is an outline of everything I’ve absorbed on Twitter in 2020, in the order of a coherent narrative. I’m aware that there are arguments against every single point. I did not include any of them.

I’m aware that my sources are biased, both in my selection and in their content – that is how Twitter works. I’m aware that there are many omissions. I’m aware that some terminology is clumsy, or confusing, or potentially offensive. I’m aware that there are missing perspectives, and there’s a glaring lack of data or even citation. The lack of citation may seem particularly galling to many people. I’m not interested in arguing about any of this.

Are you aware that you’ve made an error? If I explain it to you, will you fix it?

No – if I were aware of an error, I wouldn’t have made it. If you try to explain something to me, I might listen, but I probably won’t go back and “fix” this outline because there is nothing to fix – it’s an accurate outline of what I’ve read on Twitter. Perhaps if there is another pandemic and that gives me free time instead of killing me, I will include your explanation in another outline. I don’t think I’ll want to do this again during this pandemic. I should probably get off of Twitter.

Are you aware that people smarter than you disagree with you?

Yes.

Do you think you’re original?

There is nothing original here. It is an outline of what I’ve read, which means that someone else said it.

Don’t you think you’re missing something?

No. This is an outline. By definition, it excludes the vast majority of content. It also excludes a huge number of relevant historical events, fascinating theories, and all sources not mentioned frequently in my Twitter feed, which necessarily favors recent living memory.

Whatever it is you think I’m missing: If I’m aware of it, I left it out on purpose. If I’m unaware of it, then I didn’t find out about it or didn’t remember it for this outline anyway.

You don’t even mention the pandemic – don’t you think it’s relevant?

The pandemic is a proximate cause of many points in the outline, and gave me time to write the outline, but is not intrinsically important to the points of the outline. The exact same outline could have occurred in the event of an alien attack, assuming the aliens were defeated. In theory either the pandemic or the aliens would end up making this outline irrelevant, but I did not find that theory interesting enough to outline.

Whoa, you mean like how Adrian Veidt fabricated a fake alien octopus to attack the Earth in the hopes of uniting everyone in peace? Do you think that Elon Musk is like Veidt? Do you think the Chinese regime fabricated the novel coronavirus as a subtle act of war, or even as a devious act of peace, à la Veidt?

I too loved Watchmen. Sorta, he wishes, and no.

What do you think about this technical solution?

This is an outline of what I’ve read on Twitter; it’s not a problem-solving session. In any case, all relevant technical solutions are covered in 6.a., 16, and some of 16.a.

Aren’t 5.a and 8.a the same?

No. 5.a. refers to monetary concessions to the underclass in an attempt to repair a financial shock. Such a repair attempt was just passed by the U.S. Senate, and is a current example of point 5. 8.a. refers to goods and services produced by a capitalist system for the underclass, which even under the authoritarianism noted in point 8, can be relatively comfortable. That is why it’s called a “devil’s bargain.” 16.a. is the best case outcome of the devil’s bargain.

Why are there main points and subpoints?

It just seemed to me that some points were not truly necessary to a coherent narrative, but very helpful to understanding a related point. I put these in as subpoints but maybe could have made them main points or probably could have left them out entirely. 

Also, as mentioned in the outline, subpoints tend to inspire the best humor.

Your jokes aren’t funny. Also, there are many things I don’t like about you. I demand that you explain or prove anything you’ve written. I challenge you. You are worthy of neither attention nor admiration, only my unending scorn.

That is not a question. This is a FAQ, which means Frequently Asked Questions.

Do you believe anything you’re saying here? Does this outline align in any way with your personal beliefs or political positions?

Yes, some of it and somewhat.

So then is this your manifesto?

No. This is an outline of everything I’ve read on Twitter recently, assembled in the order of one coherent narrative. This is my manifesto.

Again, WTF is this?

Again: this is March 2020 and we’re under a shelter-in-place order. There’s not a lot to do.

from Linden to Libra

Join me, friends, in the Wayback Machine …

In 2007, Facebook sent a couple of strategists to Linden Lab to ask us about virtual currency. Of course they would ask us – at the time, we were the world’s leading experts in managing a virtual economy, heading towards a billion dollars of L$ transactions. Yes, that’s a billion real US dollars – unique among all virtual currencies at the time, we supported the exchange of L$ to real US$, so our virtual currency had real world value.

When we heard that they wanted to meet, my colleagues huddled in a room to decide how much we should tell them. We decided to emphasize the difficulties of managing a virtual currency: complexity of implementation, responsibility for users’ financial transactions, intrusive governmental inquiry and oversight, competitive dynamics with banks and payment partners. We went into the meeting and told them this story about how terrible it all was, and how they’d be better off simply issuing credits paid for with real money.

We never heard from them again, but in 2010 they launched Facebook Credits. I laughed at the thought that it seemed our little misdirection had worked – they went down a path that was entirely uninteresting and ultimately untenable, just as we’d hoped. Yeah, I know: that was kinda evil. But at the time, I was just a little evil, trying to stay ahead of bigger evils.

Why didn’t we want Facebook to work on virtual currency? Because I believed that the Linden Dollar was the greatest innovation created by the Lab. Sure, the 3D virtual world was mind-bending – all the avatars and the world building and the art and the boob physics – but for me, the virtual currency was the one element of Second Life that had the opportunity to break out of SL and into prominence in the whole wide world. Facebook had only 50 million users in 2007, and I didn’t want them to get their virtual currency right, so early in the game.

Well, it’s a dozen years later, and blockchain inspired a Facebook exec to figure it out. Facebook has launched Libra, a new cryptocurrency. It is a brilliant implementation: meticulously researched, expertly engineered, broadly partnered, poised for global domination. There are only two problems: it’s too late, and they’re doing it wrong.

The right time for Facebook to launch a virtual currency would have been, oh, around 2007. That’s right: I’m saying you can thank me and Chris Collins for talking them out of it at the time. As I’ve written previously, a cryptocurrency can only succeed as a medium of exchange if it is a core currency of a powerful platform. Don’t even get me started on Bitcoin. What I didn’t call out in those posts is that the platform must implement currency strategy early in its growth. This is because when you are messing around with payments, you are in a field of giants – global banks and entire nations that have a vested interest in preventing your success. You have to implement your new currency while your platform is still small enough to ignore, or at least dismiss as “merely a game.” Then when you reach enormous scale, it’s too late to do anything about the economy that’s been baked in since the early days.

When a platform already has billions of people, it’s not going to fly under the radar. Facebook is already seeing immediate regulatory interest in Libra. Even with fewer than a million users, Second Life had to deal with aggressive regulatory interest from Congress and international bodies. I like to think that we talked our way out of it with my silver tongue, but the truth is that we were too small for sustained inquiry. Facebook is far, far, far past that point. Libra will be hounded by regulators until the cost outweighs the benefits.

The part that Libra has wrong is its reserve policy. This is getting into the weeds of managing virtual currency, but to vastly oversimplify: the reserve is a guarantee of currency redemption. If you buy Libra with real currency, you can sell it back to the Libra consortium for a relatively stable amount of real currency. Libra has launched this way in the hopes that a stable currency value will engender trust. The amusing mistake here is that only in the insular world of technocracy could someone believe that Facebook has consumer trust problems that can be cured by a stable rate of exchange on their cryptocurrency. The more serious mistake is that requiring a full reserve limits the utility of the currency.

All major world currencies are fiat currencies, which means that they can be issued at the will of the governing authority. They are not backed by gold or any other asset – though nearly all of them started out backed by a guarantee of redemption in gold. But there is a reason that all of them have moved off of the gold standard: fiat provides the maximum flexibility to manage the currency and its related economy. While it’s true that fiat currencies are more susceptible to hyperinflation, that is only a consequence of bad management. If the manager (i.e. the government, or in this case, Facebook) can be trusted to make good economic decisions, inflation is a limited risk.
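To make the reserve distinction concrete, here is a minimal sketch in Python – purely illustrative, with invented names, not Libra’s actual design – contrasting a full-reserve issuer, which can only mint tokens against deposits and must always honor redemption, with a fiat issuer, which can expand supply at will:

```python
class FullReserveIssuer:
    """Every token in circulation is backed 1:1 by real currency held in reserve."""

    def __init__(self):
        self.reserve = 0.0  # real currency held
        self.supply = 0.0   # tokens in circulation

    def buy(self, dollars):
        # Tokens can only be minted against an equal deposit.
        self.reserve += dollars
        self.supply += dollars
        return dollars  # tokens issued at a stable 1:1 rate

    def redeem(self, tokens):
        # Redemption is always honored: reserve >= supply by construction.
        assert tokens <= self.supply
        self.reserve -= tokens
        self.supply -= tokens
        return tokens  # real currency returned


class FiatIssuer:
    """Tokens are issued at the will of the managing authority, with no backing asset."""

    def __init__(self):
        self.supply = 0.0

    def mint(self, tokens):
        # Full flexibility to expand (or contract) supply to manage the economy;
        # under bad management, this is the road to hyperinflation.
        self.supply += tokens


issuer = FullReserveIssuer()
issuer.buy(100.0)    # 100 tokens issued, $100 now in reserve
issuer.redeem(40.0)  # always honored: $40 back; reserve and supply drop to 60
```

The full-reserve issuer’s supply is mechanically chained to its reserve; it can never expand the money supply to manage its economy, and that flexibility is exactly what fiat provides.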

Perhaps Facebook is aware of all this, and their plan is to launch with a full reserve, but later evolve into a fiat currency, after some history has demonstrated their trustworthy stewardship. After all, this is actually how all the major world currencies developed: first on the gold standard, then eventually declaring a switch to fiat currency. So if the launch with reserve is a bit of knowing subterfuge, kudos to them.

At this point, I could launch into an extended discussion about the relationship between virtual currencies and MMT (Modern Monetary Theory). But I’ll leave that exercise for another day. In the meantime, for Linden historians who have stayed with me this long through the discussion, I’ll give you a little blast from the past: a record of posts from Linden Lab as we decided how to think about our currency, and whether to implement fiat sales of L$ into existing exchanges. Enjoy!

The Morality of Ads and The End of Zuck

The “An Open Letter To …” format has always struck me as inescapably self-aggrandizing in a particularly duplicitous way. The explicit presumption is that the addressee will actually read the letter and care about the advice and admonitions within, when in fact the entire exercise is so transparently a cri de coeur that serves only the writer’s need for attention.

Nevertheless, I have to admit that I’m writing this post for one person, and one person only. If I could send this to him directly and be sure that he would take it seriously, I would simply send it to him. If there were no chance of him ever reading this, I wouldn’t bother writing it. However, Facebook tells me that my social distance to Mark Zuckerberg is quite short, so it’s possible that someone that Mark takes seriously will send this to him. I feel compelled to write this silly letter in this annoying format, because the future of the free world is at stake.

Dear Mark,

At this point, I hope you are past the point of denying that you are in fact The Leader of the Free World. This honorary title has traditionally (in our myopic nation) gone to the President of the United States, but the current occupant of the White House explicitly denies this “globalist” worldview, and implicitly disqualifies himself with his statements and actions. If there is such a thing as a leader of the free world, you’re it. Sorry.

Surely you already know what I’m going to write about here, but you don’t know why you should listen to me, so let me start with that. I am the only person in the entire world who (a) has faced a problem of the kind and magnitude of the one you face today, (b) has hands-on experience in implementing solutions to this problem, and (c) is willing to tell you all about it.

In 2010, I was hired to lead Product Management for Ads Policy at Google. This was an odd role: Policy isn’t thought of as a Product problem; it seems more like something that might be addressed by legal or operational or PR functions. But Google recognized that they had a serious problem, and felt that a product approach to this problem was required, in addition to all the other approaches.

By the way, the existence of this problem at Google was partially albeit indirectly your fault. Google had historically implemented Ads Policy through sales ops, which was led by Sheryl. You lured her away at a critical time, when Google was reaching yet another level of scale and impact, and the leadership vacuum in sales ops resulted in many small cracks in an implicit system of rivers and dams of policy issues. It was inevitable that one of these cracks would burst a dam somewhere, which is a pleasingly vague way of glossing over the numerous ads policy problems that led to the DOJ imposing a $500 million fine on Google. As you might imagine, a half-billion dollar fine tends to sharpen one’s attention.

So I had a Facebook-scale problem … but bigger. Facebook is arguably more important now, but Google still has more of everything: more users, more data, more dollars, more decisions. Billions of users, trillions of ads, the tiniest fractions of a second to make decisions: how do you decide what ads NOT to show? The clueless commentariat think it’s easy, but I know what it really takes.

I also know there is almost no margin for error. You can get it right 99.999% of the time, but for every billion results per day, that means you got ten thousand wrong that day. Not a lot of businesses can survive getting ten thousand decisions wrong every day. Each one of those errors is not only potentially ruinous, but each one can seem almost impossible to debug. When something gets through all of your best efforts, how do you know what went wrong?
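That arithmetic is worth seeing on one line. A quick illustrative calculation in Python (the round numbers are assumptions for the example, not Facebook’s actual volumes):

```python
decisions_per_day = 1_000_000_000  # one billion ad decisions per day
accuracy = 0.99999                 # right 99.999% of the time

errors_per_day = decisions_per_day * (1 - accuracy)
print(f"{errors_per_day:,.0f} wrong decisions per day")  # -> 10,000
```

And each of those ten thousand errors is a one-in-a-hundred-thousand event, which is part of what makes any single failure so hard to reproduce and debug.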

So yeah, I think I understand your problem. Here’s my advice …

Question Your Attitude

Obviously, I don’t know what your attitude is, I can only make assumptions from your public statements, and I understand that there are many legitimate reasons why we must make public statements that don’t reveal our true attitudes.

So at the risk of making obnoxious assumptions, your attitude towards this problem can be summed up as: “Well, it’s very hard. I’m uncomfortable making these decisions.”

Having had the same problem, I can say that it wasn’t any harder than other hard problems. I mean, of course it was a challenge, but I’m not sure it was any more challenging than dozens of other initiatives at Google. I don’t mean that we solved it perfectly, clearly there are still challenges, but addressing these problems is just another part of the business, not some special, impossible area.

I understand why you dream of a dynamic system that reflects different values for different communities, but that is an abdication of responsibility. I also understand the enormous business advantage in claiming that Facebook is just a “neutral platform.” I happen to think that it’s high time that all tech companies stop advancing the fictions that allow them to continue to benefit from the legal sacred cow that feeds tech, but it’s not necessary for you to admit that publicly or privately. You just have to understand that you really have a business problem and you have to address it with a straightforward business attitude.

Your business is ads. The funny thing is, lots of people hate ads, and ad businesses justify ads to users by saying that ads fund the great experiences that users get for free. But it’s so much more than that: Ads are the conduit for the only morality that exists when we cling to the idea that we run neutral platforms.

You can blame “the algorithm” for a lot of things that you claim weren’t the result of human judgment. The Algorithm – the holy algorithm, the all-powerful, the unknowable – sure, you’ll fool the people who don’t actually understand computing. But even if you continue this claim into the ads business, you cannot escape the pressures that ultimately impose a kind of morality through the ads business.

Ads have advertisers, and the truly important advertisers care about their reputations. They have limited tolerance for being on a platform that hurts those reputations. That tolerance is limited by the fact that their customers are actual people, and almost all of those people have some sense of morality. So even though we may have amoral (i.e. “neutral”) algorithms, even if advertisers themselves might be amoral, ultimately the common morality of people flows up through the advertisers, and through our ads systems, and finally imposes a sense of morality on the people who run the most powerful ads businesses. It is this slow flow of morality that has finally become a deluge upon you.

It’s not that hard to understand the downstream impacts of your business, and get ahead of the trickle of backwash before it becomes a deluge. The problem here isn’t about being a neutral platform, it’s not about avoiding the content business with its obligations and regulatory attention. It’s about understanding the cycle of users, advertisers and apps in the world’s most powerful ads business – that’s you now, apologies to my Google friends – and protecting each properly so that you are limiting the appearance and impact of bad ads.

I realize that the unwashed masses think that ads are evil. Only people who don’t understand business and don’t understand ads think that a powerful platform would knowingly sacrifice user interests for short-term revenue gains. Advertisers flee platforms that treat their users poorly. “Focus on the user and all else will follow” is a business mantra, not a moral mantra.

You struggle publicly like this is some kind of impossible problem. For that struggle, I can only play you the world’s smallest violin. You have a business problem, and it’s your business and therefore your problem.

Invest In People First

I had a medium-sized team at Google. Eight product managers working with over a hundred engineers, closely partnered with several hundred internal operations people and several thousand contract operations people. Yeah, I understand that most of the world looks at that and says “This is medium??” But as you know, that’s merely a sub-team when you’re talking about a critical function in a (then) $40 billion business.

How big is the policy team at Facebook, Mark?

All those people worked together to produce thoughtful policies, powerful computing systems, and vigilant human operations, working closely in a virtuous cycle. I could detail all of what we did, but you are better off just giving your own people in this area many more people.

Yes, I know AI can make this a lot more efficient than it was in Ye Olde 2010. I still don’t believe that AI is sufficiently advanced to get where you need to be without many many humans, though I’m no expert in AI. More importantly, I don’t think that the type of expert who can make that assessment is the type of person who should be deciding how many humans to put on this problem.

Here’s the part that will look like bragging, but I’ll take that risk. I want you to know what it takes to manage ads policy products, so I have to talk about myself. I studied political philosophy and law, under the great conservative theorist Robert P. George as well as the liberal giant Ronald Dworkin. I learned economics from Alan Blinder. I started my career in high finance law, working on leveraged buyouts for Mitt Romney, before I chased Silicon Valley dreams, first coming to Craig Johnson‘s firm, then going into venture capital and eventually working for “the Willy Wonka of virtual reality,” Philip Rosedale.

My point isn’t that I’m so great. I’ve done a lot of things, but I was mediocre or worse at many of them – a C grade in macroeconomics! My point is that this isn’t a job for just programmers, or philosophers, or economists – it’s highly multidisciplinary. Now that you know the template, it will take you less than a second to find the thousands of people who are basically just like me (except with higher grades). It’s not hard to put together a team to go after this particular kind of problem, but you have to know what you’re looking for.

You need to truly empower the people working on this problem. I wasn’t particularly powerful by title. And yet, when I told one exec that he was letting me down, he stepped up. When I told another SVP he was getting in my way, he pulled back. When I told yet another exec he had to give up sales, he gave it up. When I showed the GC we had a fire, he brought the fire trucks. When I told another leader I needed her help, she became my greatest ally. None of this was because I was great or powerful – I doubt any of these highly distinguished people remember my name. And yet they always cooperated with me, because they knew that when I got in their faces, they weren’t talking to me; they were talking to the leadership behind me. And there was never any question that my leadership would back me.

I wonder if policy leaders at Facebook feel that way? I wonder if they can go around to literally anyone at the company, insist on doing what is good and what is right for the business, and act with complete confidence that everyone will cooperate, all the way to the very top?

Assess Your Leadership

Let’s take the gloves off, shall we? You have built a company that has played a great part in letting a foreign influence endanger the integrity of our democracy. Have you even yet truly internalized the failure of leadership for which you bear complete responsibility? I mean to ask this clinically, not as an attack on your ego, character or capabilities. Do you have a complete grasp of how you have failed as a leader, and do you truly want to institute the change in yourself and in your company that would be required to make amends?

It’s really not a terrible thing if you understand the challenge and don’t think it’s yours at this point. Lots of people believe that you could be the actual President, not just the holder of the mythical “Leader of the Free World” title. Maybe you should make your impact on the world from the White House rather than Menlo Park. Given the current state of affairs, I would happily vote for a Sandberg/Zuckerberg ticket. Maybe it’s time to elevate yourself to the board Chair at Facebook, and focus on preparing for your campaign.

You’d have almost any option in the world to fill the CEO role at Facebook. I certainly don’t know everyone, but I can tell you who I know is great, because they were great with exactly the same problem at Google. Oh, I guess that this part of the open letter is addressed to them –

Nick, Susan, Sridhar: you guys don’t get enough credit for handling Google’s problems way before they could turn into the problems that Facebook has now. You would be the first to admit that of course Google still has problems, but we know they would be a lot worse without your leadership. 

– back to Mark, in closing – You probably can’t get Susan or Sridhar out of there. Why would they want the headache? You could probably get Nick, if you were serious about giving him true leadership authority to fix your problem.

If you still intend to fix Facebook yourself, I sincerely wish you luck. You’re going to have to change: the “Zuck” who created Facebook is not the person who can fix it. I haven’t seen you doing the things that I know would work, and it truly worries me. The future of the free world depends on your success.

ch-ch-ch-changes

This has been a watershed week for sexism and Silicon Valley. The New York Times published a searing article implicating well-known VCs in harassing behavior. It feels like the culmination of a years-long effort spearheaded by Sarah Lacy, whose relentless reporting helped lead to the resignation of the CEO of the most valuable private company in tech as well as the dismantling of a VC firm.

For men in tech, it’s been a good week to reflect on the injustices done to women, to think about the women in these stories and the women in our own lives. A focus on the women’s perspectives is clearly the most necessary, just and safest line of introspection. This post is not for people who haven’t undertaken that line of thought. This post is about the men.

Chris Sacca and Dave McClure are two of the men highlighted (lowlighted?) in the Times. Each responded with a well-written admission of guilt. Sacca said “I am sorry” five times in a single post. McClure admitted “I’m a creep.” I’ve seen two kinds of responses to these mea culpas:

Group 1: “This is a transparent PR move. These guys are only interested in saving their own skins. They don’t deserve praise for coming clean after being exposed, and the actions they’ve taken in their ‘woke’ stage will never be enough to clean their record. People don’t change, they are what they did.”

Group 2: “Kudos to these guys for coming clean. It takes some bravery to face the crowd, to admit what you did, to make a public statement about your efforts to do better. Everyone makes mistakes, it’s the rare few who can improve upon their past. People can change, there’s no hope for any of us if that’s not true.”

Group 1 is right … and so is Group 2. The day I write my admission of guilt, even if only to myself, it will be driven by this truth: You can’t change who you are, you can only change your reaction to it.

You are what you’ve done, full stop. You might think that there’s more to it, that your own private thoughts count for something, that the high opinion of your loving friends and family mean something, that the dollars and ratings and likes and tweets show the true score. But no. You are what you’ve done, that’s it. And you can’t change what you’ve already done.

Everyone has done bad things. When we do bad things, we often want to believe that they’re not so bad, that they’re not consistent with our “true” character, that we somehow can make up for it in other ways. This kind of self-denial, of course, allows us to continue doing bad things. I’d argue further that this self-denial leaves us with little choice other than to continue doing bad things.

Being a good person is about choice, for most of us. If you are someone who has just always been a good person, who’s never done wrong, who’s always been on the side of the angels – well, I think you’ve probably just been lucky in this regard, if unlucky in others. You had good parents, good friends, good influences. You’ve never been tempted by sex or power or money or fame. But you’ve lived a life outside of the more typical human condition.

Once you’ve done something bad, your options typically diminish: you can only feel guilt and shame, or denial. You would think that a “good” person would choose guilt and shame – but that’s just as dangerous as denial! Guilt and shame lead to self-flagellation, often self-medication, and ultimately to an amplification and repetition of the behaviors that led to the bad actions.

It may seem perverse, but accepting your faults gives you more options for how to react in any situation. If you can accept what you’ve done, accept that it’s who you are, you are more free to choose how to react to it. You don’t have to choose the cover-up, you don’t have to choose to deny it, you don’t have to choose to ignore it. You are much more free to address it, and to make a different choice in the future.

I think that’s what Sacca and McClure are doing in their posts; they are publicly accepting who they are, and trying to make choices in the harsh light of that reality. Is it self-interested? Yes. Is it brave? Yes. I know that some people reading this are going to think I’m going all Stuart Smalley, and I get it. That’s their choice. You can’t change who you are.

the force awakens

Yep, it’s an end-of-the-year technology prediction post …

We’re at a special place in the consumer technology cycle. I’ve seen this movie before. Consumer technology trends are often described as waves, but I like a movie metaphor better, because it captures the notion that I actually saw these events when they were first released in the theater, and that we keep seeing the same plot points, themes and character types. I’ve lived through three really big waves of consumer technology. The third wave – the third movie – is finally coming to an end, which is a relief, because it kinda sucked. I’m really looking forward to the next show.

I’m a fan of the franchise generally, despite the repetitive plots. Each movie starts with the introduction of products that clearly show the possibility of what’s to come, although these are not the products that actually survive the revolution. Those products depend on a crucial underlying technology trend, which is not itself the consumer-facing technology. There is a spectacular platform war that decides the big winners and losers. The story ends, until next time, when the business patterns in the field have matured, and outsized returns for investing in those businesses have therefore disappeared.

The Origin Story: Personal Computers

[image: Pirates of Silicon Valley]

Like the first movie in a series, this one defined many of the patterns, tropes and heroic character types of the sequels to come. In a digital desert, a lone gunslinger appeared on the horizon, known only by the mysterious name Altair. The story really picks up when the Commodore PET, the TRS-80, and the Apple II appear on the scene. That trio of bandits opened up the Wild West, only to be dominated by the strongman IBM PC. But IBM only won a hollow victory, as it turned out that they’d unwittingly given the keys to the kingdom to Microsoft, the ambitious vassal that became the overlord. The story of the rise of the PC is the classic foundation of everything that came after in consumer technology.

But it would be a mistake to only pay attention to the foreground. In the backstory, the silicon chip is the key enabling technology that’s powering the other players. Moore’s Law is the inexorable force of progress, and Intel was the master who kept on top of the industry despite laudable challenges by AMD, Motorola, Texas Instruments, and a host of international competitors. This global tale of intrigue and ambition is a worthy accompaniment to the marquee narrative. In fact, the invention of Silicon Valley can be considered the prequel to this series.

The Worthy Sequel: World Wide Web

[image: The Matrix]

Many people say The Empire Strikes Back was a better movie than Star Wars. The Godfather was in many ways outclassed by Part II. The explosive success of the World Wide Web was at the very least a worthy sequel to the PC story. A knight in shining armor, Tim Berners-Lee, led a devoted band of heroes on a worthy quest to unite all of the world’s information. Early services like Prodigy and CompuServe leapt on the ensuing opportunity, but latecomer AOL won the day by sending a CD to every mailbox it could find. That was only the first act, as Netscape and Yahoo emerged as the real heroes … until the third act, when eBay and Amazon and Google trampled the field.

It’s usually not worth the effort to make a distinction between the Web and the Internet, but it makes sense to do so here because “World Wide Web” is the story with a beginning and an ending, while the technologies of the Internet are the more enduring enablers of that story. As protocols, the details of TCP/IP, DNS, HTTP and the like are not exactly gripping narrative. But just as silicon chips powered the PC revolution and could be considered the more enduring story, the Internet will live on long after the Web sinks into irrelevance.

The Failed Trilogy: Smartphones

[image: phone booth]

Return Of The Jedi was a very successful movie. And it did have some awesome special effects for the time. But it was all of the same characters, and pretty much the same plot, soiled by dominant commercial motives and treacly pandering to a younger audience. By which I mean, fuck Ewoks. And Godfather Part III? The less said about that, the better.

The story of the last dozen years or so has been the move of personal computing and the Internet to smartphones. There’s some compelling pathos in the storyline of the death of the Web, overrun by mobile apps. But it was mostly dull to watch the Treo and BlackBerry reprise the role played in prior movies by the Altair, Prodigy and CompuServe. I’ll admit it was great fan service to see the Apple character repurposed, and maybe there hasn’t been a more colorful personality than Steve Jobs, so that part of the story was pretty entertaining. You could say that the return of Jobs was as momentous as finding out about Luke’s father.

Let’s face it, it just wasn’t that exciting to watch Google and Amazon continue to grow. Facebook is a great new character as a flawed hero, and that whole subplot with Twitter and the rest of social media was a very strong B story. Other new characters like Uber and AirBnB have their minuses and pluses, but I don’t believe they’re going to be big characters in the next movie. (“Uber for X” companies are the goddamn Ewoks.) The overall experience has been like coming in to watch a huge blockbuster mega-sequel: you can really see the dollars up there on the screen, and there’s a certain amount of entertainment value that comes through, but the whole exercise just lacks the originality, joy and passion of the earlier entries.

Not a bad backstory though, and as in the other movies, this one will continue to be meaningful in all future sequels. Cloud computing, software as a service, the evolution to microservices – these things fundamentally changed the way that new businesses start and grow. They reduced the capital costs of starting a new information technology company by orders of magnitude, letting in many more characters. Unfortunately, most of those new characters are Ewoks.

The Force Awakens

So what’s the next movie going to be about? Will it reinvigorate the franchise? Or will it be a terrible prequel (or worse, prequel trilogy) that we’ll all have to agree to pretend never happened?

I think we don’t know all of the elements, but we do know some of them. Let’s first recap what we saw in the first three installments:

[chart: the main story, backstory, platform war, and pioneers and winners of the first three eras]

And here’s what I think we know about the chart today:

[chart: the same elements, filled in for the coming era]

Main Story: There is a flood of products that don’t have an agreed category name yet – Siri, Google Assistant, Amazon Alexa, Microsoft Cortana, chatbots, chatbots and more chatbots. Some industry terms that are cropping up are intelligent personal assistants, virtual assistants, conversational search. Or chatbots, fer chrissake.

The point is, you will have things in your house (your car, your pocket, etc) that you talk with, and these things will talk back to you in a way that makes sense. You’ll regard your interaction as a conversation rather than button punching or screen swiping. Until people converge on another name for all of these things, I’ll call them “conversational devices” – this captures that you have a productive back-and-forth with a physical object. Yes, you can already do something like this on your smartphone, but those implementations are only a hint of where this will go.

As early as it is, there are plenty of curmudgeons who don’t see the point. Smarter people have said we’ll never need more than five computers, no one wants a computer in their home, the Internet is a fad, the iPhone is going to be a flop. Predictions are hard. But screw it, here’s mine: within 3 years, it will be apparent that the adoption curve of conversational devices is in the same category as PCs, the Web, and smartphones.

Conversational devices will be the story of the next decade in consumer technology. Not that there won’t be other stories, it’s just that this one will be the lens through which we understand the era. I still love virtual reality, but it’s not time yet. The blockchain isn’t consumer-facing, and I don’t believe in Bitcoin. Not Internet of Things, not 3D printing, not self-driving cars, not wearable devices (unless they are also conversational devices) – some of these will be big stories, but not the biggest story of the next dozen years.

Backstory: Conversational devices rely on this chain of technologies: Machine Learning -> Natural Language Processing -> Speech Synthesis. These technologies are complex and interrelated, and rather than explain why this is their moment (the foregoing links give that explanation), I’ll just skip to the punchline: People will be able to speak to machines, machines will understand and speak back. Most people already have experience with primitive versions of these technologies, and find those experiences frustrating and unsatisfying. (“Press 9 to go back to the main menu.”) But the rate of improvement here is at an inflection point, and this is about to become undeniably apparent on a mass consumer level.
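
To make that chain concrete, here’s a toy sketch of a single conversational turn. Everything below is a placeholder – the function names are mine, and each one stands in for a trained model that doesn’t fit in a blog post:

    # One conversational turn: Machine Learning -> NLP -> Speech Synthesis.
    # Every function below is a stand-in for a trained model.

    def recognize_speech(audio):
        # Stand-in for speech recognition: audio waveform in, text out.
        return "what is the weather today"

    def understand(utterance):
        # Stand-in for natural language processing: text in, intent out.
        return {"intent": "get_weather"} if "weather" in utterance else {"intent": "unknown"}

    def respond(intent):
        # Stand-in for response generation.
        replies = {"get_weather": "It looks sunny today.",
                   "unknown": "Sorry, I didn't catch that."}
        return replies[intent["intent"]]

    def synthesize(text):
        # Stand-in for speech synthesis: text in, audio out.
        return text.encode("utf-8")

    audio_in = b"(microphone input)"
    spoken_reply = synthesize(respond(understand(recognize_speech(audio_in))))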

Platform War: The most successful conversational devices will be on a common platform of delivery. Amazon Echo and Google Home are devices that sit in your home and listen to everything you say, and respond back to help you. Facebook Messenger has bots that will have a conversation with you. Each of these companies is currently displaying only the limited strengths available in its existing business (Amazon: shopping, Google: search, Facebook: brands), but they are all trying to expand to become a delivery platform for third-party conversational devices. Amazon and Facebook already offer developer platforms, while Google is focusing on partnerships.

This platform war will have elements of past wars, in hardware vs software, apps vs operating system, open vs closed. That complexity makes it very interesting, but remember, this is theme rather than story. The platform war is the Empire vs the Rebellion, the Mob vs America, it’s the thematic texture that gives the story meaning. You shouldn’t mistake it for the main narrative though. In Mac vs PC, Microsoft won, not Apple or IBM. In open vs closed web, Google won, not Tim Berners-Lee or AOL. Ok, the winners in iOS vs Android were also the platform owners, but that’s yet another reason that movie sucked, maybe it’s the fundamental reason that movie sucked. I hope everyone involved is smart enough not to let that happen again.

Pioneers and Winners: We are far enough into the story that we can guess at pioneers, but we can’t be sure until the extinction event happens: in all previous movies, the early pioneers proved the market, and then died, crushed by an onslaught that included the eventual winners. I’m convinced that this plot point will repeat in the new movie. Look in the chatbot space for potential pioneers – it’s certain that one of these will become historically important. And then it will die.

I’m hoping the platform war victors aren’t also the heroic winners of the main story, as happened in the smartphone movie, because it’s boring and tends to result in Ewoks. Facebook is the pivotal character to watch, as it has a platform opportunity with Messenger, but has huge weaknesses relative to Google, Amazon, Apple and even Microsoft in hardware production and delivery, and hardware will be key to platform ownership. So it will be interesting to watch whether Facebook dives into hardware, or partners with one or more of the other platform players, in the hopes that there’s a bigger opportunity in the main story than the theme.

Well, that’s all I have to say about that. Enjoy the show!

WWGD?

Six months ago, I said that Trump would win the election in part because the rise of new media destroyed the historic function of the media as our Fourth Estate. I was upset that product managers at our most important Internet companies seemed to refuse to own the problem that is so clearly theirs.

Now that the chickens have come home to roost in a big orange nest of hair, others are saying that the election was, in a sense, rigged by Facebook. They say fake news has defeated Facebook. Facebook denies responsibility, while people are literally begging them to address the problem.

Product managers at Facebook are surely listening now. If any happen to be listening here, let me say: I’m sorry I called you cowards. I realize that today’s state was hard to foresee, and that the connection to your product even still seems tenuous. I am awed at the great product you’ve built, and I understand that no one knows the data better than you do, and that it is tough to take criticism that comes from sources completely ignorant of your key metrics. It’s not easy to regard something so successful as having deep flaws that are hurting many people. I think it is a very human choice to ignore the criticism, and continue to develop the product on the same principles that you have in the past, with the same goals.

I have faith that you are taking at least some of the criticism to heart. I imagine that you know that you can apply machine learning to identify more truthful content. I am sure that you will experiment with labels that identify fact-checked content, as Google News is doing. Once you reliably separate facts from fiction, I’m sure you’ll do great things with it.
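
The labeling half of that is the easy part, mechanically speaking. Here’s a minimal sketch, assuming a hypothetical store of claims that human fact-checkers have already ruled on; matching posts against known claims is trivial, while deciding truth in the first place remains the hard, open problem:

    # A minimal sketch of fact-check labeling. FACT_CHECKED is a
    # hypothetical, hand-curated store of claims and verdicts; a real
    # system would need fuzzy matching and vastly better coverage.

    FACT_CHECKED = {
        "the moon landing was faked": "False",
        "water boils at a lower temperature at altitude": "True",
    }

    def label(post):
        # Attach a verdict if the post repeats an already-checked claim.
        for claim, verdict in FACT_CHECKED.items():
            if claim in post.lower():
                return "Fact-check: " + verdict
        return None  # unchecked content gets no label

    print(label("Wake up: The moon landing was faked!"))  # Fact-check: False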

I’m still concerned that facts aren’t enough. I think we’re in a post-fact politics, where people no longer (if they ever did) make their political choices based on facts. I have read many analyses of the election results, many theories about why people voted as they did. There are many fingers pointing blame at the DNC and the Electoral College; at racism, sexism, bigotry; at high finance, globalism, neoliberalism; at wealth inequality, the hollowing out of the middle class, the desperation that comes with loss of privilege. I am not convinced that giving people more correct facts actually will address any of this.

The most incisive theory that I’ve seen about today’s voters says that the divide in our country isn’t about racism or class alone, but about a more comprehensive tribalism, for which facts are irrelevant:

There is definitely some misinformation, some misunderstandings. But we all do that thing of encountering information and interpreting it in a way that supports our own predispositions. Recent studies in political science have shown that it’s actually those of us who think of ourselves as the most politically sophisticated, the most educated, who do it more than others.

So I really resist this characterization of Trump supporters as ignorant.

There’s just more and more of a recognition that politics for people is not — and this is going to sound awful, but — it’s not about facts and policies. It’s so much about identities, people forming ideas about the kind of person they are and the kind of people others are. Who am I for, and who am I against?

Policy is part of that, but policy is not the driver of these judgments. There are assessments of, is this someone like me? Is this someone who gets someone like me?

Under this theory, what is needed isn’t more facts, but more empathy. I have no doubt that Facebook can spread more facts, but I don’t think it will help. The great question for Facebook product managers is, Can this product spread more empathy?

The rest of this might be a little abstruse, but here I’m speaking directly to product managers of Facebook News Feed, who know exactly what I mean. You have an amazing opportunity to apply deep learning to this question. One problem is that the feedback loop is long, so it will be difficult to retrain the production model toward empathetic behavior, but I think you can still try to do something. There is some interesting academic research about short-term empathy training that can provide some food for thought.

I am convinced that you, and only you, have the data to tackle this problem. It is all but certain that some Facebook users have become more empathetic over the last five years. It is likely that you can develop a model of these users, recreate the signals they experienced, and see whether those signals foster empathy in other users. I don’t think I need to lay it out for you, but the process looks something like this (a rough sketch in code follows the list):

  1. Interview 1000 users who have been on Facebook for five years to identify which ones have gained empathy over that period, which have lost empathy, and which are unchanged.
  2. Provide those three user cohorts to your machine learning system to develop three models of user behavior: Empathy Gaining, Empathy Losing, Empathy Neutral.
  3. Use each of those three models to identify 1000 more users in each of those categories. Interview those 3000 people, and feed their profiles back into the system as training data.
  4. See if the models have improved by again using them to identify 1000 more users in each category.
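
Since I promised a sketch: here’s roughly what that loop looks like in code, under loud assumptions. The behavior features are invented, the interviews are faked with random labels so that the sketch actually runs, and scikit-learn stands in for whatever your real training infrastructure is:

    # A hypothetical sketch of the cohort loop above. Features and labels
    # are synthetic; interview() is where human raters would actually
    # assess users for empathy gained or lost.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    GAINING, LOSING, NEUTRAL = 0, 1, 2
    rng = np.random.default_rng(0)

    def behavior_features(users):
        # Invented per-user behavior signals (sharing, commenting, dwell time).
        return rng.random((len(users), 16))

    def interview(users):
        # Step 1: human raters assign each user a cohort label.
        # Faked with random labels here so the sketch runs end to end.
        return rng.integers(0, 3, size=len(users))

    # Steps 1-2: interview 1000 long-time users, train a three-way model.
    seed_users = list(range(1000))
    X, y = behavior_features(seed_users), interview(seed_users)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Steps 3-4: let the model nominate more users, interview them,
    # measure agreement, then fold them back in as training data.
    candidates = list(range(1000, 4000))
    X_new, y_new = behavior_features(candidates), interview(candidates)
    agreement = (model.predict(X_new) == y_new).mean()
    print("model/interview agreement: {:.0%}".format(agreement))
    model = LogisticRegression(max_iter=1000).fit(
        np.vstack([X, X_new]), np.concatenate([y, y_new]))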

At this point (or maybe after a few more cycles), you will know whether Facebook has a model of Empathy Gaining user behavior. If it turns out that you do have a successful model, of course the next thing to do would be to expose Empathy Losing and Empathy Neutral users to the signals that the Empathy Gaining cohort experienced and the other two cohorts did not.

At this point, though, you are in a place where the regression cycle is very long. Is it too long? Only you will know. How amazing would it be to find out that there’s a model of short-term empathy training that is only a week or two long? People use Facebook for hours a day, far more time than they would ever spend in empathy training classes. This seems to me to be an amazing opportunity. Why wouldn’t you try to find out whether there’s something to this theory?

One reason might be a risk to revenue models. Here I’d encourage you to see what Matt Cutts said to Tim O’Reilly about Google’s decision to reduce the prominence of content farms in search results, even though that meant losing revenue:

Google took a big enough revenue hit via some partners that Google actually needed to disclose Panda as a material impact on an earnings call. But I believe it was the right decision to launch Panda, both for the long-term trust of our users and for a better ecosystem for publishers.

I understand this mindset personally because I was there too. At the same time Matt was dealing with Google’s organic search results, I was dealing with bad actors in Google’s ads systems. So I was even more directly in the business of losing revenue – every time we found bad ads, Google lost money. Nevertheless, we had the support of the entire organization in reducing bad ads, because we knew that allowing our system to be a toxic cesspool was bad for business in the long run, even if there were short-term benefits. In fact, we knew that killing bad ads would be great for business in the longer run.

News Feed product managers, I’m not writing this from a position of blaming you. I was in a situation very much like yours and I know it’s hard. I can also tell you, it feels really really good to solve this type of problem. I am convinced that an empathy-fostering Facebook would create enormous business opportunities far exceeding your current path. It is also entirely consistent with the company mission of making the world more open and connected. You can make a great product, advance your company’s mission, and do great good in the world all at the same time. You are so fortunate to be in the position you’re in, and I hope you make the best of it.

an indecent proposal

For over 20 years, Internet businesses have grown under the protection of a special law that provides extraordinary privileges. This law has properly been hailed as a boon to innovation, and has become enshrined in some quarters as an indispensable pillar of free speech. However, no law regarding technology can survive the merciless rule of unintended consequences; what was once a necessary sanctuary has become a virtual menace to society. If you wonder how the United States has reached the brink of electing a deplorable villain as its leader, at least part of the answer rests with the Internet’s most generous law.

This law is Section 230 of the Communications Decency Act of 1996. The bulk of the act was a misguided attempt to regulate “indecent” content on the Internet, most of which was rightfully struck down by the Supreme Court in the name of the First Amendment. But Section 230 was a special provision inserted late in the legislative process, out of concern that nascent Internet businesses would drown in legal liability for statements made by others. Section 230 states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

This is a shield from libel and defamation suits, an amazing advantage in the rise of the new media of the Internet. The impetus for this law came from a 1995 case where an Internet service provider was found liable for defamatory statements made by a user of its message board. The court’s reasoning included the fact that the Internet service had exercised some editorial control over some of the message board content; therefore the service could be treated as the publisher of all the content, just as a newspaper would be.

In 1995, this was a horrific decision made by technology-illiterate judges who had no understanding of the power and potential of the Internet. It would be nice to think that the Congressmen who inserted Section 230 into the CDA were blessed with extraordinary foresight into the future of technology. But no – actually they just wanted to be sure that Internet companies would be willing to help hide boobies.

Remember, the bulk of the CDA was an insane Sisyphean effort to stop the spread of pornography on the Internet. Internet providers were rightly concerned that they would never be able to stop all the boobies. They argued that the 1995 case showed that any failed attempt to censor boobies would be interpreted as editorial control, holding them liable for all the boobies that did get through. So these Congressmen inserted Section 230 as a way of saying to companies “Hey, just try your best to censor boobies, you won’t be held liable as a publisher of the boobies that did get through.” Internet companies, even in 1995, were smarter than Congressmen. Although the CDA was about as effective at reducing pornography on the Internet as a cocktail umbrella in a hailstorm, Section 230 emerged from this fragile legislation as an enduring and invaluable shield against liability. Now you can’t sue Facebook for publishing information that is verifiably false and harmful. Lives can be destroyed on the sites we live on, and those sites will never be held responsible.

The EFF says Section 230 is “one of the most valuable tools for protecting freedom of expression and innovation on the Internet” and the ACLU says that this law “defines Internet culture as we know it.” These eminent bastions of free speech have been tremendous warriors for a lot of good in our society, but like anyone else, they could not predict the future and they may cling too long to brittle ideas that are past their expiration date. When Section 230 was adopted, the Internet was the Wild West, the new American frontier for development. There were no dominant Internet companies. The law was written with Prodigy and CompuServe in mind; AOL was the up-and-comer, Yahoo was barely a year old. The media lifeblood of the nation was the three broadcast networks, the New York Times and the Washington Post, and the many local newspapers throughout the country. People who understood the Internet then were rightly concerned about legal liability crushing the industry in its infancy.

We live in a very different world today. Network effects make some large portions of the Internet into a winner-take-all game where the behemoths can quickly grow into billion-dollar enterprises, affecting billions of lives daily. Traditional media is dead and dying, a boon to experimentation and diversity, but a blow to authority and truth. Technologists were proud to disintermediate and destroy the old gatekeepers, but we engaged in this merry destruction without any thought to the vital purpose that the Fourth Estate served in our politics. And now we live in a nation where most days it seems like the only people who don’t believe the next president could be a racist, misogynist, fascist despot are the ones who believe she could be an acceptably corrupt continuation of a broken political system.

The gatekeepers are dead and most people only get their news from their friends and others in the same echo chamber on Facebook. Our public discourse is conducted on Twitter, where online harassment by anonymous, cowardly sexists and racists is treated as an acceptable form of free speech. And we are still, as we always are in technology, only at the beginning of our problems. I don’t know where this is going any better than lawmakers did in 1996; I don’t have a solution – but I do think we should take the thumb off the scales that favor Internet businesses.

A similar situation occurred with respect to state sales taxes. In a 1992 case, the Supreme Court ruled that businesses with no physical presence in a state did not have to collect sales tax in that state. Amazon exploited this ruling, carefully building its business to avoid having to impose state sales taxes, giving it an advantage over local businesses. By 2012, Amazon saw the writing on the wall, and began “voluntarily” collecting sales tax in many more states than it had previously done. But by that time, the West had been won: Amazon was the dominant online retailer, and Main Street businesses had been all but destroyed. Amazon had the foresight to act ahead of the change in the laws, which is coming anyway. I fear our dominant Internet services lack the moral courage to act in the interests of our country.

Facebook and Twitter are our new public square, and although they are private businesses they should not be exempt from the laws and social requirements that apply to other businesses that regularly gather large groups of people together. No shopping mall, for example, would allow the public posting of verifiably untrue, insane ramblings, not without damage to its business as well as legal liability. No sporting venue would allow women in the crowd to be spat on, or minorities to be subjected to vile racist invective, without losing business and facing lawsuits. And yet we allow our most significant public gatherings online to be completely free of the obligations of being a publisher – obligations that supported the kind of media that have been vital to the proper functioning of our politics.

The internet destroyed vast portions of traditional media that depended on fact, truth and integrity. This hasn’t been solely a triumph of progress and free market principles, it has been a creative destruction assisted by a sweetheart deal with the government. Under this mantle of government protection, technology companies replaced essential elements of democracy with endless misinformation, lies and insanity. Free speech should allow much of this to be possible, but those who would build a business on irresponsible dissemination of speech should be subject to the same laws as the businesses that they destroyed. It’s time to take the training wheels off of Internet culture. Section 230 of the CDA should be repealed.

69

Wired UK just published a pair of articles that are a great explication of the potential of Virtual Reality to become as powerful as the Web. They fairly report the vision that Philip Rosedale has been pursuing for most of his professional lifetime. My one-sentence summaries:

Second Life was just the beginning – Philip wanted to connect the world in a seamless 3D environment, but was greatly limited by technology of the time; today many of these limitations are lifting.

VR and the CD-ROM – People are most excited about closed VR experiences today, but this is like being excited about Encarta on CD-ROM before people understood how powerful Wikipedia would become.

Good articles; read them if you are interested in VR. I have just one, entirely personal, embarrassingly picayune, totally irrelevant problem …

The first article says: “Then, in 2006, Second Life stopped growing.”

I know this to be untrue. I ran finance for SL from 2005-2006, and remained on the exec team until I left the company in 2009. We raised money in 2006, and I personally prepared the financial projections that predicted our growth through 2008. Financial projections for startups are notoriously optimistic, which is to say they are mostly composed of fairy dust and bullshit. I was as surprised as anyone to notice, in 2008, that my projections of fast growth held up, quarter over quarter, with a margin of error of no more than 10% (and even at that, the projection was usually lower than actual growth). So I know that SL was still growing quite well in 2006, in every meaningful aspect of usage and business metrics. The growth rate slowed in 2008, but absolute growth was still positive in 2009 when I left. Yes, SL did stop growing eventually. But not on my watch.

Ok, that’s prideful, and it’s petty. But it’s fair to say that I’m the single most authoritative source in the world on this topic. So when I read the article, I sent a note to the reporter with a correction. He replied that he’d “check it out.” A day later, he said that he had followed up and believed he was right, and he cited an article by another reporter.

That is seriously annoying. The other reporter has no better access to the facts than the original reporter. That other reporter is just another source of rumor and speculation. In this case, I am the actual source of truth, and the reporter with access to the truth chose to ignore it!

Obviously, this is trivial. Who cares? No one but me and my wounded pride. But it’s frightening to consider how easily reporters will ignore the truth when it gets in the way of their own goals.