Six months ago, I said that Trump would win the election in part because the rise of new media destroyed the historic function of the media as our Fourth Estate. I was upset that product managers at our most important Internet companies seemed to refuse to own a problem that is so clearly theirs.
Now that the chickens have come home to roost in a big orange nest of hair, others are saying that the election was, in a sense, rigged by Facebook. They say fake news has defeated Facebook. Facebook denies responsibility, while people are literally begging them to address the problem.
Product managers at Facebook are surely listening now. If any happen to be listening here, let me say: I’m sorry I called you cowards. I realize that today’s state of affairs was hard to foresee, and that the connection to your product may still seem tenuous. I am awed by the great product you’ve built, and I understand that no one knows the data better than you do, and that it is tough to take criticism from sources completely ignorant of your key metrics. It’s not easy to regard something so successful as having deep flaws that are hurting many people. I think it is a very human choice to ignore the criticism and continue to develop the product on the same principles you have followed in the past, with the same goals.
I have faith that you are taking at least some of the criticism to heart. I imagine you know that you can apply machine learning to identify more truthful content. I am sure that you will experiment with labels that identify fact-checked content, as Google News is doing. Once you can reliably separate fact from fiction, I’m sure you’ll do great things with that signal.
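Purely for illustration, a toy version of that kind of classifier might look like the sketch below: a bag-of-words model that flags content for fact-check labeling. The corpus, the labels, and the model choice are all my assumptions, not a description of anything Facebook or Google News actually runs.

```python
# A minimal sketch of fact-check labeling -- illustrative only, not any
# company's real system. The tiny corpus below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: article text paired with a fact-check verdict.
articles = [
    "Official statistics released today show unemployment fell to 4.9%.",
    "Scientists confirm the study was peer reviewed and replicated.",
    "SHOCKING: celebrity secretly endorses miracle cure, doctors furious!",
    "You won't believe what this one weird trick does to the economy!",
]
labels = ["fact-checked", "disputed", "disputed", "fact-checked"][::-1]  # toy labels
labels = ["fact-checked", "fact-checked", "disputed", "disputed"]

# TF-IDF features plus logistic regression: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

# Score new content; low-confidence items would be routed to human reviewers.
new_post = "BREAKING: miracle cure endorsed by furious doctors!"
print(model.predict([new_post])[0], model.predict_proba([new_post]).max())
```

A real system would of course train on millions of items and many more signals than raw text, but the shape of the pipeline (features in, a calibrated label out, humans reviewing the uncertain middle) stays the same.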
I’m still concerned that facts aren’t enough. I think we’re in a post-fact politics, where people no longer (if they ever did) make their political choices based on facts. I have read many analyses of the election results, many theories about why people voted as they did. There are many fingers pointing blame at the DNC and the Electoral College; at racism, sexism, bigotry; at high finance, globalism, neoliberalism; at wealth inequality, the hollowing out of the middle class, the desperation that comes with loss of privilege. I am not convinced that giving people more correct facts actually will address any of this.
The most incisive theory that I’ve seen about today’s voters says that the divide in our country isn’t about racism or class alone, but about a more comprehensive tribalism, for which facts are irrelevant:
> There is definitely some misinformation, some misunderstandings. But we all do that thing of encountering information and interpreting it in a way that supports our own predispositions. Recent studies in political science have shown that it’s actually those of us who think of ourselves as the most politically sophisticated, the most educated, who do it more than others.
>
> So I really resist this characterization of Trump supporters as ignorant.
>
> There’s just more and more of a recognition that politics for people is not — and this is going to sound awful, but — it’s not about facts and policies. It’s so much about identities, people forming ideas about the kind of person they are and the kind of people others are. Who am I for, and who am I against?
>
> Policy is part of that, but policy is not the driver of these judgments. There are assessments of, is this someone like me? Is this someone who gets someone like me?
Under this theory, what is needed isn’t more facts, but more empathy. I have no doubt that Facebook can spread more facts, but I don’t think it will help. The great question for Facebook product managers is, Can this product spread more empathy?
The rest of this might be a little abstruse, but here I’m speaking directly to product managers of Facebook News Feed, who know exactly what I mean. You have an amazing opportunity to apply deep learning to this question. One problem is that the feedback loop is long, which will make it difficult to retrain production models toward empathetic behavior, but I think you can still try something. There is some interesting academic research on short-term empathy training that can provide food for thought.
I am convinced that you, and only you, have the data to tackle this problem. It is all but certain that there are Facebook users who have become more empathetic during the last five years. It is likely that you can develop a model of these users, and from there recreate the signals they experienced and see whether those signals foster empathy in other users. I don’t think I need to lay it out for you, but the process looks something like this (a rough sketch in code follows the list):
- Interview 1,000 users who have been on Facebook for five years to identify which ones have gained in empathy over that period, which have lost empathy, and which are unchanged.
- Provide those three user cohorts to your machine learning system to develop three models of user behavior: Empathy Gaining, Empathy Losing, and Empathy Neutral.
- Use each of those three models to identify 1,000 more users in each category. Interview those 3,000 people and feed their profiles back into the system as training data.
- Check whether the models have improved by using them again to identify 1,000 more users in each category.
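Here is that loop as a rough sketch in code. It is a shape, not an implementation: `interview_users` and `user_features` stand in for the human interviews and the behavioral signals that only Facebook has, and are mocked with random data, so the printed “agreement” here will hover near chance.

```python
# A rough sketch of the interview-and-retrain loop above, not a real pipeline.
import random

from sklearn.ensemble import RandomForestClassifier

COHORTS = ["empathy_gaining", "empathy_losing", "empathy_neutral"]

def interview_users(users):
    """Placeholder for human interviews that assign each user a cohort."""
    return {u: random.choice(COHORTS) for u in users}

def user_features(user):
    """Placeholder for behavioral signals (reactions, shares, network mix)."""
    rng = random.Random(user)
    return [rng.random() for _ in range(16)]

# Step 1: interview an initial sample of 1,000 five-year users.
population = list(range(100_000))
labeled = interview_users(random.sample(population, 1_000))

for round_num in range(3):  # Steps 2-4: model, nominate, re-interview, repeat.
    X = [user_features(u) for u in labeled]
    y = [labeled[u] for u in labeled]
    model = RandomForestClassifier(n_estimators=50).fit(X, y)

    # Use the model to nominate up to 1,000 candidates per cohort ...
    remaining = [u for u in population if u not in labeled]
    preds = model.predict([user_features(u) for u in remaining])
    nominees = {c: [] for c in COHORTS}
    for u, p in zip(remaining, preds):
        if len(nominees[p]) < 1_000:
            nominees[p].append(u)
    new_users = [u for batch in nominees.values() for u in batch]

    # ... then interview them; their true cohorts become new training data.
    truth = interview_users(new_users)
    pred_for = dict(zip(remaining, preds))
    agreement = sum(pred_for[u] == truth[u] for u in new_users) / len(new_users)
    print(f"round {round_num}: model vs. interview agreement = {agreement:.2f}")
    labeled.update(truth)
```

The number to watch is the agreement between the model’s nominations and the interviewers’ judgments: if it climbs across rounds, the behavioral signals are carrying real information about empathy.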
At this point (or perhaps after a few more cycles), you will know whether Facebook has a model of Empathy Gaining user behavior. If it turns out you do, the next step would of course be to expose Empathy Losing and Empathy Neutral users to the elements common to the Empathy Gaining cohort that were absent from the other two.
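Again purely as a sketch, that follow-on step might look like this: surface the feed elements over-represented in the Empathy Gaining cohort, then split the other cohorts into treatment and control arms. The element names and counts below are invented for illustration.

```python
# Sketch of the follow-on experiment; all data here is made up.
import random
from collections import Counter

# Hypothetical counts of feed elements seen by each group of cohorts.
gaining_feed = Counter({"long-form posts from dissimilar friends": 900,
                        "local community groups": 700,
                        "viral outrage memes": 100})
others_feed  = Counter({"long-form posts from dissimilar friends": 200,
                        "local community groups": 150,
                        "viral outrage memes": 800})

# Candidate signals: elements far more common in the Empathy Gaining cohort.
signals = [e for e in gaining_feed
           if gaining_feed[e] > 3 * others_feed.get(e, 1)]

# Randomly split Empathy Losing/Neutral users into treatment and control;
# the treatment arm's feed ranking would up-weight the candidate signals.
users = [f"user_{i}" for i in range(10_000)]
random.shuffle(users)
treatment, control = users[:5_000], users[5_000:]
print(signals, len(treatment), len(control))
```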
But at this point the regression cycle is very long. Is it too long? Only you will know. How amazing would it be to find out that there’s a model of short-term empathy training only a week or two long? People use Facebook for hours a day, far more time than they would ever spend in empathy training classes. This seems to me an amazing opportunity. Why wouldn’t you try to find out whether there’s something to this theory?
One reason might be a risk to revenue. Here I’d encourage you to see what Matt Cutts said to Tim O’Reilly about Google’s decision to reduce the prominence of content farms in search results, even though that meant losing revenue:
> Google took a big enough revenue hit via some partners that Google actually needed to disclose Panda as a material impact on an earnings call. But I believe it was the right decision to launch Panda, both for the long-term trust of our users and for a better ecosystem for publishers.
I understand this mindset personally because I was there too. At the same time Matt was dealing with Google’s organic search results, I was dealing with bad actors in Google’s ads systems. So I was even more directly in the business of losing revenue – every time we found bad ads, Google lost money. Nevertheless, we had the support of the entire organization in reducing bad ads, because we knew that allowing our system to be a toxic cesspool was bad for business in the long run, even if there were short-term benefits. In fact, we knew that killing bad ads would be great for business in the longer run.
News Feed product managers, I’m not writing this from a position of blaming you. I was in a situation very much like yours and I know it’s hard. I can also tell you, it feels really really good to solve this type of problem. I am convinced that an empathy-fostering Facebook would create enormous business opportunities far exceeding your current path. It is also entirely consistent with the company mission of making the world more open and connected. You can make a great product, advance your company’s mission, and do great good in the world all at the same time. You are so fortunate to be in the position you’re in, and I hope you make the best of it.