steal this book

Why don’t people steal books?

I mean, I’m sure people do steal books, but it doesn’t seem to happen in any extraordinary volume, as compared to, say, music. It’s not unusual to know someone who has downloaded a copyrighted music file without paying for it (aka “stealing”) – you might have even done it yourself, no? – but do you know a single person who has ever downloaded a copyrighted book without paying for it?

The music industry has been famously apoplectic for years about the problem of illegal music file sharing. The movie industry watched the music guys disintegrate, and is aggressively riding the Big Hollywood effort to stop the evil Internet so that what happened to music doesn’t happen to movies.

Now the book industry is also undergoing seismic shifts due to new technology, which raises the question: why didn’t books, the older and easier medium to steal, come first – why doesn’t anyone steal books?

Is it the medium?

Smaller things are usually easier to steal, and this goes for the digital world as well as the physical world. Constraints on bandwidth, storage and processing power are one reason that music files are more broadly shared or stolen than movie files – a typical movie file is well over 100 times larger than a typical music file. But a book file can easily be less than a tenth the size of a file for a 3-minute song, so again, it seems strange that these little book files don’t get the five-finger discount.
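
To put rough numbers behind that claim, here is a quick back-of-the-envelope sketch. The figures are illustrative assumptions (a 3-minute song at roughly 256 kbps, a movie download of around 1.5 GB, a text-only e-book of around half a megabyte), not measurements of any particular file.

```python
# Back-of-the-envelope file sizes. All three numbers are assumptions for
# illustration, not measurements: a 3-minute song at ~256 kbps, a movie
# download of ~1.5 GB, and a text-only e-book of ~0.5 MB.
SONG_MB = 3 * 60 * 256 / 8 / 1024   # ~5.6 MB for a 3-minute song
MOVIE_MB = 1.5 * 1024               # ~1536 MB for a typical movie download
BOOK_MB = 0.5                       # ~0.5 MB for a typical e-book

print(f"movie vs song: {MOVIE_MB / SONG_MB:.0f}x larger")   # well over 100x
print(f"book vs song:  {BOOK_MB / SONG_MB:.2f}x the size")  # under a tenth
```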

Maybe music and movies are different because they require electronics to play a recording. As electronics have gone from analog tape recordings to digital media files, music and movies got swept up in waves of theft because those files played on devices that could be connected to a vast file sharing network. Meanwhile, books did not have a common electronic reading device until the Kindle and Nook.

I’m not sure I buy this narrative – recordings of audiobooks have been around just as long as music files. Do you know anyone who has ever stolen an audiobook? Now that the Kindle and Nook have been around for a while, have you ever heard of anyone using these devices to read troves of stolen books?

Is it possible that the difference is not in the technological trappings of the media, but in its emotional impact? Do music and movies move something in the soul that causes people to steal, because the enjoyment of the media is so irresistible? I doubt it, because there are emotionally gripping books as well as dull songs – I don’t think there’s a category of books that get stolen more often than others, other than the category where the title is a command to steal.

Maybe the reverse is true – perhaps movies and especially music are trivial fluff, not valuable enough to fear stealing, while books are weighty, too precious to steal. Price may provide a clue here: a hit song is now around a dollar, a movie around ten dollars, and a new digital book is ten to fifteen dollars. Perhaps the market is validating the theory that books are more valuable, more emotionally compelling, and therefore harder to steal casually. But I doubt this too – there are a lot of crap books out there, and you can learn more from a three-minute record baby than you ever did in school.

Is it the audience?

Maybe the people who enjoy books are different from the people who enjoy music and movies; or at least, they’re different when they’re enjoying books, even if they’re the same people.

Viewing the unauthorized download of copyrighted files as “theft” or “stealing” requires a certain conception of a moral universe. Many books, especially novels, convey some sense of moral order; even when they convey moral disorder, the implicit contrast with a typical moral universe is always there. Maybe the people who enjoy reading books are people who believe in a particular kind of moral universe, one in which unauthorized downloading of copyrighted material is rightfully considered stealing. In short, maybe book lovers are better people, and don’t steal. Presumably, under this theory, music lovers are dirty techno-hippies with no sense of right and wrong.

Or … maybe book lovers are just weird. The urge to possess books as physical objects is common enough that even obsessive collecting is considered only a gentle madness. Possibly the act of stealing a digital file is simply unsatisfying, as it doesn’t sate this need to possess the object – shoplifting a file just isn’t the same. Music lovers have their notable vinyl obsessives, but that kind of fixation doesn’t seem as widespread among music lovers as book geekery is among bookworms.

Is it possible that book lovers simply have more to lose, being a smaller and almost by definition more educated (i.e. literate) class of people? Maybe music and movie lovers that are of the same social and economic class as book lovers actually steal music and movies just as infrequently as book lovers steal books?

Is it the industry?

The music industry was famously hostile and arguably stupid in its stance toward file sharing, and Big Hollywood seems determined to replicate that stance regarding all the evils of the Internet. In contrast, the book industry seems scared, but oddly accepting of its fate, almost savoring the last days of its bygone ways, lounging on the beach and languorously watching the tsunami roll in.

Or maybe the book industry is simply smaller than the music and movie industries, and so hasn’t spent the time and money to raise the hullabaloo that other media industries have raised. And being a smaller industry, maybe it’s simply more accepting of change.

Is it possible that the book industry isn’t in utter panic because it’s aware of the history of media cries of wolf, howls of inevitable doom that accompany each technological change, each of which results in more money and more opportunity? Maybe book publishers are relatively sanguine in the knowledge that they’re making higher profits than before the Internet ruined their industry.

This is all just semi-coherent rambling, but it’s a ramble that’s been rattling around my skull for a while now. I don’t really have a clue why people don’t steal books, or at least don’t seem to steal books in comparison to music and movies. I’m hoping one of the handful of readers who stumble across this post can point me to a better answer.

too early in the game

Last month, I wrote about why Second Life failed so I didn’t have to write about why Second Life failed. I mean, that post wasn’t about reasons for failure, it was about the fact of failure. My thought was that there are many people who simply assume Second Life failed, and they’re wrong, and there are many who will passionately argue that Second Life has succeeded … and they’re wrong too. Failure can only be judged by the ones who were trying to succeed.

It would be safer for me to say that failure is a matter of perspective, for surely failure passes through the same lens as beauty in the eye of the beholder. I do understand that many SL Residents were on their own journeys, and so of course they are their own best judges of the success of those journeys. But it would be an artful evasion to claim that any of those journeys, or even all of them together, constitute the sum total equation for the success of Second Life. We were trying to do something more – or at least, something else – and we failed. (Of course, I’m talking about the team and the company that I knew, years ago. The team there today is on their own journey, which I know next to nothing about.)

So if I’m willing to be this myopic and insular about judging failure, you can bet I’d be just as parochial in reviewing the reasons. I’ve seen and heard a lot of speculation that I don’t agree with: poor strategy, worse execution; lack of focus, misplaced focus; poor technology, doomed architecture; dumb marketing, uncontrollable PR; niche market, bizarre customers; crazy culture, undisciplined development; bad hiring, bad management; feckless board, dominating board, ignorant board. I’ve heard it all, and while there may be a grain of something like truth here and there, none of these things holds real explanatory power as a reason for why Second Life failed.

We failed as people. We failed as a team. Our failure was intensely personal, particular to each person involved, and ruinous to the overall team.

I’m going to switch now from “we” to “I” but I want to be really clear about why. We Lindens were all in it together, and there is a broad sense in which all credit and blame goes to all of us … but not in this post. Here, I’m talking about maybe half a dozen people, and so it would be too much of a personal attack for me to try to describe the failures of anyone other than myself. I’m willing to attack myself in this forum, but not my former colleagues, all of whom I still respect and a few of whom I love like my own family. But I want you to remember the “we” because otherwise the rest of this post is going to seem incredibly egocentric: there’s a certain kind of self-blame that’s really self-aggrandizement, and though I regard my own failures as critical, even the most deluded version of the story couldn’t claim it was all about me.

So. I failed as a person. I failed the team. I was responsible for many elements of our strategy, execution, culture and management, and those decisions aren’t the ones I regret. What I regret, to the extent that I’m capable of regretting such a rich learning experience for me, is giving up. I don’t mean at the end, when I was tired and disillusioned and looking around at a company I didn’t recognize and a future I didn’t want to live. A lot earlier than that, I gave up on people that we needed, people who were flawed and fragile but necessary. I let people fail, I let people go, I let people hide in their illusions and fears, I let them give up because I’d already given up.

The irony was, when I joined the company, I was supposed to be an experienced hand that would bring some sanity to a crazy world. But I indulged my own worst instincts – throughout the craziest times, when I could’ve done the most good, I just brought more crazy. I was having fun, but I chose my own twisted growth over a higher goal, and at times I was just plain mean or selfish or drunk. I really wasn’t ready for the opportunity that Linden Lab presented to me. I really wasn’t the guy I should’ve been when I got there; I didn’t know what I needed to know until I left.

Too many of the key leaders at the Lab were working through similarly damaging personal limitations. You might ask whether this really points to a failure in culture or hiring or leadership, and that would be a fair question. It’s true that Linden had a way of hiring certain kinds of people and forcing them to confront their own deepest flaws – but I think that’s beautiful, a feature not a bug. What we needed was one or more or all of us to conquer our flaws, to enable the entire team to rise above the limitations of each of us. But none of us defeated our own demons, and so all of us perished.

I’ve been gone from Linden Lab for over two and a half years, and still my failure haunts me. The last day of the year is always a good moment to come to terms with the passage of time, and this New Year’s Eve I’ve decided I should finally accept the fact that I’m never going to let it go. I’ll try to reach peace through the zen realization that peace is unattainable.

why second life failed

This post is about why Second Life failed – but not in the sense of, “here are the reasons why Second Life failed,” but instead, “here is why it is true that Second Life failed.”

Slate published an article titled “Why Second Life Failed” that also, like this post, is not an elucidation of reasons why SL failed – but unlike this post, it is not an authentic attempt to support the proposition that SL indeed failed. It is simply an effort to market a new book by posting an article with a catchy headline. There is an unavoidable paradox in that any marketable headline with the structure “Why [X] Failed” must use for X something that has first achieved at least some significant success, otherwise the title would be too obscure to attract readers. I started a company called Bynamite that folded after less than two years – no one writes articles titled “Why Bynamite Failed” because no one’s ever heard of Bynamite.

This mild paradox isn’t sufficient defense for SL’s ardent users and thoughtful critics. As is often the case with posts about SL’s demise, the comments to the Slate article are full of well-informed, intelligent and passionate conversation that puts the original article to shame. At Terra Nova, Greg Lastowka suggests that SL remains fertile ground for study, with the pointed rejoinder that “Second Life never failed – the media reporting on Second Life failed.”

As a former Linden, I appreciate the desire to insist that Second Life hasn’t failed. I joined Linden Lab in 2005, at a time when we had a few dozen employees and registered users in the tens of thousands. By the time I left four years later, we had around 7 times the number of employees, several hundred times as many users, and almost a hundred times the revenue. It certainly felt like success to me. I left sated with a feeling of accomplishment, and great hope for the future of Second Life.

But I also left feeling depleted. We had stumbled our way from obscurity to something like prominence, but I didn’t know how to take it to the next level. We weren’t making progress despite having bountiful talent, desire and resources. We had a beautiful company, a real culture of beauty and love, genuine emotion for each other and for the world we were helping to build. And it wasn’t working, not well enough and not fast enough and not big enough.

Perhaps there never was a next level. Perhaps it was always the destiny of Second Life to be an innovative niche product for a select group of people, a worthy subject of serious study, a constantly evolving emporium of edge cases. Maybe we should have just hunkered down, and focused on maintaining an elaborate playground for only a select audience of passionate and creative people. We could eke out a fine living, and damn the rest of the world who just didn’t get it.

But I couldn’t damn the rest of the world, because dammit, I’m from that rest of the world. I was never a true Resident of Second Life; I was a visitor, an outsider with the good fortune to see the incredible things that people can do in a truly free environment. I was inspired, amazed and delighted by Second Life – as well as occasionally revolted, offended and demoralized – and the diversity and depth of this experience was a revelation to me, one that I believed everyone could appreciate.

And I still believe that, which is why I have to accept that Second Life has failed (so far, we must always say so far). The reality is that Second Life is still a niche product, and to deny that I wanted it to be something more would dishonor the heartbreaking glory of our ambition. It’s fair to say that Facebook became our second life, but it’s also shortsighted. Not so long ago, people laughed at the proposition that anyone wanted to maintain a virtual presence online that could form the basis of social interaction. Facebook did put an end to the dismissive chuckles on that topic.

But it’s equally laughable to say that this is where we’ll stop, that the final destination of online interaction consists of wall posts and text messages in two dimensions. I still believe that there’s no sensible way to define an impassable boundary between where we are today and a time when people “live” in a three-dimensional virtual environment. I’m still a true believer, an old true Linden in that way. So I have to admit that Second Life has failed.

So far.

great jobs

The death of Steve Jobs raises and answers the question that haunts the psyches of ambitious entrepreneurs everywhere: “Was it worth it?”

Praise follows death like the glowing debris that trails a comet, and the writing in the sky says that Jobs was the greatest CEO ever. A few muted voices remember that he was famously harsh to work with, but this is universally regarded as an entirely justified mania for perfection. Considering his accomplishments, it seems almost irrelevant that he denied the obligations of paternity for one child, and consciously decided that his children should know him through biography rather than time spent with him, even – or especially – in the final stretch towards death, when the remaining time must be remorselessly allotted like oxygen in a sealed room.

This isn’t criticism of a great man. It’s a reminder that many of us would willingly make the same choices, were such greatness within our reach.

We say it’s not so, and try to believe it. We encourage each other to remember family, remember health, remember that a life of striving includes the quest to achieve a full and humane life through our work. But the life of Jobs is the story of his jobs, of his one true job: making a dent in the universe through the creation of products that become a part of our lives. For his success in that, we forgive and excuse his personality defects. We cannot blame a man for failing to uphold principles that we would throw aside ourselves if only we could be assured that the universe was malleable to our touch.

Saying that “you are not your job” is a comfort; it alleviates the cognitive dissonance between your self-image and the productive economic output you contribute to the world. The lessons of Steve Jobs deny that comfort; his strongest exhortations insist that you are all about the things you make for the world – not for yourself, not for your hobbies or leisure, not even for your family and certainly not your friends if you have any. You have to do great work, never settle, remember that each day could be your last, don’t waste time living someone else’s life.

There is no obligation to community, family or friendship in these words – though strangely, there is an overwhelming commitment to society in the desire to dent the universe, for this is not a universe of cold cosmological phenomena, it’s a universe of people, and his ambition is all about changing how people live. For Jobs, if this ambition involved sacrifices of a more universal personal nature, there is no question that it was worth it. It was worth it for him, and his efforts were certainly worth it for us.

It’s touching to see the determination with which Jobs’ sayings are repeated in the wake of his death. But the message of his most appealing words isn’t quite the message of his life. He told us to follow our hearts, to trust our intuitions, to ask ourselves if our plan for this day is how we’d want to spend our last. But those are not goals, they are only beautiful means to an uncompromising end. The goal of Jobs was to be insanely great in a world-changing way. That’s the hard part of the message to understand. All of us can hope to understand what is in our own hearts, and can hope to have the courage to follow it. Almost no one alive has a realistic ambition to change the world – what many of us think of as world changing is merely interesting, hopefully entertaining, and possibly enriching.

drive me crazy

Bob Lutz was a product development executive at BMW, Ford, Chrysler and GM over a 47-year career in the auto industry. His book Car Guys vs. Bean Counters focuses on his second stint at GM, from 2001 to 2010.

In an excerpt in the WSJ, Lutz poses a classic question of executive management, about the tension between leading by example and leading by autocratic demand:

I had to ask myself, and still do today, if it is the proper role … to get down in the trenches for hours on end, teaching the love of perfection in the smallest details when perhaps a more impatient autocrat would simply have ordered—nay, demanded—that it happen ….

This question has been asked and debated across many industries over many years. In information technology, we’ve seen different answers at HP, Intel, Microsoft, Google, Apple and Facebook. Often within the same company, the story swings between democratic (“emergent” is the trendier term) and autocratic over time, but you could roughly say that HP and Google have been known for emergent corporate cultures, and Intel, Microsoft, Apple and Facebook have been thought of as more autocratic. The public imagination tends to favor stories based on a single personality as leader, so it is likely that every tale of an “autocratic” workplace radically overstates the effect that any one person can have on a large organization.

But still, leaders matter even in the most emergent management styles, and Lutz’s question is a deep one. The tension exists because when a leader is right, autocratic demand will always lead to the best outcome in the shortest possible time – but no one is always right, and the flip side is that autocratic demand leads to the most disastrous failures very quickly when the leader is wrong. Emergent management is an attempt to institutionalize greatness over a long period of time, a period exceeding the career length of any single leader. Lutz asks the right questions again:

But does the autocrat, no matter how gifted, create sustainable success? Or does his style drive away other capable leaders who would form a leadership team after the great man’s departure? . . .

The fact is, though, that my effort to instill into the organization a drive for perfection and customer delight in all things was successful. And still I wonder—was I right? Did I change the core of the product development culture by teaching, or did I rely too much on my own will and my considerable influence to get what I wanted?

Strikingly, Lutz is haunted by the failure of his lessons to stick at Chrysler. He had left that company secure in the knowledge that his standards and principles were permanently embedded in the corporate culture. But it didn’t work – new leadership quickly shifted the company into a bean-counting mentality, and the passion he’d invested there evaporated as easily as spilled alcohol. He thinks there will be a different outcome at GM, but it’s not clear why there’s any reason to believe this.

I find some divisions in Lutz’s dichotomy questionable: an autocratic leader can certainly get down in the trenches, and an emergent leader can certainly demand great results. I agree that sustainable success is the ultimate arbiter of greatness – but if the company doesn’t succeed through crisis points, which sometimes require an autocratic hand, then it will not have the chance to measure a track record over generations of leadership. So I would say that a company – and its leaders – have to be able to master both styles, and most crucially, know when and how to switch from one to the other.

start me up

A couple of months ago, a good friend was talking to me about the differences between most people and “entrepreneurs like us.” I had to recoil at the phrase. He’s a real entrepreneur – founded a couple of successful companies, working on a third, constantly driving and innovating and dreaming and creating. At my best I never reached his heights. I’d been a “startup guy” for a dozen years, and proudly wore that badge – as a startup lawyer learning business basics, boardroom battles, and founder secrets; as a venture capitalist investing across sectors and geographies; as a startup manager in multiple different roles and companies. When I finally founded my own company, I felt I could finally accept the label entrepreneur, and it felt great. But it didn’t last very long. I’d accepted a job at a large company not too long before that conversation, so “entrepreneurs like us” couldn’t include me anymore.

I’m not too flexible about the term, unlike those who believe in four types of entrepreneurs. I think an entrepreneur makes a for-profit business that didn’t exist before, without the benefit of existing infrastructure. That rules out what some call social entrepreneurship, because working for nonprofit good is too different from the pursuit of a viable commercial enterprise. And it rules out corporate entrepreneurship, because starting a new division or business line for an existing company is very different from starting a company from a cocktail napkin.

I said different – I didn’t say harder or more admirable. The numbers probably say that social and corporate efforts are harder, as there seem to be more new companies than there are new social efforts or successful businesses started within large companies.

I’ll differentiate some more: Although I’d include both the fruit stand owner and the tech company titan within my view of entrepreneurs, I don’t think they’re the same in most ways, even at their respective starts. Fruit stands aim for some daily living, selling a well-understood product, within a social infrastructure that understands and supports the concept of buying and eating fruit. The most extreme tech founder dreams of all the money imaginable, with a product that initially seems bizarre, with no apparent revenue model, distribution channel, or plausible customer interest. Although these two kinds of people have something in common, they have a lot more differences. So “entrepreneur” isn’t a binary label – it’s possible for one entrepreneur to be more entrepreneurial than another. Labels are most useful when we use them to distinguish and measure concepts. I don’t like seeing a meaningful word diluted to appease egos or ease conversation.

Because the company I work for now is fairly well known, I should doubly-triply-quadruply emphasize that this is all my opinion, and moreover it’s my opinion about me. I can believe that for many entrepreneurs, coming to Google doesn’t mean that your days as an entrepreneur are over – those entrepreneurs are more entrepreneurial than I ever was, which I’ve admitted isn’t a high bar.

And although I’m still a startup guy at heart, I can believe that Google can in important ways return to its startup roots, even though I’m naturally inclined to disbelieve that a large company can have the “energy, pace and soul of a startup.” But I’d say that you have to measure the energy and pace in the context of the scale of the ambition. People who think that Google is slow or that the competition is anything other than the unknown future are probably underestimating the enormous opportunity remaining in the information economy.

Ah, but that last bit, the “soul” of a startup … what does that even mean? That’s tricky, and probably the topic of another post.

my baby’s not ugly

I’ve written about startups that persist despite the failure of others, as well as about startup postmortems, so this may seem ironic: we’ve decided to stop active work on Bynamite. To make a long story short, my cofounder and I have both received compelling offers to work at large Internet companies, offers that we don’t think rational people would refuse. Unfortunately, the companies involved do not want to purchase Bynamite.

As a startup founder, whenever anyone tells you that your idea won’t work, that it won’t be popular, that no one will care, that no one wants it – you hear all of this as: “Your baby is ugly.” Founders invest time, money, emotion and the goodwill of their friends and family into the company; it really can feel like raising a baby. It saddens me that I haven’t been able to find a home for our pride and joy.

I couldn’t even get the company “acqhired” – that is, have our company acquired merely in order to hire Ian and me. That kind of “hacquisition” seems pretty common around Silicon Valley these days, but I failed to get it done. It hardly makes a difference though – a hire wrapped up in a sale is merely a mask. Our goal wasn’t to build a resume in the form of a company, we were aiming a lot higher than just getting hired. It’s important to own your failures, and this experience has certainly given me plenty to learn from.

But I’m proud of what we were able to do in the time we had. We put out a beautiful service that received nice launch coverage and some industry mindshare. Serious publications highlighted Bynamite as a useful tool and a company to watch. We took a little shot at the opportunity and had good enough results to seriously question why we won’t take it further. I’ll probably detail and try to answer those questions in a later post, but not for a while.

In the meantime, I still have a passion for the relationship between online advertisers and consumers. To the extent my new duties allow, I’ll keep Bynamite up as a hobby project outside of work. I’ll consider selling the assets to someone who cares about the product, or perhaps even turn it into an open source or otherwise community-supported effort. If you have any ideas about what to do with Bynamite, feel free to comment here or send me a note via LinkedIn.

a brief history of failure

VentureBeat was kind enough to publish a piece I submitted to their Entrepreneur Corner, under the title “How to make your startup succeed where others have failed.”  That’s a good title, by a smart editor who knows what people want to read.  I actually submitted a more modest title, “A brief history of failure” – because I’m actually not so sure I know how to succeed where others have failed.  I’m just saying that a history of failure in something you want to do isn’t a reason to stop trying.  Please go give it a read and comment there if you like!

the double back theory

I have an old friend who swears by The Double Back Theory, which basically goes like this:  Any important revelation will immediately strike you as obvious and true, but because its significance lingers with you for years, you will have too much time to develop alternatives and corollaries that overcomplicate the picture. Nevertheless, if you keep on thinking about the central idea, you will inevitably double back to your original revelation as the most profound one.

Today’s example: The Internet is a new platform for consumer media. That’s a striking revelation . . . in maybe 1994 or perhaps as late as 1998. This may be hard to believe today, but there was a time when it was revelatory to describe the Internet as a new form of popular media, rather than as a niche technology.  Today most people would rank the Internet as the second most important form of media (behind TV).  It seems so obvious now that the Internet is a consumer media delivery system.  And yet, it’s easy to find ways to overcomplicate this simple picture.

Take for example the argument over whether The Web Is Dead. Putting aside the easiest objection – that many claims of death are exaggerated – the thesis basically says that “the Web” was supposed to be this great open playground that changed the world forever, but a variety of closed systems now threaten the promised paradise. We are supposed to get hysterical over the idea that content that was free on the Web will not be free forever, and that there will be special access channels that only some people will be able to afford.

But the Web isn’t dying, it’s just evolving the way that consumer media have always evolved.  The history of consumer media is littered with similar patterns of free and paid content, amateur and professional content, sponsored and bought content. There are many examples where a new medium was popularly established with free content, and evolved into a tiered system of both free and paid content. Look at television – once it was free (i.e. ad-supported); then cable TV came along with both an ad-subsidized paid model (basic cable) and an ad-free paid content model (e.g. HBO, PPV).

The same thing is happening with this wondrous new medium of the Internet, and the most wondrous thing of all is that anyone thought it would be any different.  The Internet is wonderful and has changed many things in the consumer content landscape, in terms of interactivity, variety, engagement, and low production and distribution costs. But one thing it hasn’t changed is that consumer media, as a whole industry, will always trend toward payment for quality content, and toward concentration of media power in the hands of a relatively small number of players.

I wish that weren’t true, but it is true today and will always be true for as long as we remain human beings.

We like to think that technology frees us from the scarcity-based economics of the past.  And it’s true that changes in scarcity can free up new business models.  But there is no kind or amount of technological advancement that can eliminate scarcity in two areas:

  • Quality. Quality content is by definition scarce: no matter how great the aggregate improvement in overall quality, there will always be some portion that is better than the rest. The development and application of new technology to content only heightens the divide rather than flattening it – because the quality of the content includes not just artistic merit but its presentation and convenience to the consumer.
  • Attention. Human attention is limited, both in the aggregate and for any individual. No matter what automatic aggregation, filtering, or curation tool is ever developed, we can’t radically increase the finite amount of real human attention for consuming media. Even if we develop technology that actually stops time, our biology dictates a finite attention span – there are only so many hours of media a brain can absorb in a day, no matter how long the day is.*

Since quality is scarce and attention is finite, there will always be an opportunity to charge money for the best content – and since this includes charging for the best quality presentation and delivery, it means that there will necessarily be a two (or more) tiered Internet. You can call it surrender, you can call it the death of the Web, you can call it whatever you want – but recognize that it’s progress, it’s evolution, it’s the future as well as the past.

——-

On a related (and more obscure) note, lately there’s been a lot of conversation about the evolution of certain parts of the venture capital business. I can’t do the whole conversation justice – but basically the narrative is that there is a new mode of investing in the consumer Internet sector, with smaller but smarter initial investments, giving rise to an expanding birthrate of web startups, and raising the specter of a seed investor bubble.  Again I’d ask, should we try to understand all this as a new phenomenon, or is this just a different variation of a familiar pattern?

Consider that a lot of “Consumer Internet” is no longer mostly about technology development; it is about media content development. From that perspective, a lot of the shifts in venture investing are about a certain class of savvy investors becoming media investors instead of technology investors. They’re not evolving to some kind of new model of investing, but cycling into the model of investing that you see in more mature content production businesses.

I think that consumer Internet investors will become more and more like television producers and financiers, and less like “hard” technology investors.  If that’s right, you’ll stop seeing conversation about equity vs convertible debt, and will instead see a move toward the revenue-sharing model that is common in the TV and movie industry.

Some people will regard this theory as idiotic, controversial and even demeaning (if you think being a TV producer is worse than being a VC), but for me it’s just doubling back to the basic insight that the Internet is a new platform for consumer media.  Now that the original mid-’90s revelation has come true, you can expect that the investment economics will repeat old patterns more than they create new ones.

——-

* I realize that there are people who believe that in the future, technology could enhance brain function as well as create endless renewable energy – making essentially limitless time and capacity to enjoy leisure activities, including consumption of media.  Without opining on the likelihood of that future, I’d just note that it’s a future in which we are no longer human, as we understand humanity today.

launch PR: New York Times vs TechCrunch

This post is inspired by a similar post by Udemy – I’m trying to add useful information for all the folks who are working hard and trying to get their products noticed.

We launched our beta product at Bynamite about a month ago, and were lucky to get covered in the New York Times.  I wish this post could be about “How To Get Covered in The New York Times,” because that would be some really valuable information for the startup community.  But we were simply very lucky – a friend introduced us to a potential business partner who was really interested in our story, who introduced us to the Times reporter, who had been thinking and writing about related issues for a long time.  Everyone in the chain was very thoughtful and patiently dedicated to understanding what, if anything, is interesting about what we’re doing.  Sometimes the pieces just fall into place, and that’s what happened here.

Before that series of fortunate events, we had been preparing a more traditional scrappy startup PR strategy, which I learned from the interwebs.  Balsamiq’s marketing advice and launch homework are invaluable; in particular I was focused on the 10 PR tips from Weebly.  We had identified about 45 blogs, big and small, that I intended to contact one by one, with the holy grail being coverage in one or more of the major tech blogs – TechCrunch, Mashable, ReadWriteWeb, GigaOM and VentureBeat.  Just as I was starting to reach out to the list, the Times reporter confirmed that his story was very likely to go forward in the Sunday business section.

At that point, we had a decision to make.  On the one hand, the TechCrunchosphere is the place to launch consumer tech products – the audience is intelligent, opinionated, and early-adopting.  This is an audience that understands that startup companies launch “unfinished” product.  It’s not a good idea to get mainstream press before your company is really ready for it.  On the other hand, our product runs contrary to the tech orthodoxy that has largely proclaimed that no one cares about privacy.  Would TechCrunch readers be the wrong audience for our more mainstream message?

Although these are complicated concerns, we didn’t take long at all to decide, and we were swayed by one irresistible reason: it’s the New York freaking Times!  As much as I’m with the punditocracy that declares newspapers dead, I just couldn’t help myself – I grew up reading the Times, and I really wanted to see if we could get in the paper, the good ol’ physical, dead-tree paper.  So we saved the blog efforts for a later time – hopefully after we’ve learned our lessons from the beta and are ready to relaunch with a more complete product.  It’s sort of a topsy-turvy press strategy, and there’s probably a whole ’nother post in whether or not it’s stupid, but that’s not the point here.  The cool thing today is that we get to compare results from different PR launch paths.

Here’s the Visits graph from Udemy’s launch:

Udemy visits graph, screenshot dated 2010-05-24

Here’s a similar graph from Bynamite’s launch:

Bynamite visits graph, screenshot dated 2010-08-13

Here’s the referral chart from Udemy:

Udemy referral chart

And the corresponding chart from Bynamite:

Bynamite referral chart

Now, the point here is NOT to say that Bynamite PR is any better or worse than Udemy PR!  That kind of comparison would draw all sorts of wrong conclusions, not least because I’ve cheated here by including 30 days of data against Udemy’s 23 days.  Also, note that Bynamite is a browser extension that records a page view when the extension bar pops up (that’s why the Avg. Time on Site is absurdly high).  Different products are going to have lots and lots of reasons for different metrics.
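
For what it’s worth, the simplest way to take the day-count mismatch out of a comparison like this is to look at daily averages instead of totals. Here’s a minimal sketch of how that could be done from a Google Analytics daily-visits CSV export; the file names and the column name are hypothetical placeholders rather than the actual exports, so adjust them to match whatever the real export looks like.

```python
import csv

def daily_average_visits(path, visits_column="Visits"):
    """Average visits per day from a daily-visits CSV export.

    Assumes (hypothetically) one row per day with a visit-count column;
    adjust visits_column to match the actual Google Analytics export.
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    # Strip thousands separators before converting to int.
    total = sum(int(row[visits_column].replace(",", "")) for row in rows)
    return total / len(rows) if rows else 0.0

# Hypothetical file names: 30 days of Bynamite data vs. 23 days of Udemy data.
bynamite_avg = daily_average_visits("bynamite_visits_30d.csv")
udemy_avg = daily_average_visits("udemy_visits_23d.csv")
print(f"Bynamite: {bynamite_avg:.0f} visits/day; Udemy: {udemy_avg:.0f} visits/day")
```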

But the conclusion I’m willing to draw is that getting covered in the Times is roughly equivalent to coverage in the major tech blogs.  Not an order of magnitude higher, and certainly not smaller.  So for anyone hoping to confirm the relevance of mainstream media, I suppose that’s a victory of sorts, though it’s just as accurate to be amazed that media sources that barely existed 5 years ago are now equivalent to the “paper of record” that’s been around for 150 years.

It’s also interesting to note that both Udemy and Bynamite got a secondary bump 5 or 6 days after the original coverage.  In Udemy’s case, that bump exceeded the initial coverage, and was almost entirely driven by a mention in one source, Thrillist.  Bynamite’s secondary bump was smaller than the first, and was the result of pickup by many smaller sites that focus on covering downloadable apps.  Also like Udemy, our traffic has settled down to a much quieter pace, though significantly higher than the near-complete obscurity prior to the press coverage.

I’m still digging through the details – and by the way, I could use some help: if anyone reading this wants to drive through Google Analytics with me, let me know!