the webz for realz

The “real time web” is a phrase that’s coming dangerously close to buzzword heaven, to the point where formerly-Web-2.0-guys will start saying it’s a key feature of Web 3.0.  I’m not sure that the people who are so fond of the phrase are entirely clear on what it really means.

Some folks seem to say that “real time web” basically means everything more and faster.  It’s been a common tenet for at least a decade that the value in the Web is found in aggregating and analyzing information, and delivering the result to the right audiences.  Today we simply have more information, from more sources, and more techniques to analyze and filter the info flow.  Creation, collection, analysis, filtering and distribution of information happen so fast now that we can call it “real time” – but that’s just a fancy phrase for “really rilly fast.”  In this view, “real time web” is an inapt phrase if it’s meant to describe a new benefit – what we really have is just a new expression of an old (in Web terms) problem.

Another view is that the Real Time Web is not just a faster Web, but a new medium.  I don’t think I can articulate this view in brief, but the general idea is that fundamentally different sensory experiences are implicated by this new kind of personalized information that is delivered and filtered nearly concurrently with its creation.  This view is sexy sexy sexy:  it’s always so sexy to declare a new medium.  It’s so sexy that somehow the tone of new media declaration infuses the works of even the commentators whose words clearly describe nothing more than a faster Web.  When you read just about any writing that uses “real time web,” it seems that the author is striving to discuss new tools for new problems, rather than new tools for old problems, and would be aghast if it turned out that it’s actually just a discussion of old tools for old problems.

So is the real-time Web really just a faster Web, or is it a new medium, as different from the Web as the Web is from TV and radio?

That’s a deliberate question, since I think the fascination with '''real-time''' often represents a yearning for old media forms – especially broadcast media like TV and radio.  Ironically, people who have seen a lot of media evolution tend to race to declare every older medium dead, while they simultaneously pine for the familiar patterns of old media.

The idea of broadcasting – getting the same information out to many users at the same time – might be a familiar pattern that people are seeing when they think of the Real Time Web as a new medium.  Early Internet businesses like PointCast and Broadcast.com were simply about using the Web to get the same information to as many people as possible at (roughly) the same time.  But at this point, thinking of the Web as a broadcast medium requires perverse ignorance of the distinct characteristics that make the Web interesting as a new media format.

What is distinct about the Web versus other media is the extent to which information can be aggregated and analyzed, the low cost and ease of creating content, and the application of both computer algorithms and social means to filter and personalize delivery.  When people began to understand that, they stopped trying to broadcast and instead built businesses like eBay and Amazon and Google – and it so happens that those businesses were built on aggregating and analyzing asynchronous information.

I guess I’m asking:  Is the asynchronicity of information creation, collection and analysis also an important distinct characteristic; and if so, does that mean that doing those things in real time makes the Web a different medium?

Another way to gauge the relative importance of the real-time concept:  Which would you rather have at your disposal, everything on the Web that is older than one hour, or everything that is more recent?  (Picking “It Depends” is not a choice.)

loving and leaving linden lab

The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.

F. Scott Fitzgerald, “The Crack-Up” (1936)

I love Linden Lab.  Over the past four years, I’ve poured everything I had into the company.  Leaving was a tough decision.  But at the same time, it was easy to see that it was time for me to go.

Departure missives are a tricky thing.  This is actually my third for this same departure:  I said goodbye to the company internally, I posted to the company blog, and now here’s one for my own blog.  Why so many?

I’ve studied the art of the departure memo; it’s really quite interesting.  The business world sees many comings and goings, and in certain companies, internal communications are destined to get leaked – and you can see that the authors know this.  Compare two examples from the same company, Yahoo:

  • Stewart Butterfield’s resignation was bizarre, funny, and ultimately a scathing indictment of a place that overdiversified and lost the love of innovation.
  • Sue Decker was more restrained, with a classic and classy goodbye that nevertheless could be read as a defensive listing of all the progress made under her watch.

In their own way, each goodbye note took pains to remind people of the author’s special qualities and accomplishments.  I avoided doing that in my earlier announcements.  It’s not that I’m especially modest –  I just didn’t want to muck up messages to colleagues, customers and company commentators with shameless self-promotion.  There’s a time and place for self-promotion.  Like right here, on my own damn blog.

Ah, but I’ve never been great at claiming credit.  I’m struck by a piece of wisdom a mentor shared with me earlier in my career, which I’ll paraphrase as:

Success has many fathers, and even more virgins trying to claim paternity.  No one who wasn’t there can really understand the full story, and even the ones who were there didn’t see everything.  But you’ll know what you did, and so will the people that matter.  Let the others play their guessing games.

So here’s a game to play.  When success really does have many fathers, how does anyone claim a success as their own?  I thought about what successes I’d want to highlight from my time at Linden, and I realized that any of them could have at least two opposing interpretations.

| my would-be claim | one idea | opposing idea |
| --- | --- | --- |
| key exec in managing company growth from early revenue to profitable phenomenon | can spot and guide a winner | just along for the ride |
| lead exec in many areas through company history: international markets, legal, finance, HR, developer relations, enterprise segment, business and corporate development | multifunctional business executive | short attention span to the point of personality disorder |
| led finance through early revenue, raising $15+ million equity and debt financing, accurately projecting 2+ years of revenue growth within 10% | talented early-stage financier and prognosticator | wild-ass guesser |
| early leader of international growth from 30% to 70+% of audience | makes worldwide progress with limited resources | strained the organization beyond its ability to grow |
| established basic legal and regulatory policy and strategies, with humor | insightful thinker on social and governmental issues | paper-pushing policy dork, with wicked streak |
| architected Linden Dollar as unique virtual currency and multimillion real dollar business | fearless and creative new product innovator | reckless and dispiriting goon |
| wrote, tweaked, and rewrote the Tao of Linden | sensitive guardian of company culture | feckless appeaser of management fads |
| executive sponsor of startup-within-a-startup initiative for enterprise segment | constant pioneer in new markets and strategy | focus-diluting disruptor |
| negotiated and managed acquisition and integration of several businesses | accomplished M&A dealmaker | heartless crusher of helpless entrepreneurs |
| helped recruit and integrate new management team before departure | selfless assembler of talent | ruthless operative in reorg-and-run |

Can I claim any of these successes as wholly my own? Where does the truth lie? Would the modesty of my saying that all opposed ideas could be true be undercut by the implication that I would then be claiming a first-rate intelligence?

Ah well, that’s about the best I can do for self-promotion.

social networks and the dunbar break

A couple of months ago, The Economist noted that the Dunbar number appears to apply to online social networks like Facebook.  I’ve since been thinking about the threat this represents to Facebook’s business, and all social networking businesses.

To recap:  The Dunbar number is a theoretical limit to the number of social relationships that one person can maintain – this number is often estimated at 150.  Facebook’s “in-house sociologist” confirmed that the average Facebook user has 120 “Friends” (i.e. other Facebook accounts linked to the user’s account).  Moreover, when measuring the interaction between users, such as comments on each other’s accounts, men average regular interaction with only four people, while women average six people.

You see the problem?  It’s too easy to leave social networks:  you’ll leave as soon as your six closest friends do.  From Tribe to Friendster to MySpace, no one has been able to hold on to their users.  Given that history, Facebook and Twitter have to fight more than just faddishness – they have to fight the cognitive limits of the human brain.

Ironically, social networks do not have the full benefits of network effects.  A really robust network effect means that each additional user of a network adds value to the network for all users.  In social networks, once all of my friends have been added, I don’t really care if any more people join the network.  And the reverse holds as well:  once all of my friends leave, the network has no value to me, no matter how many other users are still on the network.

The '''Dunbar break''' occurs at the point at which so many of your contacts have left a social network that you no longer value the network.  Dunbar’s number suggests that this point might be as high as 150, but looking at the actual interaction on Facebook, your personal Dunbar breaking point for Facebook could happen when as few as half a dozen of your friends leave.
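
To make the cascade concrete, here’s a toy simulation – made-up numbers and a made-up model, not anything drawn from Facebook’s actual data.  Give every user a half-dozen close friends, assume each person sticks around only while some minimum number of those friends are still active, and watch how a modest wave of ordinary departures can snowball:

```python
# Toy model of the "Dunbar break" (all parameters hypothetical): each user stays
# on the network only while at least MIN_ACTIVE of their close friends are still
# active, so individual departures can cascade past a tipping point.
import random

NUM_USERS = 1000
CLOSE_FRIENDS = 6   # roughly the "regular interaction" count cited above
MIN_ACTIVE = 3      # a hypothetical personal breaking point

def survivors_after_cascade(initial_departure_rate, seed=42):
    rng = random.Random(seed)
    # Give every user a small, random circle of close friends (a stand-in for real ties).
    friends = {u: rng.sample([v for v in range(NUM_USERS) if v != u], CLOSE_FRIENDS)
               for u in range(NUM_USERS)}
    # Some fraction of users leave for their own reasons.
    active = {u for u in range(NUM_USERS) if rng.random() > initial_departure_rate}
    # Cascade: anyone whose active close friends drop below the threshold leaves too.
    changed = True
    while changed:
        changed = False
        for u in list(active):
            if sum(f in active for f in friends[u]) < MIN_ACTIVE:
                active.discard(u)
                changed = True
    return len(active)

for rate in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"{rate:.0%} leave on their own -> {survivors_after_cascade(rate)} of {NUM_USERS} remain")
```

The exact parameters don’t matter; the point is that when a network’s value to each user hangs on a handful of close contacts, the whole thing turns brittle in a way that a true network effect would not.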

That’s why Facebook and other social networks must paddle furiously to try to add value that scales across all users with a true network effect.  But with advertising and applications and '''lifestreaming''', they haven’t quite found the magic formula yet.

Does current media darling Twitter hold the key to defeating the Dunbar break?  As a combination of social media and broadcasting, it has some intriguing possibilities.  Ask yourself:  Once all of my friends are on Twitter, do I care if anyone else joins?  And would I care if all my friends leave Twitter, while the rest of the world joins?  A lot of people are answering those questions differently for Facebook and Twitter, which is why Twitter is such a popular dance partner these days.

best and brightest = delusional and egotistical

When are people going to realize that the phrase “best and brightest” is only used without irony by those whose egos blind their senses?

David Halberstam used The Best and the Brightest the right way when he wrote about the supposed brain trust in the Kennedy administration that led us into the Vietnam War.  The title has roots going back almost 250 years, when a pseudonymous protestor applied the caustic label to the fools in his government’s ministry.

And yet the phrase remains an irresistible cliché to people who embody the opposite of the literal words.  The CEO of AIG misuses the phrase when he says Treasury must allow over $100 million in bonuses to be paid for the firm’s performance last year.  That’s right, bonuses for year 2008, when they made the decisions that led to a financial disaster that has cost $173 billion of taxpayer money so far.  In explaining this bizarre disconnect between actual performance and justifiable compensation, this delusional buffoon says, “We cannot attract and retain the best and the brightest talent to lead and staff the A.I.G. businesses — which are now being operated principally on behalf of American taxpayers — if employees believe their compensation is subject to continued and arbitrary adjustment by the U.S. Treasury.”

Wall Street workers must be especially immune to irony these days.  Judith Warner says it’s a relatively modern malady to call finance workers the best and brightest, though she seems unaware of the irony deficit involved in the labeling.  On the other hand, commentators from all around the political spectrum seem appropriately aware of the mistrust we should have of any collection of pointy-headed resume polishers.

Are Wall Streeters the most delusional about their own talent and worth?  Well, they at least share the top of the list.  A peripatetic career through law, finance and technology has exposed me to enough people, professions and archetypes to form this thoroughly unresearched hierarchy of vocational delusion and egomania:

Seriously delusional: high finance versions of bankers, investors, lawyers, and consultants. This only applies to those who deal in hundreds of millions if not billions of dollars daily.  Something about dealing with massive amounts of money causes these people to equate their self-worth with the heady figures involved.  Two factors encourage the greatest separation from reality: (1) the abstraction of money from actual value-creating activity makes it easy to misplace the truth, and (2) the impact of punishing hours of soulless work requires delusions of grandeur to justify the sacrifice.

Mildly delusional: doctors, engineers, and fiction writers. An odd grouping at first glance turns out to have important shared traits.  All are involved in the creation and expansion of life, doctors in the most literal sense and writers in an equally important artistic sense, with engineers somewhere in between.  High intelligence and passion are required for success, and at the most successful extremes, significant fortunes can be made.  So it is not uncommon in these vocations to believe that only the best and brightest could thrive in their fields.

Surprisingly humble: venture capitalists and entrepreneurs. Because the most successful in these fields can become as rich as any in high finance, you might be surprised to find humility in their ranks.  However, although these two classes are often at odds (where one needs money and the other supplies it), they share a deep knowledge that success often arises from repeated failure and fortuitous circumstance.  These people know that the best and the brightest lose repeatedly to the persistent and the lucky.

Pathetically self-loathing: journalists and comedians. The best in these fields are every bit as bright as in any other vocation.  But the necessity of constantly examining the foibles of humanity leads to a misanthropic cynicism, which extends broadly to everyone in their view while reserving the greatest contempt for the familiar face in the mirror.

These are obviously broad generalizations subject to many exceptions in every direction.  There’s no financier higher than Warren Buffett, who is famously humble; on the other hand, entrepreneur Larry Ellison is a reputed egomaniac.  And there are people in every category who hold fast to the belief that they exemplify “the best and the brightest” – and to these I say:  You’re right!

losers, killers, drugs, cars

What does it mean to say that TV lost to computers, as Paul Graham suggests?  Graham tries to explain why the Internet “won” in the battle of media convergence, but he begs the question of whether media “convergence” was a valid proposition in the first place, or what it was even supposed to mean.

Take a close look at any claim that one kind of media “lost to” or “killed” another.  Were they ever really in competition?  Sure, every kind of experience is a competitor for our limited time, but I’d call this “attention competition” rather than “replacement competition.”

I might choose to watch basketball rather than baseball in the limited time I watch sports, but basketball didn’t kill baseball:  although basketball had a later start and has grown more in recent decades, both sports businesses are larger than they have ever been before.  Things that compete for my attention do not kill each other, they just give me more choices.

In contrast, I use a safety razor to shave my face.  A safety razor is better than its predecessor, the straight razor, in every way – shaving is faster, cleaner, closer, easier, safer – except possibly price, which I am happy to pay for those benefits.  I am not the only person to make this choice; by and large, the entire shaving market has.  Although some people stick to the straight razor for reasons of fashion, self-esteem, or violent secondary uses, the straight razor has been replaced by the safety razor to such a large extent that you can say the latter killed the former.  Losers and killers in the market can only exist among things that accomplish the same function.

A lot of media is entertainment, and “entertainment” is not a function, it’s a category.  Historically, there has been a huge amount of attention competition within the entertainment category, but much of this competition has only given us more choices in an expanding market for leisure.  Novels did not replace plays, radio did not replace books, movies did not replace radio, TV did not replace movies.  Most if not all of those businesses are larger in absolute terms than they’ve ever been, even though many are smaller in audience share.

On the other hand, vinyl albums nearly got replaced by 8-tracks and then cassette tapes, then all of these lost to CDs, and now CDs are losing to music downloads and streaming on computing devices.  All of those formats simply fulfill the function of delivering music, so here you really can say that a new format killed an old format, and various businesses were winners and losers along the way.

If you’re thinking that it’s not so simple, you’re right.  Let’s go back and examine the entertainment category again.  TV may not have completely replaced radio, but a certain kind of radio show no longer exists in any noticeable volume:  the radio drama, of the type that the Shadow knows.  Were these replaced by TV shows?  Probably.  And for that matter, books and their predecessor scrolls and tablets did replace cave drawings.  So how can you really tell when you have competitors for attention within a category, rather than replacement competitors in a race to be the best format for the same function?

I think one key is to ask whether the format gives rise to a distinct art form.  I don’t mean to make lofty judgments about art, but more mundane observations about senses and brain responses.  (I could say “neurosensory experience” rather than “art,” but that would be replacing pretension with didacticism.)  The novel engages the brain in conveying a narrative in a way that the brain is not engaged in hearing or watching essentially the same story.  Plays deliver a sensory experience that is not captured in TV or movies.  But radio dramas never really attempted to deliver any experience that wasn’t the same experience done better by a TV drama, so when people began enjoying TV shows, that did kill radio shows.  There is arguably no distinct form of art tied to radio – music obviously can be delivered well in many formats – so it’s likely that radio will actually be killed by superior forms of distribution for audio content.

So does TV have a distinct art form?  Well, over the last few decades we have seen the rise of ever-longer dramas that tell stories with a character depth that is quite distinct to TV (especially as opposed to movies or plays).  From Hill Street Blues to The Sopranos, viewers became accustomed to following character development and story lines over years rather than single one-hour episodes.  This “long form passive story viewing” is distinct to TV – and even though you can watch these same TV shows on your computer, that doesn’t mean that the Internet '''killed''' TV.  It matters whether we are talking about the art form of TV, or the delivery vehicle of TV.

See, this is my objection to Graham’s argument.  He says that “Facebook killed TV,” meaning that the social applications made possible by the interactivity of the Internet led to the downfall of TV.  But this mixes and matches the format and the substance; it is an inapt attempt to make a larger artistic and social commentary.  Social applications are an attention competitor for the long form passive story viewing that is on TV today, but neither will kill the other.  To say that TV loses to the computer is only saying that the screen in your house on which you watch TV shows will also be the screen that you use for Facebook – it’s a somewhat interesting comment, but not more interesting than cassettes beating 8-tracks.

My view here is complex and probably not stated very well, so I’ve come up with a simple question to ask when considering the losers and killers that others see:  Are we talking about drugs or cars?

Popular recreational drugs have distinct effects on the brain, and all are competitors for the attention of recreational drug users.  Some people prefer particular drugs, but in the overall drug market, meth doesn’t kill heroin doesn’t kill cocaine doesn’t kill weed – all of these have their audiences because each elicits a distinct effect on the brain.  (For some reason, only heroin seems to be a popular point of comparison for technology in general, Internet addiction, or porn on the Net.  I think comparisons should be more exact:  TV is heroin, social networking is coke, Internet porn is meth, and so on . . . but I don’t really have the time or social position to research this properly.)

Long before safety razors and CDs, cars were the prototypical replacement:  the advent of cars killed the horse-and-buggy, with a literally superior delivery vehicle winning out.  If it’s not this kind of outright replacement, then there shouldn’t be talk of losers and killers.

why O’Reilly can’t read

Tim O’Reilly is one of the few public figures in technology who honestly deserve the term “futurist.”  He’s a vibrant speaker and thinker; every time I’ve seen him talk, he’s set my mind spinning around a universe of amazing ideas.  The future is unevenly distributed because he’s got more of it in his head than most mortals.

This is of course the kind of encomium you give right before you try to criticize someone who knows a hell of a lot more than you do.  Ah well, that’s what blogs are for, aren’t they?

With the launch of Amazon’s Kindle 2, O’Reilly looks into the future and says that the best e-book reader today will be gone in three years.  He makes a comparison to Microsoft’s effort in the mid-’90s to make a portal for delivering proprietary information over the Internet.  Tim quite correctly told Microsoft’s then-CTO that the open standards of the World Wide Web would provide a far superior means for dissemination of information, so that publishers that did not embrace the Web would get left behind, along with the proprietary platforms that compete with the Web.

He was right then but he’s wrong now to extend the analogy to books.  I don’t think O’Reilly understands books, or at least, he’s chosen to hide his understanding for polemic purposes.  I realize that I’m entering into serious chutzpah territory here – not only does this man see the future in ways I can’t imagine, but he’s been a successful book publisher for over 20 years.  He’s forgotten more about books than I’ll ever know.  And maybe that’s the problem.

O’Reilly apparently looks at books as just an arbitrary format for delivering information. He makes the question of the Kindle simply one of determining the most efficient means of delivering information over an interconnected web.  And so you can look into a future of free content and open standards and conclude that there’s no room for a proprietary e-book reader.  Supposedly everyone will be reading books on their laptops and iPhones and the only e-book readers that will survive will be computing devices that adhere to the same open standards as these other devices.

But that vision of the future ignores the properties of the book as a format.  For some kinds of books, the format really is only an obstacle to efficient delivery of information.  A perfect example is the book O’Reilly cites in his article, iPhone: The Missing Manual.  The purpose of this book is to describe how to use an iPhone; it’s not meant to be read cover-to-cover but to be dipped into in order to retrieve information.  This is not a book so much as, as the title says, a manual.  Virtually all of the books in the O’Reilly catalog have this property – they are manuals for delivering information, not books that deliver a narrative.

O’Reilly would disagree with the contention that technical manuals can’t be narratives, so let me be clear that I’m not saying that.  I’m saying that even if a manual has the form of a narrative, delivering a narrative is not the purpose of the manual.  The purpose of the manual is to deliver information.  As opposed to what I’ll call a “book” – the purpose of a book is to deliver a narrative.  You may also get lots and lots of information, but the primary goal of the reader is to experience a narrative, not to simply learn the information.

Sure, it’s a cheap trick to try to win an argument by defining terms, but there’s a reason it should work here.  Say that manuals are works that are read to receive information – I agree these will need to be on open standards, and I agree that proprietary e-book readers for these will not exist.  And say that books are works that are read to enjoy a narrative – I think this activity has special features that will support proprietary e-book readers.  So you see what happens if I am right?  All manuals will disappear into the Intarwebs, and the only things left on e-book readers are the things that I am calling books!

And what are those “special features” for reading a narrative for narrative’s sake?  Well, the dry terms are these:  form factor, display characteristics (electronic ink), weight, weatherproofing, keyboard layout and peripherals (or lack thereof).  And the magic of books are these:  the feel in your hand, the pages racing under your fingertips, reading all day curled up on the couch, reading in the bath, reading with a flashlight under the covers, reading until you disappear into the book.  All right, finally I’ve tipped my hand, you might have guessed it all along:  I love books.  I’m not alone:

I’m very fond of paper books. I have run out of wall space in my house for bookshelves. One of my hobbies is to collect old editions of bestselling books from bygone eras (many of them now largely forgotten) to find out just what people in that time found so compelling. I find that many “great” books have a timeless quality, but the second tier down, in which grand human themes stand out less than the time-bound peculiarities of an age, provide a fascinating window onto the past.

That’s Tim O’Reilly, from the same Amazon interview linked above; according to Wikipedia he was a classics major, and it shows.  I don’t think either of us believes that people will stop loving paper books, just as many people still enjoy plays that have been produced for centuries.  I guess I’m a more modern bibliophile than O’Reilly in that I believe an electronic format can retain some of that same magic, and less of an openness ideologue in that I think open standards will need more than three years to surpass the best proprietary e-book reader available today.

bailout 2.0

I complained about auto industry bailouts and was chastised.  Thomas Friedman says VCs should get bailed out and VCs say No No No.  Then I realized that No is not enough of an answer.  You have to propose an alternative, preferably one without the government deciding who gets the money.  That’s not how we do it in the S.V., yo. (I kept expecting Sarah Lacy to say that, but since she didn’t, I did.)

There are few who are ideologically strong enough to say that we shouldn’t have any bailout for anyone for any reason.  I’m not one of them.  This crisis is too enormous; adherence to ideology now can only be accomplished through disconnection from reality.  I do think that government bailout funds should go to stimulating the economy.  But as someone with a Silicon Valley belief in economic growth through innovation, I just don’t believe that the federal government will make the wisest choices about how to spend bailout money.

So who should make those choices?  Wait for it . . . wait for it . . .

You should, of course.

That was too easy, but let’s think hard:  Is this any worse an idea than anything that’s on the bailout agenda now?  Web 2.0 may be dead, but the underlying values of participation and collective intelligence are enduring concepts that will continue to pay off in the future.  Sure, the wisdom of crowds has many exceptions and qualifications, but I’ll gladly bet my tax dollars that the crowds are wiser than Washington.

Let’s say the government designates $100 billion for a crowdsourced bailout.  They just have to set up services to collect and analyze the input from all the social nets out there.  Twitter users could nominate worthy recipients with a #crowdbail hashtag.  Mobile camera phone users could use barcode scanners to submit deserving products, and location based services to tag retail stores worth bailing out.  There would be dozens of ways to submit suggested recipients on the web.  In order to limit fraud, the payouts could be in the form of tax credits or “consumer purchase credits” (CPCs).  I just made up CPCs – government credits that a seller can apply to reduce the consumer purchase price of goods.  For example, a business that received $100 in CPCs can take 100 items that are usually sold at $2, and sell these for $1 instead, receiving $1 in cash from the government for each item sold.
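
For the spreadsheet-inclined, here’s that CPC arithmetic as a few lines of code – purely illustrative, with hypothetical function and parameter names, using the same numbers as the example above:

```python
# A back-of-the-envelope sketch of the hypothetical "consumer purchase credit" (CPC)
# idea above.  All names and figures are made up for illustration only.

def cpc_redemption(cpc_allocation, sticker_price, discount_per_unit):
    """Return (units that can be discounted, price the consumer pays per unit)."""
    units_covered = int(cpc_allocation // discount_per_unit)
    return units_covered, sticker_price - discount_per_unit

# The example from the text: a business gets $100 in CPCs, knocks $1 off items
# normally sold at $2, and is reimbursed $1 by the government per item sold.
units, consumer_price = cpc_redemption(cpc_allocation=100, sticker_price=2.00, discount_per_unit=1.00)
print(f"{units} items sold at ${consumer_price:.2f} each; seller reimbursed ${units * 1.00:.2f}")
```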

Would this work?  Is it too complicated, too stupid, too difficult?  Look, this is just a half hour’s worth of musing, but I believe that any objection you can raise can be fixed with further work, at least to this standard:  it’s not worse than the government bailouts going into effect now, and it’s certainly not worse than some being asked for.  Something along these lines is in the spirit of what Silicon Valley can do, and I’d be thrilled to see more serious efforts to design something that fits with the ethos of our region.

For example, I’d bet that the folks at Virgance could come up with something that would make sense, and they’d probably be glad to implement it for less than 1% of that bailout amount.

man bites App Store

A study claiming that most iPhone apps fail is being picked up by credible news outlets.  This is a classic abuse of the “Man Bites Dog” principle:

When a dog bites a man, that is not news, because it happens so often. But if a man bites a dog, that is news.

The fact that most new efforts fail is not news.  In recent years, we’ve seen amazed reporters discover that corporate and brand Facebook apps fail as FB developers struggle.  Shockingly, most businesses fail in virtual worlds.  Although small business failure rates are often exaggerated, the real numbers show that most startups fail.  Without going to the trouble of actually doing research, I will make the following guesses:

  • most Google advertisers fail.
  • most blogs fail.
  • most eBay sellers fail.
  • most television shows fail.
  • most movie producers fail.
  • most book authors fail.
  • most cave drawings fail.

I look forward to the startling exposés crafted by hardworking reporters on these topics.

Sarcasm aside, it’s interesting to consider the underlying assumptions of those who would find news in high failure rates.  If these stories really are about man biting dog rather than vice versa, then the assumption must be that there is a new means of business delivery that ensures success for the majority of its users.

That of course is a flawed assumption.  There is not now and never has been any way of delivering new business efforts that guarantees success in a free market.  Apple does not make businesses successful, Facebook does not make businesses successful, even mighty Google does not make businesses successful.  Instead, each of those companies has enabled some businesses to become successful – which is just another way of saying that they’ve given most businesses a new way to fail.

So the ultimate test for these companies is not whether they magically improve failure rates for others.  The test is whether the company itself operates a profitable business.  Apple and Google have passed that test with flying colors, Facebook has yet to do so.

TV’s Napster moment

I really don’t believe we’re going to see this much stupidity again so soon after the exact same stupidity triggered the death of an entire industry.

But I’m getting ahead of myself, let me recap very briefly and excessively linkfully and painfully parenthetically:  Boxee has this cool product that enables you to watch Internet video streams on your TV.  Another company, called Hulu (f.k.s.a. ClownCo – the “s” stands for snarkily), has deals with a lot of TV networks to stream TV shows on its website.  Both companies he-said-she-said that Hulu content is now being blocked from Boxee.  The obvious read is that the '''content providers''' (I just invented triple-quotes to indicate sarcastic air quotes!) that are partnered with Hulu demanded this blocking to protect high-priced distribution channels.

It’s Napster all over again, replaying from the music industry into the TV industry.  For those with short memories:  In probably the worst decision in a relentless trail of self-destruction, record labels had two choices when they saw the early popularity of the original Napster (not today’s incarnation) service for music file sharing:  make a deal with Napster or shut it down with litigation.  The labels chose to destroy the popular service, which turned out to be like trying to prevent a flood by blowing up the nearest dam.  Napster alternatives quickly sprang up that were impossible for the labels to deal with, and the rest is history, just like the labels will be.

Santayana said “Those who cannot remember the past are condemned to repeat it.”  This used to be a mournful statement, because it’s actually pretty hard to learn from history – there’s just so much history, no one can really hold all of the lessons of it in one head.  So you’re doomed to make mistakes that lots of people have made before.  And then by the time you’re old enough to remember a lot of history, you’re starting to feel too old to do anything about it.

But one wonderful aspect of our accelerated modern lives is that history happens in ever-shorter cycles.  I doubt there’s anyone in a decision-making capacity at the TV networks who doesn’t remember the Napster lesson very well.  And I can’t believe they’d make the same mistake with that knowledge.  People can’t be that dense, can they?  A deal is going to get done here, if not with Boxee then maybe with BitTorrent.  I’m going to lose faith in humanity if that doesn’t happen.

[Edited to correct slight misquote of Santayana.  <sigh> Those who rely on random websites for quotes are doomed to misquote.]

“All this has happened before, and all this will happen again.”

With all the recent coverage of Twitter’s financing, and earlier news about the Twitter-Facebook acquisition dance, you might think that the two are destined to compete to the death.

Some say they’re already competitors, that Facebook will kill Twitter, or that they are at least competitors for developer mindshare.  They are certainly competitors for media mindshare – the lower half of this chart shows that news coverage of the two has become nearly equal.

[Chart: Twitter vs. Facebook – search traffic (upper half) and news coverage (lower half)]

Ah, but what about that upper half?  Search traffic for Twitter doesn’t even register compared to Facebook.  Will it really take Twitter 36 years to catch up to Facebook’s active user base? Is Twitter really even in the same game as Facebook?  There’s a hint in the #1 reason that Todd Chaffee invested in Twitter: because it’s “open.”

I like to think of Facebook and Twitter not as direct competitors, but as classic heroes of competing ideologies.  They represent yet another chapter in that old Internet story, The Walled Garden and the Open Future.  In the primary exemplum, America Online introduced the Internet to the masses, delivering a “safe” experience that attempted to control all content delivery to the end users.  AOL was eventually swamped by services that aggregated more open content (Yahoo), excelled in specialized commerce experiences (eBay, Amazon), and found massive monetization through key horizontal services like search (duh, Google).

The moral of the story is supposed to be that the open future always wins in the end.  But the moralizers conveniently forget that the story keeps repeating itself.  The walled garden is replanted again and again, and the open future is always in the future.  And people make money at both ends, and people fail at both ends.  Let’s not forget that early AOL shareholders saw the company sell at $182 billion, and let’s not ignore former heroes Yahoo and eBay struggling to remain relevant today.  Amazon and Google look like winners today, but they’ll have their rough patches too – when the game lasts forever, the only prize is that you get to compete for your life over and over again until you die.

With that cheery thought, let’s look at Facebook vs Twitter again.  Facebook fills the role of a classic walled garden experience, notwithstanding their apps platform, which seems more of a concession towards prevailing tech ideology than a coherent strategy.  Twitter is part – only part – of the competing ecosystem of open web apps.  Take Twitter together with Flickr, WordPress, WidgetBox, glue it all together with some OpenSocial and OpenID – and there you have a Facebook replacement in the classic Open Future:  it doesn’t all quite hang together yet, but someday it will – one or more of these services will become a huge new business, and Facebook will shrivel to a shadow of its former self (though early shareholders will get a chance to enjoy a huge liquidity event before then).  The open futurists will declare victory, but it’s just another battle in a neverending war.