Our Gladwellian Future

[Image: The Tipping Point book cover — "This is a great book."]

Gladwellian stories take a common assumption and then overturn it with data. As the cost of generating data falls, and as inaccurate data proliferates, the genre finds itself in an arms race: deciding what counts as enough data, and who is really right, now matters as much in publishing as it does in commerce.

My first-edition hardback copy of The Tipping Point has many dog-eared pages and plenty of water stains from reading it at the beach during our honeymoon in 2000. It is a great book. Still, Gladwell, and the similar TED-esque style of storytelling, have recently received some negative attention. I'm no publishing expert, but the formula he, and many other authors I really enjoy, seem to follow is:

Gladwell(Story) = Common Assumption (X) + Data Driven Reality (Y)

In both Raeburn's critique of Gladwell and Thomas Frank's "TED Talks are Lying to You," we find that Gladwell's storytelling method has itself become subject to Gladwell's own formula:

Gladwell(Gladwell(Story)) = Common Assumption (Gladwell(Story)) + Data Driven Reality (Y2)

This results in the following conundrum:

  • A Gladwellian story overturns a Common Assumption with data, creating a new prevailing Common Assumption 2.
  • New data then overturns the Gladwellian story, and with it Common Assumption 2 (a toy sketch of this loop follows below).
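To make the recursion concrete, here is a minimal Python sketch. It is entirely my own illustration rather than anything from Gladwell or his critics; the assumption string and the data strings are hypothetical placeholders. Each pass through the loop applies the formula, and the resulting story becomes the Common Assumption that the next data set overturns.

```python
# Toy model of the Gladwellian recursion described above.
# Everything here is illustrative: the function, the example assumption,
# and the example data strings are hypothetical.

def gladwell(common_assumption: str, data_driven_reality: str) -> str:
    """Gladwell(Story) = Common Assumption (X) + Data Driven Reality (Y)."""
    return f"'{common_assumption}' turns out to be wrong: {data_driven_reality}"

assumption = "big effects need big causes"          # Common Assumption (X)
arriving_data = [
    "small triggers can tip epidemics",             # Data Driven Reality (Y1)
    "the tipping-point studies fail to replicate",  # Data Driven Reality (Y2)
]

for reality in arriving_data:
    story = gladwell(assumption, reality)
    print(story)
    # The published story becomes the Common Assumption the next round overturns.
    assumption = story
```

As long as new data keeps arriving, the loop never settles, which is the Red Queen dynamic discussed below.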

Common Assumptions are exactly that: common. There is no shortage of them. At the time of Gladwell's initial writings, creating 'Data Driven' realities to contradict those assumptions was a costly endeavor, but the cost to do so has dropped significantly. Ten years after reading The Tipping Point, my ability to create data to corroborate a claim, or to get a paper published, is greater than it has ever been. I can hop on oDesk and contract with people globally to create data or simply parse existing data sets. And that assumes we base our Gladwellian story on newly commissioned data; as this Economist article points out, science itself is turning out results that are harder and harder to corroborate:

  • A rule of thumb among venture capitalists is that half of published research cannot be replicated.
  • Amgen's efforts could reproduce only 6 of 53 landmark cancer studies.
  • Negative results now account for only 14% of published papers, down from 30% in 1990.

There is no shortage of Common Assumptions waiting to be overturned by the application of data, but who is to say that the data we are using is correct?  It becomes a battle of data sets and of the author's integrity.  We've entered a Red Queen race, where each better data set is compelled to introduce the next generation of a story.  One author's story, supported by data, can stand as the dominant narrative only until it is overturned by a larger or better data set, which may carry its own competing or supporting narrative.
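One way to picture that race is the sketch below, again my own illustration: the stories and the data set quality scores are made up. A new story displaces the dominant narrative only if it arrives with a larger or better data set.

```python
# Toy sketch of the Red Queen race between data sets. All claims and
# numbers below are hypothetical placeholders, not real studies.

from dataclasses import dataclass

@dataclass
class Story:
    claim: str
    dataset_quality: int  # stand-in for size, rigor, replicability

published_over_time = [
    Story("small triggers tip epidemics", dataset_quality=10),
    Story("those tipping-point effects don't replicate", dataset_quality=40),
    Story("the replication attempts were underpowered", dataset_quality=25),
]

dominant = None
for story in published_over_time:
    # A new story displaces the incumbent only if it is backed by a
    # larger or better data set.
    if dominant is None or story.dataset_quality > dominant.dataset_quality:
        dominant = story
        print(f"New dominant narrative: {dominant.claim!r} "
              f"(data set quality = {dominant.dataset_quality})")
```

Note that the third, weaker data set changes nothing; the incumbent narrative holds until something better comes along.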

About flybrand1976

Find me on twitter @flybrand.