Earlier this month, Dan Frommer, a writer for Silicon Alley Insider, made a prediction right before Steve Jobs’s WWDC Keynote address.
“We also expect Jobs to unleash some sales milestones; to some extent, his keynotes are like mini earnings calls,” Frommer wrote. “Specifically, we think it’s likely he’ll announce that Apple has sold its 5 billionth song via iTunes.”
The prediction was specific, easily understood, and reasonable. But, more importantly, it was wrong. Jobs made many announcements during that address, but iTunes’s 5 billionth sale wasn’t one of them.
One could argue that Jobs’s keynote addresses are primary targets for all sorts of predictions, many of which never turn out to be true. And normally Frommer’s wrong guess would have been drowned out by the thousands of other blog posts, articles, and Twitter updates hitting the web that day.
But perhaps unbeknownst to the writer, someone was keeping score. A U.K. man named Nigel Eccles made a note of the prediction and its subsequent failure to come true. It was eventually published on Pundit Watch, a site that launched on June 6.
Several media critics have noted that punditry is one of the few professions in which its participants aren’t punished for being terrible at their jobs. Many of the pundits who predicted we’d find WMDs in Iraq, for instance, are still pulling in multi-million dollar contracts while continuing to impart their widely-discredited wisdom to the masses.
Last year, two researchers, Kesten C. Green and J. Scott Armstrong, gave eight “conflict scenarios” to 106 experts — mostly business professors — and asked them to predict the outcomes. They then turned around and gave the same scenarios to 169 students who were not considered experts. The results, published in the journal Interfaces, showed that predictions made by “experts” were only slightly better than those made by the general population.
This wasn’t the first time that one of the researchers, Armstrong, had dived into such a topic. In 1980 he published a paper titled “The Seer-Sucker Theory: The Value of Experts in Forecasting.”
Pundit Watch takes this theory and applies a modified version of it to the real world. “The Seer Sucker Theory shows that no matter how much evidence there is that proves seers (psychics) don’t exist, there will always be suckers who believe that they do,” Eccles told me in a phone interview yesterday.
To understand how Pundit Watch works, one must first consider the site with which it’s affiliated: Hub Dub.
In previous years, Eccles had worked on “betting exchange” sites, where users made predictions for certain scenarios and then bet for or against them. He began to notice that the way he consumed news closely followed this model; he viewed it as a series of outcomes and found himself making bets on how events would unfold. “I decided it was much more useful instead of reading the polls or taking analysis that I should just look at what the odds were,” he said. “The original idea was let’s do something where users could follow news stories, trade predictions, and in the process they could produce exciting markets.”
He teamed up with three others and, after raising a seed round of investment, launched Hub Dub in January.
When a new user signs up, he’s given $1,000 in play money. He can then take that currency and bet on user-created questions or write a question of his own. Questions span a whole range of categories — movie opening weekends, gas prices, election results — and the amount of money at stake hinges on how many people are betting on a particular outcome. Most of the questions are U.S.-centric; though the company is based in the U.K., they realized that the States offered a much larger, “sophisticated” market, and over 80% of their current users live in America.
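The article says only that the stakes hinge on how many people bet on a particular outcome. One plausible reading is a simple parimutuel-style pool, where the whole pot is split among winners pro rata by stake; the sketch below assumes that model for illustration (it is not necessarily Hub Dub’s actual mechanism, and the names and numbers are invented):

```python
# Hypothetical parimutuel-style payout sketch in play money.
# This is an assumed model, not Hub Dub's documented mechanics.

def payouts(bets, winning_outcome):
    """bets: list of (user, outcome, stake) tuples.
    The entire pool is divided among winning bettors,
    proportionally to each winner's stake."""
    pool = sum(stake for _, _, stake in bets)
    winners = [(user, stake) for user, outcome, stake in bets
               if outcome == winning_outcome]
    winning_stake = sum(stake for _, stake in winners)
    if winning_stake == 0:
        return {}  # nobody backed the winning outcome
    return {user: pool * stake / winning_stake
            for user, stake in winners}

# Invented example: $500 pool, $200 of it on the winning side.
bets = [("alice", "yes", 100), ("bob", "no", 300), ("carol", "yes", 100)]
print(payouts(bets, "yes"))  # {'alice': 250.0, 'carol': 250.0}
```

Under this pooling rule, the more play money that piles onto the losing side of a question, the bigger the reward for those who called it correctly.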
“So about three months ago we were thinking about all the journalists and pundits out there and saying to ourselves, ‘I wonder how they would perform if they were on the site?’” Eccles said. “So we decided that instead of inviting them on, let’s follow their predictions and record it on Pundit Watch. We did that through stealth for a month, and then at the end of the month we launched it and said, ‘Hey, we’ve been following your predictions, and here is your performance.’”
The three pundit categories — technology, politics, and celebrity gossip — are assigned to “category editors” who are supposed to follow the pundits very closely, making note of any forecasts they make. When Chris Matthews said that Hillary Clinton might wait until the Democratic Convention to drop out of the race, a Pundit Watch editor was there to record it. And when she dropped out well before the convention, that same person returned to the site to deliver the verdict — case closed.
So how do the nine pundits they chose to follow measure up? Not always so well. Michael Arrington, founder of TechCrunch, has only seen 14% of his predictions come true (though technically Pundit Watch is following all the writers on his site). Pat Buchanan, surprisingly enough, is currently running four for four; all his foretellings have come to fruition.
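The scorekeeping itself is simple arithmetic: a pundit’s success rate is just the fraction of recorded predictions that came true. A minimal sketch (not Pundit Watch’s actual code; the records below are invented to roughly match the figures in the text):

```python
# Illustrative scorekeeping only; the prediction records are made up.

def success_rate(predictions):
    """Fraction of recorded predictions that came true.
    predictions: list of booleans (True = prediction held)."""
    if not predictions:
        return 0.0
    return sum(predictions) / len(predictions)

# Hypothetical records approximating the article's numbers.
arrington = [False] * 6 + [True]       # 1 of 7, about 14%
buchanan = [True, True, True, True]    # four for four

print(f"{success_rate(arrington):.0%}")  # 14%
print(f"{success_rate(buchanan):.0%}")   # 100%
```

The subjectivity Eccles mentions below lies not in this arithmetic but in deciding which statements count as predictions in the first place.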
“Some of the pundits we’re tracking are saying, ‘Well it’s really subjective, and I don’t think I made that prediction,’” Eccles said. “What we’ve said is we are trying to be as objective as possible; we’re taking very much a reader’s view. Some of the pundits said, ‘I didn’t make that prediction,’ and we replied, ‘Well, you reported a rumor that that was the case.’ We take the view that reporting a rumor is quite similar to making a prediction.”
Eccles’s own category is technology, and he has noticed that pundit forecasts tend to ebb and flow; before major conferences there are huge spikes, followed by inactive lulls once the event has passed.
Unlike many media critics, Eccles doesn’t view a wrong prediction as necessarily a bad thing. In his philosophy, punditry is more of a game than a serious stab at measured analysis. “To me as a reader, I want to reach someone with a strong opinion who makes interesting points and does so in a clear fashion,” he said. “These are the guys who are prepared to do that…I think the fact of the matter is — and we can’t admit it — is that a lot of news is entertainment, and I would personally much prefer to read an entertaining journalist who’s wrong a lot of the time than one who’s right most of the time and is quite dry and dull.”
I asked Eccles about the implications that such a site could have over the long term, especially if it became more successful. If these pundits became more aware that someone was keeping a quasi-scientific score of their predictions, would they be more careful and reluctant to make them?
“My biggest fear is that I force a lot of entertaining journalists to turn into dull, caveat-driven, detail-oriented journalists who wouldn’t be that interesting, would make fewer predictions, and would only make predictions if they were 100% sure [of their outcomes],” he replied. “I think that would be a really unwanted result…To some extent I think of it as an experiment. What will happen? Will the journalists themselves change how they perceive things, or will it change how the readers perceive them? I don’t know what the answer is.”
For his part, a low success rate for a pundit’s forecasts won’t make Eccles any less likely to read or listen to the person’s opinions. “The classic example is TechCrunch. They’re at the bottom of our leader board and yet I never for a moment thought I wouldn’t continue reading it. Because I find it very entertaining writing with very opinionated mixed predictions. Though usually they’re wrong.”