Internal prediction markets enable colleagues to wager on the fate of crucial projects and the success of products in the pipeline.
By Chris Taylor, MIT Technology Review
The need to predict the future, as exciting as it sounds, crops up in corporate life in terribly mundane ways. Case in point: large videogame companies need to know where to put their marketing dollars many months before they complete their games. Inevitably, some games will be stinkers, hardly worth the investment of an ad campaign. But how do you know which ones?
Here’s how one very large videogame company used to guess the answer: its marketing people would predict the score their games in progress would garner on the website Metacritic, which aggregates game reviews. But why would the marketing people know more than the game’s developers?
Three years ago, a startup called Crowdcast suggested a different tactic. Why not take hundreds of your lowliest employees, the ones in the trenches who are actually making and testing these games, and ask them what they think the Metacritic scores will be? Better yet, why not give them each $10,000 in play money and ask them to bet on the outcome? Let them accumulate a pot of pretend wealth if they’re right. Turn game marketing prediction into, well, a game.
To the executives’ delight, the employees’ Metacritic predictions turned out to be 32 percent more accurate than the marketing department’s. More disturbingly, accuracy was inversely proportional to a player’s place in the hierarchy. The closer you got to the C-suites, the less of a clue you had—and the lower your pretend wealth in Crowdcast’s game.
That embarrassing factoid might explain why this particular videogame company, like many Crowdcast customers, wants such stories to remain anonymous. “It’s kind of experimental,” explains Mat Fogarty, Crowdcast’s sardonic British CEO, “and it may undermine the credibility of their awesome management.”
Indeed, anonymity and uncomfortable revelations in the boardroom are Crowdcast’s stock-in-trade. The San Francisco startup already boasts clients and partners as diverse as Hallmark, Hershey’s, and Harvard Business School. It is built on the back of years of research into how internal prediction markets work. In such a market, managers ask employees questions about the future of their product and let them bet on the answers, without knowing who bet what. The results can be scarily accurate.
[Image: The Crowdcast platform]
In September, Crowdcast ran a prediction market for a large American car company, one that normally runs its designs for new autos through a car clinic—a lengthy and expensive kind of focus group of buyers. Crowdcast’s project involved asking engineers and factory supervisors what they thought the outcome of the car clinic would be.
Forty questions were in front of these ground-floor experts at any given time during the market. For example: What percentage of buyers will list this car’s dashboard as its most important feature? The trial market was so accurate that the car company will be trying another in January. The auto giant now has a new way to cut costs: use these prediction markets instead of expensive car clinics at least a third of the time.
Crowdcast calls the space it’s in “social business intelligence” rather than crowdsourcing. “Within your organization, there are people who know true future outcomes and metrics,” says Leslie Fine, Crowdcast’s chief scientist. “When is your product going to ship? How well will it do? Normally, crowdsourcing asks for creative content. We’re asking for quantitative opinions.”
Put that way, it sounds a lot more respectable than “get your employees to play a kind of fantasy football with sales and shipping dates.” But make no mistake—that’s actually what Crowdcast does. That used to be a hard sell, Fogarty admits: “It seemed weird to be talking about playing games at work and using Monopoly money.”
But the gaming approach has gotten easier as more executives have heard about large-scale prediction markets like Intrade, which accurately forecast the results of the 2008 and 2010 elections, and the Hollywood Stock Exchange, which predicts the success or failure of major movies. The current craze for “game dynamics” in apps like Foursquare and SCVNGR, which let users rack up points for various tasks, has also helped drive the idea home. Crowdcast differs from other prediction markets, however, in that users don’t get to bet on any outcome they can name. Instead, the company runs closed markets in which top executives pose the questions.
Part of what’s fascinating about Crowdcast’s approach is how wildly inequitable it is—much like capitalism itself. The democratic, politically correct thing to do would have been to hand those auto employees survey forms, and count all their voices equally. But that would not have given the more prescient ones a louder voice. “We’re trying to create a meritocracy of information,” says Fine, who spent more than a decade studying prediction markets at HP Labs and holds several patents in the field. In theory, if you make bad bets, you go bankrupt. (In practice, these virtual bankruptcies happen rarely, and Fine can provide a back-end bailout by gifting every employee an extra $10,000, say.)
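The article doesn’t describe the pricing mechanism behind Crowdcast’s markets, but the dynamic it sketches (prices that give prescient employees a louder voice and drain play money from bad bets) is commonly implemented with an automated market maker such as Hanson’s logarithmic market scoring rule (LMSR). Below is a minimal, purely illustrative sketch of LMSR for a yes/no question; the class and parameter names are invented for this example and are not taken from Crowdcast:

```python
import math

class LMSRMarket:
    """Logarithmic market scoring rule for a binary question.

    A standard automated market maker for prediction markets.
    Traders buy shares in YES or NO with (play) money; the
    current price of YES doubles as the crowd's implied
    probability that the outcome occurs.
    """

    def __init__(self, liquidity=100.0):
        self.b = liquidity  # higher b = prices move more slowly per bet
        self.shares = {"YES": 0.0, "NO": 0.0}

    def _cost(self, q_yes, q_no):
        # LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))
        return self.b * math.log(math.exp(q_yes / self.b) +
                                 math.exp(q_no / self.b))

    def price(self, outcome="YES"):
        # Instantaneous price = implied probability of the outcome.
        e = {k: math.exp(v / self.b) for k, v in self.shares.items()}
        return e[outcome] / sum(e.values())

    def buy(self, outcome, n_shares):
        # Charge the trader the change in the cost function.
        before = self._cost(self.shares["YES"], self.shares["NO"])
        self.shares[outcome] += n_shares
        after = self._cost(self.shares["YES"], self.shares["NO"])
        return after - before

market = LMSRMarket(liquidity=100.0)
print(round(market.price("YES"), 2))  # → 0.5 (no information yet)
cost = market.buy("YES", 50)          # a confident employee bets YES
print(round(market.price("YES"), 2))  # → 0.62 (forecast moves up)
```

Because each bet has a price, an employee who keeps buying the wrong outcome steadily loses play money and moves prices less over time, while consistently good forecasters accumulate wealth and influence; the liquidity parameter `b` controls how far any single bet can move the market’s forecast.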
What crowdcasting proves is that even play money talks—and losers walk. Studies show there’s no practical difference between using real money and play money in this context—both equally represent a person’s intention. It turns out you put your money where your mouth is, even if it’s Monopoly money. And as much as the best players like racking up millions of fake dollars, here’s the answer participants most frequently give as the reason they enjoy the game: “I believe management is listening to me through this tool.”
Management, however, doesn’t always like what it’s hearing. The biggest blow of Crowdcast’s young life came when it ran a trial market for a consumer goods company, one that makes a popular household lubricant. The market was asked about sales figures, new customer acquisition, and the price of oil (vital for lubricants) at the end of the month. In every metric, the market was more accurate than the company’s official forecast. “We nailed it,” says Fine. After presenting their results, “we were high-fiving each other.”
But Crowdcast didn’t win the contract—because it had failed to connect with the head of sales, a 20-year veteran of the company who simply ignored the evidence. As much as it believes math should win any argument, the startup is learning the importance of the personal touch. If the marketing department says a product will ship on time, but engineering is more bearish, some boardrooms may prefer to cling to the marketing fantasy. Crowdcast’s next task, therefore, is figuring out how to make an eminently disruptive tool look less threatening to “awesome management.”
Copyright Technology Review 2010.