We allow aggregated peer reviews to guide our purchasing decisions. We trust in index funds to glean the collective intelligence of a market full of investors. We crowdsource, crowdfund, crowdtest, crowdcreate.
And, lately, we’ve begun to crowdvote, in growing numbers.
The most glamorous application of this majority-rules form of decision-making might be in the appeals to elect—via web or mobile app—our favorite crooner on “The Voice” or the worthiest cha-cha on “Dancing With The Stars.”
But a more practical use of crowdvoting is being developed for the policing of online gaming communities.
This is the topic of new research by Michael Wagner, an assistant professor of operations management at the University of Washington Foster School of Business. Wagner has modeled the most effective way to construct a crowdvoting program for ruling quickly, accurately and efficiently on complaints about offensive online behavior.
“Crowdvoting can allow organizations to save costs on a more agile and responsive enforcement system,” he says. “And customers who volunteer to vote feel more engaged in the online community.”
Only in the last couple of years have the companies behind two massive multiplayer platforms, Riot Games’ League of Legends and Microsoft’s Xbox Live, pioneered this new application of crowdvoting. Both companies host tens of millions of member gamers. Their demographics span many nations and cultures as well as ages—including younger players of sometimes limited polish and perspective.
With players pitted against unknown competitors, these large and diverse communities tend to generate a steady stream of complaints over violations of “terms of service” agreements: the rules of etiquette.
So Wagner’s version of crowdvoting is not about policing, per se, but rather delivering judgment. Collectively.
Individual players already file complaints about improper behavior—cheating or using an offensive username, for example. Assessing and ruling on each complaint would normally be the job of a paid and trained employee of the gaming firm.
But with a crowdvoting system, this judgment can be outsourced to the very gamers who pay to be part of the community. These volunteers are easily recruited, Wagner says, by offering badges or in-game incentives in the form of “levels and loot.” Cost to the company? Nearly nothing.
“Members of online communities who volunteer for crowdvoting programs develop a stronger sense of ownership of the community, which makes them more likely to keep playing,” he says. “It seems to be a win-win situation for both gamers and the company that hosts them.”
But how many volunteers does it take to match the expert judgment of a trained staffer?
Don’t overtax the crowd
Crowdvoting of this kind asks of its peer reviewers a simple yes-or-no decision: does the behavior violate the terms of service? But this question is asked repeatedly by a community of millions.
So Wagner’s contribution is to discern the optimal format and number of voters required to minimize the strain on volunteers in the online community. He has modeled both simple majority votes and votes weighted by the accuracy of voters—established by their performance on a baseline test.
His analysis reveals that for simple majority crowdvoting, an odd number of voters is preferred to prevent ties, and at most 17 voters are needed to reach the correct ruling 99 percent of the time.
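The article doesn’t report the accuracy of an individual volunteer, so the sketch below assumes a hypothetical 75 percent (the same threshold Wagner uses for the weighted scheme) and computes, Condorcet-jury style, the exact probability that an odd panel’s simple majority gets the ruling right:

```python
from math import comb

def majority_accuracy(n: int, p: float) -> float:
    """Probability that a simple majority of n independent voters,
    each correct with probability p, reaches the correct ruling.
    Requires an odd panel so no tie is possible."""
    assert n % 2 == 1, "use an odd panel to rule out ties"
    k_min = n // 2 + 1  # smallest number of correct votes that wins
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# With a hypothetical individual accuracy of 75 percent, the panel's
# collective accuracy climbs steadily with its size:
for n in (5, 9, 17):
    print(n, round(majority_accuracy(n, 0.75), 4))
```

Under this assumption a 17-voter panel lands in the neighborhood of 99 percent accuracy, while a 5-voter panel is right only about nine times in ten, which is why the larger panel is needed when voters are unvetted.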
For accuracy-weighted crowdvoting, an even number of voters works fine, since the variable individual weights all but rule out a tie. And by “pruning” the pool down to its savviest voters, those who demonstrate at least a 75 percent accuracy rate, it takes only 5 to arrive at the objectively correct decision 99 percent of the time.
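The article doesn’t spell out Wagner’s exact weighting rule; a standard choice is to weight each voter by the log-odds of their measured accuracy, log(p / (1 − p)), so sharper voters count more. The sketch below, with hypothetical panel accuracies, enumerates every right/wrong pattern to compute the exact probability that such a weighted vote is correct:

```python
from itertools import product
from math import log

def weighted_vote_accuracy(accuracies: list[float]) -> float:
    """Exact probability that an accuracy-weighted vote is correct.
    Voter i is right with probability accuracies[i] and carries the
    log-odds weight log(p / (1 - p)). Enumerates all 2**n patterns
    of right/wrong voters; ties count against the panel."""
    weights = [log(p / (1 - p)) for p in accuracies]
    total = sum(weights)
    prob_correct = 0.0
    for pattern in product([True, False], repeat=len(accuracies)):
        prob = 1.0            # probability of this exact pattern
        correct_weight = 0.0  # weight carried by the correct voters
        for right, p, w in zip(pattern, accuracies, weights):
            prob *= p if right else (1 - p)
            if right:
                correct_weight += w
        if correct_weight > total / 2:  # weighted majority is right
            prob_correct += prob
    return prob_correct

# Hypothetical pruned panel: five voters, all above the 75% bar.
panel = [0.80, 0.85, 0.90, 0.90, 0.95]
print(round(weighted_vote_accuracy(panel), 4))
```

Because the weights differ from voter to voter, an even-sized panel essentially never produces an exact tie, which is the property the article highlights. The specific accuracies here are illustrative, not taken from the paper.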
“It’s important to be efficient with our resources,” Wagner says. “Tax the voters too much and they’ll stop participating. Keep the workload low and they’ll continue voting and the system will be sustainable.”
This novel way to peer-police online communities is only in its earliest stage, and Wagner believes that it will catch on quickly among the big players in web 2.0. Twitter reports nearly 300 million active users. Facebook has 1.4 billion active users. And many other companies are attempting to foster their own online communities of customers.
That’s a lot of potential complaints to assess. And with the proven wisdom of small crowds at the ready, it doesn’t make sense, financially or logistically, to do it any other way.
“The amount of work for any individual voter is minimal,” says Wagner. “That’s why crowdvoting can really scale. There’s a lot of potential here.”
He notes that a modified version of his model could extend crowdvoting enforcement to the offensive comments that online “trolls” post on all manner of news and entertainment sites. Some have even suggested his model could eventually be applied to virtual boards, or even online juries.
Wagner says he’s just gratified to have been able to study a unique and unexplored operations challenge with a real-world application.
“This is not typical of the work I usually do,” he adds. “But there’s value in taking ideas from operations—mathematical and quantitative tools to achieve efficiency—and applying them in an intelligent way to a new application.”
“Crowdvoting Judgment: An Analysis of Modern Peer Review” is the work of Michael Wagner.