Even in my admittedly small experience with crowdsourcing, I’ve noticed that many campaigns succeed or fail based on the presence or absence of three criteria. I’d like to share them today and get some feedback on how essential my readers believe these principles actually are.
- Incentive. I just finished reading a post by fellow blogger VisualBloke about the relationship between human behavior and the social web, and it described incentive as being rooted in self-interest. In other words, make participation personally worth it for your participants. Most crowdsourcing campaigns have this built in, offering participants the chance to contribute to something that would normally be outside their sphere of influence. Nissan’s Project 370Z is a good example of this: how many people have ever had the chance to design a mass-market automobile? Kickstarter offers a more clear-cut incentive system, with different levels of monetary contribution translating into “rewards”, extra influence, or early access to the final project. No matter how you slice it, participants need more incentive than “we need help”.
- Barrier to Entry. How easy or difficult is it to join your crowdsourced workforce? The right answer depends on the level of talent you need from the people in your crowd, but the bottom line is to set your barriers low enough not to deter any sufficiently motivated individual, yet high enough to ensure quality. That sounds complicated, so let me explain. OpenStudy.com, as a resource for all students, wants a low barrier to entry because everyone from elementary schoolers to graduate students and beyond needs easy access to the site. It accomplishes this with a simple layout and a no-frills registration process, making it easy for any student to get involved. By contrast, the aforementioned Project 370Z demands gearheads and performance experts. Tucking the project away in the performance-dedicated subsection of Nissan’s Facebook page may have seemed less than prudent at first, but it actually weeds out the less-than-desirable opinions without resorting to a dedicated moderator, simply because casual passersby never find it.
- Compartmentalization. Present a crowd with one big problem and you get a million less-than-helpful answers; this is the principle that rules the comments sections of political news sites. Break the problem down into manageable chunks for maximum crowd effectiveness. I hate to pick on Project 370Z again, but damn it, it’s such a good example of successful crowdsourcing. Had they presented their crowd with the simple directive “help us build a car”, they would have gotten a hodgepodge of vastly different ideas, opinions, and (unfortunately) misinformation. Instead, they broke the process down into stages, and then further into individual questions. “Help us build a car” becomes “which of the following three wheels best fits the Metalloy design?” Give people a manageable problem instead of an open-ended question, and you get more focused, relevant answers.
There are probably many more aspects of effective crowdsourcing, and even more pitfalls that will guarantee failure, but I’ve noticed these three present in every successful attempt I’ve seen. Have you observed any crowdsourcing patterns worth noting?