What happens to game designers when they don’t know the “right” answers?
This is especially important in situations where designers need some way to verify crowdsourced data. What data can we obtain with the resources we have?
Well, what do we have?
1) In the case of our Metadata Games project for Archives, we have a huge collection of photographs.
2) Users who might want to interact with these photographs, and the user accounts they create.
3) The competitive relationships between players that might be fostered within our games.
4) The relationships between tags, based on how often they appear together on the same images.
5) Eventually, defined groups of associated images based on the tags they share (both sketched in code right after this list).
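To make items 4 and 5 a little more concrete, here’s a minimal sketch of the bookkeeping involved. Everything in it (the structure names, the lowercase normalization, the in-memory storage) is our own invention for illustration and assumes nothing about the actual Metadata Games codebase:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical in-memory bookkeeping; a real system would persist this.
co_occurrence = Counter()          # (tag_a, tag_b) -> times seen on one image
images_by_tag = defaultdict(set)   # tag -> ids of images carrying that tag

def record_image_tags(image_id, tags):
    """Register the tags on one image, updating both structures."""
    tags = {t.lower() for t in tags}   # treat "Dog" and "dog" as the same tag
    for tag in tags:
        images_by_tag[tag].add(image_id)
    # Item 4: count every unordered pair of tags appearing together.
    for a, b in combinations(sorted(tags), 2):
        co_occurrence[(a, b)] += 1

def related_images(image_id, min_shared=1):
    """Item 5: find images sharing at least min_shared tags with this one."""
    shared = Counter()
    for tag, ids in images_by_tag.items():
        if image_id in ids:
            for other in ids - {image_id}:
                shared[other] += 1
    return [img for img, n in shared.items() if n >= min_shared]

record_image_tags("photo_001", ["Dog", "snow", "sled"])
record_image_tags("photo_002", ["dog", "snow"])
print(co_occurrence[("dog", "snow")])   # 2
print(related_images("photo_001"))      # ['photo_002']
```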
Sounds great, but what don’t we have?
We don’t know the right answers: the correct tags for any of the photos we’re asking people to tag. This is kind of a big deal, because games usually involve the player solving something the system already knows the answer to! We have no way of checking whether a user’s input is correct, so we’ll have to use the competitive relationships between players to prevent false entries.
How? By giving players a VETO button, allowing the crowd to moderate itself by disagreeing with the entries of their peers.
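Under the hood, a VETO doesn’t have to delete anything. Here’s one way the tallying might look; the names and the one-vote-per-player rule are our own assumptions, not the project’s actual design:

```python
from collections import defaultdict

# Hypothetical tallies: (image_id, tag) -> the players who agreed or vetoed.
# Sets mean a player can't stuff the ballot by voting twice.
agrees = defaultdict(set)
vetoes = defaultdict(set)

def submit_tag(image_id, tag, player):
    """Entering a tag counts as the author's own agreement with it."""
    agrees[(image_id, tag.lower())].add(player)

def veto_tag(image_id, tag, player):
    """A peer presses VETO: record the disagreement instead of deleting
    the tag, so the crowd's verdict stays visible and auditable."""
    key = (image_id, tag.lower())
    agrees[key].discard(player)  # a veto overrides that player's earlier agree
    vetoes[key].add(player)

submit_tag("photo_001", "cat", "alice")
veto_tag("photo_001", "cat", "bob")
print(len(agrees[("photo_001", "cat")]),
      len(vetoes[("photo_001", "cat")]))   # 1 1
```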
In single-player situations we’ll be completely unable to check whether entries are correct.
But it turns out our entries don’t even have to be correct. At least, not all of them. With the patterns that emerge from massive quantities of tags, we can quickly tell which entries are valid by keeping track of how many times they have been agreed upon and how many times they have been flagged. A picture tagged with the word “Dog” a thousand times and “cat” once is probably a dog. Ideally the tag “cat” will have been flagged as incorrect by someone…
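Concretely, a tag’s validity could be as simple as the share of votes that agree with it, with a minimum number of votes required before we trust the verdict at all. The threshold below is a placeholder, not a number from the project:

```python
def tag_validity(agree_count, veto_count, min_votes=5):
    """Return a 0..1 confidence that a tag is valid, or None when
    there isn't enough data yet to judge either way."""
    total = agree_count + veto_count
    if total < min_votes:
        return None                # too few votes to call it
    return agree_count / total

print(tag_validity(1000, 1))   # ~0.999 -> "Dog" is almost certainly right
print(tag_validity(1, 0))      # None   -> a lone "cat" never clears the bar
print(tag_validity(2, 8))      # 0.2    -> agreed twice, vetoed eight times
```

The nice property of a threshold like this is that the stray “cat” never has to be refuted by anyone in particular; it simply never collects enough agreement to count.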
In his essay “Metadating the Image”, Lev Manovich explains how automation helps humans manage an otherwise overwhelming amount of information:
“What is important in this paradigm – and this applies for computer media in general – is that storage media became active. That is, the operations of searching, sorting, filtering, indexing and classifying which before were the strict domain of human intelligence, become automated. A human viewer no longer needs to go through hundreds of hours of video surveillance to locate the part where something happens – a software program can do this automatically, and much more quickly. Similarly, a human listener no longer needs to go through years of audio recordings to locate the important conversation with a particular person – software can do this quickly. It can also locate all other conversations with the same person, or other conversations where his name was mentioned, and so on.”
So this means that all we really need to do is prompt users for semi-specific information and give players a way to flag or reinforce previously existing tags.
In order to avoid the stray “cat” tag, our users should be rewarded when tags they enter are later approved by others. How can we get people to 1) care about whether their tags are approved by people they know, and 2) invest in contributing to and editing our network of tags over a long period of time?
Well, Facebook seems like a pretty solid option for us, as long as we stay away from the annoying tendencies of some FB games.
Approving or flagging tags on a given photo might be a slow process to build rewards around, but so are many popular Facebook games! If we keep track of who enters each tag and reward them whenever other users approve it later, we can grant experience points over time. Experience grants users levels, and level-ups can be broadcast to friends on Facebook as a form of social reward for playing our game over an extended period.
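Here’s a rough sketch of that delayed-reward loop. The XP amount, the square-root level curve, and the broadcast stub are all invented for illustration; a real version would tune the numbers against play data and post through the Facebook API:

```python
import math

XP_PER_APPROVAL = 10     # invented value: tune against real play data
tag_authors = {}         # (image_id, tag) -> player who entered it
xp = {}                  # player -> accumulated experience points

def level(points):
    """Square-root curve: level 1 at 0 XP, level 2 at 100, level 3 at 400..."""
    return 1 + int(math.sqrt(points / 100))

def record_tag(image_id, tag, player):
    """Remember who entered the tag so later approvals can pay them back."""
    tag_authors[(image_id, tag.lower())] = player
    xp.setdefault(player, 0)

def approve_tag(image_id, tag):
    """A later player agrees with an existing tag: its author earns XP,
    and any resulting level-up gets broadcast as the social reward."""
    author = tag_authors.get((image_id, tag.lower()))
    if author is None:
        return
    before = level(xp[author])
    xp[author] += XP_PER_APPROVAL
    after = level(xp[author])
    if after > before:
        broadcast_level_up(author, after)

def broadcast_level_up(player, new_level):
    # Placeholder: a real game would post to the player's Facebook feed here.
    print(f"{player} reached level {new_level}!")

record_tag("photo_001", "dog", "alice")
for _ in range(12):           # twelve later players approve the tag
    approve_tag("photo_001", "dog")
print(xp["alice"])            # 120 XP, plus one level-up broadcast on the way
```

Because the points arrive only when someone else approves a tag, the incentive points exactly where we want it: toward entries the crowd will eventually agree with, not toward sheer volume.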
So we’ll need to mobilize human players to filter out their own bad tags over extended periods of time. The question now is how to promote the experience in such a way that users WANT to become a part of the information salvation process. Isn’t it more fun to be bad?