
BotPrize: a Turing test for game bots

Tim Finin, 1:42pm 12 September 2009

BotPrize is yet another variation on the classic Turing test. Does setting the evaluation in the context of an online multiplayer video game really change the nature of the test? At least this one has real practical value: the computer game industry is very competitive, and having more realistic and interesting computer-controlled entities makes a game more successful. Technology Review has a short story on the contest, A Turing Test for Computer Game Bots.

“A new contest could help develop better AI for games and other applications.

Can a computer fool expert gamers into believing it’s one of them? That was the question posed at the second annual BotPrize, a three-month contest that concluded today at the IEEE Symposium on Computational Intelligence and Games in Milan.

The contest challenges programmers to create a software “bot” to control a game character that can pass for human, as judged by a panel of experts. The goal is not only to improve AI in entertainment, but also to fuel advances in non-gaming applications of AI.”

The contest has been completed, but the full results have not yet been announced. The BotPrize web site currently says:

The 2009 BotPrize Contest has been decided!

Complete results will be posted soon, but here is a summary of the results:

None of the bots was able to fool enough judges to take the major prize. But all the bots fooled at least one of the judges.

The most human-like bot was sqlitebot by Jeremy Cothran. The joint runners-up were anubot from Chris Pelling and ICE-2009 from the team from Ritsumeikan University, Japan. Jeremy and Chris are both new entrants, and the ICE team were also runners-up in 2008.

… more details to follow in the next 24 hours.

Contestants created bots for Unreal Tournament 2004 that communicate with the game server via the GameBots interface.
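To give a feel for that interface: GameBots exposes the running game to external programs as a line-oriented text protocol over a TCP socket, so a bot can be written in any language. Below is a minimal sketch of the connection handshake in Python. The port number, the specific message names (HELLO_BOT, NFO, SLF), the attribute format, and the bot name are assumptions based on the usual GameBots2004 setup, not details taken from the contest materials.

```python
import socket

# Assumed defaults for a local GameBots2004 server; the port and the
# exact handshake below are configuration-dependent assumptions.
HOST, PORT = "127.0.0.1", 3000


def parse(line):
    """Split a message like 'SLF {Health 100} {Armor 0}' into its
    type and an attribute dict (message format assumed)."""
    head, *chunks = line.strip().split(" {")
    attrs = {}
    for chunk in chunks:
        key, _, value = chunk.rstrip("}").partition(" ")
        attrs[key] = value
    return head, attrs


with socket.create_connection((HOST, PORT)) as sock:
    stream = sock.makefile("rw", newline="")  # line-oriented text protocol

    for line in stream:
        msg_type, attrs = parse(line)
        if msg_type == "HELLO_BOT":
            # Server greeting: acknowledge so it sends the game info.
            stream.write("READY\r\n")
            stream.flush()
        elif msg_type == "NFO":
            # Game description received: ask the server to spawn our bot.
            stream.write("INIT {Name SketchBot} {Team 0}\r\n")
            stream.flush()
        elif msg_type == "SLF":
            # Periodic self-state update; a real bot would run its
            # decision logic here (move, aim, shoot, ...).
            print("health:", attrs.get("Health"))
```

A real entry would, of course, layer path-following, combat, and "human-like" imperfection on top of this loop; frameworks such as Pogamut wrap the same protocol for Java.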

The TR story continues:

“This year’s BotPrize drew 15 entrants from Japan, the United Kingdom, the United States, Italy, Spain, Brazil, and Canada. Entrants created bots for Unreal Tournament 2004, a first-person shoot-’em-up in which gamers compete against each other for the most virtual kills. For the contest, in-game chatting was disabled so that bots could be evaluated for their so-called “humanness” by “physical” behavior alone. And, to elicit more spontaneity, contestants were given weapons that behaved differently from the ones ordinarily used in the game.

Each expert judge on the prize panel took turns shooting against two unidentified opponents: one human-controlled, the other a bot created by a contestant. After 10 to 15 minutes, the judge tried to identify the AI. To win the big prize, worth $6,000, a bot had to fool at least 80% of the judges. As in last year’s competition, however, none of the participants was able to pull off this feat. A minor award worth $1,700, for the most “human-like” bot, was awarded to Jeremy Cothran, from the University of Southern California, for his entry, called sqlitebot.”
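As a concrete reading of that rule: with a hypothetical panel of five judges, fooling at least 80% means fooling at least four of them. A trivial sketch (the panel sizes are made-up examples; only the 80% cutoff comes from the quoted rules):

```python
def wins_major_prize(judges_fooled: int, total_judges: int) -> bool:
    """Major-prize rule quoted above: the bot must fool at least
    80% of the judging panel. Panel sizes here are examples only."""
    return judges_fooled >= 0.8 * total_judges

print(wins_major_prize(4, 5))  # True: 4/5 = 80% meets the cutoff
print(wins_major_prize(3, 5))  # False: 3/5 = 60% falls short
```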

