I recently attended the IEEE-CIG 2010 conference, which stands for the "Computational Intelligence in Games" conference, and I have to say, it was a very interesting experience. I have regularly attended AIIDE (the other game AI conference), but I had never been to CIG. My impression was that the CIG environment was much more focused on games than on AI, and that made this conference really interesting from an AI researcher's point of view, for the following reasons:
1) AI conferences are typically full of researchers (like me) who have a technique they believe in (case-based reasoning, support-vector machines,... you name it) and who are looking for problems they can solve with it. The typical "hammer looking for a nail". At CIG it was the opposite: it was full of people explaining real problems they face in the creation of video games, problems for which they were looking for solutions.
2) On top of that, given that a large proportion of the crowd were game experts rather than AI experts, the range of AI techniques on display at CIG was quite narrow (as I will detail later). So I think there are lots of opportunities there for AI researchers looking for nails.
Most of the papers presented some particular AI technique applied to some particular game. I did a rundown of all 62 accepted papers, and this is the histogram of the AI techniques used (not all papers were about AI, so the numbers below do not add up to 62):
- Machine Learning: 11 papers (3 on neural nets, 2 on reinforcement learning, the rest on assorted techniques)
- Search: 7 papers (6 on game tree search)
- Planning: 1 paper
- Optimization: 18 papers (17 on genetic algorithms)
- Statistical: 3
- Game Theory: 2
- Logic: 2
- Scripting: 2
- Cognitive Approaches: 5 (2 on CBR/Episodic memory)
- Drama Management: 1
- Game Specific AI: 5
One quick observation is that there is a huge bias towards certain kinds of approaches. For instance, 17 papers out of 62 were about genetic algorithms. And out of all the papers using some machine learning technique, the only two techniques used more than once were neural networks and reinforcement learning.
For instance, in one of the competitions held during the conference, the problem was to make an AI that controls a car and drives as fast as possible. The most successful approaches would first optimize a "race-line" using genetic algorithms, and then just use a hard-coded controller that sticks to that line when possible. The problem they faced is that driving a car is not just sticking to the race line: there are other cars in the game that have to be overtaken, etc. Thus, there are several behaviors that need to be modeled: sticking to the race line and overtaking. The AI community has solutions for these kinds of problems, where multiple behaviors compete, such as the subsumption architecture, or the more modern multi-agent bidding coordination mechanisms.
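To make the subsumption idea concrete, here is a minimal sketch of how the racing behaviors above could be arbitrated. This is not any competitor's actual code; all sensor fields, steering commands, and thresholds are hypothetical. The key property of subsumption is that layers are ordered by priority, and a higher layer (overtaking) suppresses, or "subsumes", the lower default layer (following the race line) whenever it is applicable:

```python
# Minimal subsumption-style arbiter for the racing example.
# Layers are checked from highest to lowest priority; the first
# layer that returns a command subsumes the ones below it.
# All sensor fields, commands, and thresholds here are hypothetical.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Sensors:
    opponent_ahead: bool      # is another car blocking the race line?
    offset_from_line: float   # lateral distance from the optimized race-line

# A behavior inspects the sensors and either returns a steering
# command, or None, meaning "not applicable, fall through".
Behavior = Callable[[Sensors], Optional[str]]

def overtake(s: Sensors) -> Optional[str]:
    # High-priority layer: leave the race line to pass an opponent.
    return "steer_outside" if s.opponent_ahead else None

def follow_race_line(s: Sensors) -> Optional[str]:
    # Default layer: steer back toward the precomputed race-line.
    if s.offset_from_line > 0.1:
        return "steer_left"
    if s.offset_from_line < -0.1:
        return "steer_right"
    return "steer_straight"

def arbitrate(layers: List[Behavior], s: Sensors) -> str:
    for layer in layers:          # ordered high -> low priority
        command = layer(s)
        if command is not None:
            return command
    return "steer_straight"       # safe fallback

layers = [overtake, follow_race_line]
print(arbitrate(layers, Sensors(opponent_ahead=True, offset_from_line=0.0)))
```

With an opponent ahead, the overtake layer wins regardless of the car's offset from the line; otherwise control falls through to the race-line follower. Adding a third behavior (say, collision avoidance) is just a matter of inserting another layer at the right priority, which is exactly the kind of modularity the hard-coded controllers in the competition were missing.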
My conclusion is that we, as AI researchers, have a lot to gain by paying attention to places like CIG. On the one hand, we have the chance of contributing to the computer games community by bringing new techniques to the table; on the other hand, the computer games community has a chance of contributing to the AI community by offering us challenging problems with which to push the limits of current AI techniques.
If anyone is interested, here's a quick list of the competitions that caught my attention at CIG:
- Simulated Car Racing Championship: make an AI that races a car through a track with opponents as fast as possible. The problems involve optimizing the trajectory, real-time control, and handling unpredictable events (the opponents). URL: http://cig.dei.polimi.it/
- Ms. Pac-Man Competition: create an AI agent that can play Ms. Pac-Man. The problems: real-time decision making and non-determinism. URL: http://dces.essex.ac.uk/staff/sml/pacman/PacManContest.html
- Mario AI Championship: create an AI that can play Mario. In addition to the reactive control needed for Ms. Pac-Man, here you need high-level planning, since some levels require a certain amount of puzzle-solving ability (e.g. you can only pass by first breaking some stones and then jumping to a particular position). URL: http://www.marioai.org/
- The 2K BotPrize: a Turing test for first-person shooter bots, and with a cash prize! URL: http://botprize.org/
- StarCraft RTS AI Competition: if you think Chess is hard, StarCraft has a way bigger action and state space, it is real-time, and there is imperfect information. Good luck trying to create an AI for this! :) URL: http://ls11-www.cs.tu-dortmund.de/rts-competition/starcraft-cig2010
So, what would I like to see next year at CIG? A more varied histogram of AI techniques, and a more varied set of entries to the competitions. I think this would make for a more interesting discussion and also help us better understand which techniques are best suited to which kinds of game problems.