Natural Selection 2 is my favorite game, without compare. I really can’t say enough good things about it; go watch the last big tournament if you want a taste. It combines everything that’s great about the FPS and RTS genres, and requires very high levels of FPS skill, RTS tactics, and team coordination to play well. It’s the only video game I’ve played where a person’s leadership skills can mean the difference between victory and defeat. When someone leads their team to victory in NS2, it doesn’t mean they were the one with the most kills. It’s a very rewarding social experience, with all the highs and lows of a team sport, but you get to shoot aliens.
It is, however, a difficult game to learn, and an even more difficult game to play well. Because of this, games on public servers often have unbalanced teams, leading to dissatisfying games for both sides.
To help fix this problem, I designed a player ranking system that was implemented in the most recent patch, Build 267. It’s based on skill systems like Elo but tweaked for the unique properties of NS2.
A useful skill system should be able to predict the outcome of a game using the skills of the players on each team. Rather than trying to figure out which statistics about a player (kill/death ratio, etc.) indicate that a player is good, we instead try to infer the skill levels just from our original objective: predicting the outcome of games.
Designing a skill system like this is different from many other statistics problems, because the world changes in response to the system. It’s not enough to predict the outcome of games, you have to be sure that you can still predict the outcome of games even when people know how your system works. On top of that, you have to ensure that your system incentivizes behaviors that are good for the game.
- $p$ is the probability of a victory by team 1.
- $w$ is the outcome of the game, 1 if team 1 wins, 0 if team 1 loses.
- $s_i$ is the skill of the $i$th player.
- $t_i$ is the time spent in the round, an exponential weighting of each second so that being in the round at the beginning counts more than being in the round at the end. Integrals over exponents are easy, so you can implement this as $t_i = \frac{e^{-\lambda \tau_{\text{join},i}} - e^{-\lambda \tau_{\text{leave},i}}}{\lambda}$, where $\tau_{\text{join},i}$ and $\tau_{\text{leave},i}$ are the times the player joined and left. Set $\lambda$ to something like $2\ln(2)/m$ where $m$ is the median round length. This means that playing until the middle of an average round counts more than playing from the middle to the end, no matter how long the round takes.
- $c_i$ is an indicator variable for the team of the $i$th player. Its value is 1 if the $i$th player is on team 1, -1 if the $i$th player is on team 2.
The Basic Model
To compute the skill values, the first task is to predict the probability of team 1 winning the round as a function of those skill values:

$$p = \operatorname{logit}^{-1}\!\left(\frac{\sum_i c_i s_i t_i}{\sum_i t_i}\right), \qquad \operatorname{logit}^{-1}(x) = \frac{1}{1 + e^{-x}}$$

The function $\operatorname{logit}^{-1}$ maps the range $(-\infty, \infty)$ to the range $(0, 1)$, which is how we can relate a probability to a sum of different factors. (It’s a convenient function for this purpose because it’s deeply related to how conditionally independent components combine into a probability. See Naive Bayes, Logistic Regression for more info.)
We’d like to come up with the values for $s_i$ that cause the model to predict the outcome correctly for as many games as possible. I’m skipping a bunch of steps where we maximize the log likelihood of the data. We’ll be optimizing the model using stochastic gradient descent. A pdf for further reading. Basically, whenever the model is wrong, we figure out the direction in which it was wrong, and we change the model to move it in the opposite direction.

We predict the outcome of the game based on the players’ skill and how long they each played in the game. After the game, we update each player’s skill by the product of the fraction of the game they played and the difference between our prediction and what actually happened:

$$s_i \leftarrow s_i + c_i \, \frac{t_i}{\sum_j t_j} \, (w - p)$$
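The basic model can be sketched in a few lines of Python. Everything here follows the definitions above; the 2v2 skill numbers and the decay rate are invented for illustration.

```python
import math

def time_weight(t_join, t_leave, lam):
    """Exponentially weighted time in the round: the integral of e^(-lam*t)
    from t_join to t_leave, so early play counts more than late play."""
    return (math.exp(-lam * t_join) - math.exp(-lam * t_leave)) / lam

def predict(skills, times, teams):
    """Probability that team 1 wins; teams[i] is +1 for team 1, -1 for team 2."""
    x = sum(c * s * t for s, t, c in zip(skills, times, teams)) / sum(times)
    return 1.0 / (1.0 + math.exp(-x))  # inverse logit

def update(skills, times, teams, won):
    """One stochastic gradient step after a round; won is 1 if team 1 won, else 0."""
    p = predict(skills, times, teams)
    total = sum(times)
    return [s + c * (t / total) * (won - p)
            for s, t, c in zip(skills, times, teams)]

# A 2v2 round: the stronger team 1 unexpectedly loses, so its players' skills
# drop and team 2's players' skills rise.
lam = 0.7  # placeholder decay rate
skills = [1.0, 1.0, 0.5, 0.5]
times = [time_weight(0.0, 1.0, lam)] * 4  # everyone played the whole round
teams = [1, 1, -1, -1]
new_skills = update(skills, times, teams, won=0)
```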
Properties of the Basic Model
- If one team smashes another, but the teams were stacked to make that outcome almost certain, nobody’s skill level changes at the end of the round. Only unexpected victories change skill levels.
- Nothing you do during the round other than winning the game has any effect on your skill level. This means that there’s no incentive to rack up kills at the end of the game rather than finishing it, or to play skulk instead of gorge, or to go offense rather than build.
- The effect on your score is determined by the time you spent playing the round rather than your points, with the beginning of the game weighted much higher than the end. This means that it doesn’t harm you to join a game that is already lost, because you’ll have played for a very small fraction of the weighted time spent in the game.
- If the two teams have a different number of players, this is compensated for automatically by normalizing by the total time across both teams rather than just within a team, and the model predicts that the team with more players will win.
- Larger games contribute less to each player’s skill level because the presumption is that each player has a smaller effect on the outcome of the game.
- If you are a highly skilled player on a low-skill server, the best thing you can do for your skill ranking is to join the underdog team, and lead them to victory.
Accounting for imbalance in races, maps, game sizes.
The basic model assumes that the only factors that contribute to the outcome of the game are the skill of the players. Given the overall win rate of each race, this is clearly not true. To fix the model to account for this, all we have to change is our original formula for $p$:

$$p = \operatorname{logit}^{-1}\!\left(b + \frac{\sum_i c_i s_i t_i}{\sum_i t_i}\right)$$

We can determine the value of $b$ for team 1’s race using the historical records for the winrate $r$ of that race. We then set $b$ to $\operatorname{logit}(r) = \ln\frac{r}{1-r}$. This ensures that when teams are even, our prediction matches the historical probability of that race winning.
This needn’t be merely a function of the race, however. It could also be a function of the map, the game size, or any other relevant feature that is independent of the players. All that is necessary to set its value is to measure the historical winrate for the team in that set of circumstances (for instance, aliens on ns2_summit with game size > 16), and put it through the logit function as above.
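Concretely, measuring the bias term is just counting wins. In this sketch the 55% winrate is an invented example, not NS2’s actual balance numbers:

```python
import math

def bias_from_winrate(winrate):
    """The logit of the historical winrate: the constant bias added inside
    the inverse logit, so that even teams reproduce the observed winrate."""
    return math.log(winrate / (1.0 - winrate))

def predict_with_bias(b, skill_term):
    """skill_term is the usual time-weighted, normalized sum of skills."""
    return 1.0 / (1.0 + math.exp(-(b + skill_term)))

# Hypothetical: if team 1's race historically wins 55% of games, then with
# perfectly even teams (skill terms cancel to 0) we predict exactly 55%.
b = bias_from_winrate(0.55)
p_even = predict_with_bias(b, 0.0)
```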
Special treatment for commanders.
The basic model assumes that commanders are just like any other player, and that they have the same contribution to the outcome of the game as anyone else. This isn’t necessarily a bad assumption; I’ve seen many games where an unusually vocal and motivational player other than the commander was able to call plays and lead a team to victory.
The much worse assumption, however, is that the same skill set applies to being a commander or a player. Players that can assassinate fades and dodge skulks might be useless in the command chair, so it doesn’t make much sense to use the same skill values for both. To fix this, we give each player a separate skill level that indicates their skill at commanding. To distinguish it from the other skill level, we’ll call it $s'_i$. To indicate the time spent commanding I’ll use $t'_i$, weighted the same way (and using the same formula) as for $t_i$ above.
This makes a few questionable assumptions for simplicity. The model will still do useful things even if these assumptions are false, but they do indicate areas where the model won’t perform well.
1. The magnitude of the impact of the commander does not depend on the size of the game.
2. Playing marine commander is equivalent to playing alien commander.
3. A good commander with a bad team vs. a bad commander with a good team is expected to be an even match. I suspect this isn’t true: there are few ways a commander can influence the outcome of the game independently of their team, so a bad team probably critically handicaps a good commander, and it might make sense to use quadratic terms instead.
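One way to read the commander extension (and assumption 1 above) is to give the commander term its own normalization, by total commanding time rather than total player time, so its magnitude doesn’t shrink as games get larger. A sketch under that assumption:

```python
import math

def predict_with_commanders(skills, times, teams,
                            comm_skills, comm_times, comm_teams):
    """Prediction with a separate commander skill term. Normalizing the
    commander term by total commanding time (not total player time) keeps
    the commander's impact independent of game size -- assumption 1 above."""
    x = sum(c * s * t for s, t, c in zip(skills, times, teams)) / sum(times)
    x += (sum(c * s * t for s, t, c in zip(comm_skills, comm_times, comm_teams))
          / sum(comm_times))
    return 1.0 / (1.0 + math.exp(-x))

# Two equally skilled teams of four, but team 1's commander is stronger,
# so team 1 is favored.
p = predict_with_commanders([1.0] * 4, [1.0] * 4, [1, 1, -1, -1],
                            [2.0, 1.0], [1.0, 1.0], [1, -1])
```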
Priors and batch optimization.

Rather than restricting ourselves to updating the skill levels once on each game, we can optimize them iteratively using the whole history of games, which will cause them to converge much faster as we acquire more data. To do this, however, it is necessary to put a prior on the skill levels of each player so that they are guaranteed to converge even when there is little data for a player. To do this, include a gamma distribution in the model for each player’s skill.
The gradient of the log likelihood of the gamma distribution is $\frac{k-1}{s} - \frac{1}{\theta}$. This makes the update rule for the $i$th player as follows:

$$s_i \leftarrow s_i + \sum_g c_{i,g} \, \frac{t_{i,g}}{\sum_j t_{j,g}} \, (w_g - p_g) + \frac{k-1}{s_i} - \frac{1}{\theta}$$

where the sum runs over all games $g$ the player has played.
There are two differences between this formula and the update rule above. The first is that rather than just updating the score one game at a time as the games come in, we store the whole history of games the player has played, and iteratively update the player’s skill.
Secondly, on each iteration, we add $\frac{k-1}{s_i}$ to the player’s skill, and subtract $\frac{1}{\theta}$ from the player’s skill. This expresses our belief that until we know more about a player, their skill probably isn’t too high or too low. The $k$ and $\theta$ parameters control the distribution of skill levels. As $k$ increases, we become more skeptical of very low skill values. As $\theta$ decreases, we become more skeptical of very high skill values.
The mean skill value will be $k\theta$.
To run this algorithm, we alternate between updating our prediction for the outcome of each game and updating each player’s skill level based on the new predictions for all of the games they’ve played.
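The whole batch procedure might look like the sketch below. The values of $k$, $\theta$, the step size, and the tiny toy history are illustrative choices, not the values used in the game:

```python
import math

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit(games, skills, k=2.0, theta=1.0, step=0.1, iterations=200):
    """games is a list of (player_ids, times, teams, won) tuples.
    Repeatedly re-predict every game and update every skill, adding the
    gamma prior's gradient (k-1)/s - 1/theta to each player's update."""
    for _ in range(iterations):
        grads = {pid: 0.0 for pid in skills}
        for pids, times, teams, won in games:
            total = sum(times)
            x = sum(c * skills[pid] * t
                    for pid, t, c in zip(pids, times, teams)) / total
            p = inv_logit(x)
            for pid, t, c in zip(pids, times, teams):
                grads[pid] += c * (t / total) * (won - p)
        for pid in skills:
            grads[pid] += (k - 1.0) / skills[pid] - 1.0 / theta
            # Keep skills positive: the gamma prior is only defined for s > 0.
            skills[pid] = max(1e-6, skills[pid] + step * grads[pid])
    return skills

# Toy history: "a" beats "b" in five 1v1 rounds; "c" never plays and so is
# held at the prior's mode, (k-1)*theta = 1.0.
skills = fit([(["a", "b"], [1.0, 1.0], [1, -1], 1)] * 5,
             {"a": 1.0, "b": 1.0, "c": 1.0})
```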
Gradient descent tells you which direction to go, but it doesn’t tell you how far. We have a lot of freedom in choosing how much to update each player’s score after each round to make it satisfying for the player. More precisely, we can add a coefficient $\alpha$ that describes the step size for the updates.
To pick this, it’s useful to get a sense of what skill values actually mean. If the skill values of two players differ by 2 points, then this causes our model to predict that the better player will beat the worse player 90% of the time. Or rather, that a team composed of clones of the better player will beat a team composed of clones of the worse player 90% of the time. So intuitively, how many fair games must a player win consecutively before we’re comfortable predicting that they will beat the people they are playing against 90% of the time?
Suppose the player is playing on 20 player servers, and they are playing fair games. That means the absolute value of the update above will be 0.5/20, or 0.025 each round. With $\alpha$ set to 1, it would take the player 80 consecutive wins on fair games in order to be estimated to be able to beat their old self 90% of the time. This seems excessive! So we should set $\alpha$ to something larger, maybe 8?
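The back-of-the-envelope arithmetic here is easy to check; the 20-player server and the proposed step size of 8 are the numbers from the text:

```python
import math

players = 20
per_win_update = 0.5 / players      # |w - p| = 0.5 in a fair game
wins_needed = 2.0 / per_win_update  # +2 skill points ~ a 90% predicted winrate
wins_with_step_8 = wins_needed / 8  # a step size of 8 cuts 80 wins down to 10

# A 2 point skill gap maps to roughly a 90% win prediction:
p_two_point_gap = 1.0 / (1.0 + math.exp(-2.0))
```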
This gives us a feel for the difference between the players’ scores, and how quickly we should update them, but we still don’t have a sense for the absolute magnitude of the player scores. How far should they all be from 0? If the teams have the same number of players, then all that matters is the difference between the scores. We can add any constant, and it cancels out. However, it does affect the model when the teams have different numbers of players. The team with more players will be expected to win.
So if the skill values are close to 0, then we predict the team with more players to have no advantage; they win 50% of the time. If the skill values are all around 40, then we predict the team with more players to win over 90% of the time in a 13 player game. This makes it important that we don’t set the starting absolute value of the skill levels to be too high or too low.
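The 13-player example is easy to reproduce. Here every player is assumed to have the same skill and to have played the whole round, so all the time weights cancel:

```python
import math

def p_larger_team(skill, big, small):
    """Win probability for the larger team when everyone has equal skill
    and equal time played: x = (big - small) * skill / (big + small)."""
    x = (big - small) * skill / (big + small)
    return 1.0 / (1.0 + math.exp(-x))

p_low = p_larger_team(0.0, 7, 6)    # skills near 0: no advantage for 7v6
p_high = p_larger_team(40.0, 7, 6)  # skills near 40: a big advantage for 7v6
```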
To initialize the scores, a reasonable choice is to set them to some multiple of the log of the player’s playtime.
Aside: Why the logit and not erf?
This formulation of a skill system differs from many others in that it uses the logistic distribution’s cdf to model win probabilities rather than the Gaussian distribution’s cdf. There are two reasons I chose this.
- NS2 allows for frequent upsets late in the game if one team is particularly cunning, and the other is inattentive. This makes a distribution with fatter tails more appropriate.
- Using the logit allows us to correct for various factors that cause imbalance in the game without any complicated optimization. Due to the independence assumption, we can just measure winrates empirically in different circumstances, and we don’t have to optimize these coefficients jointly with the player skill.
How do we balance the teams in a real game?
NS2 had a feature where players could vote to “Force Even Teams” but it was generally unpopular. The skill values weren’t very accurate, and the strategy used to assign players had some undesirable properties. It used a simple heuristic to assign the players, since the general problem is NP-complete. Because it reassigned all players, they would find themselves playing aliens when they wanted to play marines, or vice versa, and would switch back right after the vote. To fix this, we designed an algorithm that minimizes the number of swaps needed to reasonably balance the teams.
- Optimize the team assignments of the players who have not yet joined a team. You can do this using whatever method you wish.
- Consider each pair of players on opposite teams. Find the pair where swapping them most reduces the absolute value of the skill difference. Swap them, and iterate until there is no swap that reduces the absolute value of the skill difference.
While this greedy heuristic of swapping only the best pair at a time may be a poor way to find the optimal balanced teams, it is great for keeping as many players as possible happy with the team assignments. Because each swap chosen is the one that reduces the objective function the most, we can quickly get to a local optimum by swapping very few players.
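A minimal sketch of the swap step (the player names and skill values are invented). Each pass finds the single cross-team swap that most reduces the skill difference, and stops when no swap helps:

```python
def balance_by_swaps(team1, team2):
    """Greedy pairwise swapping; team1 and team2 map player -> skill.
    Mutates the teams in place and returns the list of swaps made."""
    swaps = []
    while True:
        diff = sum(team1.values()) - sum(team2.values())
        best, best_abs = None, abs(diff)
        for a, sa in team1.items():
            for b, sb in team2.items():
                # Swapping a and b changes the difference by -2 * (sa - sb).
                if abs(diff - 2 * (sa - sb)) < best_abs:
                    best, best_abs = (a, b), abs(diff - 2 * (sa - sb))
        if best is None:
            return swaps
        a, b = best
        team1[b], team2[a] = team2.pop(b), team1.pop(a)
        swaps.append((a, b))

# One swap suffices here: trading a 3.0 for a 1.0 evens the teams exactly.
team1 = {"alice": 3.0, "bob": 3.0}
team2 = {"carol": 1.0, "dave": 1.0}
swaps = balance_by_swaps(team1, team2)
```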