The next version of Natural Selection 2 contains an algorithm I designed for ranking servers. It powers the “play now” button at the top of the menu.

The goal is to come up with a smooth expression that is proportional to the probability that a player will have a good game on a server. For a good game, all conditions should be simultaneously true, so we express this as a product. Each component is defined below.

## Number of Players

This function determines the viability of a game from the number of players. Games start to become viable as they pass 12 players.

We don’t just want to put the player on the biggest game available, though; we also want to maintain a healthy server pool with many viable games. To do this, we help seed servers by putting players on the server where they will increase the viability of as many player slots as possible, as much as possible. Concretely, we give servers a bonus based on the derivative of viability.

We only care about increasing a server’s viability if it has open slots, so in the combined function we scale the derivative term by the number of open player slots. Seeding is also only a secondary goal: the primary goal must be giving the player a good game, or players will simply ignore the ranking and the “play now” button, so seeding is given a lower weight in the combined function.

The combined function ensures that servers with *almost* enough players for a good game get priority over servers that are already healthy, but that otherwise players will end up on a server with a healthy number of players.
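This combined term can be sketched in code. The logistic viability curve (steepness 0.5, midpoint 12 players) comes from the Wolfram Alpha link in the comments; the seeding weight and the exact way the pieces are combined here are my guesses, not the shipped formula:

```python
import math

def viability(players: int) -> float:
    """Logistic viability curve: games become viable around 12 players.
    Constants 0.5 (steepness) and 12 (midpoint) are from the post's
    comments; treat them as tunable."""
    return 1.0 / (1.0 + math.exp(-0.5 * (players - 12)))

def viability_slope(players: int) -> float:
    """Derivative of the viability curve, used as the seeding bonus.
    For a logistic with steepness k, the derivative is k * V * (1 - V)."""
    v = viability(players)
    return 0.5 * v * (1.0 - v)

def player_score(players: int, max_players: int, seed_weight: float = 0.5) -> float:
    """Combined player-count term: viability plus a seeding bonus scaled
    by open slots. `seed_weight` is a hypothetical constant reflecting
    that seeding is a secondary goal."""
    open_slots = max(max_players - players, 0)
    return viability(players) + seed_weight * open_slots * viability_slope(players)
```

Note that a full server gets no seeding bonus at all, since it has no open slots to make more viable.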

## Ping

The player’s latency to the server is as important a factor as whether the server has enough players. Once it crosses 100 milliseconds, it starts to dominate the ranking.
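The exact ping curve isn’t given in the text, but a smooth score that stays high at low latency and falls off past 100 ms might look like this; the logistic shape and the steepness constant are assumptions, only the 100 ms knee is from the post:

```python
import math

def ping_factor(ping_ms: float) -> float:
    """Hypothetical smooth ping score: close to 1 well below 100 ms,
    0.5 at 100 ms, dropping toward 0 beyond it. The steepness 0.05
    is an assumed constant, not the shipped value."""
    return 1.0 / (1.0 + math.exp(0.05 * (ping_ms - 100.0)))
```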

## Skill

All other things being equal, we’d like to put the player on a server with similarly skilled players. However, when a player joins a server, they change the server’s mean skill, so the weight we give the server’s average skill level should increase with the number of players already on it. We use a statistic inspired by the z-score. (The mean skill is available to us, but the variance is not, so this assumes a default.) Rather than doing something expensive with erf, as would be customary to convert a z-score into something like a probability, we use a simple exponential. Aside from being cheaper to evaluate, it prevents the ranking from prioritizing skill over ping even when the skill difference is tremendous.
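One way to realize this z-score-inspired factor, assuming a default standard deviation and a player-count weight; the default of 500, the weight function, and the exact scaling are all assumptions for illustration:

```python
import math

# Assumed default: the server's skill variance isn't available,
# so we substitute a fixed standard deviation.
DEFAULT_SKILL_STDDEV = 500.0

def skill_factor(player_skill: float, server_mean_skill: float, players: int,
                 stddev: float = DEFAULT_SKILL_STDDEV) -> float:
    """z-score-inspired skill match. A cheap exponential replaces erf,
    which also bounds how much a huge skill gap can penalize a server.
    The weight grows toward 1 with player count, so a mismatched join
    matters more on fuller servers (assumed form)."""
    z = (player_skill - server_mean_skill) / stddev
    weight = players / (players + 1.0)
    return math.exp(-abs(z) * weight)
```

Because the result stays in (0, 1] and decays gently, even a tremendous skill difference can’t swamp the ping term the way a converted probability near zero would.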

## Performance, Favorites, and Joinable

Servers in NS2 are graded on their ability to keep up with the load placed on them. According to matso, you don’t want to play on a server with a score less than -10, and everything above 10 is about equivalent.

We give special priority to favorites, and to servers that have open slots. For “play now,” joinable is just a filter. For server ranking, it’s somewhat useful to see servers you expect to see in the list even if you can’t join them, just so you know they still exist.
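The performance clamp and the favorites bonus could be sketched as follows; the linear ramp between -10 and +10 and the 1.5× favorites multiplier are assumptions, not the shipped values:

```python
def performance_factor(score: float) -> float:
    """Performance grade mapped to [0, 1]: per matso, below -10 is
    unplayable and everything above +10 is about equivalent. The
    linear ramp in between is an assumption."""
    clamped = max(-10.0, min(10.0, score))
    return (clamped + 10.0) / 20.0

def favorite_factor(is_favorite: bool, bonus: float = 1.5) -> float:
    """Hypothetical multiplicative bonus for favorited servers."""
    return bonus if is_favorite else 1.0
```

Because the overall ranking is a product, a performance factor of zero (score at or below -10) rules a server out entirely, which matches the advice above.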

## Comments


n1c3!

This is amazing, well implemented and designed! This will definitely help retain NS2 players, even as the player population increases. I only have one question: does this algorithm keep huge servers (such as wooza’s 64-player one) from stealing players from other servers?

First of all, well done 😀

How come the denominator in dV/dP is (403.429 + e^(0.5P))^2 and not (1 + 403.429*e^(-0.5P))^2?

Also, for a non-programmer: approximately how much more expensive is it to compute the variance rather than using the constant?

Always love when people post the math behind mechanics, so I can nerd out.

#CptClutch

Since dV/dP is close to 0 when there are more than 24 players (assuming we use the true derivative), all servers with more than 24 players get almost the same score.

Thanks. 🙂

I use wolfram alpha for all my calculus so I don’t screw it up. 🙂

http://www.wolframalpha.com/input/?i=derivative+1%2F%281%2Bexp%28-0.5+*+%28p+-+12%29%29%29

It wouldn’t be expensive at all to compute the variance; it just isn’t computed at the moment, and I was designing the function around what was already there to keep the implementation simple. The simplest way to compute the variance uses the identity Var(X) = E[X²] − (E[X])², so it’s just two means that you can compute in one loop through the data.
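A minimal sketch of that one-loop computation:

```python
def variance(values):
    """One-pass population variance via Var(X) = E[X^2] - (E[X])^2.
    Fine for skill values in a modest range; note this form can lose
    precision when the mean is large relative to the spread (Welford's
    method is the numerically robust alternative)."""
    n = 0
    total = 0.0
    total_sq = 0.0
    for x in values:
        n += 1
        total += x
        total_sq += x * x
    mean = total / n
    return total_sq / n - mean * mean
```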