## Introduction

With just the final weekend left to play, it's crunch time in the Premier League - will Manchester City hold on to retain their title, or will Liverpool overhaul them to win their first league title in 29 years?

At the same time, around the world, millions of Fantasy Premier League players will be anxiously checking results, as they battle friends and colleagues in their mini-leagues, and strive to raise their teams up the overall ranking. One of these teams is **AIrsenal**, and it is managed by some machine learning software developed at the Turing.

Fantasy Premier League (FPL), as the name suggests, is a game based on the Premier League. Each participant (referred to as a "manager") chooses a squad of 15 football players from the English Premier League. Players in this 15-person squad are awarded points based on their real-life performances in Premier League matches. In addition to choosing their initial team, managers can make transfers and substitutions every week (subject to some constraints - we'll cover these later). Managers also nominate a "captain", a player in their squad whose points total is doubled for that week.

Players in different positions are assigned points for different outcomes. Strikers are awarded points for scoring (or assisting) goals. Defenders and goalkeepers, on the other hand, receive points for keeping clean sheets (they also receive points for scoring and assisting goals, but this is comparatively rare unless they are a Liverpool full-back!)

There are more than six million FPL managers taking part this year, with varying levels of commitment: some give up just weeks into the season, while others spend hours each week scouring the web, searching for those key nuggets of information that will allow them to obtain more points than rival managers.

**Why might FPL be a suitable use-case for machine learning?**

Football, like most sports, is an intriguing combination of randomness and predictability. The combination of a large quantity of past data, a well-defined numerical scoring system, and the huge search space of possible squads makes FPL a challenging but tractable problem to tackle with machine learning. In fact, we are far from the first to try: researchers from Southampton University wrote an interesting paper on this topic in 2012, describing a similar approach to the one we are using.

**Our approach**

FPL is a decision theory problem. The aim is to maximise the expected *utility* $U$ as a function of the *action* $a$ we take and its corresponding *outcome* $x$:

$$
\mathbb{E}\left[U\,|\,a\right] = \int \mathrm{d}x\,U(x, a)\,p(x|a)
$$

In the context of FPL, $x$ is the granular outcome of Premier League matches. By "granular", we mean both the scoreline of each match and the outcomes for all of the individual players (who scored, who got sent off, who kept a clean sheet, etc.). The action $a$ refers to team selection, which includes transfers, substitutions, and the assignment of the captaincy. $U(x, a)$ is then the points total for the selected team given outcomes $x$. Finally, $p(x | a)$ is the probability density at outcome $x$ given the action $a$. In our case, $p(x | a) = p(x)$, since we do not expect our FPL team choices to have any bearing on real Premier League match results!

Our job, then, is to estimate the above integral as a function of the action $a$ and then maximise it with respect to $a$. This will provide us with the action $a_*$ that we believe will lead to the largest points total $U$. It's clear that $U(x, a)$ is easy to calculate given a team and some results, since all we need to do is apply the rules of the game. However, $p(x | a)$ is much more of a challenge. This component of the integrand is a *probabilistic model* of the Premier League, down to the player level. It is a daunting prospect to build such a model -- there are hundreds of players in the league, spread across 20 different teams. On top of building this model, we also need an efficient way to find the optimal action $a_*$, which belongs to an enormous set of potential actions.
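In code, this estimate-then-maximise step can be sketched with simple Monte Carlo (the function names are ours for illustration; `sample_outcome` stands in for a draw from the probabilistic model described below):

```python
def expected_utility(action, sample_outcome, utility, n_samples=1000):
    """Monte Carlo estimate of E[U | a]: average U(x, a) over draws x ~ p(x).

    Since our FPL choices don't affect real results, p(x | a) = p(x).
    """
    return sum(utility(sample_outcome(), action)
               for _ in range(n_samples)) / n_samples


def best_action(actions, sample_outcome, utility, n_samples=1000):
    """Return the action a* maximising the estimated expected utility."""
    return max(actions,
               key=lambda a: expected_utility(a, sample_outcome,
                                              utility, n_samples))
```

The hard work, of course, is hidden inside `sample_outcome`: drawing realistic match outcomes is exactly the modelling problem described in the next section.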

**Estimating points for all players**

In order to train our models, we use all the Premier League results and FPL player scores for the past three seasons, as well as the results-so-far for this season (fortunately web APIs exist that let us access all this information). We then use a two-step process to forecast the expected points for every player for every match:

1. **Team level**: what are the probabilities of every possible scoreline for each match?
2. **Player level**: for a given scoreline, how many *points* is a player likely to obtain?

For the team-level model we took a Bayesian approach based on Dixon & Coles (1997). In the model, every team has two *latent abilities*; $\alpha$ for attack and $\beta$ for defence. Another parameter $\gamma$ encapsulates "home advantage" (most teams score more goals and concede fewer when playing at their home ground). The figure below shows the inference for $\alpha$ and $\beta$ (smaller $\beta$ is better, whereas larger $\alpha$ is better) for all teams from last season's data. There is an obvious trend: clubs that score more goals also tend to concede fewer. We can also see the "big six" clubs are separated from the rest of the league, and that Man City are somewhat ahead of the pack (note the typical uncertainty scale in the top right corner before you read too much into the difference between teams, however!) We find $\gamma \sim 1.3$.
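To make this concrete, here is a minimal version of the resulting scoreline model (the ability values below are illustrative, not our fitted posteriors, and we omit the low-score correlation correction from Dixon & Coles):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of k goals under a Poisson rate lam."""
    return lam ** k * exp(-lam) / factorial(k)

def scoreline_probs(alpha_home, beta_home, alpha_away, beta_away,
                    gamma=1.3, max_goals=10):
    """Grid of scoreline probabilities from two independent Poissons.

    The home side scores at rate alpha_home * beta_away * gamma and the
    away side at rate alpha_away * beta_home, so a smaller beta means a
    better defence, and gamma > 1 encodes home advantage.
    """
    lam_home = alpha_home * beta_away * gamma
    lam_away = alpha_away * beta_home
    return {(h, a): poisson_pmf(h, lam_home) * poisson_pmf(a, lam_away)
            for h in range(max_goals + 1) for a in range(max_goals + 1)}

# A strong home side against a weaker visitor (illustrative abilities):
probs = scoreline_probs(alpha_home=1.8, beta_home=0.7,
                        alpha_away=1.1, beta_away=0.9)
print(max(probs, key=probs.get))  # most likely scoreline
```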

Once we have fitted the model, we have a probability of any scoreline for any given match; for example the plot below shows the probabilities for Man City vs Arsenal, with the most likely scoreline being 2-0 (in fact the result was 3-1).

The next step is to predict the FPL points for every player. Defenders and goalkeepers score the majority of their points based on how many minutes they play and how few goals their team concedes. Since the latter is a property of their team, and not them as an individual, we only need to use the team-level model to predict how many defensive points these players might obtain. We estimate the minutes they'll play by the average from their last three matches: a crude approximation, but we find it works reasonably well in practice.

In attack, however, the individual contributions are what count - players score points for scoring or assisting a goal (an "assist" means playing the final pass before a team-mate scores, or winning a penalty). To model these player-level outcomes, we take a more involved approach.

We attempt to model the *conditional distribution* of a player's contribution given a team-level scoreline. For example, if Man City score 3 goals, we ask: what is the probability that Aguero scored or assisted each of these goals? For each goal, there are three mutually exclusive outcomes for a player: they can score it, they can assist it, or they can have no involvement. Hence, a natural model to use is the multinomial: if $N$ goals are scored, then for a given player, these $N$ "trials" must be partitioned into the three types of outcome previously mentioned (score, assist, neither). We also condition on the number of minutes that the player has spent on the pitch during the game by assuming that players are equally likely to score or assist at any moment they are on the pitch.

In the end, we infer a simplex $\theta$ for each player, containing the probabilities that the player will score, assist, or do nothing when they are on the pitch and their team scores. If the player has simplex $\theta$ and spends $T$ minutes on the pitch, then the probabilities that they score, assist, or don't contribute to a given goal are:

\begin{equation}
\begin{aligned}
\mathrm{Pr}(\mathrm{score}) &= \dfrac{T\,\theta_0}{90} \\
\mathrm{Pr}(\mathrm{assist}) &= \dfrac{T\,\theta_1}{90} \\
\mathrm{Pr}(\mathrm{do\,nothing}) &= \dfrac{T\,(\theta_2 - 1) + 90}{90}.
\end{aligned}
\end{equation}

Notice that if $T=0$, then $\mathrm{Pr}(\mathrm{do\,nothing}) = 1$ - you can't contribute from the bench! One final point to make is that we use empirical Bayes to produce priors for $\theta$, and we use different priors for defenders, midfielders and strikers.
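These per-goal probabilities translate directly into code; a minimal sketch (the function name and tuple convention are ours):

```python
def goal_involvement_probs(theta, minutes):
    """Per-goal involvement probabilities for a player.

    theta = (p_score, p_assist, p_neither) is the player's simplex,
    conditional on being on the pitch for the full 90 minutes; the
    active components are scaled by minutes / 90.
    """
    p_score = minutes * theta[0] / 90
    p_assist = minutes * theta[1] / 90
    p_nothing = (minutes * (theta[2] - 1) + 90) / 90
    return p_score, p_assist, p_nothing
```

The three probabilities always sum to one, and at `minutes = 0` the function returns `(0, 0, 1)`: you can't contribute from the bench.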

The figure below shows the probability from our model that a player will assist (x-axis) or score (y-axis) a goal that is scored by their team (if they are on the pitch) for all players from the 2017/18 season. The structure is intuitive: defenders (green) are the least likely to score or assist goals. Midfielders (orange) cover a large range as we would expect -- some are defensive, some are attacking. Finally, strikers (blue) are the most likely to score, and their propensity to score is negatively correlated with their propensity to assist. The single orange dot in the blue cloud of players is, you guessed it, Mohamed Salah! He was top scorer in the 2017/18 season, despite being classified as a midfielder.

To obtain the expected number of attacking points for a player, we marginalise over the number of goals and assists that they might score:

$$
\mathbb{E}\left[\mathrm{points}\right] = \int dT \sum_{N=0}^{N_\mathrm{max}}\sum_m \mathrm{points}(m)\,\mathrm{Pr}(m|T,N)\,\mathrm{Pr}(N)\,p(T).
$$

This formula has a lot of different components:

- $T$ is the time a player is on the pitch;
- $N$ is the number of goals scored by his team;
- $m = (m_\mathrm{goals}, m_\mathrm{assist}, m_\mathrm{neither})$ is the contribution of the player to the $N$ goals (the sum of the components of $m$ is $N$);
- $\mathrm{points}(m)$ represents the number of points awarded for these in the rules of the game (e.g. defenders get more points for scoring a goal than attackers do);
- $\mathrm{Pr}(m|T,N)$ is the output of our multinomial, player-level model;
- $\mathrm{Pr}(N)$ is the probability that the player's team scores $N$ goals given the opponent, and is computed using the team-level model (this component is really important. Harry Kane is more likely to score a hatful if Spurs are playing against Fulham than if they face Man City!).

We approximate the integral over $T$ using Monte Carlo integration, assuming that the minutes played in each of the player's last three matches are a representative draw from $p(T)$. Finally, we set $N_\mathrm{max}$ to 10, as it's pretty unlikely that any team will score more than 10 goals in a match.
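Because $\mathrm{points}(m)$ is linear in the goal and assist counts, the inner sums over $m$ collapse to a closed form, so the Monte Carlo average over $T$ is cheap. A sketch (the function name and argument schema are ours, and the default point values are illustrative rather than the full position-dependent FPL scoring):

```python
def expected_attacking_points(theta, recent_minutes, team_goal_probs,
                              goal_points=4, assist_points=3):
    """Expected attacking points, marginalising over T, N and m.

    theta: (p_score, p_assist, p_neither) per team goal over a full 90.
    recent_minutes: minutes played in the player's last three matches,
    treated as draws from p(T).
    team_goal_probs: {N: Pr(team scores N goals)} from the team model.
    Linearity of points in m reduces the multinomial expectation to
    N times the per-goal probabilities.
    """
    expected_goals = sum(n * p for n, p in team_goal_probs.items())
    total = 0.0
    for t in recent_minutes:  # Monte Carlo average over p(T)
        p_score = t * theta[0] / 90
        p_assist = t * theta[1] / 90
        total += expected_goals * (p_score * goal_points
                                   + p_assist * assist_points)
    return total / len(recent_minutes)
```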

**Optimising our squad**

At this point, we have obtained an expected points total for every player for all fixtures. We are now faced with an optimisation problem: how do we maximise the expected score for our team over the next few weeks, given that there is a points cost to transferring players in and out of our squad?

Arguably the hardest part of this is the initial squad selection, due to the enormous search space. Once we have chosen the starting squad, we would generally only make one or two transfers per week, and we can brute-force the evaluation of all possible sets of transfers in order to find the strategy that will maximise our expected points over the next *n* (typically 3) matches. However, for choosing our starting set of 15 players, there is no way we could try all possible combinations, so instead we do the following:

- Order all the forwards, midfielders, defenders, and keepers by expected points over the next *n* matches.
- Starting from the top of these lists, add players to our squad, skipping to the next in the list if we would violate the constraint that one can only choose 3 players from any given Premier League club.
- Long before we've filled up our squad, we will have exceeded our budget (there is a strong correlation between expensive players and players that are expected to score well!). At this point, we randomly choose one player from our proto-squad to remove, and replace them with the next-best player in the same position.
- Repeat this procedure of randomly removing a player and replacing them with the next-best, until we converge on an affordable 15-player squad.
- Repeat this whole procedure 100 times to get 100 prospective squads, and choose the one with the best predicted points.
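A simplified sketch of this pick-then-repair heuristic (the player-dict schema, the budget value, and the function names are our assumptions; `SQUAD_SHAPE` encodes the standard FPL requirement of 2 goalkeepers, 5 defenders, 5 midfielders and 3 forwards):

```python
import random

SQUAD_SHAPE = {"GK": 2, "DEF": 5, "MID": 5, "FWD": 3}  # 15-player FPL squad
BUDGET = 100.0  # illustrative budget cap
MAX_PER_CLUB = 3

def greedy_squad(players, rng=None):
    """One pass of the pick-then-repair heuristic.

    players: list of dicts with keys "name", "position", "club",
    "price" and "expected_points" (a hypothetical schema).
    """
    rng = rng or random.Random(0)
    # Rank each position by expected points, best first.
    ranked = {pos: sorted((p for p in players if p["position"] == pos),
                          key=lambda p: -p["expected_points"])
              for pos in SQUAD_SHAPE}
    squad = []
    # Fill each position from the top of its list, respecting the club cap.
    for pos, need in SQUAD_SHAPE.items():
        for p in ranked[pos]:
            if sum(q["club"] == p["club"] for q in squad) < MAX_PER_CLUB:
                squad.append(p)
                if sum(q["position"] == pos for q in squad) == need:
                    break
    # Repair: while over budget, swap a random member for the next-best
    # cheaper player in the same position.
    while sum(p["price"] for p in squad) > BUDGET:
        victim = rng.choice(squad)
        pool = [p for p in ranked[victim["position"]]
                if p not in squad and p["price"] < victim["price"]
                and sum(q["club"] == p["club"]
                        for q in squad if q is not victim) < MAX_PER_CLUB]
        if not pool:
            continue  # this swap can't reduce cost; try another victim
        squad.remove(victim)
        squad.append(pool[0])
    return squad
```

Running this 100 times with different random seeds and keeping the squad with the best predicted points gives the final step of the procedure above.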

As mentioned above, once we pick the initial squad, for most of the season we have a much more straightforward optimisation. We usually look 3 weeks ahead, evaluating all possible combinations of zero, one, or two transfers per week, trading off the extra points we might get from transferring in a player with a favourable fixture against the points penalty incurred for making more than one transfer per week.
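For a single gameweek, that brute-force evaluation can be sketched as follows (this toy version ignores positions, prices and the club cap; the 4-point penalty per transfer beyond the free allowance follows the FPL rules):

```python
from itertools import combinations

def best_transfers(squad, candidates, free_transfers=1, max_transfers=2,
                   penalty=4):
    """Brute-force the zero-, one- and two-transfer options for one week.

    squad and candidates map player name -> expected points
    (a hypothetical schema). Returns (best expected points, transfers),
    where transfers is a list of (player out, player in) pairs.
    """
    base = sum(squad.values())
    best_pts, best_moves = base, []  # the zero-transfer option
    for n in range(1, max_transfers + 1):
        hit = max(0, n - free_transfers) * penalty  # points penalty
        for outs in combinations(squad, n):
            for ins in combinations(candidates, n):
                pts = (base - sum(squad[o] for o in outs)
                       + sum(candidates[i] for i in ins) - hit)
                if pts > best_pts:
                    best_pts, best_moves = pts, list(zip(outs, ins))
    return best_pts, best_moves
```

Extending this over several gameweeks multiplies the number of combinations, which is why the lookahead horizon *n* has to stay small.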

## Conclusion

**How are we doing?**

Our team, AIrsenal, is currently ranked 1.7M out of 6.3M players, i.e. well inside the top 30%. FPL is a high-variance game, but this level of performance is reasonable for our first foray into using statistics and ML to approach the problem!

**Future improvements**

We'll be back, new and hopefully improved, for the 2019/20 season! The code behind AIrsenal was written back in August last year (in time for the start of the season), so we've got plenty of potential improvements in mind if we can find the time to implement them!

Some of the updates we're looking to make are:

- Better predictions for how many minutes a player might play, and if a player is injured, can we predict who might replace them?
- Predictions for "bonus points" (for every Premier League match, three top-performing players are awarded extra FPL points).
- Better optimisation - with improved architecture for the optimiser, we should be able to look further ahead, and prepare better for the crucial "double-gameweeks", where some teams play more than one match in the same week, which can make a huge difference to the performance of an FPL team!
- Better wild-card optimisation: related to the above, we would like to improve the decision making around when to use the "wild card" or "free hit" chips, which allow unlimited free transfers for a single week. Astute usage of these is critical to a successful season in FPL.
- Making our code more usable for others!

With all these changes, and a bit of luck, we're hoping to challenge the top 10% of FPL managers next season. In the meantime though, you can check out the AIrsenal code on GitHub.