
Sam Kenkel


Lol_Scout 2: Feature Engineering, Initial Modeling

This is the second of three blog posts about my process and discoveries working with data from Riot’s online game, League of Legends. This post is a technical writeup of the code I used for my initial ‘baseline’ modeling, along with my data preparation (and imputation) code. The code I wrote for the initial sanity-check modeling work can be found here. The feature engineering/data prep code is here. The code for my ‘final’ models is here.

Background: Summary of the previous post

In the 5v5 video game/esport, two teams of five players compete against each other. I have gathered data using the Riot Games API. I’m trying to use machine learning to predict wins or losses based on the characters (champions) that players choose, and those players’ skill with (and practice on) those champions.

After loading my data in, I check how many matches I have gathered at each patch version:
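
The snippet itself isn’t reproduced here; below is a minimal sketch of that check, assuming the matches were loaded into a pandas DataFrame with the Riot API’s gameVersion field (the filename is hypothetical):

```python
import pandas as pd

matches = pd.read_csv("matches.csv")  # hypothetical filename

# Collapse full version strings like "8.24.256.315" down to the
# patch ("8.24"), then count how many matches fall on each patch.
patch = matches["gameVersion"].str.split(".").str[:2].str.join(".")
print(patch.value_counts())
```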

As an initial test, I’m going to see what happens if I ignore users and roles entirely: I’m going to make a classifier that works on nothing but the champions picked.

It is important to note here that all of this data is from ranked, matchmade games. Riot Games has an internal algorithm which attempts to construct teams of players that have an equal chance of winning and losing. However, since this matching occurs before characters (champions) have been banned or selected, Riot’s matchmaking system cannot predict or match games based on champion selection.

The model I’m going to start with is a latent factor embedding neural net. It’s best to think of this as an improvement on a matrix factorization recommender: there is a layer where every champion ID is embedded into an N-dimensional vector. The key here is that the hidden factors are adjusted during training to minimize error (the cost function), and that every time champion “2” is embedded in a match, it must be the same vector as every other time champion 2 is embedded.

All of those vectors are concatenated into one layer, then a densely connected layer (using sigmoids) is added (with a dropout layer in between) and connected to a final sigmoid which is the predictor.

To explain this in terms of League of Legends itself: the embedding layer will start to learn aspects of champions, and one hidden factor (or the PCA of a hidden factor) will become “tankiness”; champions like Nautilus, Garen, Sion, and Mundo will have the highest scores for it, because all of those characters have been designed to be hard to kill and survivable. Squishy, high-damage champions will populate the other end of this score.

The second layer then becomes interactions between all of these components: what does it mean to have 4 “tanky” champions on one side, and no “tanky” champions on the other? What if one team has 3 “lane bullies” (characters who are easier to play in the early game, but cannot scale as well into the late game) and the other team has 3 hypercarries (characters who are weak early but scale into overwhelming late-game threats)?

The third layer, dropout, randomly kills parts of the network during training. Imagine you are working at a company, and every day one-fifth of your coworkers don’t show up. Everyone else needs to band together to get things done and keep the lights on. The next day a different one-fifth doesn’t show up. Eventually (if your company doesn’t go under), you no longer have any “specialized” knowledge that only one or two employees hold, because everyone else had to learn it on the days those employees weren’t there.

In a neural network, adding a layer like this helps prevent the network from “memorizing” your training data (learning the training data so well that it can’t make useful predictions on anything else). This is one of the largest risks of using a neural net on insufficient data. As a non-theoretical example: if I segment out only the matches in my dataset from a rarely played patch (five thousand total matches), separate those into a train/test split, and add several more layers to my net, I can easily get a classifier to 70 percent accuracy on my training data, at the expense of dropping below 50 percent (the score I would get from a model that guesses randomly) on the holdout set. Like an unbounded decision tree fed data with too few examples (and too many features), it has memorized the training data, not generated a model.

The alternative way to handle something like this is to one-hot encode characters, and then use a “classical” machine learning classifier (boosted trees, k-nearest neighbors). I didn’t pursue this at this stage because I wanted a quick baseline, and I cared about the latent factor interactions, not the raw champion interactions.
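
For reference, a minimal sketch of that alternative might look like this, with gradient boosted trees standing in for whichever classical classifier you prefer (the file, column names, and label are assumptions):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical: ten champion-ID columns plus a team-one-win label.
games = pd.read_csv("matches.csv")
X = pd.get_dummies(games.drop(columns=["team1_win"]).astype(str))
y = games["team1_win"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print(clf.score(X_test, y_test))
```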

Here is my code for the network:
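The original snippet was embedded from a notebook and isn’t reproduced here, so the following is a minimal sketch of the architecture described above, using Keras; the layer sizes, dropout rate, and names are my assumptions, not the originals. Note that each player slot gets its own embedding layer in this first version, which matters for the results below.

```python
from keras.layers import Input, Embedding, Flatten, Concatenate, Dense, Dropout
from keras.models import Model

N_CHAMPIONS = 134   # number of champion embeddings (figure from the post)
EMBED_DIM = 10      # hypothetical latent-factor dimension

# One input per player slot; slots 1-5 are team one, slots 6-10 are team two.
inputs, vectors = [], []
for i in range(10):
    champ = Input(shape=(1,), name=f"player_{i + 1}_champ")
    # A separate Embedding per slot: champion 101 in slot 1 and slot 6
    # get different vectors (the flaw diagnosed below).
    vectors.append(Flatten()(Embedding(N_CHAMPIONS, EMBED_DIM)(champ)))
    inputs.append(champ)

# All ten champion vectors become one layer, then a densely connected
# sigmoid layer, dropout, and a final sigmoid win/loss predictor.
merged = Concatenate()(vectors)
hidden = Dense(64, activation="sigmoid")(merged)   # hypothetical width
hidden = Dropout(0.2)(hidden)                      # hypothetical rate
output = Dense(1, activation="sigmoid")(hidden)

model = Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```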

When I tried to fit it, I wasn’t thrilled with my results:

The network ended up with “acc: 0.5740 – val_loss: 0.7035 – val_acc: 0.5274”.

It was still overfitting hard. This is to be expected given how the data was being treated: player 1’s champion ID 101 was embedded differently than player 6’s champion ID 101. Players 1 through 5 are always team one, and players 6 through 10 are always team two, but what role each player plays has not been set. (Role is roughly “position,” to extend a sports analogy; it determines how a character will be used in the late game, and which other character they will play “against” for the first 10-15 minutes of the game, known as the “laning phase.”)

The Riot API records which role each player plays, so I wrote a script to convert my data to a format where the characters are listed under the 5 standard roles per team.

Then I did some cleanup to remove “null” columns (role combinations that can’t exist), as well as every game where the two teams don’t each have all 5 roles represented.
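
A hypothetical sketch of both steps: pivoting each game’s ten participants into one column per (team, role), then dropping the impossible “null” roles and any game missing a role. The lane/role fields follow the Riot match API; the filename and final column labels are assumptions.

```python
import pandas as pd

def position(row):
    # Collapse the API's lane/role pair into the five standard positions.
    if row["lane"] == "TOP":
        return "top"
    if row["lane"] == "JUNGLE":
        return "jungle"
    if row["lane"] == "MIDDLE":
        return "mid"
    if row["role"] == "DUO_CARRY":
        return "adc"
    if row["role"] == "DUO_SUPPORT":
        return "support"
    return "null"  # anything else is not a "real" role

participants = pd.read_csv("participants.csv")  # hypothetical filename
participants["position"] = participants.apply(position, axis=1)

# One champion-ID column per (team, position), e.g. (100, "top").
wide = participants.pivot_table(index="gameId",
                                columns=["teamId", "position"],
                                values="championId",
                                aggfunc="first")

# Drop the "null" columns, then every game where any of the 10 roles is empty.
wide = wide.drop(columns="null", level="position", errors="ignore")
wide = wide.dropna()
```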

Then I rewrote the neural net from before:

It’s important to note the lambda function I use before the embedding: when I set up an embedding layer (which I am re-using from the previous neural net), I set the total number of embeddings to generate (in my code it was 134). If I don’t use my lambda function to re-number the champions picked (and remove skipped numbers), then the model never converges (its error increases over time).

Here is a shortened version of that code, as well as the results:
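That code also isn’t reproduced here, so below is a hypothetical reconstruction of the idea. The post does the re-numbering with a lambda in the model itself; this sketch applies the equivalent remap to the data before a shared-embedding net, and all column, file, and label names are assumptions.

```python
import pandas as pd
from keras.layers import Input, Embedding, Flatten, Concatenate, Dense, Dropout
from keras.models import Model

role_cols = ["t1_top", "t1_jungle", "t1_mid", "t1_adc", "t1_support",
             "t2_top", "t2_jungle", "t2_mid", "t2_adc", "t2_support"]
games = pd.read_csv("games_by_role.csv")  # hypothetical filename

# Riot's champion IDs skip numbers, so re-map the raw IDs to a dense
# 0..133 range; otherwise they overflow the 134-row embedding table.
dense_id = {cid: i for i, cid in
            enumerate(sorted(pd.unique(games[role_cols].values.ravel())))}
games[role_cols] = games[role_cols].applymap(lambda cid: dense_id[cid])

# One shared embedding for every role column this time: champion 2 gets
# the same vector no matter which team or role it appears in.
embed = Embedding(input_dim=len(dense_id), output_dim=10)
inputs = [Input(shape=(1,), name=c) for c in role_cols]
vectors = [Flatten()(embed(x)) for x in inputs]
hidden = Dropout(0.2)(Dense(64, activation="sigmoid")(Concatenate()(vectors)))
output = Dense(1, activation="sigmoid")(hidden)

model = Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit([games[c] for c in role_cols], games["team1_win"],
          validation_split=0.2, epochs=10)
```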

So I’m getting 53-54% on both the training and the test data, without much hyperparameter tuning or optimization. Rather than work to tweak this model, I moved on to adding the data that I wanted to explore: each player’s skill with their chosen character.

I moved to a different notebook to prep all of my data.

After running my “role” code earlier, I have the following columns (note the included “drop” command to remove the columns that are not “real” roles):

I write three very similar functions which all do the same basic thing:

For each role, find that player, go to the player’s data in the player-skill data frame, and get the relevant data.

Each of the three functions has a different approach to what to do if a player doesn’t have any relevant data in the dataframe (which means I didn’t get information for that player). The first function sets these “missing” values to 1 win and 2 losses. The second function imputes (generates the missing data) by taking the mean of my dataset. The third function leaves the missing data as null.

Here is the code for my data prep:
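The full notebook isn’t reproduced here; the following is a hypothetical sketch of the three lookup functions, assuming the player-skill frame is indexed by (accountId, championId) with per-champion wins and losses:

```python
def lookup_fixed(skill, player, champ):
    """Missing players get a fixed pessimistic prior: 1 win, 2 losses."""
    try:
        row = skill.loc[(player, champ)]
        return row["wins"], row["losses"]
    except KeyError:
        return 1, 2

def lookup_mean(skill, player, champ):
    """Missing players get the dataset mean (mean imputation)."""
    try:
        row = skill.loc[(player, champ)]
        return row["wins"], row["losses"]
    except KeyError:
        return skill["wins"].mean(), skill["losses"].mean()

def lookup_null(skill, player, champ):
    """Missing players stay null, to be handled downstream."""
    try:
        row = skill.loc[(player, champ)]
        return row["wins"], row["losses"]
    except KeyError:
        return None, None
```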

Then I export all of these as .csvs and go to a new notebook to test these new datasets.
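
For completeness, the export step would look something like this (the DataFrame and file names are assumptions):

```python
# Hypothetical names for the three imputation variants built above.
for df, name in [(fixed_df, "skill_fixed"), (mean_df, "skill_mean"),
                 (null_df, "skill_null")]:
    df.to_csv(name + ".csv", index=False)
```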