CS224N: PyTorch Tutorial (Winter '21)

Author: Dilara Soylu

In this notebook, we will have a basic introduction to PyTorch and work on a toy NLP task. The following resources have been used in the preparation of this notebook:

Many thanks to Angelica Sun and John Hewitt for their feedback.

Introduction

PyTorch is a machine learning framework that is used in both academia and industry for various applications. PyTorch started off as a more flexible alternative to TensorFlow, which is another popular machine learning framework. At the time of its release, PyTorch appealed to users due to its user-friendly nature: as opposed to defining static graphs before performing an operation, as in TensorFlow, PyTorch allowed users to define their operations as they go, an approach that TensorFlow also adopted in its later releases. Although TensorFlow is more widely preferred in industry, PyTorch is often the machine learning framework of choice for researchers. If you would like to learn more about the differences between the two, you can check out this blog post.

Now that we have learned enough about the background of PyTorch, let's start by importing it into our notebook. To install PyTorch, you can follow the instructions here. Alternatively, you can open this notebook using Google Colab, which already has PyTorch installed in its base kernel. Once you are done with the installation process, run the following cell:
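A minimal version of that import cell might look like the following (the version print is just a sanity check):

```python
import torch

print(torch.__version__)  # confirm that the installation worked
```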

We are all set to start our tutorial. Let's dive in!

Tensors

Tensors are the most basic building blocks in PyTorch. Tensors are similar to matrices, but they have extra properties and can represent higher dimensions. For example, a square image with 256 pixels on each side can be represented by a 3x256x256 tensor, where the first dimension of size 3 represents the color channels: red, green, and blue.

Tensor Initialization

There are several ways to instantiate tensors in PyTorch, which we will go through next.

From a Python List

We can initialize a tensor from a Python list, which could include sublists. The dimensions and the data types will be automatically inferred by PyTorch when we use torch.tensor().

We can also call torch.tensor() with the optional dtype parameter, which will set the data type. Some useful datatypes to be familiar with are: torch.bool, torch.float, and torch.long.
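For instance, a short sketch of both calls (the values are just illustrative):

```python
# Infer the dtype and shape from a nested Python list
x = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(x.dtype)  # torch.int64

# Explicitly request a floating point tensor
x_float = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float)
print(x_float.dtype)  # torch.float32
```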

We can also get the same tensor in our specified data type using methods such as float(), long() etc.

We can also use the torch.FloatTensor, torch.LongTensor, and torch.Tensor classes to instantiate a tensor of a particular type. LongTensors are particularly important in NLP as many methods that deal with indices require the indices to be passed as a LongTensor, which stores 64-bit integers.
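A brief sketch of both approaches (values are illustrative):

```python
x = torch.tensor([1, 2, 3])

# Convert an existing tensor to another data type
x_float = x.float()   # torch.float32
x_long = x.long()     # torch.int64

# Instantiate tensors of a particular type directly
y_float = torch.FloatTensor([1, 2, 3])
y_long = torch.LongTensor([1, 2, 3])
print(y_long.dtype)   # torch.int64
```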

From a NumPy Array

We can also initialize a tensor from a NumPy array.
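For example:

```python
import numpy as np

arr = np.array([[1.0, 2.0], [3.0, 4.0]])

# torch.tensor copies the data, while torch.from_numpy shares memory with the array
x = torch.tensor(arr)
y = torch.from_numpy(arr)
print(x.dtype, y.dtype)
```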

From a Tensor

We can also initialize a tensor from another tensor, using the following methods:
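Some commonly used ones are the *_like factory functions; a brief sketch:

```python
base = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

zeros = torch.zeros_like(base)  # same shape as base, filled with 0s
ones = torch.ones_like(base)    # same shape, filled with 1s
rand = torch.rand_like(base)    # same shape, uniform random values in [0, 1)
```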

All of these methods preserve the tensor properties of the original tensor passed in, such as the shape and device, which we will cover in a bit.

By Specifying a Shape

We can also instantiate tensors by specifying their shapes (which we will cover in more detail in a bit). The methods we could use follow the ones in the previous section:
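For example:

```python
shape = (2, 3)

zeros = torch.zeros(shape)  # 2x3 tensor of 0s
ones = torch.ones(shape)    # 2x3 tensor of 1s
rand = torch.rand(shape)    # 2x3 tensor of uniform random values in [0, 1)
```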

With torch.arange()

We can also create a tensor with torch.arange(end), which returns a 1-D tensor with elements ranging from 0 to end-1. We can use the optional start and step parameters to create tensors with different ranges.
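For example:

```python
print(torch.arange(5))         # tensor([0, 1, 2, 3, 4])
print(torch.arange(2, 10, 2))  # tensor([2, 4, 6, 8]): start=2, end=10, step=2
```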

Tensor Properties

Tensors have a few properties that are important for us to cover. These are namely the data type (dtype), shape, and device properties.

Data Type

The dtype property lets us see the data type of a tensor.

Shape

The shape property tells us the shape of our tensor. This can help us identify how many dimensions our tensor has, as well as how many elements exist in each dimension.

We can also get the size of a particular dimension with the size() method.
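For example:

```python
x = torch.zeros(3, 2)

print(x.shape)    # torch.Size([3, 2])
print(x.size(0))  # 3, the size of the first dimension
```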

We can change the shape of a tensor with the view() method.
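A short example:

```python
x = torch.arange(6)

print(x.view(2, 3))   # reshape into a 2x3 tensor
print(x.view(3, -1))  # -1 lets PyTorch infer the remaining dimension (here 2)
```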

We can also use the torch.reshape() method for a similar purpose. There is a subtle difference between reshape() and view(): view() requires the data to be stored contiguously in memory. You can refer to this StackOverflow answer for more information. In simple terms, contiguous means that the way our data is laid out in memory is the same as the way we would read elements from it. This happens because some methods, such as transpose() and view(), do not actually change how our data is stored in memory. They just change the meta information about our tensor, so that when we use it we will see the elements in the order we expect.

reshape() calls view() internally if the data is stored contiguously, if not, it returns a copy. The difference here isn't too important for basic tensors, but if you perform operations that make the underlying storage of the data non-contiguous (such as taking a transpose), you will have issues using view(). If you would like to match the way your tensor is stored in the memory to how it is used, you can use the contiguous() method.

We can use the torch.unsqueeze(x, dim) function to add a dimension of size 1 at the provided dim, where x is the tensor. We can also use the corresponding torch.squeeze(x), which removes the dimensions of size 1.
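For example:

```python
x = torch.zeros(3, 2)

x_unsqueezed = torch.unsqueeze(x, 0)      # shape becomes (1, 3, 2)
x_squeezed = torch.squeeze(x_unsqueezed)  # back to (3, 2)
print(x_unsqueezed.shape, x_squeezed.shape)
```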

If we want to get the total number of elements in a tensor, we can use the numel() method.

Device

The device property tells us where our tensor is stored. Where a tensor is stored determines which device, GPU or CPU, will handle the computations involving it. We can find the device of a tensor with the device property.

We can move a tensor from one device to another with the method to(device).
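A short sketch (the GPU branch only runs if CUDA is available):

```python
x = torch.zeros(2, 2)
print(x.device)  # cpu by default

# Move the tensor to the GPU, if one is available
if torch.cuda.is_available():
    x = x.to('cuda')
    print(x.device)
```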

Tensor Indexing

In PyTorch we can index tensors, similar to NumPy.

We can also index into multiple dimensions with :.

We can also access arbitrary elements in each dimension.

We can get a Python scalar value from a tensor with item().
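The following sketch covers these indexing patterns (values are illustrative):

```python
x = torch.arange(12).view(3, 4)

print(x[0])               # the first row
print(x[:, 1])            # the second element of every row
print(x[[0, 2], [1, 3]])  # arbitrary elements: x[0, 1] and x[2, 3]
print(x[0, 1].item())     # a single element as a Python scalar
```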

Operations

PyTorch operations are very similar to those of NumPy. We can work with both scalars and other tensors.

We can apply the same operations between different tensors of compatible sizes.

We can use tensor.matmul(other_tensor) for matrix multiplication and tensor.T for transpose. Matrix multiplication can also be performed with @.
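A brief sketch of these operations:

```python
a = torch.ones(2, 3)
b = torch.ones(3, 2)

print(a + 1)          # elementwise addition with a scalar
print(a * 2)          # elementwise multiplication with a scalar
print(a.matmul(b))    # (2x3) @ (3x2) -> (2x2) matrix multiplication
print((a @ b).shape)  # the same operation with the @ operator
print(a.T.shape)      # transpose: torch.Size([3, 2])
```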

We can take the mean and standard deviation along a certain dimension with the methods mean(dim) and std(dim). That is, if we want the mean 3x2 matrix of a 4x3x2 tensor, we would set dim to 0. We can call these methods with no arguments to get the mean and standard deviation of the whole tensor. To use mean() and std(), our tensor must be of a floating point type.
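For example:

```python
x = torch.rand(4, 3, 2)

print(x.mean())              # mean over all elements, a 0-dimensional tensor
print(x.mean(dim=0).shape)   # mean over the first dimension: torch.Size([3, 2])
print(x.std(dim=0).shape)    # same for the standard deviation
```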

We can concatenate tensors using torch.cat.
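For example:

```python
a = torch.zeros(2, 3)
b = torch.ones(2, 3)

print(torch.cat([a, b], dim=0).shape)  # (4, 3): stacked along the rows
print(torch.cat([a, b], dim=1).shape)  # (2, 6): stacked along the columns
```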

Most of the operations in PyTorch are not in place. However, PyTorch offers in-place versions of operations, marked by an underscore (_) at the end of the method name.
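For example:

```python
x = torch.ones(2, 2)

y = x.add(1)  # out of place: x is unchanged, the result is returned in y
x.add_(1)     # in place: x itself is modified
print(x)
```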

Autograd

PyTorch and other machine learning libraries are known for their automatic differentiation feature. That is, given that we have defined the set of operations that need to be performed, the framework itself can figure out how to compute the gradients. We can call the backward() method to ask PyTorch to calculate the gradients, which are then stored in the grad attribute.
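A minimal sketch:

```python
x = torch.tensor(2.0, requires_grad=True)  # track gradients for this tensor

y = x * x * 3  # y = 3x^2
y.backward()   # compute dy/dx
print(x.grad)  # dy/dx = 6x = 12.0
```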

Let's run backprop from a different tensor again to see what happens.
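The cell below repeats the setup so it runs on its own, then calls backward() a second time from a different tensor that also depends on x:

```python
x = torch.tensor(2.0, requires_grad=True)

y = x * x * 3
y.backward()
print(x.grad)  # 12.0

z = x * x * 3  # a second tensor that also depends on x
z.backward()
print(x.grad)  # 24.0: the new gradient has been added to the previous one
```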

We can see that the x.grad is updated to be the sum of the gradients calculated so far. When we run backprop in a neural network, we sum up all the gradients for a particular neuron before making an update. This is exactly what is happening here! This is also the reason why we need to run zero_grad() in every training iteration (more on this later). Otherwise our gradients would keep building up from one training iteration to the other, which would cause our updates to be wrong.

Neural Network Module

So far we have looked into tensors, their properties, and basic operations on tensors. These are especially useful to get familiar with if we are building the layers of our network from scratch. We will utilize these in Assignment 3, but moving forward, we will use predefined blocks in the torch.nn module of PyTorch. We will then put together these blocks to create complex networks. Let's start by importing this module with an alias so that we don't have to type torch.nn every time we use it.
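```python
import torch.nn as nn
```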

Linear Layer

We can use nn.Linear(H_in, H_out) to create a linear layer. This will take a matrix of (N, *, H_in) dimensions and output a matrix of (N, *, H_out). The * denotes that there could be an arbitrary number of dimensions in between. The linear layer performs the operation Ax+b, where A and b are initialized randomly. If we don't want the linear layer to learn the bias parameters, we can initialize our layer with bias=False.
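A short example (the dimensions are illustrative):

```python
# A linear layer mapping 4-dimensional inputs to 2-dimensional outputs
linear = nn.Linear(4, 2)

x = torch.ones(3, 4)  # a batch of 3 examples, each with 4 features
output = linear(x)
print(output.shape)   # torch.Size([3, 2])
```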

Other Module Layers

There are several other preconfigured layers in the nn module. Some commonly used examples are nn.Conv2d, nn.ConvTranspose2d, nn.BatchNorm1d, nn.BatchNorm2d, nn.Upsample and nn.MaxPool2d among many others. We will learn more about these as we progress in the course. For now, the only important thing to remember is that we can treat each of these layers as plug and play components: we will be providing the required dimensions and PyTorch will take care of setting them up.

Activation Function Layer

We can also use the nn module to apply activation functions to our tensors. Activation functions are used to add non-linearity to our network. Some examples of activation functions are nn.ReLU(), nn.Sigmoid() and nn.LeakyReLU(). Activation functions operate on each element separately, so the shape of the tensor we get as output is the same as the one we pass in.
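For example:

```python
relu = nn.ReLU()

x = torch.tensor([[-1.0, 0.5], [2.0, -3.0]])
print(relu(x))  # negative entries become 0; the shape is unchanged
```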

Putting the Layers Together

So far we have seen that we can create layers and pass the output of one as the input of the next. Instead of creating intermediate tensors and passing them around, we can use nn.Sequential, which does exactly that.
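A sketch with illustrative dimensions:

```python
model = nn.Sequential(
    nn.Linear(4, 8),   # input dimension 4, hidden dimension 8
    nn.ReLU(),
    nn.Linear(8, 1),
    nn.Sigmoid()
)

x = torch.ones(3, 4)
print(model(x).shape)  # torch.Size([3, 1])
```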

Custom Modules

Instead of using the predefined modules, we can also build our own by extending the nn.Module class. For example, we can build the nn.Linear layer (which also extends nn.Module) on our own using the tensors introduced earlier! We can also build new, more complex modules, such as a custom neural network. You will be practicing these in the later assignments.

To create a custom module, the first thing we have to do is to extend nn.Module. We can then initialize our parameters in the __init__ function, starting with a call to the __init__ function of the super class. All the class attributes we define that are nn.Module objects are treated as trainable components: their parameters can be learned during training. Tensors are not parameters, but they can be turned into parameters if they are wrapped in the nn.Parameter class.

All classes extending nn.Module are also expected to implement a forward(x) function, where x is a tensor. This is the function that is called when an input is passed to our module, such as in model(x).

Here is an alternative way to define the same class. You can see that we can replace nn.Sequential by defining the individual layers in the __init__ method and connecting them in the forward method.
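A sketch of such a class (the name MultilayerPerceptron and the layer sizes are our own illustrative choices; the structure follows the description above):

```python
class MultilayerPerceptron(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(MultilayerPerceptron, self).__init__()
        # Defining the layers as attributes registers their parameters
        self.linear = nn.Linear(input_size, hidden_size)
        self.activation = nn.ReLU()
        self.linear2 = nn.Linear(hidden_size, input_size)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Connect the layers explicitly instead of relying on nn.Sequential
        hidden = self.activation(self.linear(x))
        return self.sigmoid(self.linear2(hidden))
```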

Now that we have defined our class, we can instantiate it and see what it does.
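For example, with an illustrative input size of 5 and hidden size of 3:

```python
model = MultilayerPerceptron(5, 3)
print(model(torch.ones(2, 5)).shape)  # torch.Size([2, 5])
```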

We can inspect the parameters of our model with named_parameters() and parameters() methods.
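For example:

```python
for name, param in model.named_parameters():
    print(name, param.shape)
```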

Optimization

We have shown how gradients are calculated with the backward() function. Having the gradients isn't enough for our models to learn, though. We also need to know how to update the parameters of our models. This is where optimizers come in. The torch.optim module contains several optimizers that we can use. Some popular examples are optim.SGD and optim.Adam. When initializing an optimizer, we pass it our model parameters, which can be accessed with model.parameters(), telling it which values it will be optimizing. Optimizers also have a learning rate (lr) parameter, which determines how big of an update will be made in every step. Different optimizers have different hyperparameters as well.

After we have our optimization function, we can define a loss that we want to optimize for. We can either define the loss ourselves, or use one of the predefined loss functions in PyTorch, such as nn.BCELoss(). Let's put everything together now! We will start by creating some dummy data.
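A small sketch of such dummy data (the shapes are illustrative):

```python
# Dummy data: the target y is all 1s, and the input x is y plus some noise
y = torch.ones(10, 5)
x = y + torch.randn_like(y)
```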

Now, we can define our model, optimizer and the loss function.
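One possible setup, using the MultilayerPerceptron defined above (the learning rate is illustrative):

```python
import torch.optim as optim

model = MultilayerPerceptron(5, 3)
adam = optim.Adam(model.parameters(), lr=1e-1)
loss_function = nn.BCELoss()

# Check the loss before any training
y_pred = model(x)
print(loss_function(y_pred, y).item())
```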

Let's see if we can have our model achieve a smaller loss. Now that we have everything we need, we can set up our training loop.
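A minimal training loop, assuming the model, optimizer, loss, and data from the cells above (the number of epochs is illustrative):

```python
n_epochs = 10
for epoch in range(n_epochs):
    adam.zero_grad()                 # clear gradients from the previous iteration
    y_pred = model(x)                # forward pass
    loss = loss_function(y_pred, y)  # compute the loss
    loss.backward()                  # backpropagate
    adam.step()                      # update the parameters
    print(f"Epoch {epoch}: training loss: {loss.item()}")
```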

You can see that our loss is decreasing. Let's check the predictions of our model now and see if they are close to our original y, which was all 1s.
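```python
# The outputs should now be close to 1
print(model(x))
```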

Great! Looks like our model almost perfectly learned to filter out the noise from the x that we passed in!

Demo: Word Window Classification

Until this part of the notebook, we have learned the fundamentals of PyTorch and built a basic network solving a toy task. Now we will attempt to solve an example NLP task. Here are the things we will learn:

  1. Data: Creating a Dataset of Batched Tensors
  2. Modeling
  3. Training
  4. Prediction

In this section, our goal will be to train a model that will find the words in a sentence corresponding to a LOCATION, which will always be of span 1 (meaning that San Francisco won't be recognized as a LOCATION). Our task is called Word Window Classification for a reason. Instead of letting our model look at only one word in each forward pass, we would like it to be able to consider the context of the word in question. That is, for each word, we want our model to be aware of the surrounding words. Let's dive in!

Data

The very first task of any machine learning project is to set up our training set. Usually, there will be a training corpus we will be utilizing. In NLP tasks, the corpus would generally be a .txt or .csv file where each row corresponds to a sentence or a tabular datapoint. In our toy task, we will assume that we have already read our data and the corresponding labels into a Python list.
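For concreteness, here is a tiny illustrative training set in that format (the first sentence and its labels come from the example discussed later in this section; the second sentence is made up for illustration, and the original notebook uses a larger set):

```python
# Each training example is a raw sentence; each label sequence marks
# LOCATION words with 1 and all other words with 0
train_sentences = ["We always come to Paris", "I love Rome"]
train_labels = [[0, 0, 0, 0, 1], [0, 0, 1]]
```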

Preprocessing

To make it easier for our models to learn, we usually apply a few preprocessing steps to our data. This is especially important when dealing with text data. Some common examples of text preprocessing are tokenization, lowercasing, and removing special characters or stop words.

Which preprocessing steps are necessary is determined by the task at hand. For example, although it is useful to remove special characters in some tasks, for others they may be important (for example, if we are dealing with multiple languages). For our task, we will lowercase our words and tokenize.
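A minimal sketch of these two steps, applied to the illustrative training set above:

```python
# Lowercase each sentence and split it into tokens on whitespace
train_sentences = [sentence.lower().split() for sentence in train_sentences]
print(train_sentences)
```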

For each training example we have, we should also have a corresponding label. Recall that the goal of our model was to determine which words correspond to a LOCATION. That is, we want our model to output 0 for all the words that are not LOCATIONs and 1 for the ones that are LOCATIONs.

Converting Words to Embeddings

Let's look at our training data a little more closely. Each datapoint we have is a sequence of words. On the other hand, we know that machine learning models work with numbers in vectors. How are we going to turn words into numbers? You may be thinking embeddings and you are right!

Imagine that we have an embedding lookup table E, where each row corresponds to an embedding. That is, each word in our vocabulary would have a corresponding embedding row i in this table. Whenever we want to find an embedding for a word, we will follow these steps:

  1. Find the corresponding index i of the word in the embedding table: word->index.
  2. Index into the embedding table and get the embedding: index->embedding.

Let's look at the first step. We should assign all the words in our vocabulary to a corresponding index. We can do it as follows:

  1. Find all the unique words in our corpus.
  2. Assign an index to each.
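Step 1 in code (step 2, assigning indices, happens a bit further below, after we add special tokens to the vocabulary):

```python
# Step 1: find all the unique words in our (tokenized) training corpus
vocabulary = set(word for sentence in train_sentences for word in sentence)
```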

vocabulary now contains all the words in our corpus. On the other hand, at test time we can see words that are not contained in our vocabulary. If we can figure out a way to represent the unknown words, our model can still reason about whether they are a LOCATION or not, since we are also looking at the neighboring words for each prediction.

We introduce a special token, <unk>, to tackle the words that are out of vocabulary. We could pick another string for our unknown token if we wanted. The only requirement here is that our token should be unique: we should only be using this token for unknown words. We will also add this special token to our vocabulary.

Earlier we mentioned that our task was called Word Window Classification because our model is looking at the surrounding words in addition to the given word when it needs to make a prediction.

For example, let's take the sentence "We always come to Paris". The corresponding training label for this sentence is 0, 0, 0, 0, 1 since only Paris, the last word, is a LOCATION. In one pass (meaning a call to forward()), our model will try to generate the correct label for one word. Let's say our model is trying to generate the correct label 1 for Paris. If we only allow our model to see Paris, but nothing else, we will miss out on the important information that the word to oftentimes appears with LOCATIONs.

Word windows allow our model to consider the surrounding +N or -N words of each word when making a prediction. In our earlier example for Paris, if we have a window size of 1, that means our model will look at the words that come immediately before and after Paris, which are to, and, well, nothing. Now, this raises another issue. Paris is at the end of our sentence, so there isn't another word following it. Remember that we define the input dimensions of our PyTorch models when we are initializing them. If we set the window size to be 1, it means that our model will be accepting 3 words in every pass. We cannot have our model expect 2 words from time to time.

The solution is to introduce a special token, such as <pad>, that will be added to our sentences to make sure that every word has a valid window around them. Similar to <unk> token, we could pick another string for our pad token if we wanted, as long as we make sure it is used for a unique purpose.
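Adding both special tokens to our vocabulary:

```python
# Add the special tokens introduced above to our vocabulary
vocabulary.add("<unk>")
vocabulary.add("<pad>")
```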

Now that our vocabulary is ready, let's assign an index to each of our words.
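```python
# Step 2: assign a unique index to each word in the vocabulary
ix_to_word = sorted(list(vocabulary))
word_to_ix = {word: ind for ind, word in enumerate(ix_to_word)}
```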

Great! We are ready to convert our training sentences into a sequence of indices corresponding to each token.
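A small helper for this conversion (the name convert_token_to_indices is our own; any out-of-vocabulary word is mapped to the <unk> index):

```python
def convert_token_to_indices(sentence, word_to_ix):
    # Map each token to its index, falling back to <unk> for unseen words
    return [word_to_ix.get(token, word_to_ix["<unk>"]) for token in sentence]

# An example containing a word that is not in our vocabulary
example_sentence = ["we", "always", "come", "to", "kuwait"]
example_indices = convert_token_to_indices(example_sentence, word_to_ix)
restored_example = [ix_to_word[ind] for ind in example_indices]
print(restored_example)  # ['we', 'always', 'come', 'to', '<unk>']
```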

In the example above, kuwait shows up as <unk>, because it is not included in our vocabulary. Let's convert our train_sentences to example_padded_indices.
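```python
example_padded_indices = [convert_token_to_indices(s, word_to_ix) for s in train_sentences]
print(example_padded_indices)
```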

Now that we have an index for each word in our vocabulary, we can create an embedding table with the nn.Embedding class in PyTorch. It is called as follows: nn.Embedding(num_words, embedding_dimension), where num_words is the number of words in our vocabulary and embedding_dimension is the dimension of the embeddings we want to have. There is nothing fancy about nn.Embedding: it is just a wrapper class around a trainable NxE dimensional tensor, where N is the number of words in our vocabulary and E is the number of embedding dimensions. This table is initially random, but it will change over time. As we train our network, the gradients will be backpropagated all the way to the embedding layer, and hence our word embeddings will be updated. We will initialize the embedding layer we will use for our model inside the model itself, but we show a standalone example here.
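For example (the embedding dimension is illustrative):

```python
embedding_dim = 5
embeds = nn.Embedding(len(vocabulary), embedding_dim)
print(list(embeds.parameters()))  # a single (num_words x embedding_dim) weight tensor
```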

To get the word embedding for a word in our vocabulary, all we need to do is create a lookup tensor. The lookup tensor is just a tensor containing the index we want to look up. The nn.Embedding class expects an index tensor of type LongTensor, so we should create our tensor accordingly.
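For example, assuming "paris" is in our vocabulary:

```python
# Look up the embedding for "paris"; the index tensor must be a LongTensor
index = word_to_ix["paris"]
lookup_tensor = torch.tensor([index], dtype=torch.long)
print(embeds(lookup_tensor))
```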

Usually, we define the embedding layer as part of our model, which you will see in the later sections of our notebook.

Batching Sentences

We have learned about batches in class. Waiting for our whole training corpus to be processed before making an update is costly. On the other hand, updating the parameters after every training example causes the loss to be less stable between updates. To combat these issues, we instead update our parameters after training on a batch of data. This allows us to get a better estimate of the gradient of the global loss. In this section, we will learn how to structure our data into batches using the torch.utils.data.DataLoader class.

We will be calling the DataLoader class as follows: DataLoader(data, batch_size=batch_size, shuffle=True, collate_fn=collate_fn). The batch_size parameter determines the number of examples per batch. In every epoch, we will be iterating over all the batches using the DataLoader. The order of batches is deterministic by default, but we can ask DataLoader to shuffle the batches by setting the shuffle parameter to True. This way we ensure that we don't encounter a bad batch multiple times.

If provided, DataLoader passes the batches it prepares to the collate_fn. We can write a custom function to pass to the collate_fn parameter in order to print stats about our batch or perform extra processing. In our case, we will use the collate_fn to:

  1. Window pad our train sentences.
  2. Convert the words in the training examples to indices.
  3. Pad the training examples so that all the sentences in a batch have the same length. Similarly, we also need to pad the labels. This creates an issue: when calculating the loss, we need to know the actual number of words in a given example. We will keep track of this number in the function we pass to the collate_fn parameter as well.

Because our version of the collate_fn function will need access to our word_to_ix dictionary (so that it can turn words into indices), we will make use of the partial function in Python, which passes the parameters we give to the function we pass it.
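A sketch of such a function is below. The helper pad_window is assumed here (it simply surrounds a sentence with window_size <pad> tokens on each side), convert_token_to_indices is the helper defined earlier, and nn.utils.rnn.pad_sequence handles the per-batch padding:

```python
from torch.nn.utils.rnn import pad_sequence


def pad_window(sentence, window_size, pad_token="<pad>"):
    # Surround the sentence with window_size pad tokens on each side
    return [pad_token] * window_size + sentence + [pad_token] * window_size


def custom_collate_fn(batch, window_size, word_to_ix):
    # Separate the batch into the sentences (x) and the label sequences (y)
    x, y = zip(*batch)

    # 1. Window pad the sentences
    x = [pad_window(s, window_size=window_size) for s in x]

    # 2. Convert the words to indices
    x = [convert_token_to_indices(s, word_to_ix) for s in x]

    # 3. Pad everything in the batch to the same length
    pad_token_ix = word_to_ix["<pad>"]
    x = [torch.LongTensor(x_i) for x_i in x]
    x_padded = pad_sequence(x, batch_first=True, padding_value=pad_token_ix)

    # Keep track of the true number of words in each example for the loss
    lengths = torch.LongTensor([len(label) for label in y])

    y = [torch.LongTensor(y_i) for y_i in y]
    y_padded = pad_sequence(y, batch_first=True, padding_value=0)

    return x_padded, y_padded, lengths
```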

This function seems long, but it really doesn't have to be. Check out the alternative version below where we remove the extra function declarations and comments.
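The same sketch, condensed:

```python
def custom_collate_fn(batch, window_size, word_to_ix):
    x, y = zip(*batch)
    x = [pad_window(s, window_size=window_size) for s in x]
    x = [convert_token_to_indices(s, word_to_ix) for s in x]
    pad_token_ix = word_to_ix["<pad>"]
    x_padded = pad_sequence([torch.LongTensor(x_i) for x_i in x],
                            batch_first=True, padding_value=pad_token_ix)
    lengths = torch.LongTensor([len(label) for label in y])
    y_padded = pad_sequence([torch.LongTensor(y_i) for y_i in y],
                            batch_first=True, padding_value=0)
    return x_padded, y_padded, lengths
```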

Now, we can see the DataLoader in action.
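A sketch of the setup, using the illustrative data and helpers above (batch_size and window_size values are illustrative):

```python
from functools import partial
from torch.utils.data import DataLoader

data = list(zip(train_sentences, train_labels))
batch_size = 2
window_size = 2

# partial binds the extra arguments so DataLoader only needs to pass the batch
collate_fn = partial(custom_collate_fn, window_size=window_size, word_to_ix=word_to_ix)
loader = DataLoader(data, batch_size=batch_size, shuffle=True, collate_fn=collate_fn)

for batched_x, batched_y, batched_lengths in loader:
    print("Batched input:", batched_x)
    print("Batched labels:", batched_y)
    print("Batched lengths:", batched_lengths)
```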

The batched input tensors you see above will be passed into our model. On the other hand, we started off saying that our model will be a window classifier. The way our input tensors are currently formatted, we have all the words in a sentence in one datapoint. When we pass this input to our model, it needs to create the windows for each word, make a prediction as to whether the center word is a LOCATION or not for each window, put the predictions together and return.

We could avoid this problem if we formatted our data by breaking it into windows beforehand. In this example, we will instead have our model take care of the formatting.

Given that our window_size is N, we want our model to make a prediction for every window of 2N+1 tokens. That is, if we have an input with 9 tokens and a window_size of 2, we want our model to return 5 predictions. This makes sense because before we padded it with 2 tokens on each side, our input also had 5 tokens in it!

We can create these windows by using for loops, but there is a faster PyTorch alternative, which is the unfold(dimension, size, step) method. We can create the windows we need using this method as follows:
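A short example with toy indices:

```python
# A toy batch of 2 "sentences", each already window padded to 8 token indices
batched_x = torch.arange(16).view(2, 8)
window_size = 2

# unfold(dimension, size, step): slide a window of 2*window_size + 1 tokens
# over dimension 1, moving one token at a time
windows = batched_x.unfold(1, 2 * window_size + 1, 1)
print(windows.shape)  # torch.Size([2, 4, 5]): 4 windows of 5 tokens per sentence
```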

Model

Now that we have prepared our data, we are ready to build our model. We have learned how to write custom nn.Module classes. We will do the same here and put everything we have learned so far together.
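A sketch of what such a model could look like (the class name, hyperparameters, and layer choices are illustrative; the key steps are the embedding lookup, window creation with unfold, and a per-window probability):

```python
class WordWindowClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, window_size, pad_ix=0):
        super(WordWindowClassifier, self).__init__()
        self.window_size = window_size
        full_window_size = 2 * window_size + 1

        # Embedding layer: maps word indices to trainable embedding vectors
        self.embeds = nn.Embedding(vocab_size, embed_dim, padding_idx=pad_ix)

        # Hidden layer over the concatenated embeddings of a window
        self.hidden_layer = nn.Sequential(
            nn.Linear(full_window_size * embed_dim, hidden_dim),
            nn.Tanh()
        )

        # Output layer: one probability per window (is the center word a LOCATION?)
        self.output_layer = nn.Linear(hidden_dim, 1)
        self.probabilities = nn.Sigmoid()

    def forward(self, inputs):
        B, L = inputs.size()  # batch size and padded sentence length

        # Create the word windows: (B, L - 2*window_size, 2*window_size + 1)
        token_windows = inputs.unfold(1, 2 * self.window_size + 1, 1)
        _, adjusted_length, _ = token_windows.size()

        # Embed each token and flatten the window dimension:
        # (B, adjusted_length, full_window_size * embed_dim)
        embedded_windows = self.embeds(token_windows)
        embedded_windows = embedded_windows.view(B, adjusted_length, -1)

        # Score each window and squeeze to (B, adjusted_length)
        hidden = self.hidden_layer(embedded_windows)
        output = self.probabilities(self.output_layer(hidden))
        return output.squeeze(2)
```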

Training

We are now ready to put everything together. Let's start with preparing our data and initializing our model. We can then initialize our optimizer and define our loss function. This time, instead of using one of the predefined loss functions as we did before, we will define our own loss function.
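One possible setup, continuing from the cells above (the hyperparameter values are illustrative, and the loss is a simple choice: binary cross entropy rescaled by the true number of words in the batch):

```python
# Instantiate the model
model = WordWindowClassifier(
    vocab_size=len(vocabulary),
    embed_dim=25,
    hidden_dim=25,
    window_size=window_size,
    pad_ix=word_to_ix["<pad>"],
)

# Optimizer (optim was imported earlier as torch.optim)
learning_rate = 0.01
optimizer = optim.SGD(model.parameters(), lr=learning_rate)

# Custom loss function
def loss_function(batch_outputs, batch_labels, batch_lengths):
    bceloss = nn.BCELoss()
    loss = bceloss(batch_outputs, batch_labels.float())
    # Rescale by the actual number of words, which we track in batch_lengths
    return loss / batch_lengths.sum().float()
```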

Unlike our earlier example, this time instead of passing all of our training data to the model at once in each epoch, we will be utilizing batches. Hence, in each training epoch iteration, we also iterate over the batches.
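A sketch of one training epoch, iterating over the batches produced by our DataLoader:

```python
def train_epoch(loss_function, optimizer, model, loader):
    total_loss = 0
    for batch_inputs, batch_labels, batch_lengths in loader:
        optimizer.zero_grad()                  # clear gradients from the last batch
        outputs = model.forward(batch_inputs)  # forward pass over the batch
        loss = loss_function(outputs, batch_labels, batch_lengths)
        loss.backward()                        # backpropagate
        optimizer.step()                       # update the parameters
        total_loss += loss.item()
    return total_loss
```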

Let's start training!
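Running the epochs (the number of epochs and the print frequency are illustrative):

```python
num_epochs = 1000
for epoch in range(num_epochs):
    epoch_loss = train_epoch(loss_function, optimizer, model, loader)
    if epoch % 100 == 0:
        print(epoch_loss)
```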

Prediction

Let's see how well our model is at making predictions. We can start by creating our test data.
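A small illustrative test set, formatted like our training data and wrapped in its own DataLoader:

```python
test_corpus = ["She comes from Paris"]
test_sentences = [s.lower().split() for s in test_corpus]
test_labels = [[0, 0, 0, 1]]

# No need to shuffle at test time; reuse the collate function from before
test_data = list(zip(test_sentences, test_labels))
test_loader = DataLoader(test_data, batch_size=1, shuffle=False, collate_fn=collate_fn)
```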

Let's loop over our test examples to see how well we are doing.
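One simple way to do this is to threshold the output probabilities at 0.5:

```python
for test_instance, labels, _ in test_loader:
    outputs = model.forward(test_instance)
    # Turn the probabilities into hard 0/1 predictions
    predictions = (outputs > 0.5).long()
    print("Labels:     ", labels)
    print("Predictions:", predictions)
```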