Getting a Local Linux Setup for Project 1

Windows: Use WSL2

Newer versions of Windows include a feature called “Windows Subsystem for Linux” (WSL), which runs Linux inside a lightweight virtual machine that you can conveniently access from your primary Windows OS.

To install WSL2, follow these instructions. Note that the older WSL1 is unlikely to work with our assignments.
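
On recent builds of Windows, installation may be as simple as running the following from an administrator PowerShell (a convenience shortcut; check Microsoft’s instructions for your specific Windows version):

# From an administrator PowerShell (may require a reboot):
wsl --install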

Then, from a terminal, run wsl to get into a WSL 2 command prompt, and then run the following:

# Install other dependencies:
sudo apt-get install -y build-essential make curl strace gdb
# Install Rust:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
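
Once the Rust installer finishes, you can make cargo available in your current shell and sanity-check the install:

# Make cargo available in the current shell (new shells pick this up automatically):
source "$HOME/.cargo/env"
# Verify the toolchain is installed:
rustc --version
cargo --version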

If you’re using VSCode, you may also want to install the “Remote - WSL” extension.
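
With that extension installed, one convenient way to open the project in a WSL-backed VSCode window (assuming the code command is available in WSL, which the extension typically sets up) is:

# From a WSL terminal, inside your project directory:
code .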

Mac with Intel chip, or Windows without WSL 2: Use Vagrant

Vagrant is a really helpful tool that creates and configures virtual machines on your computer. A virtual machine runs a complete “guest” operating system on top of your real operating system, which lets you easily run and test your code as if you were running it on a Linux machine.

First, you’ll want to install Vagrant. On a Mac with Homebrew installed, you can do this easily by running brew install --cask virtualbox vagrant. NOTE: if you’re on a Mac and get an error when starting up the VM (likely saying “kernel driver not installed”), check out this article.

Then, cd into the deet directory in your terminal. This directory has a file called Vagrantfile that will configure the VM when you bring it up (take a look at the comments in the file if you’re interested in how this works).
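
If you’re curious, a Vagrantfile is just a short Ruby script. The file in deet is the source of truth, but as a rough idea, a minimal Vagrantfile for an Ubuntu VM with build tools might look something like this (a hypothetical sketch, not the course’s actual file):

# Hypothetical sketch of a minimal Vagrantfile (see the real one in deet):
Vagrant.configure("2") do |config|
  # Start from a standard Ubuntu base box
  config.vm.box = "ubuntu/focal64"
  # Install build tools on first boot
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y build-essential gdb strace curl
  SHELL
end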

From the deet directory, run the following to set up the virtual machine:

vagrant up

This will create and configure a Linux VM on your computer according to the specification in the Vagrantfile.

The VM might take a minute or so to boot. Once it’s finished, you need to “ssh” into the VM. You can do this by running:

vagrant ssh

This will give you a terminal in your VM! From here, you can run normal commands like make, cargo run, or gdb, and they will run in your Vagrant VM.

Now, you can open whatever editor you normally use and start working on the assignment. You don’t have to write your code in the VM. The project directory is synced between your computer and the VM, which means any changes you make on your computer will automatically show up inside the VM. However, if you want to actually run the code in the VM, you need to do so from the VM terminal.
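
For example (assuming the project folder is synced to /vagrant, which is Vagrant’s default; the deet Vagrantfile may use a different path):

# On your computer: edit code in deet with your usual editor.
# In the VM terminal (after vagrant ssh), build and run from the synced folder:
cd /vagrant
cargo build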

(What I do is ssh into the VM, keep that terminal open, write my code in a different window in VSCode, and then go back to my VM terminal when I want to compile and run.)

Once you’re done with the project, you can shut down and delete the VM by running this command from the deet directory (on your system):

vagrant destroy

Note: I recommend keeping the VM around while you’re working on this project and SSHing into it from the deet directory when you need to, just because booting and configuring the VM (with vagrant up) takes a few minutes. That said, you’re also welcome to destroy and re-create the VM every time you go to work on the project; all your files are stored locally on your computer, so you won’t lose any work.
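
If you just want to power the VM off without deleting it, Vagrant also supports halting and resuming (standard Vagrant commands, run from the deet directory):

# Shut the VM down without deleting it:
vagrant halt
# Boot it again later (faster than the first time, since it’s already provisioned):
vagrant up
# Check whether the VM is currently running:
vagrant status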

Use Docker

Docker is a popular tool that creates consistent environments for developing, testing, and deploying software. It may take a bit more work to set up, but once it’s running, building and running your code should be pretty smooth.

Installing Docker

You can download and install Docker for Mac here and for Windows here. Make sure you have the latest version of Docker installed. After installing, open the Docker application so that it’s running in the background.
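
Once Docker is open, you can confirm that the command-line tools are installed and can talk to the Docker daemon:

# Check the CLI and the daemon; docker info will error if the daemon isn’t running:
docker --version
docker info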

Building the Docker image

cd into the deet/ directory and then run docker build:

docker build -t deet .

This will build a deet image containing the dependencies needed to run your program. This might take a while. (In our case, the dependencies are just a barebones version of Ubuntu, cargo, and make.) If you get a message that says “Cannot connect to the Docker daemon”, make sure you have Docker running in the background.

Once you build this image, you won’t need to do it again!
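
For the curious, the Dockerfile in deet defines what goes into the image. As a rough idea of what such a Dockerfile might contain (a hypothetical sketch; see the actual Dockerfile for the real configuration):

# Hypothetical sketch of a deet-style Dockerfile (see deet/Dockerfile for the real one):
FROM ubuntu:20.04
# Build tools, debugger, and curl (needed to fetch rustup)
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y build-essential gdb curl
# Rust toolchain, installed non-interactively
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
# Run commands from the mounted project directory
WORKDIR /deet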

Running cargo

Once the image is built, you can run your code. Here’s a pretty long incantation that runs cargo build in a container based on your deet Docker image:

docker run --rm -it \
    -v "${PWD}":/deet -v "${PWD}/.cargo":/.cargo \
    -u $(id -u ${USER}):$(id -g ${USER}) \
    deet \
    cargo build

Since this is rather long and complex, we included a mini script that does the docker run part for you. You can run it like this:

./container cargo build
./container cargo run

You can edit code locally on your machine using whatever editor you like and run the ./container command to run your code. No need to upload or sync your files anywhere.
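
If you’re wondering what the script does, it essentially runs the docker run command above with whatever arguments you pass it. A minimal sketch of such a wrapper (the actual ./container script in deet may differ):

#!/bin/bash
# Hypothetical sketch of a ./container-style wrapper; it forwards its
# arguments into the docker run incantation shown above.
docker run --rm -it \
    -v "${PWD}":/deet -v "${PWD}/.cargo":/.cargo \
    -u $(id -u ${USER}):$(id -g ${USER}) \
    deet \
    "$@"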

You can also run other things within the container. For example, you can run make and gdb:

./container make
./container gdb samples/function_calls

Or, you can even start a bash shell inside the container:

./container bash

What’s with the M1?

If you’re on a machine that isn’t using an x86 architecture (this will most likely only be true if you’re on a new Mac with the ARM M1 chip), none of the above options will work for you for project 1.

First, project 1 relies on registers that don’t exist on ARM. In order to access these registers you’d need a way to “emulate” (pretend to be) an x86 architecture. In other words, you’d need not only a Linux OS environment (like you may have used during the week 3 exercises), but also a way to emulate x86-based hardware.

There are ways to do this. We can specifically ask Docker to emulate x86, or we can use a virtual machine platform that has this option. But the other problem is that any solution that emulates hardware in software is going to be slow. For actually running deet, this isn’t a big deal – deet doesn’t do much. However, compiling deet is another story, and compilation times using emulated hardware were pretty much unmanageable.
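
For reference, asking Docker to emulate x86 looks something like this (these --platform flags exist in recent Docker versions, but as explained below, this route didn’t work out for deet):

# Build the image for x86-64 and run commands under emulation (slow on ARM):
docker build --platform linux/amd64 -t deet .
docker run --platform linux/amd64 --rm -it -v "${PWD}":/deet deet cargo build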

Jonathan Kula (an awesome CS110 CA) came up with a script and configurations that set up two Docker containers – one for compiling (which runs “natively”, i.e., directly using whatever hardware your machine has), and one for running (which emulates x86 if you’re on ARM).

This seemed like a great solution! We were excited. It worked for me (on my Intel-based machine), and it worked for Jonathan… until he ran some working solutions through it. It turns out that emulated systems seemingly don’t implement ptrace at all. (Maybe someone would have known this, but we did not.) And ptrace is foundational to how we’ve implemented this debugger.

So, our conclusion is that, if you’re not on x86, you should work on myth or another Intel machine. Or modify deet to work for a different architecture (fun extension project?). We’re not aware of other workarounds – though if you have leads or ideas, let me know! I’m curious. :)