Deploying a Rust App to Google App Engine

April 1, 2016

I love the Rust language. The trade-offs Rust has chosen work really well for me. I like static typing, powerful type systems, functional programming, and control over memory. Those last two, in particular, often seem at odds with one another, and I really appreciate Rust's particular combination of theory and pragmatism. I also like Google App Engine and the Google Cloud ecosystem in general. This wasn't true originally, but the more I've learned, the more I've grown to like it. This blog is served statically from Google Cloud Storage, and I use GAE to back my consulting web-apps, when I choose the stack, at any rate. While I find many things in technology quite interesting, server maintenance is not really one of them, so GAE's managed instances are perfect for me. For my first real application in Rust, I'm writing the back-end to a Disqus-like commenting system for this blog, which I want to host on GAE. So I want to run Rust apps on GAE, which is precisely what we'll do here.

Hello Rust Engine!

In this article, we'll write a simple "Hello World" web application in Rust using the Iron web framework and deploy it to Google App Engine. The app itself will be trivial: Respond to any request with "Hello Rust Engine!" In order to deploy to GAE, as opposed to manage-it-yourself Compute Engine, we have to use App Engine's Custom Runtime option, which means deploying Docker images. To complicate things just a little more, I'm doing this from OS X, which requires a little more dancing around to get things working right.

Let's start with getting the Rust application working by itself. If you're reading this, you've probably already got Rust and Cargo installed, but if not, I suggest using rustup to get everything set up. With that handled, let's start a new project.

        $ cargo new hello-rust-engine --bin
Cargo will set up a new project in the folder hello-rust-engine, complete with a hello-world main.rs and a Cargo.toml file. We're going to use the Iron framework to build the web-app itself, so we'll need to add it to the dependencies in the Cargo.toml file. The file itself should look like:

        [package]
        name = "hello-rust-engine"
        version = "0.1.0"
        authors = ["emptylist <>"]

        [dependencies]
        iron = "*"
The iron = "*" dependency line should be all you need to add. The app itself is modified from the 'hello world' example on the Iron homepage, with an additional hack to get the host IP to bind the server so we're serving publicly. Actually, solving this was the point where I realized just how bad my networking knowledge is.
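A quick aside: the wildcard version iron = "*" is convenient for a demo, but it means a fresh build pulls whatever the newest Iron release happens to be, which can break your build out from under you. Pinning to a version line is safer; at the time of writing that would look something like the fragment below (check crates.io for whatever version is actually current for you):

```toml
[dependencies]
iron = "0.3"
```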

    extern crate iron;
    use iron::prelude::*;
    use iron::status;
    use std::process::Command;
    use std::net::SocketAddrV4;
    use std::net::Ipv4Addr;
    use std::str;

    fn main() {
      let host_port = 8080; // Note: port must be 8080 for GAE
      let hostname_cmd = Command::new("hostname").arg("-I").output();
      let host_addr: SocketAddrV4 = match hostname_cmd {
        Ok(res) => {
          let addr = str::from_utf8(res.stdout.as_slice())
              .map_err(|err| err.to_string())
              .and_then(|ip_str| ip_str.trim().parse::<Ipv4Addr>()
                          .map_err(|err| err.to_string()))
              .map(|ip| SocketAddrV4::new(ip, host_port));

          match addr {
            Ok(addr) => addr,
            Err(_) => {
              let ip = Ipv4Addr::new(127, 0, 0, 1);
              SocketAddrV4::new(ip, host_port)
            }
          }
        }
        Err(_) => {
          let ip = Ipv4Addr::new(127, 0, 0, 1);
          SocketAddrV4::new(ip, host_port)
        }
      };
      println!("Server listening at {}", host_addr);
      Iron::new(|_: &mut Request| {
        Ok(Response::with((status::Ok, "Hello Rust Engine!")))
      }).http(host_addr).unwrap();
    }
Pretty much everything above that's different from the Iron example is about getting the hostname. On (many?) Linux systems, hostname -I will get the host IP address, which will be the public IP to listen on if the system is a server, and the loopback address if the system is not (i.e. a desktop). Of course, if that command fails, as it does on the OS X machine I'm working from, I want to grab the loopback address instead so I can test that the server is running. The upshot of all of this is that Docker is going to assign a 'public IP' to the container, and we need to grab that IP, not localhost, to serve across the container-host boundary. There is probably a better way of getting that IP address, but hostname worked, at least, and using Docker means we control the application's deployment context, so it should work well enough. I'm open to better suggestions. Finally, the listening port must be 8080 for deploying to Google App Engine.

The rest of the code is mainly about dealing with errors, and follows pretty directly the strategy in the Rust Book's section on error handling. At this point, cargo run should compile and run the server. Most likely, you'll see a Server listening at line with an address, and you should get a plain-text page saying "Hello Rust Engine!" at that address. With that, we're finished with the actual Rust portion; next we need to build a Docker image and deploy that to GAE.


Docker on OS X

The next task is making a Docker image to deploy to GAE. My knowledge of Docker consists precisely of getting this example running, so keep that in mind for this section. If you already know how Docker works, you can skip ahead to the section on running the Rust app. Obviously, you'll need to install Docker first. Following that, on OS X, you need to start up docker-machine, since Docker requires a Linux kernel to function. If you're on Linux, you can skip this part, but for OS X, you'll want to run

        $ docker-machine create --driver virtualbox default
        $ eval $(docker-machine env default)

The first command creates a virtual machine using VirtualBox with the name default. The second line sets up the docker-machine environment variables. An important note is that the second line needs to be re-run every time you change Wi-Fi connections. This threw me for a bit: Cargo was unable to access the internet to pull and compile my dependencies after I moved between Wi-Fi endpoints. I think it's due to shifting IPs from DHCP; the Docker client relies on the networking information exported when docker-machine env is run, rather than asking the host for it dynamically, so after moving Wi-Fi endpoints, docker-machine needs to re-export the right environment variables.

Anyway, to test that everything is working, run

        $ docker run -d -p 8000:80 nginx
This will pull down a Docker image with Nginx and run it. The -p 8000:80 flag will publish port 80 on the container to port 8000 on the host. However, the container will not be serving on localhost. To get the IP address from docker-machine, run

        $ docker-machine ip
In my case, this yields the IP address of the VirtualBox VM, so visiting that address on port 8000 in the browser will show the Nginx default page. With that, we can make a Docker image for our Rust application.

Running the Rust Webapp through Docker

At this point we can write a Dockerfile to deploy with. Jimmy Cuadra has published a Docker image that sets up a Rust environment with rustc, cargo, and company. We'll use this image as a base to build the image we'll deploy to GAE. In the hello-rust-engine directory, make a new Dockerfile (called Dockerfile) containing

        FROM jimmycuadra/rust
        EXPOSE 8080
        COPY Cargo.toml /source
        COPY src/ /source/src/
        CMD cargo run
This takes the jimmycuadra/rust image as a base, exposes port 8080 on the container, copies files from the current directory to the directory /source and finally executes the cargo run command. By default, the base docker image makes a directory /source and enters there, which is why we're copying the files there. Since we're just making a proof of principle, we'll use cargo run to compile and launch the application when the image starts, but in general this is not a good deployment strategy for an actual application.
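For anything beyond a demo, you'd want to compile at image build time rather than container start time, so the container boots straight into the compiled binary instead of waiting on a full Cargo build. A sketch of that variant (the binary path assumes Cargo's default target directory and the hello-rust-engine crate name from our Cargo.toml):

```dockerfile
FROM jimmycuadra/rust
EXPOSE 8080
COPY Cargo.toml /source
COPY src/ /source/src/
# Compile during `docker build`, not at container start
RUN cargo build --release
CMD ["/source/target/release/hello-rust-engine"]
```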

With that, we can build and run the docker image for the app.

        $ docker build -t hello-rust .
        $ docker run -P hello-rust
The first command obviously builds the image; in this case -t hello-rust names the image hello-rust, and . specifies that we're building using the current directory as the path root. The second command runs the hello-rust image that we just built, publishing all of the ports listed by EXPOSE to host ports selected by Docker. After running, you should see Cargo doing its thing, compiling and running the application. Remember that the internal IP the image prints is not the IP to visit in the browser. In my case, the Rust application tells me the server is listening at the container's internal address, but we actually need the IP address and port that Docker is exposing on the host, which we can get from docker ps and docker-machine ip:

        $ docker-machine ip
        $ docker ps
The first command will give the IP address to check; again, for me, this is the VirtualBox VM's IP. The second will show the running Docker containers; we're interested in the PORTS column, which should have an entry that looks like>8080/tcp
This tells us that Docker is forwarding port 32769 of the host machine to port 8080 of the container for TCP traffic. The upshot is that, in my case, I want to visit the docker-machine IP at port 32769 in the browser, or alternatively you can test with cURL.

        $ curl http://$(docker-machine ip):32769
I get the response back "Hello Rust Engine!" and realize I didn't even bother to finish the response with a newline. Obviously, you'll want to substitute the IP and port your system shows from the docker commands. At this point we're successfully running our trivial web application from inside a Docker container. Last step: deploy to GAE!

As an aside: you'll want to stop the docker process running the image now. To do so, use

        $ docker stop $name
where $name is the name of the process shown in the NAMES column when running docker ps. They're random and odd - I've got one 'hopeful_dubinsky' as I write this section.


Deploying to Google App Engine

Finally, we're almost there! Our last step is deploying to Google App Engine. If you haven't already, you'll need to sign up and install the gcloud SDK. Follow the instructions and get everything initialized. You'll also need to set up a project to deploy to through the web console. My project for this application is "hello-rust-engine", so that's the name I'll be using for the rest of the article. Substitute your project name as appropriate.

If you haven't already while following Google's instructions, you'll want to run

        $ gcloud init
The interactive initialization procedure will handle setup. If you didn't set up the 'hello-rust-engine' project as the default during the init process, do so now with

        $ gcloud config set project hello-rust-engine
Okay, that should handle the setup. We need to make an app.yaml file to configure GAE, and then we can upload. The app.yaml file should be located in the project root (so the hello-rust-engine directory), along with src and the Dockerfile, and should read

        runtime: custom
        vm: true
This just specifies that we're using the Custom Runtime/Managed VM system. There's nothing else we need to do with the YAML file for this demo, so let's deploy!

        $ gcloud preview app deploy
This should deploy the Docker image hello-rust-engine/default to GAE at the URL specified in your project console (typically https://<your-project>.appspot.com). Deployment will take a while. Be patient.

Once it's live, you can test with the browser or via curl

        $ curl https://<your-project>.appspot.com
(But with your project URL, of course!) And you should see

        Hello Rust Engine!
We're done! Rust is running on GAE. Happy hacking!