For this project I have to manage multiple domains, as we have different entrypoints for different members of the family (read: sides of the family). There’s also the API server, which needs to serve requests for these domains as well as manage authentication in a seamless manner (more on that in a future post). As-is, that’s a little challenging during local development unless you spin up multiple instances of the frontend application and treat each port it runs on as a separate domain/origin. Another alternative is some hackery with your /etc/hosts, but that causes its own problems if you want to use a real domain, because of browser security: things like CORS or (secure) cookies.
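For reference, the /etc/hosts approach amounts to something like this (the hostnames here are made up for illustration):

```
# /etc/hosts: point dev hostnames at the loopback interface.
# The browser now resolves them, but you still get plain http on odd
# ports, so CORS and secure-cookie behavior won't match production.
127.0.0.1  fam1.example.test fam2.example.test api.example.test
```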

Tools like ngrok have existed for some time that help with this a little bit, but in order to get custom, stable domains you have to enter the paid tier. Ditto for securing the endpoints, which otherwise leave your development site open on the public internet (as unlikely as it is to be found); you don’t want folks stumbling upon your unfinished work, or potentially interfering with something you’re working on by interacting with the running service.

Enter cloudflared. For me cloudflared solves all of these problems and also has the added benefit that I can manage the entire thing via infrastructure-as-code! Cloudflare is also nice enough to offer all of this for free, which is perfect for a hobby project like this.

That last point is actually something I discovered after I had gone through the original “manual” setup first, so I actually wrote this article twice, once for the “old” non-IAC way, and then again for the “new” infracoded way. I’ll start with the new, better way and then afterwards describe the old method that I used if you were to try this setup but didn’t have any infracode (but you really should 😀).

Whichever method you use, you will need cloudflared installed on your development machine. Since it’s written in Go and open source, this is incredibly easy: you can build it from source, or (more likely) download a pre-compiled binary directly from Cloudflare.
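As a sketch, grabbing a pre-compiled binary on a 64-bit Linux machine looks something like this (the asset name follows the naming convention on the cloudflared GitHub releases page; adjust the OS/architecture suffix to match your machine):

```shell
# Build the download URL for the latest release via GitHub's
# "latest" redirect; the asset name assumes linux/amd64.
ARCH=amd64
URL="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-${ARCH}"
echo "$URL"

# Uncomment to actually download it and make it executable:
# curl -L -o cloudflared "$URL" && chmod +x cloudflared
```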

the new way

Compared to the old way below the “new” way of configuring a cloudflare tunnel with infracode is much simpler and can be almost entirely managed with just a few lines of terraform.

  1. Assuming that you have a standard way of setting up your terraform providers and remote state backend, the main configuration to get a tunnel up and running follows:

    data "cloudflare_zone" "example_com" {
      name = "example.com"
    }
    
    resource "random_id" "secret" {
      byte_length = 32
    }
    
    resource "cloudflare_argo_tunnel" "dev" {
      account_id = var.cloudflare_account_id
      name       = "dev"
      secret     = random_id.secret.b64_std
    }
    
    resource "cloudflare_record" "tunnel" {
      for_each = toset(["fam1", "fam2", "api"])
    
      zone_id = data.cloudflare_zone.example_com.id
      name    = each.key
      type    = "CNAME"
      ttl     = 1
      proxied = true
      value   = cloudflare_argo_tunnel.dev.cname
    }
    
    output "credentials" {
      sensitive = true
      value     = jsonencode({
        AccountTag   = var.cloudflare_account_id
        TunnelSecret = random_id.secret.b64_std
        TunnelID     = jsondecode(base64decode(cloudflare_argo_tunnel.dev.tunnel_token))["t"]
      })
    }
    

    Compared to the old way below, with its many steps and many different commands, this is much simpler! In about 30 lines of terraform we created a new tunnel, added the DNS records that we wanted, and created the credentials file that we’ll use to activate the tunnel. Let’s take a closer look at each part.

    The cloudflare_zone data source looks up our domain so that we can use the zone ID to create the DNS records without needing to hardcode it somewhere.

    The random_id generates cryptographically strong random bytes that we use as the tunnel secret. The Cloudflare documentation specifies that this secret needs to be at least 32 bytes and base64 encoded (which we apply when creating the tunnel).
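If you wanted to produce an equivalent secret by hand (say, to sanity-check the requirement), openssl can do the same thing:

```shell
# 32 cryptographically random bytes, base64-encoded; the same shape
# as random_id's b64_std attribute.
SECRET=$(openssl rand -base64 32)
echo "$SECRET"
```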

    The cloudflare_argo_tunnel resource is what actually creates our tunnel. It takes our Cloudflare account ID (which I have extracted into a variable), the name of the tunnel, and the secret we created with random_id, base64 encoded.

    The cloudflare_record resources create the hostnames we want, the same way the terraform snippet from the old way does. The only difference here is that we consume the CNAME that the tunnel exports, so that we don’t need to hardcode the .cfargotunnel.com domain name in the (unlikely) event that it changes.

    Finally, we build a (sensitive, so not shown in the terminal) output that creates the same credentials file JSON that the cloudflared tunnel create command would have output, with the correct variables substituted in. The tunnel ID is the UUID assigned to the tunnel, and we can get it by base64-decoding the argo_tunnel’s tunnel_token attribute and reading the t key from the resulting JSON.
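To illustrate what that output is doing, here’s the same decode performed by hand on a fabricated token (the values are obviously fake; the real tunnel_token is base64-encoded JSON with the tunnel UUID under the t key):

```shell
# A made-up token: base64-encoded JSON with the tunnel UUID under "t"
TOKEN=$(printf '%s' '{"a":"ACCOUNT","t":"11111111-2222-3333-4444-555555555555","s":"SECRET"}' | base64 | tr -d '\n')

# Decode and pull out the "t" key (jq would be cleaner if you have it)
TUNNEL_ID=$(printf '%s' "$TOKEN" | base64 -d | sed -n 's/.*"t":"\([^"]*\)".*/\1/p')
echo "$TUNNEL_ID"
```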

    You can then generate a credentials-file JSON ready to go by querying the remote state:

    terraform output -raw credentials > cloudflared.json
    
  2. Now, just like in the old way below, we need to create the cloudflared configuration YAML.

    ---
    tunnel: UUID
    credentials-file: cloudflared.json
    
    ingress:
      - hostname: fam1.example.com
        service: http://localhost:5173
      - hostname: fam2.example.com
        service: http://localhost:5173
      - hostname: api.example.com
        service: http://localhost:8000
      - service: http_status:404
    

    I store this file alongside the rest of my project code. As you can see, the file is relatively simple and self-explanatory. You provide the tunnel UUID and the path to the credentials file (generated above), and then a mapping of hostnames to the services and ports running on your localhost. Pay attention: cloudflared requires a catchall rule at the end; here we’re using the built-in http_status service, which will return a 404 Not Found.

    Note that it’s possible to manage the ingress mappings using the Zero Trust dashboard (but not via IAC/terraform), but the migration is one-way and any local changes you then make to the configuration file won’t be reflected. Since I’m storing the configuration in code anyway, I prefer the flexibility of adjusting the ingress in that file as my needs change.

    If you would like to validate the configuration file before proceeding you can do so with the following command:

    cloudflared tunnel --config cloudflared.yaml ingress validate
    
  3. Now you’re ready to start the tunnel. As I note below, I disable update checking because I prefer to manage cloudflared with my distribution’s package manager. Start up your local services (which isn’t strictly necessary – cloudflared will happily start up even if the configured services aren’t running or available, but you obviously won’t be able to reach them through the tunnel) and then start the tunnel:

    cloudflared tunnel --no-autoupdate --config cloudflared.yaml run
    

the old way

Like I mentioned above, I’m pretty much keeping this here for posterity’s sake, since I spent the time writing it and didn’t want to throw it away. But I definitely recommend taking the infrastructure-as-code approach above instead. There’s also some repetition of what I wrote above, since I wrote the below first but wanted the better, canonical version to be the “new” way.

  1. Log in to Cloudflare to have it generate a certificate that you can use to create tunnels.

    cloudflared login
    

    This will open a browser window prompting you to login to your cloudflare account and choose a domain. Tunnels are actually account-wide now, and so-called “legacy” tunnels are being phased out, but it’s still technically necessary to select a zone when creating the tunnel even if it ultimately creates the new-style tunnel.

  2. This creates the aforementioned certificate in the default cloudflared configuration directory (~/.cloudflared on macOS and Linux). I have different projects and accounts, so I actually want to store this certificate alongside the project directly: I moved it into my project directory and encrypted it using gpg (and added a rule to ignore the unencrypted version in my .gitignore). You only need the certificate for the next step (if you’re only going to be creating one tunnel), as creating a tunnel generates a tunnel secret which is used for activating the tunnel and is stored in a separate credentials file.
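The encryption step is nothing fancy; here’s a sketch using symmetric gpg (a keypair works just as well, and the passphrase is obviously illustrative):

```shell
# Stand-in certificate so this sketch is self-contained; in reality
# this is the cert.pem that `cloudflared login` produced.
echo "dummy certificate" > cert.pem

# Encrypt it for storage in the repo. --batch/--passphrase keep the
# example non-interactive; normally gpg prompts you.
gpg --batch --yes --pinentry-mode loopback --passphrase "example" \
  --symmetric --cipher-algo AES256 --output cert.pem.gpg cert.pem

# Keep the plaintext out of version control
echo "cert.pem" >> .gitignore
```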

  3. Create the new tunnel using your certificate from the previous step. I’m having cloudflared create the resulting credentials file in my local project directory, again because I’m a solo-dev and I prefer to distribute the (encrypted) credentials alongside the project itself so that I don’t need to re-authenticate and re-setup on every machine that I use for development.

    cloudflared tunnel --origincert ./cert.pem \
      create --credentials-file ./cloudflared.json dev
    

    Note that the last argument in the command is the name of the tunnel. Cloudflare will assign a UUID to the tunnel, which you’ll need in order to create the eventual DNS entries and to activate the tunnel, but the name you’ve assigned appears on the Zero Trust dashboard to make it easy to identify your tunnels. I’ve named my tunnel “dev” because I’m using it for local development.

    Once this command runs I similarly encrypted the output and added the rule for the unencrypted version to my .gitignore.

  4. Now you need to create the configuration file to operate the tunnel. cloudflared will automatically check for a file in ~/.cloudflared/config.yml but again I will be storing the configuration alongside the rest of my project. The configuration is just a simple YAML file where you select the tunnel ID, provide the path to the credentials file and then map any incoming hostnames to the associated localhost service and port.

    ---
    tunnel: UUID
    credentials-file: cloudflared.json
    
    ingress:
      - hostname: fam1.example.com
        service: http://localhost:5173
      - hostname: fam2.example.com
        service: http://localhost:5173
      - hostname: api.example.com
        service: http://localhost:8000
      - service: http_status:404
    

    Here I’ve configured my tunnel to route the two family frontends to my svelte kit (vite) development server and the api endpoint to my API service all running on my local machine.

    Once you’ve created your configuration file you can validate the format like so:

    cloudflared tunnel --config cloudflared.yaml ingress validate
    
  5. Now we can actually create the hostnames that we specified in our configuration in our Cloudflare zone. The records should have the Cloudflare proxy on (orange cloud) and should be CNAME records pointing to YOUR-UUID.cfargotunnel.com, substituting your tunnel’s UUID.

    Here’s some example terraform code:

    data "cloudflare_zone" "example_com" {
      name = "example.com"
    }
    
    resource "cloudflare_record" "cloudflared_tunnel" {
      for_each = toset(["api", "fam1", "fam2"])
    
      zone_id = data.cloudflare_zone.example_com.id
      name    = each.key
      type    = "CNAME"
      proxied = true
      value   = "YOUR-UUID.cfargotunnel.com"
    }
    

    Alternatively, you can have cloudflared create the DNS records for you on-the-fly (note that it does not clean them up itself when you’re done).

    cloudflared tunnel route dns YOUR-UUID api.example.com
    
  6. Finally, you’re ready to start the tunnel. In my example I’ve disabled checking for and applying updates because I’ve installed cloudflared through my distribution’s package manager and I prefer to update it that way too.

    Start up your local services (note this isn’t strictly a requirement for cloudflared to start, but you obviously won’t be able to reach your services if they aren’t running) and then start the tunnel:

    cloudflared tunnel --no-autoupdate --config cloudflared.yaml run
    

securing the environment

The last step is to prevent unwanted access to your development environment. This is easy to do with Cloudflare Access. In a few lines of terraform you can require someone to enter a PIN that gets emailed to them before accessing the website. This is more than enough protection for a simple development site, and the Cloudflare Teams free tier includes up to 50 users, meaning that if you need to show the development environment to someone else it’s easy: just add their email address to the list of approved addresses and they’re all set. Here’s some terraform to set it up:

resource "cloudflare_access_application" "dev" {
  zone_id                   = data.cloudflare_zone.example_com.id
  name                      = "development site"
  domain                    = data.cloudflare_zone.example_com.name
  type                      = "self_hosted"
  session_duration          = "24h"
  auto_redirect_to_identity = true
}

resource "cloudflare_access_policy" "dev" {
  application_id = cloudflare_access_application.dev.id
  zone_id        = data.cloudflare_zone.example_com.id
  name           = "development"
  precedence     = 1
  decision       = "allow"

  include {
    email = ["[email protected]"]
  }

  require {
    email = ["[email protected]"]
  }
}
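One assumption baked into the above: the emailed-PIN flow only works if the one-time PIN login method is enabled as an identity provider on the account. If you haven’t already enabled it in the Zero Trust dashboard, it’s one more resource (a sketch, reusing the account ID variable from earlier):

```terraform
# Enable the one-time PIN login method so Access can email PINs
resource "cloudflare_access_identity_provider" "pin" {
  account_id = var.cloudflare_account_id
  name       = "one-time PIN"
  type       = "onetimepin"
}
```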

conclusion

And that’s it! You can visit the names that you specified in your browser (or with curl, etc) and they’re served over HTTPS without you needing to open/forward any ports. Check your logs and you can see the requests to your service.

This is pretty neat and extremely helpful for local development, but I’ll also be using the same approach for hosting the final product in production. That will allow me to run and serve everything from a Raspberry Pi running in my house, again without needing any messy port forwarding or dangerous firewall rules. I won’t write again about how to set up cloudflared, but I will definitely do a follow-up about the production setup (probably more than one!).