I originally designed this project to use a single API service domain, but for several reasons (mostly issues with CORS and sending credentials cross-domain, but also because third-party cookies seem likely to go away entirely in the future) I have decided to abandon that idea and instead opt for a perhaps more classic /api route on the same domain that serves the frontend application. There’s only one real downside for the end-user: if they have access to more than one family, they’ll need to log in to each family site independently. This seems fine since I don’t expect people to switch between families that often anyway. This makes a lot of the setup much simpler, except for one rather important thing: how to actually send requests from the frontend to the backend.
The “short” answer you were probably expecting: a reverse proxy, specifically nginx. The longer answer is the same, but I’ll go into the configuration needed to make everything work, with special attention to making it work when running an automated test suite with playwright and GitHub Actions.
local development
I run nginx as part of my docker compose setup, both for local development and when running tests locally (as opposed to in CI). This setup spins up my service dependencies, postgresql and redis. Importantly, I don’t run the backend application in a container because it’s compiled, and that would require recompiling and then rebuilding the container on every change; instead, I run it on my local host machine and rebuild and restart it manually as necessary.
So, for local development there’s not much to change. I create a simple nginx configuration file (nginx.conf):
server {
    listen 80;
    server_name localhost;

    location /api/ {
        # trailing slash is important!: https://serverfault.com/a/562850
        proxy_pass http://host.docker.internal:8081/;
        proxy_set_header Origin https://$host;
    }

    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
}
This should be pretty straightforward: a location block to proxy requests that start with /api back to the backend server running on the docker host machine (host.docker.internal). Setting the Origin header to the host is important for my application logic but is not important for making this strategy work in general. As the comment notes, the trailing slash is important: it rewrites the requests so they’re sent to the backend without the leading /api prefix. Then, everything else gets served from the build directory, which gets mounted into the container by my docker compose configuration (compose.yaml):
---
services:
  nginx:
    image: nginx:1.18
    volumes:
      - ./build:/usr/share/nginx/html:ro
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - '8080:80'
    profiles:
      - local
Here the build directory and nginx configuration get mounted into the container (as read-only), the container port 80 gets exposed on the host on port 8080, and I add the nginx service to my existing local profile so that it comes up when I run docker compose --profile=local up.
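Concretely, with this configuration and port mapping, requests map roughly like this (the example paths are hypothetical):

```text
http://localhost:8080/api/users  → http://host.docker.internal:8081/users
http://localhost:8080/index.html → /usr/share/nginx/html/index.html
```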
The above is actually slightly misleading because I don’t use the static build mount during local development: I want to use the vite server and hot module replacement. I achieve this using cloudflared to redirect requests that begin with a certain path to the nginx proxy (which I’m now really only using to strip the /api prefix) and send everything else to the vite development server. I wrote about this setup previously; although it’s now a little out-of-date given this new information, it’s still useful to see how to set everything up using terraform.
Here’s an updated cloudflared.yaml config:
---
tunnel: UUID
credentials-file: cloudflared.json
ingress:
  - hostname: fam1.example.com
    path: ^/api/.*
    service: http://localhost:8080
  - hostname: fam1.example.com
    service: http://localhost:5173
  - service: http_status:404
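With this in place, requests flow roughly like so (the hostname comes from the config above; the example paths are hypothetical):

```text
https://fam1.example.com/api/users → nginx (localhost:8080) → backend (localhost:8081/users)
https://fam1.example.com/anything  → vite dev server (localhost:5173)
```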
local testing
For local testing using playwright we need to start with a new nginx configuration file that forwards frontend requests to the vite preview server:
server {
    listen 80;
    server_name localhost;

    location /_health {
        default_type text/plain;
        return 200 'gangnam style!';
    }

    location /api/ {
        # trailing slash is important!: https://serverfault.com/a/562850
        proxy_pass http://host.docker.internal:8081/;
        proxy_set_header Origin http://$host;
    }

    location / {
        proxy_pass http://host.docker.internal:4173/;
    }
}
The configuration is obviously similar to what we use for local development but with a few key changes. The first thing you’ll notice is the addition of a location block for a _health route that responds with a static response (taken from this helpful answer on Server Fault). We’ll cover this more in the next section. The /api location is the same, and then we pass everything else to the vite preview server.
Then we can add the new nginx config to a special test nginx service in our docker compose configuration that launches with docker compose --profile=test up (which is different from the local profile because the postgresql volume is not persistent):
---
services:
  nginx-test:
    image: nginx:1.18
    volumes:
      - ./tests/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - '8080:80'
    profiles:
      - test
You can see that this is mostly the same as the local profile except that we don’t bother to mount the build result into the container.
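For completeness, the difference between the two profiles for postgresql might look something like this sketch (the service names and image tag here are assumptions, not taken from my actual configuration):

```yaml
services:
  postgres:
    image: postgres:15
    volumes:
      - pgdata:/var/lib/postgresql/data # persists across restarts
    profiles:
      - local
  postgres-test:
    image: postgres:15
    # no named volume: the data is thrown away with the container
    profiles:
      - test
volumes:
  pgdata:
```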
Finally, the playwright configuration looks like this (where we have two web servers: the backend application and the frontend vite preview server):
// playwright.config.ts
import type { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  reporter: process.env.CI ? 'github' : 'list',
  testMatch: 'tests/**/*.ts',
  webServer: [
    {
      command: 'npm run build && npm run preview',
      port: 4173,
    },
    {
      command: './start-backend-server-command',
      port: 8081,
    },
  ],
  use: {
    baseURL: 'http://localhost:8080/',
  },
};

export default config;
Now, after starting the containers, the playwright test suite can be run as normal: npm run test.
testing on github actions
Finally, the last step to tie everything together is to enable this setup on GitHub Actions, my choice of CI runner. I originally thought that this would be pretty straightforward because GitHub Actions already has support for launching service containers as part of the workflow (which I’m already using to start the postgresql and redis containers for the test suite), so I should just be able to start an nginx container and mount the test configuration, right? The issue is that the service containers start before any of the steps run (and, more specifically, before a checkout step can run), which means that the test configuration is not yet available to mount as a volume into the service container.
The answer to this problem is instead to start the container manually. It’s not as clean as using the services framework, and my example below could be improved by doing some of the things the service containers do, such as ensuring the container is stopped and removed at the end of the workflow. In any case, let’s see what the step looks like:
name: CI
on: # push/pull_request/etc
jobs:
  main:
    runs-on: ubuntu-latest
    services:
      # postgresql/redis/etc
    steps:
      - uses: actions/checkout@v3
      - run: >-
          docker run -d -p 8080:80
          -v ${{ github.workspace }}/tests/nginx.conf:/etc/nginx/conf.d/default.conf:ro
          --add-host=host.docker.internal:host-gateway
          --health-cmd "curl -fs http://localhost/_health"
          --health-interval 2s --health-timeout 5s --health-retries 5
          -e GITHUB_ACTIONS=true -e CI=true nginx:1.18
      - uses: actions/setup-node@v3
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: make # compile the backend server
      - run: npm run test
Most of this should appear as a pretty standard GitHub Actions workflow. The interesting part is the docker run step. Here we pass the -d flag to daemonize the container so that it runs in the background and we can move on to the next step. We pass -p 8080:80 to expose port 80 of the container as port 8080 on the host (just like in our docker compose configuration). Then, we mount the nginx configuration, which is now available thanks to the previous checkout step, into the container. The next part is very important: in order for our nginx configuration to work with the host.docker.internal upstream we gave it, we need to make sure that the name will resolve, and that it will resolve to the IP address of the host. We do that with --add-host=host.docker.internal:host-gateway.
Next, if you’ll recall the static _health route that we added to the nginx configuration, you’ll see that we use it here, via the health command and friends, to know when the container is up and able to receive requests: we curl the _health route and check that we get an OK response back. Finally, we set a few environment variables that GitHub Actions would have set if we had launched the container using the services framework.
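As for the clean-up I mentioned earlier, one sketch (assuming you also pass --name nginx-test to the docker run step, which my example above doesn’t) would be a final step that always removes the container:

```yaml
# force-remove the named container even if earlier steps failed
- run: docker rm -f nginx-test
  if: always()
```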
Not mentioned here, but you could also attach the container to the docker network that GitHub Actions creates by passing a --network=${{ job.container.network }} option to the docker run command. I don’t do this because I don’t need this container to talk directly to any other containers (or vice versa), but it could be useful if you had other use-cases where you needed to mount files into the container (so you couldn’t use the services framework) but also needed the containers to talk to each other. One idea that comes to mind is the ability of most database containers to take a directory of SQL files on start-up to seed the database.
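For example, the official postgres image runs any *.sql files it finds in /docker-entrypoint-initdb.d on first start-up, so a manually-started, seeded database container might look something like this sketch (the seed directory path is hypothetical):

```yaml
- run: >-
    docker run -d
    --network=${{ job.container.network }}
    -v ${{ github.workspace }}/tests/seed:/docker-entrypoint-initdb.d:ro
    -e POSTGRES_PASSWORD=postgres
    postgres:15
```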
Finally, after starting the container so that our frontend/playwright process can forward backend requests properly, we can do the rest of the setup and then launch the test suite.