This is part of a series of blog posts I’m writing about hosting this blog from my homelab. You can find the previous part here.
This blog is hosted in a Docker container in my homelab (see this post for more details), but the actual machine serving the requests is a Linode VPS. I do this because my ISP uses CG-NAT, so I can’t expose a port from my home network to the public internet. Instead, a WireGuard VPN connects my homelab and my VPS, and any request to blog.akjohn.dev that hits the VPS gets forwarded to the Docker container in my homelab through the VPN.
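For context, the WireGuard tunnel itself is nothing fancy. A rough sketch of what the VPS side could look like (the interface name, keys and addresses below are placeholders, not my actual config):
# /etc/wireguard/wg0.conf on the VPS (illustrative only)
[Interface]
Address = 10.1.1.1/24
PrivateKey = <VPS private key>
ListenPort = 51820

[Peer]
# the homelab peer dials out to the VPS, so CG-NAT isn't a problem
PublicKey = <homelab public key>
# route the homelab's LAN over the tunnel so the VPS can reach containers there directly
AllowedIPs = 10.1.1.2/32, 192.168.18.0/24
The important part is that the homelab side initiates the connection outwards, which is what gets around CG-NAT.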
The Problem
This setup works fine and has been going strong for a few months now. But it has multiple points of failure:
- My home internet connection could go down. This would mean the requests from the VPS to my homelab wouldn’t get through.
- My homelab server could go down. Since I tinker with my server a lot, I frequently restart it, mess around with network settings and restart containers without any precautions. It’s highly likely that I’ll someday take down the Docker container running the blog.
- The WireGuard VPN could go down. As mentioned, I mess around with my network a lot. It’s likely I’ll one day break the VPN connection between my VPS and my homelab, which would also stop requests to my blog from getting through.
Even though this blog is not a critical service for me (or anyone else I guess), I decided I needed to solve this problem. I’ve added links to this blog in several of my professional portfolios and I think it would leave a bad impression if the blog is down when someone clicks on those links.
Figuring out a solution
⚠ There are definitely much better, more efficient and simpler ways of setting up failover. I believe Kubernetes might help in this scenario. But before pulling out the big guns, I wanted to see what I could set up with the knowledge I have at the moment. Please do NOT take these instructions as the only way (or even the correct way) to solve this problem.
I had a basic understanding of what I would need to do to set up failover for this blog:
- Run multiple instances of the Docker container running my blog.
- One will be running on my homelab.
- The other will be running on my VPS.
- Run some sort of service that sits between Nginx Proxy Manager and the Docker containers.
- If the Docker container running in my homelab is up, all requests will be directed to this container. Since my homelab has a much beefier CPU, it should be able to handle much more traffic.
- If the Docker container running in my homelab is down, all requests will be directed to the container running in the VPS.
- Once the Docker container running in my homelab is back up, redirect all requests back to this container.
With this basic plan in mind, I started searching for software which would help me set something like this up. Spoiler alert: I didn’t have to search for long.
HAProxy
My first Google search listed HAProxy as the top result. It seemed to do everything I needed, and then some. You just install HAProxy (if you’re running Ubuntu, it’s as simple as sudo apt install haproxy) and edit its configuration file. Then you restart the haproxy service and you’re done. It’s that simple.
I installed HAProxy on my VPS and edited the configuration file at /etc/haproxy/haproxy.cfg like so:
...
frontend myfrontend
    bind 10.1.1.1:8888
    default_backend myservers

backend myservers
    server server1 192.168.18.41:12346 check
    server server2 127.0.0.1:12346 check backup
This configuration basically tells HAProxy to do the following:
- Listen on port 8888 (denoted by bind 10.1.1.1:8888).
- Use 192.168.18.41:12346 as the primary server to serve requests.
- If the primary server is down, redirect requests to 127.0.0.1:12346.
192.168.18.41:12346 points to the Docker container running in my homelab and 127.0.0.1:12346 points to the Docker container running on my VPS.
Now I just run sudo service haproxy restart, reconfigure Nginx Proxy Manager to redirect all requests for blog.akjohn.dev to 10.1.1.1:8888, and my failover setup is complete!
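As a quick sanity check, HAProxy can validate the config file before you restart the service, which saves you from taking the proxy down with a typo:
# check the config for syntax errors, then restart
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo service haproxy restart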
Streamlining the process
Even though I’d successfully set up failover for my blog, I still had more work to do. Right now, I use a CI/CD pipeline that automatically creates Docker images and containers and deploys them whenever I publish a new blog post (click here for more details about how this works). The pipeline I’ve set up only updates the Docker image on my homelab. But now that HAProxy redirects requests to another Docker container on my VPS, I need to update the Docker image on the VPS whenever I create a new blog post. Otherwise, the two containers would end up serving different content.
I could do this manually, but since I already have Jenkins automating the entire process, I decided to integrate this part into the pipeline as well.
Again, I started with a basic idea of what I had to do:
- Whenever the Docker image for my blog is updated:
- Upload the new Docker image to my VPS.
- Restart the container on the VPS with this new image.
I started by trying to see if there was some way I could export a Docker image to a file. I could then upload the file to the VPS, convert it back into an image and start a new Docker container with that image. I soon found out that docker image save does exactly that: it exports a Docker image to a tar file which I could then copy over to my server.
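If you go down this route, the whole process is only a few commands (the tarball name, image name and paths here are placeholders):
# export the image to a tarball, copy it to the VPS, and load it there
docker image save -o blog.tar personal-blog-master:latest
scp blog.tar john@10.1.1.1:/tmp/
ssh john@10.1.1.1 "docker load -i /tmp/blog.tar"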
This would have worked fine, but I felt like I could do better. I knew about Docker Hub, which is like GitHub for Docker images. If I could upload my blog image to Docker Hub, I could then pull it from the VPS and run it there. But I didn’t feel like uploading my personal blog image to Docker Hub since it contains personal information. But what if I could self-host my own Docker Hub?
Turns out, there’s a Docker image for that too.
This link should give you all the information you need to set up your own Docker Registry. Basically, you run a Docker container with the registry image and upload all your images to that registry. You just need to tag your images with the URL of the Docker Registry and run docker push.
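For reference, that whole setup is roughly this (port 5000 and the registry:2 image are the defaults from the Docker documentation; the image tag is just an example):
# run a private registry and push an image to it
docker run -d -p 5000:5000 --restart always --name registry registry:2
docker tag personal-blog-master:latest localhost:5000/personal-blog-master:latest
docker push localhost:5000/personal-blog-master:latest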
But wait, there’s more. Apparently, Gitea, which I already self-host as my own private git server, has a built-in Container Registry! This means I already have my Docker Registry up and running. And I didn’t even know about it.
Now all I had to do was run a few commands to log in to the Registry, build and tag my image, and finally push it.
docker login 192.168.18.4:3000 # login with your Gitea username and password
docker build -t 192.168.18.4:3000/john/personal-blog-master:latest .
docker push 192.168.18.4:3000/john/personal-blog-master:latest
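One caveat worth mentioning: if the registry is served over plain HTTP rather than HTTPS, Docker will refuse to push or pull until the registry is marked as insecure on every machine that talks to it. Assuming the default daemon config location, that looks like this:
# /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.18.4:3000"]
}
followed by a sudo systemctl restart docker.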
But there’s still one more step remaining. I have the Docker image in my Registry now, but I still need to automatically pull and update it on the VPS. My first thought was to write a bash script on the VPS which I would call whenever I push a new Docker image. I would have a Docker Compose file on the VPS that creates a container from my blog image. I could then run commands to pull and restart the container on each update to the blog image.
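The Compose file itself can stay tiny. A minimal sketch of what I had in mind (the container name, port mapping and restart policy are just my choices, not anything required):
# docker-compose.yml on the VPS
services:
  personal-blog:
    image: 192.168.18.4:3000/john/personal-blog-master:latest
    container_name: personal-blog
    restart: always
    ports:
      - "12346:80"   # matches the backup server port in the HAProxy config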
As mentioned at the beginning of this post, I already have a CI/CD pipeline for creating and deploying new Docker containers whenever I write a new post. Now I needed to add these new steps to the pipeline. My updated Jenkinsfile looks like this:
...
stage('Deploy to prod') {
    when {
        branch 'master'
    }
    steps {
        script {
            sh """
                cd personal-website
                ../hugo
                cd ..
                docker build -t 192.168.18.4:3000/john/personal-blog-master:latest .
                docker push 192.168.18.4:3000/john/personal-blog-master:latest
                docker stop personal-blog-master || true
                docker rm personal-blog-master || true
                docker run -d --restart always --name personal-blog-master -p ${env.PROD_PORT}:80 192.168.18.4:3000/john/personal-blog-master:latest
            """
        }
    }
}
stage('Deploy to Linode VPS') {
    when {
        branch 'master'
    }
    steps {
        script {
            sh '''
                ssh john@10.1.1.1 "cd /home/john/Docker-Containers/personal-blog/ && sudo docker compose down && sudo docker compose pull && sudo docker compose up -d"
            '''
        }
    }
}
...
The bash command in the Deploy to Linode VPS stage will automatically update the Docker container on my VPS. I also created an SSH key pair so the pipeline can log in to the VPS without being asked for credentials. I bind mounted the home folder of the Jenkins Docker container and copied the SSH key into the .ssh folder there. I also added the VPS to the known_hosts file.
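Roughly, the key setup looks like this (the key type and paths are just my preference):
# generate a key pair with no passphrase for the pipeline to use
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
# install the public key on the VPS
ssh-copy-id -i ~/.ssh/id_ed25519.pub john@10.1.1.1
# add the VPS to known_hosts so the pipeline never hits an interactive prompt
ssh-keyscan -H 10.1.1.1 >> ~/.ssh/known_hosts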
With this, my failover setup is complete. Now all I need to do is write a new blog post, commit it and push it. My CI/CD pipeline will automatically build the Docker containers and deploy them to both my homelab and the VPS.
My final CI/CD pipeline looks like this:
Always room for improvement
As always, I already have a few more ideas on how to improve this setup even more.
- I run a backup of this blog on an AWS EC2 instance. I could add this server to the HAProxy configuration to strengthen my failover setup even more.
- Move to Kubernetes. I’ve been putting off setting up Kubernetes for a while now. But I think it’s high time I finally dipped my toes into the world of Kubernetes.
- Add more parameters to the HAProxy config file. HAProxy provides a lot of flexibility over when to fall back to a backup server and when to redirect traffic back to the main server. I could even use HAProxy as a load balancer.
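For example, the health checks alone can be tuned so that a brief network blip doesn’t cause the backends to flip-flop. Something like this (the timings are arbitrary examples, not recommendations):
backend myservers
    # mark server1 down after 3 failed checks (2s apart), back up after 2 good ones
    server server1 192.168.18.41:12346 check inter 2s fall 3 rise 2
    server server2 127.0.0.1:12346 check backup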