This blog is currently hosted on my homelab in a Docker container. I have a WireGuard VPN connecting my homelab to my VPS on Linode. The VPS runs Nginx Proxy Manager in a Docker container that is publicly exposed on port 80. I then added a proxy host in Nginx so that all requests to blog.akjohn.dev get proxied to the Docker container in my homelab.

blog-architecture
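
To give a rough idea of what that VPN link looks like, here's a minimal WireGuard sketch. The addresses, keys and port below are placeholders, not my actual setup.

# /etc/wireguard/wg0.conf on the VPS (all values are placeholders)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the homelab
PublicKey = <homelab-public-key>
AllowedIPs = 10.0.0.2/32

# /etc/wireguard/wg0.conf on the homelab
[Interface]
Address = 10.0.0.2/24
PrivateKey = <homelab-private-key>

[Peer]
# the VPS
PublicKey = <vps-public-key>
Endpoint = <vps-public-ip>:51820
AllowedIPs = 10.0.0.1/32
PersistentKeepalive = 25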

One of the most common roadblocks you'll face when trying to expose almost anything from your home network to the public Internet is CGNAT. CGNAT (Carrier-Grade NAT) is a technique ISPs commonly use nowadays to tackle IPv4 address exhaustion. In short, there are far fewer usable IPv4 addresses than there are internet-connected devices, so it's impossible to give each device its own unique IPv4 address.

CGNAT

CGNAT works around this issue for ISPs by sharing one public IPv4 address between multiple customers. For example, say your ISP has 100 different IPv4 addresses available to assign and 1000 customers it provides internet access to. With CGNAT, one public IPv4 address from the ISP's pool of 100 would (potentially) be mapped to 10 of its customers. Let's call these 10 customers customer 1, customer 2 … customer 10. If customer 1 makes a network request to www.google.com and customer 7 also makes a request to www.google.com, the server at www.google.com will see both requests coming from the same IP address.

This might seem like a clever solution to the IPv4 exhaustion issue, but it comes with a major tradeoff. Since each customer no longer gets a unique public IPv4 address, port forwarding is impossible. As a result, you won't be able to connect to your homelab from anywhere except inside your home network.

Most users probably won't even be aware that they're behind CGNAT until they try to port forward. If you have access to your ISP router right now, one way to check whether your ISP uses CGNAT is to look at the WAN IP address assigned to your router. CGNAT has a reserved IPv4 address space, 100.64.0.0/10 (100.64.0.0 to 100.127.255.255). If the address your ISP has assigned to your router falls within this range, you're probably behind CGNAT.

wan-address
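
Another quick sanity check from any machine at home (a sketch, assuming curl and a Linux or macOS shell): compare the address the rest of the Internet sees for you against your router's WAN IP.

# Print the public IPv4 address your traffic appears to come from
curl -4 https://ifconfig.me
# If this differs from your router's WAN IP, or the WAN IP falls inside
# 100.64.0.0/10, you're most likely behind CGNAT.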

There are a few different methods for a homelabber to bypass CGNAT, which I'll probably discuss in a different post. One of them involves renting a VPS in the cloud and creating a VPN between your home network and the VPS, giving you a secure link between the two. Since the VPS has a public IPv4 address, you can simply open a port on the VPS and forward requests to your homelab.
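
As a rough illustration of that last step (just one generic way to do it, not the Nginx-based setup I actually use, which is covered below), forwarding a port on the VPS to the homelab over the tunnel with plain iptables could look like this, assuming the homelab's WireGuard address is 10.0.0.2:

# Allow the VPS to forward packets
sysctl -w net.ipv4.ip_forward=1
# Send anything hitting TCP port 8080 on the VPS down the tunnel to the homelab
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 10.0.0.2:8080
# Rewrite the source address so replies come back through the VPS
iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.2 --dport 8080 -j MASQUERADE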

Hugo

I decided to choose Hugo as my static site generator for building this. There was no particular reason behind the choice, but I knew Hugo was widely used and relatively easy to set up.

There are a lot of themes available for Hugo at https://themes.gohugo.io/. I decided to go with one of the most popular ones called PaperMod.

Creating content with Hugo is pretty straightforward. Every post is a markdown file: you edit those markdown files with your content and run the hugo command to process them into HTML, CSS and JS, which you can then serve using a web server.
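
As a minimal sketch (the site name and post path here are illustrative), getting from zero to a built site with PaperMod looks something like this:

hugo new site personal-blog
cd personal-blog
git init
git submodule add --depth=1 https://github.com/adityatelange/hugo-PaperMod.git themes/PaperMod
echo 'theme = "PaperMod"' >> hugo.toml    # config.toml on older Hugo versions
hugo new content posts/hello-world.md     # write the post in markdown
hugo                                      # outputs the static site to ./public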

Docker

During my initial homelab-ing days, I ran everything on bare metal. An application needs Redis, MariaDB and Apache? I installed all of them individually and configured them as per the application's requirements. This obviously took time to set up and maintain. And it broke. A lot.

Migrating everything to Docker felt like magic. You're telling me I just run docker compose up -d and everything I need (all the services the app depends on, the networking, the permissions) is handled by Docker? Suddenly making backups was a breeze, and I no longer had to figure out which systemd service I set up as part of an application ages ago had gone down.
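
To illustrate the point with a made-up example: a hypothetical app that needs MariaDB and Redis boils down to a single docker-compose.yml along these lines (image names and credentials are placeholders):

services:
  app:
    image: example/app:latest     # placeholder application image
    depends_on: [db, cache]
  db:
    image: mariadb:11
    environment:
      MARIADB_ROOT_PASSWORD: changeme
    volumes:
      - db-data:/var/lib/mysql
  cache:
    image: redis:7
volumes:
  db-data:

One docker compose up -d brings all three up on a shared network; one docker compose down tears them back down.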

I decided to use Docker to host this blog. I knew what I had to do at a pretty basic level:

  • Create a Dockerfile which will contain the steps to set up a web server and serve the Hugo build files.
  • Create a Docker image using this Dockerfile.
  • Run this Docker image in a Docker container while exposing the web server’s port to the host.

Writing the Dockerfile turned out to be the easiest part. There's an image named httpd on Docker Hub which contains a barebones Apache installation on Linux. That's like 90% of the work already done (Docker keeps on surprising me). Now all I need to do is copy the Hugo build files into Apache's htdocs folder. My final Dockerfile is just two lines and looks like this:

FROM httpd:latest
COPY ./personal-blog/public /usr/local/apache2/htdocs/

Building the image is just another simple command:

docker build -t personal-blog:latest .

This creates a new Docker image with the name 'personal-blog' and the tag 'latest'.

Now all I need to do is run this image in a Docker container. This is again just one command:

docker run -d --name personal-blog -p 12345:80 personal-blog:latest

And that’s it. My blog is now accessible on my host machine on port 12345.
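
A quick way to confirm it's actually serving:

# Should come back with HTTP/1.1 200 OK and a Server: Apache header
curl -I http://localhost:12345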

Buying a domain

There are a lot of great domain registrars you can buy a domain from in 2024. I decided to go with Cloudflare. Going with Cloudflare gives me the added option of proxying all requests through Cloudflare, which means Cloudflare would automatically cache my blog's content on its CDN and give me analytics about my blog. I've chosen to turn proxying off for now, since I'm planning to parse the Nginx logs and collect metrics about the traffic that's hitting my blog (I'll probably be writing a post about that too pretty soon), and proxying through Cloudflare would mean missing a lot of that data.
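
With the record set to "DNS only", the subdomain resolves straight to the VPS rather than to a Cloudflare edge IP. A quick check:

dig +short blog.akjohn.dev
# should print the VPS's public IPv4 address, not a Cloudflare IP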

Nginx Proxy Manager

The final step is to map all requests to blog.akjohn.dev on the VPS to port 12345 on my homelab. For this, I use Nginx Proxy Manager (NPM). NPM is a pretty UI built on top of Nginx with a lot of useful features added in. It abstracts away the complexity of setting up subdomains, creating and renewing HTTPS certificates, and so on.

I created a proxy host in NPM as shown in the image below:

npm-blog-config
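
Under the hood, that proxy host amounts to a plain Nginx server block along these lines (a sketch, assuming the homelab is reachable at 10.0.0.2 over the WireGuard tunnel; NPM's generated config contains more than this):

server {
    listen 80;
    server_name blog.akjohn.dev;

    location / {
        # Forward everything to the Apache container on the homelab
        proxy_pass http://10.0.0.2:12345;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}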

I also set up SSL and automatic certificate renewal in NPM.

Final working setup

My current workflow to create a new blog post is this:

  • Run the command hugo new content <file-path> to create a new markdown file for a blog post.
  • Edit the markdown file with my content for the blog post.
  • Run the hugo command to create the final HTML, CSS and JS files for the blog.
  • Run the docker build command to create a new Docker image based on the Dockerfile I created. This step uses the httpd image and copies the Hugo build files into Apache's htdocs folder.
  • The final step is to run the docker run command to create a new Docker container based on the image I just built, passing -p 12345:80 to expose the container's port 80 on the host (the full cycle is collected as commands right after this list).
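
Put together, one publish cycle looks roughly like this (the docker rm is there because a container named personal-blog from the previous cycle will still exist):

hugo new content posts/my-new-post.md    # create the post, then write the content
hugo                                     # build the static files into ./public
docker build -t personal-blog:latest .   # bake them into a fresh image
docker rm -f personal-blog 2>/dev/null || true    # remove the previous container, if any
docker run -d --name personal-blog -p 12345:80 personal-blog:latest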

If a user were to hit the domain blog.akjohn.dev right now:

  • NPM would get the HTTP request.
  • See that the subdomain is blog and forward the request to port 12345 on my homelab through the WireGuard VPN.
  • Finally, my Docker container running Apache would respond to the request.

Future plans

My current workflow of writing a new blog post, building the Hugo site, creating the final Docker container and exposing it to the internet is all done manually right now. Sure, I could probably write a couple of bash scripts to automate a lot of it. But I think this workflow is perfect for a CI/CD pipeline. I'll probably be writing a blog post detailing how I do that pretty soon. Stay tuned!