My new personal project

Hello, and welcome to my blog!

I’ve been meaning to do something like this for a while now. What stopped me? A combination of a stressful job and little time outside of work for projects, let alone reading, blogging, or much else.

What’s letting me do this now? I got a new job! And I’ve had much more time on my hands because of this. I’m getting back into a project I started, I’m learning a lot, and I figured I should blog my thoughts, findings, and anything else I come across. It’ll be my public journal of sorts. I don’t expect to write full tutorials or other articles. Posts may be in series, disjointed, never completed, etc. But it’s an outlet for me, which will help a lot.

Wow this blog is so…simple?

I didn’t want a full Wordpress installation. Admin account? Credentials? Database? Backups? Nah, son. That’s too much.

I looked around for simpler methods. Ghost looked promising, and even had an official Docker image (which we’ll get into). But I’ve heard some news about the organization, and I also think a blog should be pretty simple. Ghost leans heavily on JavaScript, and I wasn’t too inclined to find out how much of it runs just to view a normal article. For a static website, I don’t think any JavaScript is really needed, even if it’s fun to write. Having a rich text editor may have been nice, but it wasn’t a dealbreaker.

Besides, I’d still have to keep a database running, and either store it on my NAS or back it up, which sounds like a pain.

I ended up opting for a static site generator (SSG). That means my blog can be stored as source files instead of in a database, so I put it in a private GitHub repository. MUCH easier, in my book. I went with Hugo, along with the Etch theme (which I may or may not modify in the future). I followed the tutorial, slapped together a Dockerfile and a couple of shell scripts so I don’t need to remember the commands, committed it, et voilà, here we have it.
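For the curious, a Hugo-plus-nginx image like this can be sketched as a multi-stage Dockerfile along these lines (image names and paths here are illustrative placeholders, not my exact setup):

```dockerfile
# Build stage: render the site with Hugo.
# (klakegg/hugo is one community Hugo image; any Hugo build image works.)
FROM klakegg/hugo:ext-alpine AS build
COPY . /src
WORKDIR /src
RUN hugo --minify --destination /output

# Serve stage: nginx serves the generated static files over plain HTTP.
FROM nginx:alpine
COPY --from=build /output /usr/share/nginx/html
```

The nice part of the multi-stage build is that the final image contains only nginx and the rendered HTML, not the Hugo toolchain.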

Wait, how am I reading this; where is it hosted?

Perceptive! I’ll likely write a longer, more-detailed post about my internal network setup later. But this is worth a short summary.

I have an Intel NUC (actually 2, but I haven’t gotten around to setting up the second one yet) which is mainly configured for running several Docker containers. I set up the containers on a macvlan network, so that each container gets its own IP address on the internal network, even though they and the host share a single NIC. This was a bitch to set up at first, but has held up reasonably well.
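Creating a macvlan network like that looks roughly like this (the subnet, gateway, and parent interface are placeholders for whatever your LAN actually uses):

```shell
# Create a macvlan network bridged to the host's NIC (eth0 here),
# so each container gets its own IP address on the LAN.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  lan_macvlan
```

One well-known quirk of macvlan: the host itself can’t talk to containers on that network without extra routing, which may be part of why it was painful to set up.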

The two containers related to this build are Traefik and nginx.

Traefik is a reverse proxy that works a bit closer to the Docker level, as opposed to nginx, which is a bit old-fashioned. I wanted to use nginx at first, but skimming articles on configuring it and getting it to work with Let’s Encrypt looked…scary. Learning new config file formats and performing odd bootstrapping with Let’s Encrypt would have been a hassle I didn’t want to re-learn every time the certificate expired. Traefik seemed to integrate with it much more easily, and that’s what I wanted: easy. Get this set up, and not have to worry about it later at all.

The nginx container that I mentioned is for the Hugo blog image. This was because plain HTTP is super simple with nginx, with basically no configuration except slapping some static files in a certain directory.

The way it works:

  1. Request comes in through port 443 to my router
  2. Request is routed to the Traefik container’s internal IP (which leads to the NUC, but specifically to a certain container, which I like)
  3. Traefik sees the incoming Host header in the HTTP message, and matches it to the running nginx container
  4. Traefik routes the request to the matched container (which it discovered via the Docker socket, /var/run/docker.sock) on port 80 to fulfill the request.

Connecting the Hugo/nginx container to Traefik is also pretty simple. In my Docker Compose file, I can add labels to my nginx service, like so:

services:
  jamesnlblog:
    image: jamesnlblog
    restart: always
    networks:
      - web
    ports:
      - "8081:80"
    labels:
      - "traefik.docker.network=web"
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:blog.jamesnl.com"
      - "traefik.port=80"
      - "traefik.protocol=http"
    container_name: jamesnlblog

Traefik scans for the labels, matches blog.jamesnl.com to the jamesnlblog container, talks to it on the container’s internal port 80 via HTTP, etc. This means I don’t have to build a custom Traefik image with specific configuration, and can leave most of it in my Docker Compose file. Traefik does have a small configuration of its own, mostly to redirect HTTP to HTTPS and such, but it’s minor.

It took me a bit to get the networking correct. I originally wanted to have Traefik and nginx on the macvlan network, but ran into some small problems. I ended up following Traefik’s tutorial and creating a basic custom network, web, so that the containers could talk to each other nicely. However, Docker didn’t like running nginx on two networks while changing the published port (despite "8081:80", it insisted on only listening on port 80). I fumbled with traefik.port=8081 and, after poring over Traefik debug logs, found that I needed to set it to port 80: Traefik talks to the container’s internal port, even though nginx was published on the host’s port 8081 (something else on the host was already bound to port 80, so it couldn’t use that). I could probably add this back to the macvlan, but I don’t have much of a need to.
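The web network referenced in the Compose snippet above is declared at the top level of the Compose file, roughly like this (a sketch; the exact declaration depends on whether you let Compose create the network or create it yourself beforehand):

```yaml
# "external: true" tells Compose to attach to an existing network
# (e.g. one created with `docker network create web`)
# rather than creating and namespacing its own.
networks:
  web:
    external: true
```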

Lastly, for Let’s Encrypt, this is where things got interesting. It turns out my ISP, Cox, blocks port 80, and port 80 is pretty integral to the standard HTTP challenge used to verify you own the domain. There were two other options: a DNS challenge, which would have required handing my domain registrar’s credentials to Traefik so it could dynamically add the challenge details to a new DNS record, or a TLS challenge. The TLS challenge turned out to be super simple: it needed no advanced configuration, and it verifies the challenge on port 443 (which is thankfully open). After that, and a few seconds, the Let’s Encrypt certificate finally appeared!
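In Traefik’s own config (a traefik.toml here, since the labels above use Traefik v1 syntax), the HTTPS redirect and the TLS challenge boil down to something like this (the email and storage path are placeholders):

```toml
# Listen on 80 and 443; redirect all plain HTTP to HTTPS.
[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]

# Let's Encrypt via the TLS-ALPN challenge, which runs over
# port 443 -- handy when the ISP blocks port 80.
[acme]
  email = "me@example.com"
  storage = "acme.json"
  entryPoint = "https"
  [acme.tlsChallenge]
```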

Now, all I have to do is perform a git pull in my home directory on the Intel NUC, run a docker build with a tag, and re-up my docker compose. Then my blog gets updated with new posts and changes.
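That update routine amounts to a few commands (the repo path and image tag are placeholders for my actual ones):

```shell
# Pull new posts, rebuild the site image, and recreate the container.
cd ~/blog
git pull
docker build -t jamesnlblog .
docker-compose up -d
```

This is exactly the kind of thing that ends up in one of those shell scripts I mentioned, so I never have to remember it.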

Wasn’t this post supposed to be about a new project?

Heh, yea. I suppose I got carried away. Documenting my full setup in more detail will be fun.

Before I left my old job, I began working on a project. Partly as a way to get away from the stress, partly as a way to get back into Swift, and partly as a way to make myself a stronger candidate while searching for my new job.

The project is a Swift implementation of SFTP, the secure file transfer protocol. I oh-so-cleverly named it JLSFTP, my initials. You can view it here: https://github.com/Jman012/jlsftp. It’s very much a work in progress at this point.

Why am I working on this? It seems so basic, or so un-needed, really. Well, there are a few reasons.

First, I like that the scope of this is so restrained. It’ll be a library for a well-known, low-level protocol. There’s not much that is very inventive or novel about it, so I can spend the time organizing it well and to my standards. In a job, you’ve no doubt been restricted to the structure that is already set up. Only rarely do you get the chance to make something brand new or perform a large refactoring and restructuring of the codebase. And before then, you might be grumbling about how bad it is. But if the chance arises, how do you know you’ll make a good choice when changing it? With practice, of course. This is a small-scale practice ground for me to test out the ideas I’ve had over time, in my grumblings. I’ll get to try things out, see what works and what doesn’t, and blog about my results. Rinse and repeat with different projects, and you’ll bring so much more to your professional environment.

It also allows me to look into new technologies, or the basics of every-day technologies. Swift evolves quite rapidly, and I’ll get the chance to stay up-to-date on its evolution and features. I’ll also get to actually use Swift, as opposed to C# like my job uses, which I have my own gripes about. I’m currently researching SwiftNIO for the networking. This will definitely be its own post, but it has led me to research not only low-level networking models (instead of high-level HTTP requests and responses), but also runtime event loops, asynchronous programming, and more.

My two jobs so far have also been very light on unit testing. Making a fully-fledged open source library with >90% unit test coverage will really get me more into the test-driven development ideology.

Past this, making client implementations using this library in, say, SwiftUI, would lead to more technologies, different project types that require different organization skills, etc.

I hope to be able to explain more details along the way. Stay tuned!