capsule/content/gemlog/capsule_deployment_pipeline.gmi
(
title: "Capsule Deployment Pipeline",
summary: Some(
"How this capsule is published, vs Resource Wastage at Scale"
),
published: Some((
year: 2022,
month: 7,
day: 10,
hour: 12,
minute: 9,
second: 29,
)),
tags: ["CI", "docker", "make"],
)
---
In my previous post I espoused some thoughts on Docker and CI. In short: I hate Docker and think it's a huge resource waste and many projects abuse CI. So just to give an example of a different way, here's the Makefile that builds and deploys my own personal capsule.
```
all: build capsule.tar upload
build:
	zond build
capsule.tar: public/index.gmi
	cd public && tar cf ../capsule.tar *
upload: capsule.tar
	scp capsule.tar gimli:/home/nathan/capsule.tar
	ssh gimli tar xf capsule.tar -C /srv/gemini
clean:
	rm -rf capsule.tar public
.PHONY: all build clean upload
```
Now granted, this only works because I have control over the server (a Raspberry Pi 4 running openSUSE, which also hosts a Gitea instance and my Finger server). That said, after adding a new post all I have to do to publish it is type `make` on the command line. Walking through the Makefile: it builds the capsule with my site generator, Zond; creates a tar archive of the capsule; copies that archive to the server using scp; and finally extracts it on the server by running tar remotely via ssh. The entire process finishes in less than a second. All of the tooling is installed locally, and I just need ssh access to the server. Since I also have ssh-agent running, there usually isn't even a password prompt.
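To make the packaging step concrete, here it is run against a throwaway directory. The `public/index.gmi` below is just a placeholder standing in for what `zond build` would generate:
```shell
# Recreate the packaging step with a throwaway capsule directory.
# public/index.gmi is a placeholder for the generator's output.
mkdir -p demo/public
cd demo
printf '# Hello Gemini\n' > public/index.gmi
# tar from inside public/ so archive paths are relative to the capsule root
(cd public && tar cf ../capsule.tar *)
tar tf capsule.tar   # prints: index.gmi
```
Archiving from inside public/ is what lets the remote `tar xf ... -C /srv/gemini` drop the files directly into the served directory with no path juggling.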
Current industry practice would have you push to a remote git repo, triggering a CI build which pulls down some Docker shiite to build a full operating system that includes your static site generator, builds the site, creates a git commit, and pushes it to a different repo, where it is served via someone else's server, which might very well be another Docker container, one amongst thousands of server instances. Then the Docker shiite that was used to build the site gets torn down and deleted, only to be pulled down over the network and rebuilt the next time the pipeline runs.
I would submit that the CI deployment pipeline is no easier to set up and offers no added convenience over my simple little Makefile. The drawbacks, though, should be hugely obvious: an enormous amount of resource wastage, all in the name of convenience.
Someone posted on Fedi earlier about how they're using Woodpecker CI along with Codeberg Pages to conveniently deploy their site. I applaud that they're using Codeberg rather than GitHub, but ironically, their .woodpecker.yaml file is 21 lines compared with my 11-line deployment script. I'm not really impressed.
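For anyone who hasn't seen one, a Woodpecker pipeline for a static-site deploy tends to look roughly like this. This is a sketch from memory, not that person's actual file; the alpine image, the choice of generator, and the Codeberg repo URL are all placeholder assumptions:
```
# .woodpecker.yaml (hypothetical sketch)
steps:
  build:
    image: alpine:latest      # pull down a container image...
    commands:
      - apk add hugo          # ...install a whole toolchain into it...
      - hugo                  # ...just to run one build command
  deploy:
    image: alpine:latest
    commands:
      - apk add git
      - git clone https://codeberg.org/example/pages.git
      - cp -r public/* pages/
      - cd pages && git add -A && git commit -m deploy && git push
```
Every run starts from scratch: the images are pulled, the packages installed, and then the whole environment is thrown away.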
I want to suggest a slogan to the Docker project: Resource wastage at scale.