
Gitmojis are not just cute emojis

When you first encounter Gitmoji, it might feel like a whimsical idea — adding emojis to your Git commit messages? Surely that is just a fun way to decorate your history, right?

Well… yes. But also, no. Gitmojis are much more than just cute little icons. They are a powerful convention that improves collaboration, commit clarity, and even automation in your development workflow. In this post, we will explore how Gitmojis can boost your Git hygiene, help your team, and make your commits more expressive — without writing a novel in every message.


What is Gitmoji?

Gitmoji is a project by Carlos Cuesta that introduces a standardized set of emojis to prefix your Git commit messages. Each emoji represents a common type of change. For example:

Emoji   Code         Description
✨       :sparkles:   New feature
🐛       :bug:        Bug fix
📝       :memo:       Documentation change
♻️       :recycle:    Code refactor
🚀       :rocket:     Deployment

Why Use Gitmoji?

1. Readable History at a Glance

Reading a log full of generic messages like fix stuff, more changes, or final update is painful. Gitmojis help you scan through history and immediately understand what types of changes were made. Think of it as color-coding your past.

🧱 Example — Traditional Git log:

git log --oneline
b11d9b3 Fix things
a31cbf1 Final touches
7c991e8 Update again

🔎 Example — Gitmoji-enhanced log:

🐛 Fix overflow issue on mobile nav
✨ Add user onboarding wizard
📝 Update README with environment setup
🔥 Remove unused CSS classes

2. Consistency Without Bureaucracy

Git commit conventions like Conventional Commits are excellent for automation but can be intimidating and verbose. Gitmoji offers a simpler, friendlier alternative — a consistent prefix without strict formatting.

You still write meaningful commit messages, but now with context that is easy to scan.


3. Tooling Support with gitmoji-cli

Gitmoji CLI is a command-line tool that makes committing with emojis seamless.

🛠 Installation:

npm install -g gitmoji-cli

🧪 Usage:

gitmoji -c

You will be greeted with an interactive prompt:

✔ Gitmojis fetched successfully, these are the new emojis:
? Choose a gitmoji: (Use arrow keys or type to search)
❯ 🎨  - Improve structure / format of the code. 
  ⚡️  - Improve performance. 
  🔥  - Remove code or files. 
  🐛  - Fix a bug. 
  🚑️  - Critical hotfix. 
  ✨  - Introduce new features. 
  📝  - Add or update documentation. 
(Move up and down to reveal more choices)

The CLI also supports conventional formatting and custom scopes. Want to tweak your settings?

gitmoji --config

You can also use it in CI/CD pipelines or with Git hooks to enforce Gitmoji usage across teams.
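
To enforce this in a team, a commit-msg Git hook is one option. Below is a minimal, hypothetical sketch (not part of gitmoji-cli itself): it accepts a message whose first line starts with either a :shortcode: or an emoji, approximated here as any non-ASCII character, and it assumes GNU grep with PCRE support.

#!/bin/bash
# Hypothetical .git/hooks/commit-msg hook (make it executable).
# Rejects a commit message unless the first line starts with an emoji
# (approximated as any non-ASCII character) or a :shortcode: prefix.
# Assumes GNU grep with PCRE support (-P).
msg_file="$1"
if ! head -n 1 "$msg_file" | grep -qP '^(:[a-z0-9_+-]+:|[^\x00-\x7F])'; then
    echo "Commit message should start with a gitmoji, e.g. ✨ or :sparkles:" >&2
    exit 1
fi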


4. Better Collaboration and Code Review

Your teammates will thank you when your commits say more than “fix” or “update”. Gitmojis provide context and clarity — especially during code review or when you are scanning a pull request with dozens of commits.

🧠 Before:

fix
update styles
final commit

After:

🐛 Fix background image issue on Safari
💄 Adjust padding for login form
✅ Add final e2e test for login flow



5. Automation Ready

Need to generate changelogs or trigger actions based on commit types? Gitmoji messages are easy to parse, making them automation-friendly.

Example with a simple script:

git log --format="%s" | grep "^✨"

You can even integrate this into release workflows with tools like semantic-release or your own custom tooling.
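
For example, here is a minimal, hypothetical changelog sketch along those lines. It assumes releases are tagged and that feature commits are prefixed with ✨:

# List all ✨ commits since the last tag as a "Features" changelog section.
last_tag=$(git describe --tags --abbrev=0)
echo "## Features"
git log "${last_tag}..HEAD" --format="%s" | grep "^✨" | sed 's/^✨ */- /'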


Do Not Let the Cute Icons Fool You

Yes, emojis are fun. But behind the smiling faces and sparkles is a thoughtful system that improves your Git workflow. Whether you are working solo or as part of a team, Gitmoji brings:

  • ✅ More readable commit history
  • ✅ Lightweight commit standards
  • ✅ Easy automation hooks
  • ✅ A dash of joy to your development day

So next time you commit, try it:

gitmoji -c

Because Gitmojis are not just cute.
They are practical, powerful — and yes, still pretty adorable.


🚀 Get started at https://gitmoji.dev

🎉 Happy committing!


Suspending cloud backup of a NAS that cannot be reached

I use CrashPlan for cloud backups. In 2018 they stopped their Home solution, so I switched to their Business plan.

It works very well on Linux, Windows and Mac, but it was always a bit fickle on my QNAP NAS. There is a qpkg package for CrashPlan, and there are lots of posts on the QNAP support forum. After 2018, none of the solutions to run a backup on the NAS itself worked any longer. So I gave up, and I didn’t have a backup for almost 4 years.

Now that I have mounted most of the network shares on my local filesystem, I can just run the backup on my PC. I made 3 different backup sets, one for each of the shares. There was only one thing I had to fix: if CrashPlan runs when the shares aren’t mounted, it thinks that the directories are empty, and it will delete the backup on the cloud storage. As soon as the shares come back online, the files are backed up again. It doesn’t have to upload all files again, because CrashPlan doesn’t purge the files in its cloud immediately, but the file verification still happens, and that takes time and bandwidth.

I contacted CrashPlan support about this issue, and this was their reply:

I do not believe that this scenario can be avoided with this product – at least not in conjunction with your desired setup. If a location within CrashPlan’s file selection is detached from the host machine, then the program will need to rescan the selection. This is an inherent drawback to including network drives within your file selection. Your drives need to retain a stable connection in order to avoid the necessity of the software to run a new scan when it sees the drives attached to the device (so long as they’re within the file selection) detach and reattach.

Since the drive detaching will send a hardware event from the OS to CrashPlan, CrashPlan will see that that hardware event lies within its file selection – due to the fact that you mapped your network drives into a location which you’ve configured CrashPlan to watch. A hardware event pointing out that a drive within the /home/amedee/Multimedia/ file path has changed its connection status will trigger a scan. CrashPlan will not shut down upon receiving a drive detachment or attachment hardware event. The program needs to know what (if anything) is still there, and is designed firmly to track those types of changes, not to give up and stop monitoring the locations within its file selection.

There’s no way around this, aside from ensuring that you keep a stable connection. This is an unavoidable negative consequence of mapping a network drive to a location which you’ve included in CrashPlan’s file selection. The only solution would be for you to engineer your network so as not to interrupt the connection.

Nathaniel, Technical Support Agent, Code42

I thought as much already. No problem, Nathaniel! I found a workaround: a shell script that checks whether a certain marker file on the network share exists. If it doesn’t, the script stops the CrashPlan service, which prevents CrashPlan from scanning the file selection. As soon as the file becomes available again, the CrashPlan service is started. This workaround works and is good enough for me. It may not be the cleanest solution, but I’m happy with it.

I first considered using inotifywait, which listens to filesystem events like files being modified or deleted, or a filesystem being unmounted. However, when the network connection simply drops for whatever reason, inotifywait doesn’t get an event. So I have to resort to checking whether a file exists.

#!/bin/bash
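# Check every minute whether all marker files exist;
# stop the CrashPlan service while any of them is missing,
# and start it again once they are all back.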
file_list="/home/amedee/bin/file_list.txt"

all_files_exist () {
    while read -r line; do
        if [ ! -f "$line" ]; then
            echo "$line not found!"
            return 1
        fi
    done < "$file_list"
}

start_crashplan () {
    /etc/init.d/code42 start
}

stop_crashplan () {
    /etc/init.d/code42 stop
}

while true; do
    if all_files_exist; then
        start_crashplan
    else
        stop_crashplan
    fi
    sleep 60
done
  • file_list.txt contains a list of test files on different shares that I want to check. They all have to be present; if even one of them is missing or can’t be reached, the service must be stopped.
/home/amedee/Downloads/.testfile
/home/amedee/Multimedia/.testfile
/home/amedee/backup/.testfile
  • I can add or remove shares without modifying the script; I only need to edit file_list.txt – even while the script is still running.
  • Starting (or stopping) the service when it is already started (or stopped) is perfectly fine; the startup script itself checks whether it has already started (or stopped).
  • This script needs to be run at startup as root, so I call it from cron (sudo crontab -u root -e):
@reboot /home/amedee/bin/test_cifs_shares.sh

This is what CrashPlan support replied when I told them about my workaround:

Hello Amedee,

That is excellent to hear that you have devised a solution which fits your needs!

This might not come in time to help smooth out your experience with your particular setup, but I can mark this ticket with a feature request tag. These tags help give a resource to our Product team to gauge customer interest in various features or improvements. While there is no way to use features within the program itself to properly address the scenario in which you unfortunately find yourself, as an avenue for adjustments to how the software currently operates in regards to the attachment or detachment of network drives, it’s an entirely valid request for changes in the future.

Nathaniel, Technical Support Agent, Code42

That’s very nice of you, Nathaniel! Thank you very much!


Mounting NAS shares without slow startup

I have a NAS, a QNAP TS-419P II. It’s about a decade old and it has always served me well. For various reasons I have never used it in an efficient way; it was always like a huge external drive, not really integrated with the rest of my filesystems.

The NAS has a couple of CIFS shares with very obvious names:

  • backup
  • Download
  • Multimedia, with directories Music, Photos and Videos

(There are a few more shares, but they aren’t relevant now.)

In Ubuntu, a user home directory has these default directories:

  • Downloads
  • Music
  • Pictures
  • Videos

I want to store the files in these directories on my NAS.

Mounting shares, the obvious way

First I moved all existing files from ~/Downloads, ~/Music, ~/Pictures, ~/Videos to the corresponding directories on the NAS, to get empty directories. Then I made a few changes to the directories:

$ mkdir backup
$ mkdir Multimedia
$ rmdir Music
$ ln -s Multimedia/Music Music
$ rmdir Pictures
$ ln -s Multimedia/Photos Pictures
$ rmdir Videos
$ ln -s Multimedia/Videos Videos

The symbolic links now point to directories that don’t (yet) exist, so they appear broken – for now.

The next step is to mount the network shares to their corresponding directories.

The hostname of my NAS is minerva, after the Roman goddess of wisdom. To avoid using IP addresses, I added its IP address to /etc/hosts:

127.0.0.1	localhost
192.168.1.1     modem
192.168.1.63	minerva

The shares are password protected, and I don’t want to type the password each time I use the shares. So the credentials go into the file /home/amedee/.smb:

username=amedee
password=NOT_GOING_TO_TELL_YOU_:-p

Even though I am the only user of this computer, it’s best practice to protect that file, so I do

$ chmod 400 /home/amedee/.smb

Then I added these entries to /etc/fstab:

//minerva/download	/home/amedee/Downloads	cifs	uid=1000,gid=1000,credentials=/home/amedee/.smb,iocharset=utf8 0 0
//minerva/backup	/home/amedee/backup	cifs	uid=0,gid=1000,credentials=/home/amedee/.smb,iocharset=utf8 0 0
//minerva/multimedia	/home/amedee/Multimedia	cifs	uid=0,gid=1000,credentials=/home/amedee/.smb,iocharset=utf8 0 0
  • CIFS shares don’t have a concept of per-file ownership, so the entire share is presented as owned by a single user. uid=1000 and gid=1000 are the user ID and group ID of the user amedee, so the files on the download share appear to be owned by me when I do ls -l. (The backup and multimedia shares use uid=0, so they appear to be owned by root.)
  • The credentials option points to the file with the username and password.
  • The default character encoding for mounts is iso8859-1, for legacy reasons. I may have files with funky characters, so iocharset=utf8 takes care of that.

Then I did sudo mount -a and yay, the files on the NAS appear as if they were on the local hard disk!

Fixing a slow startup

This all worked very well, until I did a reboot. It took a really, really long time to get to the login screen. I did lots of troubleshooting, which was really boring, so I’ll skip to the conclusion: the network mounts were slowing things down, and if I manually mount them after login, then there’s no problem.

It turns out that systemd provides a way to automount filesystems on demand. So they are only mounted when the operating system tries to access them. That sounds exactly like what I need.

To achieve this, I only needed to add noauto,x-systemd.automount to the mount options. I also added x-systemd.device-timeout=10, which means that systemd waits for 10 seconds, and then gives up if it’s unable to mount the share.
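
With those options added, the Downloads entry from earlier looks like this (all on one line):

//minerva/download	/home/amedee/Downloads	cifs	noauto,x-systemd.automount,x-systemd.device-timeout=10,uid=1000,gid=1000,credentials=/home/amedee/.smb,iocharset=utf8 0 0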

From now on I’ll never not use noauto,x-systemd.automount for network shares!

While researching this, I found some documentation that claims you don’t need noauto if you have x-systemd.automount in your mount options. Yours truly has tried it with and without noauto, and I can confirm from first-hand experience that you definitely need noauto. Without it, there is still the long waiting time at login.


I am learning Swedish 🇸🇪

I used to write this blog in Dutch. Nowadays it is mostly in English, but as an exception, this blog post is in Swedish.

In September 2020 I started learning Swedish at the evening school in Aalst. Why? There are several reasons:

  • I play the nyckelharpa, a typically Swedish musical instrument. I take courses at home and abroad, often from Swedish teachers. That is how I got to know people in Sweden, and then it is nice to speak a little Swedish to keep in touch online.
  • When you look something up online about the nyckelharpa, it is often in Swedish. I also have a wonderful book, “Nyckelharpan – Ett unikt svenskt kulturarv” (The nyckelharpa: a unique Swedish cultural heritage) by Esbjörn Hogmark, and I want to be able to read it, not just look at the pictures.
  • I think Sweden is a beautiful country that I might want to visit some day. Norway too, and there they speak a peculiar dialect of Swedish. 😛
  • I want to take a course at the Eric Sahlström Institutet in Tobo some day. Then it would be good to understand the teachers in their own language.
  • I like languages and language learning! It keeps my brain fresh and healthy. 😀

And if you didn’t understand anything: there’s always Google Translate!


I have a ridiculous amount of kernels

In previous blog posts I wrote about how I found a possible bug in the Linux kernel, or more precisely, in the kernel that Ubuntu derived from the mainline kernel.

To be able to install any kernel version 5.15.7 or higher, I also had to install libssl3.

The result is that I now have 37 kernels installed, taking up a little over 2 GiB of disk space:

$ (cd /boot ; ls -hgo initrd.img-* ; ls /boot/initrd.img-* | wc -l)
-rw-r--r-- 1 39M mrt  9 09:54 initrd.img-5.13.0-051300-generic
-rw-r--r-- 1 40M mrt  9 09:58 initrd.img-5.13.0-19-generic
-rw-r--r-- 1 40M mrt  9 09:58 initrd.img-5.13.0-20-generic
-rw-r--r-- 1 40M mrt  9 09:57 initrd.img-5.13.0-21-generic
-rw-r--r-- 1 44M mrt 30 17:46 initrd.img-5.13.0-22-generic
-rw-r--r-- 1 40M mrt  9 09:56 initrd.img-5.13.0-23-generic
-rw-r--r-- 1 40M mrt  9 09:56 initrd.img-5.13.0-25-generic
-rw-r--r-- 1 40M mrt  9 09:56 initrd.img-5.13.0-27-generic
-rw-r--r-- 1 40M mrt  9 09:55 initrd.img-5.13.0-28-generic
-rw-r--r-- 1 40M mrt  9 09:55 initrd.img-5.13.0-30-generic
-rw-r--r-- 1 45M mrt  9 12:02 initrd.img-5.13.0-35-generic
-rw-r--r-- 1 45M mrt 24 23:17 initrd.img-5.13.0-37-generic
-rw-r--r-- 1 45M mrt 30 17:49 initrd.img-5.13.0-39-generic
-rw-r--r-- 1 39M mrt  9 09:54 initrd.img-5.13.1-051301-generic
-rw-r--r-- 1 39M mrt  9 09:54 initrd.img-5.13.19-051319-generic
-rw-r--r-- 1 37M mrt  9 09:53 initrd.img-5.13.19-ubuntu-5.13.0-22.22
-rw-r--r-- 1 37M mrt  9 09:53 initrd.img-5.13.19-ubuntu-5.13.0-22.22-0-g3ab15e228151
-rw-r--r-- 1 37M mrt  9 09:52 initrd.img-5.13.19-ubuntu-5.13.0-22.22-317-g398351230dab
-rw-r--r-- 1 37M mrt  9 09:52 initrd.img-5.13.19-ubuntu-5.13.0-22.22-356-g8ac4e2604dae
-rw-r--r-- 1 37M mrt  9 09:52 initrd.img-5.13.19-ubuntu-5.13.0-22.22-376-gfab6fb5e61e1
-rw-r--r-- 1 37M mrt  9 09:51 initrd.img-5.13.19-ubuntu-5.13.0-22.22-386-gce5ff9b36bc3
-rw-r--r-- 1 37M mrt  9 09:51 initrd.img-5.13.19-ubuntu-5.13.0-22.22-387-g0fc979747dec
-rw-r--r-- 1 37M mrt  9 09:50 initrd.img-5.13.19-ubuntu-5.13.0-22.22-388-g20210d51e24a
-rw-r--r-- 1 37M mrt  9 09:50 initrd.img-5.13.19-ubuntu-5.13.0-22.22-388-gab2802ea6621
-rw-r--r-- 1 37M mrt  9 09:50 initrd.img-5.13.19-ubuntu-5.13.0-22.22-391-ge24e59fa409c
-rw-r--r-- 1 37M mrt  9 09:49 initrd.img-5.13.19-ubuntu-5.13.0-22.22-396-gc3d35f3acc3a
-rw-r--r-- 1 37M mrt  9 09:49 initrd.img-5.13.19-ubuntu-5.13.0-22.22-475-g79b62d0bba89
-rw-r--r-- 1 37M mrt  9 09:48 initrd.img-5.13.19-ubuntu-5.13.0-23.23
-rw-r--r-- 1 40M mrt  9 09:48 initrd.img-5.14.0-051400-generic
-rw-r--r-- 1 40M mrt  9 10:31 initrd.img-5.14.21-051421-generic
-rw-r--r-- 1 44M mrt  9 12:39 initrd.img-5.15.0-051500-generic
-rw-r--r-- 1 46M mrt  9 12:16 initrd.img-5.15.0-22-generic
-rw-r--r-- 1 46M mrt 28 23:27 initrd.img-5.15.32-051532-generic
-rw-r--r-- 1 46M mrt 17 21:12 initrd.img-5.16.0-051600-generic
-rw-r--r-- 1 48M mrt 28 23:19 initrd.img-5.16.16-051616-generic
-rw-r--r-- 1 45M mrt 28 23:11 initrd.img-5.17.0-051700-generic
-rw-r--r-- 1 46M apr  8 17:02 initrd.img-5.17.2-051702-generic
37
  • Versions 5.xx.yy-zz-generic are installed with apt.
  • Versions 5.xx.yy-05xxyy-generic are installed with the Ubuntu Mainline Kernel Installer.
  • Versions 5.xx.yy-ubuntu-5.13.0-zz.zz-nnn-g<commithash> are compiled from source, where <commithash> is the commit of the kernel repository that I compiled.

These are the kernels where something unexpected happens with my USB devices:

  • Ubuntu kernels 5.13.23 and up – including 5.15 kernels of Ubuntu 22.04 LTS (Jammy Jellyfish).
  • Kernels compiled from the Ubuntu source, starting 387 commits after kernel 5.13.22.
  • Mainline kernels 5.15.xx.

When Ubuntu finally bases their kernel on mainline 5.16 or higher, the USB bug will be solved.


Install libssl3 on Ubuntu versions before Jammy

Ubuntu mainline kernel packages 5.15.7 and later bump a dependency from libssl1.1 (>= 1.1.0) to libssl3 (>= 3.0.0~~alpha1).

However, the libssl3 package is not available for Ubuntu 21.10 Impish Indri. It’s only available for Ubuntu 22.04 Jammy Jellyfish (still in beta at the time of writing) and later.

libssl3 further depends on libc6 (>= 2.34) and debconf, both of which are available in the 21.10 repositories.

Here are a few different ways to resolve the dependency:

Option 1

Use apt pinning to install libssl3 from a Jammy repo, without pulling in everything else from Jammy.

This is more complicated, but it allows the libssl3 package to receive updates automatically.
The commands below use sudo where root privileges are needed.

  • Create an apt config file to specify your system’s current release as the default release for installing packages, instead of simply the highest version number found. We are about to add a Jammy repo to apt, which will contain a lot of packages with higher version numbers, and we want apt to ignore them all.
$ echo 'APT::Default-Release "impish";' \
    | sudo tee /etc/apt/apt.conf.d/01ubuntu
  • Add the Jammy repository to the apt sources. If your system isn’t “impish”, change that below.
$ awk '($1$3$4=="debimpishmain"){$3="jammy" ;print}' /etc/apt/sources.list \
    | sudo tee /etc/apt/sources.list.d/jammy.list
  • Pin libssl3 to the jammy version in apt preferences. This overrides the Default-Release above, just for the libssl3 package.
$ sudo tee /etc/apt/preferences.d/libssl3 >/dev/null <<%%EOF
Package: libssl3
Pin: release n=jammy
Pin-Priority: 900
%%EOF
  • Install libssl3:
$ sudo apt update
$ sudo apt install libssl3

Later, when Jammy is officially released, delete all 3 files created above:

$ sudo rm --force \
    /etc/apt/apt.conf.d/01ubuntu \
    /etc/apt/sources.list.d/jammy.list \
    /etc/apt/preferences.d/libssl3

Option 2

Download the libssl3 deb package for Jammy and install it manually with dpkg -i filename.deb.

This only works if there aren’t any additional dependencies; otherwise you would have to install those as well, with a risk of breaking your system. Here Be Dragons…
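
As a sketch, that could look like the commands below. The filename is a placeholder that I made up for illustration; look up the current libssl3 package for Jammy on packages.ubuntu.com first.

$ wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl3_3.0.1-0ubuntu1_amd64.deb
$ sudo dpkg -i libssl3_3.0.1-0ubuntu1_amd64.deb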


Printing multiple PDF files from console with lp

Recently I wanted to print some PDF files containing sheet music. The tedious way to do that would be to open them one by one in Evince and press the print button. Surely there must be a more efficient way?

$ ls -l --human-readable *.pdf
-r--r--r-- 1 amedee amedee 217K apr 15  2020 'Arthur original.pdf'
-r--r--r-- 1 amedee amedee 197K apr 13  2020 'Canal en octobre.pdf'
-r--r--r-- 1 amedee amedee  14K apr 13  2020  DenAndro.pdf
-r--r--r-- 1 amedee amedee  42K apr 14  2020 'Doedel you do.pdf'
-r--r--r-- 1 amedee amedee  57K apr 13  2020  Flatworld.pdf
-r--r--r-- 1 amedee amedee  35K apr 16  2020 'Jump at the sun.pdf'
-r--r--r-- 1 amedee amedee 444K jun 19  2016 'Kadril Van Mechelen.pdf'
-r--r--r-- 1 amedee amedee  15K apr 13  2020  La-gavre.pdf
-r--r--r-- 1 amedee amedee  47K apr 13  2020 'Le petit déjeuner.pdf'
-r--r--r-- 1 amedee amedee 109K apr 13  2020  LesChaminoux__2016_04_24.cached.pdf
-r--r--r-- 1 amedee amedee 368K apr 13  2020 'Mazurka It.pdf'
-r--r--r-- 1 amedee amedee 591K apr 13  2020 'Narrendans uit Mater.pdf'
-r--r--r-- 1 amedee amedee 454K apr 13  2020 'Neverending jig.pdf'
-r--r--r-- 1 amedee amedee 1,1M apr 14  2020 'Red scissors.pdf'
-r--r--r-- 1 amedee amedee  35K apr 13  2020  Scottish-à-VirmouxSOL.pdf
-r--r--r-- 1 amedee amedee  76K apr 14  2020 'Tarantella Napolitana meest gespeelde versie.pdf'
-r--r--r-- 1 amedee amedee 198K apr 15  2020 'Zot kieken!.pdf'

There are 2 console commands for printing: lp and lpr. One comes from grandpa System V, the other from grandpa BSD, and both are included in CUPS. The nice thing about these commands is that they know how to interpret PostScript and PDF files. So this is going to be easy: just cd into the directory with the PDF files and print them all:

$ lp *.pdf
lp: Error - No default destination.

Oops. A quick Google search of this error message tells me that I don’t have a default printer.

Configuring a default printer

First I use lpstat to find all current printers:

$ lpstat -p -d
printer HP_OfficeJet_Pro_9010_NETWORK is idle.  enabled since za 12 mrt 2022 00:00:28
printer HP_OfficeJet_Pro_9010_USB is idle.  enabled since za 12 mrt 2022 00:00:17
no system default destination

I have an HP OfficeJet Pro 9012e printer, which Ubuntu recognizes as a 9010 series. Close enough. It’s connected over network and USB. I’m setting the network connection as default with lpoptions:

$ lpoptions -d $(lpstat -p -d | head --lines=1 | cut --delimiter=' ' --fields=2)
copies=1 device-uri=hp:/net/HP_OfficeJet_Pro_9010_series?ip=192.168.1.9 finishings=3 job-cancel-after=10800 job-hold-until=no-hold job-priority=50 job-sheets=none,none marker-change-time=0 media=iso_a4_210x297mm number-up=1 output-bin=face-down print-color-mode=color printer-commands=none printer-info printer-is-accepting-jobs=true printer-is-shared=true printer-is-temporary=false printer-location printer-make-and-model='HP Officejet Pro 9010 Series, hpcups 3.22.2' printer-state=3 printer-state-change-time=1649175159 printer-state-reasons=none printer-type=4124 printer-uri-supported=ipp://localhost/printers/HP_OfficeJet_Pro_9010_NETWORK sides=one-sided

I can then use lpq to verify that the default printer is ready:

$ lpq
HP_OfficeJet_Pro_9010_NETWORK is ready
no entries

Printing multiple files from console

I found that if I naively do lp *.pdf, only the last file gets printed. That’s unexpected, and I can’t be bothered to find out why. So I just use ls and feed that to a while loop. It’s quick and dirty, and using find + xargs would probably be better if there were “special” characters in the filenames, but that’s not the case here.

There’s one caveat: when the PDF files are printed one by one, the first page ends up at the bottom of the paper stack, so I need to print them in reverse order.

$ ls --reverse *.pdf | while read -r f; do lp "$f"; done

With that command I got 17 print jobs in the printer queue, one for each file.

Now that I know how to print from console, I’ll probably do that more often. The man page of lp describes many useful printing options, like printing double sided:

$ lp -o media=a4 -o sides=two-sided-long-edge filename
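
Two more options from that man page, for example: -n sets the number of copies and -P selects a page range.

$ lp -n 2 filename
$ lp -P 2-5 filename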

Moving!

A few months ago I wrote about my preferred region to work. Well, that’s no longer true. The co-housing project where I live (in Merelbeke, near Ghent) is going to end, and I need to move by the end of July 2022.

This also has an influence on my preferred place to work. I have decided to find a place to live not too far from work, wherever that may be (because I’m still on the #jobhunt). Ideally it would be inside the triangle Ghent-Antwerp-Brussels but I think I could even be convinced by the Leuven area.

Factors I’ll take into account:

  • Elevation and hydrology – with climate change, I don’t want to live somewhere with increased risk of flooding.
  • Proximity of essential shops and services.
  • Proximity of public transport with a good service.
  • Proximity of car sharing services like Cambio.
  • Not too far from something green (a park will do just fine) to go for a walk or a run.

I haven’t started looking yet; I’m not even sure if I want to do co-housing again or live on my own. That’ll depend on the price, I guess. (Living alone? In this economy???) First I want to land a job.

That makes sense—without knowing where I will be working, house hunting feels a bit like putting the cart before the horse. Still, I find myself browsing listings occasionally, more out of curiosity than anything else. It is interesting to see how prices and availability vary wildly, even within the triangle I mentioned. Some towns look charming on paper but lack the basics I need; others tick all the boxes but come with a rental price that makes my eyebrows do gymnastics.

In the meantime, I am mentally preparing for a lot of change. Leaving my current co-housing situation is bittersweet. On one hand, it has been a wonderful experience: shared dinners, spontaneous conversations, and a real sense of community. On the other hand, living with others also means compromise, and part of me wonders what it would be like to have a space entirely to myself. No shared fridges, no waiting for the bathroom, and the joy of decorating a place to my own taste.

That said, co-housing still appeals to me. If I stumble upon a like-minded group or an interesting project in a new city, I would definitely consider it. The key will be finding something that balances affordability, autonomy, and connection. I do not need a commune, but I also do not want to feel isolated.

I suppose this transition is about more than just logistics—it is also a moment to rethink what I want day-to-day life to look like. Am I willing to commute a bit longer for a greener environment? Would I trade square meters for access to culture and nightlife? Do I want to wake up to birdsong or the rumble of trams?

These are the questions swirling around my head as I polish up my CV, send out job applications, and daydream about future homes. It is a lot to juggle, but oddly enough, I feel optimistic. This is a chance to design a new chapter from scratch. A little daunting, sure. But also full of possibility.


How I organize my message flow

Email

I use 2 email clients at the same time: Thunderbird and Gmail.

  • Thunderbird: runs on my local system, it’s very fast, it shows me all the metadata of an email the way I want, the email list is not paged, and I can use it for high-volume actions on email. These happen on my local system, and the IMAP protocol then gradually syncs them to Gmail. I also find that Thunderbird’s email notifications integrate more nicely with Ubuntu.
  • Gmail: can’t be beaten for search. It also groups mail conversations. And then there are labels!

Gmail has several tabs: Primary, Social, Promotions, Updates and Forums. Gmail is usually smart enough to classify most emails into the correct tab. If it isn’t: drag the email to the correct tab, and Gmail will ask you if all future emails from that sender should go to the same tab. This system works well enough for me. My email routine is to first check the tabs Social, Promotions and Forums, and delete or unsubscribe from most emails that end up there. All emails about the #jobhunt go to Updates. I clean up the other emails in that tab (delete, unsubscribe, filter, archive) so that only the #jobhunt emails remain. Those get a label – more about that later. Then I go to the Inbox. Any emails there (there shouldn’t be many) are also taken care of: delete, unsubscribe, filter, archive or reply.


Gmail has 3 Send options: regular Send, Schedule send (which I don’t use) and Send + Archive. The last one is probably my favorite button. When I reply to an email, that is in most cases the final action on that item, so after the email is sent, it’s dealt with, and I don’t need to see it in my Inbox any more. And if there is a reply to the email, the entire conversation will simply go to the Inbox again (unarchived).


I love labels! At the level of an individual email, you can add several labels. The tabs are also labels, so if you add the label Inbox to an archived email, it will be shown in the Inbox again. At the level of the entire mailbox, labels behave a bit like mail folders. You can even have labels within labels, in a directory structure. Contrary to traditional mail clients, where an email can only be in one mail folder, you can add as many labels as you want.
The labels are also shown as folders in an IMAP mail client like Thunderbird. If you move an email from one folder to another, the corresponding label gets updated in Gmail.
The labels that I use in my #jobhunt are work/jobhunt, work/jobhunt/call_back, work/jobhunt/not_interesting, work/jobhunt/not_interesting/freelance, work/jobhunt/not_interesting/abroad, work/jobsites and work/coaching. The emails that end up with the abroad label are source material for my blog post Working Abroad?

The label list on the left looks like a directory structure. It’s actually a mix of labels and traditional folders like Sent, Drafts, Spam, Trash,… Those are always visible at the top. Then there is a neat little trick for labels. If you have a lot of labels, like me, Gmail will hide some of them behind a “More” button. You can influence which labels are always visible by selecting Show if unread on a label. This only applies to top-level labels. When there are no unread emails with that label or any of its sublabels, the label will be hidden below the More button. As soon as there are unread mails with that label or any of its sublabels, the label will be visible. Mark all mails as read, and the label is out of view. Again: less clutter, you only see it when you need it.


Filters, filters, filters. I think I have a gazillion filters (208, actually – I exported them to XML so I could count them). Each time I have more than two emails that have something meaningful in common, I make a filter. Most of my filters have the setting “Skip Inbox”. Those emails remain unread under the label where I put them, and I read them when it’s convenient for me. For example, emails that are automatically labelled takeaway aren’t important and don’t need to be in the Inbox, but when I want to order takeaway, I take a look in that folder to see if there are any promo codes.

Email templates. Write a draft email, click the three dots at the bottom right, and save the draft as a template. Now I can reuse the same text, so that I don’t have to write for the umpteenth time that I don’t do freelance work. I could send an autoreply using templates, but for now I still do it manually.

LinkedIn

I can be short about that: it’s a mess. You can only access LinkedIn messages from the website, and if you have a lot of messages, it behaves like a garbage pile. Some people also treat it as some sort of instant messaging. For me it definitely isn’t. And just like with email: I archive LinkedIn chats as soon as I have replied.

I used to have an autoreply that told people to email me, and gave a link to my CV and my blog. What do you think, should I enable that again?


Reduce unit tests boilerplate with Jest’s .each syntax

When writing unit tests, especially in JavaScript/TypeScript with Jest, you often run into a common problem: repetition.
Imagine testing a function with several input-output pairs. The tests can become bloated and harder to read.
This is where Jest’s .each syntax shines. It lets you write cleaner, data-driven tests with minimal duplication.

The Problem: Repetitive Test Cases

Take a simple sum function:

function sum(a, b) {
  return a + b;
}

Without .each, you might write your tests like this:

test('adds 1 + 2 to equal 3', () => {
  expect(sum(1, 2)).toBe(3);
});

test('adds 2 + 3 to equal 5', () => {
  expect(sum(2, 3)).toBe(5);
});

test('adds -1 + -1 to equal -2', () => {
  expect(sum(-1, -1)).toBe(-2);
});

These tests work, but they are verbose. You repeat the same logic over and over with only the inputs and expected results changing.

The Solution: Jest’s .each Syntax

Jest’s .each allows you to define test cases as data and reuse the same test body.
Here is the same example using .each:

describe('sum', () => {
  test.each([
    [1, 2, 3],
    [2, 3, 5],
    [-1, -1, -2],
  ])('adds %i + %i to equal %i', (a, b, expected) => {
    expect(sum(a, b)).toBe(expected);
  });
});

This single block of code replaces three separate test cases.
Each array in the .each list corresponds to a test run, and Jest automatically substitutes the values.

Bonus: Named Arguments with Tagged Template Literals

You can also use named arguments for clarity:

test.each`
  a    | b    | expected
  ${1} | ${2} | ${3}
  ${2} | ${3} | ${5}
  ${-1} | ${-1} | ${-2}
`('returns $expected when $a + $b', ({ a, b, expected }) => {
  expect(sum(a, b)).toBe(expected);
});

This syntax is more readable, especially when dealing with longer or more descriptive variable names.
It reads like a mini table of test cases.

Why Use .each?

  • Less boilerplate: Define the test once and reuse it.
  • Better readability: Data-driven tests are easier to scan.
  • Easier maintenance: Add or remove cases without duplicating test logic.
  • Fewer mistakes: Repeating the same code invites copy-paste errors.

Use Case: Validating Multiple Inputs

Suppose you are testing a validation function like isEmail. You can define all test cases in one place:

test.each([
  ['user@example.com', true],
  ['not-an-email', false],
  ['hello@world.io', true],
  ['@missing.local', false],
])('validates %s as %s', (input, expected) => {
  expect(isEmail(input)).toBe(expected);
});

This approach scales better than writing individual test blocks for every email address.

Conclusion

Jest’s .each is a powerful way to reduce duplication in your test suite.
It helps you write cleaner, more maintainable, and more expressive tests.
Next time you find yourself writing nearly identical test cases, reach for .each—your future self will thank you.