
đŸ§ Upgrade to Ubuntu 22.04 LTS while keeping 21.10 kernels

When Ubuntu 22.04 LTS (Jammy Jellyfish) was released, I wanted to upgrade my system from Ubuntu 21.10 (Impish Indri). But I had one critical requirement:

Do not replace my 5.13 kernel series!

This was primarily for compatibility reasons with specific drivers and tools I rely on. See also my other post about my ridiculous amount of kernels.

This post documents the steps I took to successfully upgrade the OS while keeping my old kernel intact.


đŸ§č Step 1: Clean Up Old Configuration Files Before the Upgrade

Before starting the upgrade, I removed some APT configuration files that could conflict with the upgrade process:

sudo rm --force \
    /etc/apt/apt.conf.d/01ubuntu \
    /etc/apt/sources.list.d/jammy.list \
    /etc/apt/preferences.d/libssl3

Then I refreshed my package metadata:

sudo apt update

🚀 Step 2: Launch the Release Upgrade

Now it was time for the main event. I initiated the upgrade with:

sudo do-release-upgrade

The release upgrader went through its usual routine — calculating changes, checking dependencies, and showing what would be removed or upgraded.

3 installed packages are no longer supported by Canonical.
22 packages will be removed, 385 new packages installed, and 3005 packages upgraded.
Download: ~5.2 MB
Estimated time: 17 mins @ 40 Mbit/s or over 2 hours @ 5 Mbit/s.

đŸ˜± Step 3: Wait, It Wants to Remove What?!

Among the packages marked for removal:

  • hardlink
  • fuse
  • Many linux-5.13.* kernel packages
  • Tools like grub-customizer and older versions of Python

🔍 Investigating hardlink

I use hardlink regularly, so I double-checked its availability.

No need to worry — it is still available in Ubuntu 22.04!
It moved from its own package to util-linux.
👉 manpages.ubuntu.com (hardlink)
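
You can verify this on a 22.04 system by asking dpkg which package ships the binary; it should report util-linux:

$ dpkg -S $(command -v hardlink)
util-linux: /usr/bin/hardlink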

So no problem there.

✅ Saving fuse

I aborted the upgrade and installed fuse explicitly, so that APT would mark it as manually installed:

sudo apt install fuse
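
To double-check that APT now treats it as manually installed (and therefore won't autoremove it), you can list the manual packages and filter:

$ apt-mark showmanual | grep fuse
fuse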

Then I restarted the upgrade.


🛠 Step 4: Keep the 5.13 Kernel

To keep using my current kernel version, I re-added the Impish repo after the upgrade but before rebooting.

awk '($1$3$4=="debjammymain"){$3="impish" ;print}' /etc/apt/sources.list \
    | sudo tee /etc/apt/sources.list.d/impish.list
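
The resulting impish.list should mirror the main jammy line of sources.list, with only the release name changed, something like this (your mirror URL and components may differ):

$ cat /etc/apt/sources.list.d/impish.list
deb http://archive.ubuntu.com/ubuntu impish main restricted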

Then I updated the package lists and reinstalled the kernel packages I wanted to keep:

sudo apt update
sudo apt install linux-{image,headers,modules,modules-extra,tools}-$(uname -r)

This ensured the 5.13 kernel and related packages would not be removed.


📌 Step 5: Unhold Held Packages

I checked which packages were held:

sudo apt-mark showhold

Many of them were 5.13.0-22 packages. I canceled the hold status:

sudo apt-mark unhold *-5.13.0-22-generic
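
Keep in mind that the * is expanded by the shell, not by apt-mark, so this only works as intended if no file in the current directory happens to match. A more defensive variant feeds the held package names straight back in:

sudo apt-mark unhold $(apt-mark showhold | grep -- '-5.13.0-22-generic')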

⚙ Step 6: Keep GRUB on Your Favorite Kernel

To stop GRUB from switching to a newer kernel automatically and keep booting the same kernel version, I updated my GRUB configuration:

sudo nano /etc/default/grub

I set:

GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true

Then I made sure GRUB’s main kernel script /etc/grub.d/10_linux was executable:

sudo chmod +x /etc/grub.d/10_linux
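
After editing /etc/default/grub, regenerate the configuration. With GRUB_DEFAULT=saved you can also pin the saved entry explicitly; the menu entry title below is just an example, copy the exact one from your /boot/grub/grub.cfg:

sudo update-grub
sudo grub-set-default 'Advanced options for Ubuntu>Ubuntu, with Linux 5.13.0-22-generic'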

đŸ§œ Step 7: Clean Up Other Kernels

Once I was confident everything worked, I purged other kernel versions:

sudo apt purge *-5.13.*
sudo apt purge *-5.14.*
sudo apt purge *-5.16.*
sudo apt purge *-5.17.*
sudo apt purge linux-*-5.15.*-0515*-generic
sudo rm -rf /lib/modules/5.13.*
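
If you want to preview what a glob will match before pulling the trigger, apt can simulate a purge:

sudo apt purge --simulate *-5.14.*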

✅ Final Thoughts

This upgrade process allowed me to:

  • Enjoy the new features and LTS support of Ubuntu 22.04
  • Continue using the 5.13 kernel that works best with my hardware

If you need to preserve specific kernel versions or drivers, this strategy may help you too!


Have you tried upgrading while keeping your older kernel? Share your experience or ask questions in the comments!


How big is a clean install of Ubuntu Jammy Jellyfish (22.04)?

Because curiosity killed the cat, not because it’s useful! 😀

Start with a clean install in a virtual machine

I start with a simple Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end

This Ansible playbook updates all packages to the latest version and removes unused packages.

- name: Update all packages to the latest version
  hosts: all
  remote_user: ubuntu
  become: yes

  tasks:

  - name: Update apt cache
    apt:
      update_cache: yes
      cache_valid_time: 3600
      force_apt_get: yes

  - name: Upgrade all apt packages
    apt:
      force_apt_get: yes
      upgrade: dist

  - name: Check if a reboot is needed for Ubuntu boxes
    register: reboot_required_file
    stat: path=/var/run/reboot-required get_md5=no

  - name: Reboot the Ubuntu box
    reboot:
      msg: "Reboot initiated by Ansible due to kernel updates"
      connect_timeout: 5
      reboot_timeout: 300
      pre_reboot_delay: 0
      post_reboot_delay: 30
      test_command: uptime
    when: reboot_required_file.stat.exists

  - name: Remove unused packages
    apt:
      autoremove: yes
      purge: yes
      force_apt_get: yes

Then bring up the virtual machine with vagrant up --provision.

Get the installation size

I ssh into the box (vagrant ssh) and run a couple of commands to get some numbers.

Number of installed packages:

$ dpkg-query --show | wc --lines
592

Size of the installed packages:

$ dpkg-query --show --showformat '${Installed-size}\n' | awk '{s+=$1*1024} END {print s}' | numfmt --to=iec-i --format='%.2fB'
1.14GiB

I need to multiply the package size by 1024 because dpkg-query reports sizes in kilobytes.

Total size:

$ sudo du --summarize --human-readable --one-file-system /
1.9G	/

Get the installation size using Ansible

Of course, I can also add this to my Ansible playbook, and then I don’t have to ssh into the virtual machine.

  - name: Get the number of installed packages
    shell: dpkg-query --show | wc --lines
    register: package_count
    changed_when: false
    failed_when: false
  - debug: msg="{{ package_count.stdout }}"

  - name: Get the size of installed packages
    shell: >
      dpkg-query --show --showformat '${Installed-size}\n' 
      | awk '{s+=$1*1024} END {print s}' 
      | numfmt --to=iec-i --format='%.2fB'
    register: package_size
    changed_when: false
    failed_when: false
  - debug: msg="{{ package_size.stdout }}"

  - name: Get the disk size with du
    shell: >
      du --summarize --one-file-system /
      | numfmt --to=iec-i --format='%.2fB'
    register: du_used
    changed_when: false
    failed_when: false
  - debug: msg="{{ du_used.stdout }}"

The output is then:

TASK [Get the number of installed packages] ************************************
ok: [default]

TASK [debug] *******************************************************************
ok: [default] => {
    "msg": "592"
}

TASK [Get the size of installed packages] **************************************
ok: [default]

TASK [debug] *******************************************************************
ok: [default] => {
    "msg": "1.14GiB"
}

TASK [Get the disk size with du] ***********************************************
ok: [default]

TASK [debug] *******************************************************************
ok: [default] => {
    "msg": "1.82MiB /"
}

Note that the last du task is off by a factor of 1024: du reports its total in KiB, while numfmt assumes bytes, which is why it prints 1.82MiB instead of roughly the 1.9G reported earlier. Multiplying by 1024 first, as in the dpkg-query task, would fix that.

Gitmojis are not just cute emojis

When you first encounter Gitmoji, it might feel like a whimsical idea — adding emojis to your Git commit messages? Surely that is just a fun way to decorate your history, right?

Well
 yes. But also, no. Gitmojis are much more than just cute little icons. They are a powerful convention that improves collaboration, commit clarity, and even automation in your development workflow. In this post, we will explore how Gitmojis can boost your Git hygiene, help your team, and make your commits more expressive — without writing a novel in every message.


What is Gitmoji?

Gitmoji is a project by Carlos Cuesta that introduces a standardized set of emojis to prefix your Git commit messages. Each emoji represents a common type of change. For example:

Emoji   Code          Description
✹      :sparkles:    New feature
🐛      :bug:         Bug fix
📝      :memo:        Documentation change
♻      :recycle:     Code refactor
🚀      :rocket:      Performance upgrade

Why Use Gitmoji?

1. Readable History at a Glance

Reading a log full of generic messages like fix stuff, more changes, or final update is painful. Gitmojis help you scan through history and immediately understand what types of changes were made. Think of it as color-coding your past.

đŸ§± Example — Traditional Git log:

git log --oneline
b11d9b3 Fix things
a31cbf1 Final touches
7c991e8 Update again

🔎 Example — Gitmoji-enhanced log:

🐛 Fix overflow issue on mobile nav
✹ Add user onboarding wizard
📝 Update README with environment setup
đŸ”„ Remove unused CSS classes

2. Consistency Without Bureaucracy

Git commit conventions like Conventional Commits are excellent for automation but can be intimidating and verbose. Gitmoji offers a simpler, friendlier alternative — a consistent prefix without strict formatting.

You still write meaningful commit messages, but now with context that is easy to scan.


3. Tooling Support with gitmoji-cli

Gitmoji CLI is a command-line tool that makes committing with emojis seamless.

🛠 Installation:

npm install -g gitmoji-cli

đŸ§Ș Usage:

gitmoji -c

You will be greeted with an interactive prompt:

✔ Gitmojis fetched successfully, these are the new emojis:
? Choose a gitmoji: (Use arrow keys or type to search)
❯ 🎹  - Improve structure / format of the code. 
  âšĄïž  - Improve performance. 
  đŸ”„  - Remove code or files. 
  🐛  - Fix a bug. 
  đŸš‘ïž  - Critical hotfix. 
  ✹  - Introduce new features. 
  📝  - Add or update documentation. 
(Move up and down to reveal more choices)

The CLI also supports conventional formatting and custom scopes. Want to tweak your settings?

gitmoji --config

You can also use it in CI/CD pipelines or with Git hooks to enforce Gitmoji usage across teams.
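
As a sketch of the Git-hook idea (assuming your team commits with the :shortcode: format, which gitmoji-cli can be configured to emit), a minimal commit-msg hook could look like this:

#!/bin/sh
# .git/hooks/commit-msg: reject commits whose subject doesn't start
# with a gitmoji shortcode such as :sparkles: or :bug:.
head -n1 "$1" | grep -Eq '^:[a-z0-9_+-]+:' || {
    echo "Commit message must start with a gitmoji shortcode." >&2
    exit 1
}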


4. Better Collaboration and Code Review

Your teammates will thank you when your commits say more than “fix” or “update”. Gitmojis provide context and clarity — especially during code review or when you are scanning a pull request with dozens of commits.

🧠 Before:

fix
update styles
final commit

✅ After:

🐛 Fix background image issue on Safari
💄 Adjust padding for login form
✅ Add final e2e test for login flow

This is how a pull request with Gitmoji commits looks on GitHub: the emojis show up right in the commit list.


5. Automation Ready

Need to generate changelogs or trigger actions based on commit types? Gitmoji messages are easy to parse, making them automation-friendly.

Example with a simple script:

git log --pretty=format:%s | grep "^✹"

You can even integrate this into release workflows with tools like semantic-release or your own custom tooling.
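
A rough sketch that goes one step further and counts commits per leading emoji (it assumes the emoji is the first whitespace-delimited token of the subject line):

git log --pretty=format:%s | awk '{count[$1]++} END {for (e in count) print count[e], e}' | sort -rn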


Do Not Let the Cute Icons Fool You

Yes, emojis are fun. But behind the smiling faces and sparkles is a thoughtful system that improves your Git workflow. Whether you are working solo or as part of a team, Gitmoji brings:

  • ✅ More readable commit history
  • ✅ Lightweight commit standards
  • ✅ Easy automation hooks
  • ✅ A dash of joy to your development day

So next time you commit, try it:

gitmoji -c

Because Gitmojis are not just cute.
They are practical, powerful — and yes, still pretty adorable.


🚀 Get Started

Check out the full emoji reference at https://gitmoji.dev and the CLI at https://github.com/carloscuesta/gitmoji-cli.

🎉 Happy committing!


Suspending cloud backup of a NAS that cannot be reached

I use CrashPlan for cloud backups. In 2018 they stopped their Home solution, so I switched to their Business plan.

It works very well on Linux, Windows and Mac, but it was always a bit fickle on my QNAP NAS. There is a qpkg package for CrashPlan, and there are lots of posts on the QNAP support forum. After 2018, none of the solutions to run a backup on the NAS itself worked anymore. So I gave up, and I didn't have a backup for almost 4 years.

Now that I have mounted most of the network shares on my local filesystem, I can just run the backup on my PC. I made 3 different backup sets, one for each of the shares. There's only one thing I had to fix: if CrashPlan runs while the shares aren't mounted, it thinks the directories are empty, and it deletes the backup from the cloud storage. As soon as the shares come back online, the files are backed up again. It doesn't have to upload all the files again, because CrashPlan doesn't purge the files in its cloud immediately, but the file verification still happens, and that takes time and bandwidth.

I contacted CrashPlan support about this issue, and this was their reply:

I do not believe that this scenario can be avoided with this product – at least not in conjunction with your desired setup. If a location within CrashPlan’s file selection is detached from the host machine, then the program will need to rescan the selection. This is in inherent drawback to including network drives within your file selection. Your drives need to retain a stable connection in order to avoid the necessity of the software to run a new scan when it sees the drives attached to the device (so long as they’re within the file selection) detach and reattach.

Since the drive detaching will send a hardware event from the OS to CrashPlan, CrashPlan will see that that hardware event lies within its file selection – due to the fact that you mapped your network drives into a location which you’ve configured CrashPlan to watch. A hardware event pointing out that a drive within the /home/amedee/Multimedia/ file path has changed its connection status will trigger a scan. CrashPlan will not shut down upon receiving a drive detachment or attachment hardware event. The program needs to know what (if anything) is still there, and is designed firmly to track those types of changes, not to give up and stop monitoring the locations within its file selection.

There’s no way around this, aside from ensuring that you either keep a stable connection. This is an unavoidable negative consequence of mapping a network drive to a location which you’ve included in CrashPlan’s file selection. The only solution would be for you to engineer your network so as not to interrupt the connection.

Nathaniel, Technical Support Agent, Code42

I thought as much already. No problem, Nathaniel! I found a workaround: a shell script that checks if a certain marker file on the network share exists, and if it doesn’t, then the script stops the CrashPlan service, which will prevent CrashPlan from scanning the file selection. As soon as the file becomes available again, then the CrashPlan service is started. This workaround works, and is good enough for me. It may not be the cleanest solution but I’m happy with it.

I first considered using inotifywait, which listens for filesystem events such as file modification, deletion, or unmounting. However, when the network connection simply drops, inotifywait never receives an event, so I had to fall back to checking whether a file exists.

#!/bin/bash
# Stop the CrashPlan service while any of the marker files is unreachable,
# and start it again as soon as all of them are back.
file_list="/home/amedee/bin/file_list.txt"

# Succeed only if every file listed in $file_list exists.
all_files_exist () {
    while read -r line; do
        if [ ! -f "$line" ]; then
            echo "$line not found!"
            return 1
        fi
    done < "$file_list"
}

start_crashplan () {
    /etc/init.d/code42 start
}

stop_crashplan () {
    /etc/init.d/code42 stop
}

# Check once a minute.
while true; do
    if all_files_exist; then
        start_crashplan
    else
        stop_crashplan
    fi
    sleep 60
done
  • file_list.txt contains a list of testfiles on different shares that I want to check. They all have to be present, if even only one of them is missing or can’t be reached, then the service must be stopped.
/home/amedee/Downloads/.testfile
/home/amedee/Multimedia/.testfile
/home/amedee/backup/.testfile
  • I can add or remove shares without needing to modify the script, I only need to edit file_list.txt – even while the script is still running.
  • Starting (or stopping) the service if it is already started (or stopped) is very much ok. The actual startup script itself takes care of checking if it has already started (or stopped).
  • This script needs to be run at startup as root, so I call it from cron (sudo crontab -u root -e):
@reboot /home/amedee/bin/test_cifs_shares.sh

This is what CrashPlan support replied when I told them about my workaround:

Hello Amedee,

That is excellent to hear that you have devised a solution which fits your needs!

This might not come in time to help smooth out your experience with your particular setup, but I can mark this ticket with a feature request tag. These tags help give a resource to our Product team to gauge customer interest in various features or improvements. While there is no way to use features within the program itself to properly address the scenario in which you unfortunately find yourself, as an avenue for adjustments to how the software currently operates in regards to the attachment or detachment of network drives, it’s an entirely valid request for changes in the future.

Nathaniel, Technical Support Agent, Code42

That’s very nice of you, Nathaniel! Thank you very much!


Mounting NAS shares without slow startup

I have a NAS, a QNAP TS-419P II. It's about a decade old and it has always served me well. For various reasons I never used it efficiently: it was always treated like a huge external drive, not really integrated with the rest of my filesystems.

The NAS has a couple of CIFS shares with very obvious names:

  • backup
  • Download
  • Multimedia, with directories Music, Photos and Videos

(There are a few more shares, but they aren’t relevant now.)

In Ubuntu, a user home directory has these default directories:

  • Downloads
  • Music
  • Pictures
  • Videos

I want to store the files in these directories on my NAS.

Mounting shares, the obvious way

First I moved all existing files from ~/Downloads, ~/Music, ~/Pictures, ~/Videos to the corresponding directories on the NAS, to get empty directories. Then I made a few changes to the directories:

$ mkdir backup
$ mkdir Multimedia
$ rmdir Music
$ ln -s Multimedia/Music Music
$ rmdir Pictures
$ ln -s Multimedia/Photos Pictures
$ rmdir Videos
$ ln -s Multimedia/Videos Videos

The symbolic links now point to directories that don’t (yet) exist, so they appear broken – for now.

The next step is to mount the network shares to their corresponding directories.

The hostname of my NAS is minerva, after the Roman goddess of wisdom. To avoid using IP addresses, I added its IP address to /etc/hosts:

127.0.0.1	localhost
192.168.1.1     modem
192.168.1.63	minerva

The shares are password protected, and I don't want to type the password every time I use them. So the credentials go into a file /home/amedee/.smb:

username=amedee
password=NOT_GOING_TO_TELL_YOU_:-p

Even though I am the only user of this computer, it's best practice to protect that file, so I do:

$ chmod 400 /home/amedee/.smb

Then I added these entries to /etc/fstab:

//minerva/download	/home/amedee/Downloads	cifs	uid=1000,gid=1000,credentials=/home/amedee/.smb,iocharset=utf8 0 0
//minerva/backup	/home/amedee/backup	cifs	uid=0,gid=1000,credentials=/home/amedee/.smb,iocharset=utf8 0 0
//minerva/multimedia	/home/amedee/Multimedia	cifs	uid=0,gid=1000,credentials=/home/amedee/.smb,iocharset=utf8 0 0
  • CIFS shares don’t have a concept of user per file, so the entire share is shown as owned by the same user. uid=1000 and gid=1000 are the user ID and group ID of the user amedee, so that all files appear to be owned by me when I do ls -l.
  • The credentials option points to the file with the username and password.
  • The default character encoding for mounts is iso8859-1, for legacy reasons. I may have files with funky characters, so iocharset=utf8 takes care of that.

Then I did sudo mount -a and yay, the files on the NAS appear as if they were on the local hard disk!

Fixing a slow startup

This all worked very well, until I did a reboot. It took a really, really long time to get to the login screen. I did lots of troubleshooting, which was really boring, so I’ll skip to the conclusion: the network mounts were slowing things down, and if I manually mount them after login, then there’s no problem.

It turns out that systemd provides a way to automount filesystems on demand. So they are only mounted when the operating system tries to access them. That sounds exactly like what I need.

To achieve this, I only needed to add noauto,x-systemd.automount to the mount options. I also added x-systemd.device-timeout=10, which means that systemd waits for 10 seconds, and then gives up if it’s unable to mount the share.
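
The Multimedia line from my fstab then becomes, for example:

//minerva/multimedia	/home/amedee/Multimedia	cifs	uid=0,gid=1000,credentials=/home/amedee/.smb,iocharset=utf8,noauto,x-systemd.automount,x-systemd.device-timeout=10 0 0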

From now on I’ll never not use noauto,x-systemd.automount for network shares!

While researching this, I found some documentation that claims you don’t need noauto if you have x-systemd.automount in your mount options. Yours truly has tried it with and without noauto, and I can confirm, from first hand experience, that you definitely need noauto. Without it, there is still the long waiting time at login.


Jag lĂ€r mig svenska 🇾đŸ‡Ș

Jag brukade skriva pÄ den hÀr bloggen pÄ nederlÀndska. Nu Àr det mest pÄ engelska, men undantagsvis Àr det hÀr blogginlÀgget pÄ svenska.

I september 2020 började jag lÀra mig svenska pÄ kvÀllsskolan i Aalst. Varför? Det finns flera anledningar:

  • Jag spelar nyckelharpa, ett typiskt svenskt musikinstrument. Jag gĂ„r pĂ„ kurser hemma och utomlands, ofta frĂ„n svenska lĂ€rare. Det var sĂ„ jag lĂ€rde kĂ€nna mĂ€nniskor i Sverige och dĂ„ Ă€r det bra att prata lite svenska för att hĂ„lla kontakten online.
  • NĂ€r man slĂ„r upp nĂ„got pĂ„ nĂ€tet om nyckelharpa Ă€r det ofta pĂ„ svenska. Jag har ocksĂ„ en underbar bok “Nyckelharpan – Ett unikt svenskt kulturarv” av Esbjörn Hogmark och jag vill kunna lĂ€sa den och inte bara titta pĂ„ bilderna.
  • Jag tycker att Sverige Ă€r ett vackert land som jag kanske vill besöka nĂ„gon gĂ„ng. Norge ocksĂ„, och dĂ€r talar man en mĂ€rklig dialekt av svenska. 😛
  • Jag vill gĂ„ en kurs pĂ„ Eric Sahlström Institutet i Tobo nĂ„gon gĂ„ng. DĂ„ skulle det vara bra att förstĂ„ lĂ€rarna pĂ„ deras eget sprĂ„k.
  • Jag gillar sprĂ„k och sprĂ„kinlĂ€rning! Det hĂ„ller min hjĂ€rna frĂ€sch och frisk. 😀

And if you didn’t understand anything: there’s always Google Translate!


The hunt for a kernel bug, part 5

Armed with the information from my previous research on a possible kernel bug, I opened a bug report on the Ubuntu bug tracker: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1963555.

It wasn’t long until my bug got confirmed. Someone else chimed in that they had also experienced USB issues; in their case it was external disk drives. Definitely a showstopper!

As of this date, there is a beta for Ubuntu 22.04, and my hope is that this version will either include a new enough kernel (5.16 or up), or that Ubuntu developers have manually cherry-picked the commit that fixes the issue. Let’s check with the Ubuntu Kernel Team:


Oops… based on upstream 5.15… that’s not good. Maybe they cherry-picked upstream commits? I checked https://packages.ubuntu.com/jammy/linux-generic and the kernel is currently at 5.15.0.25.27. The changelog doesn’t mention anything about xhci or usb. I guess I still have to wait a bit longer…
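
For reference, this is the kind of one-liner I use to search a package changelog without leaving the terminal:

apt changelog linux-generic | grep --ignore-case --extended-regexp 'xhci|usb'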


I have a ridiculous amount of kernels

In previous blogposts I wrote about how I found a possible bug in the Linux kernel, or more precisely, in the kernel that Ubuntu derived from the mainline kernel.

To be able to install any kernel version 5.15.7 or higher, I also had to install libssl3.

The result is that I now have 37 kernels installed, taking up a little over 2 GiB of disk space:

$ (cd /boot ; ls -hgo initrd.img-* ; ls /boot/initrd.img-* | wc -l)
-rw-r--r-- 1 39M mrt  9 09:54 initrd.img-5.13.0-051300-generic
-rw-r--r-- 1 40M mrt  9 09:58 initrd.img-5.13.0-19-generic
-rw-r--r-- 1 40M mrt  9 09:58 initrd.img-5.13.0-20-generic
-rw-r--r-- 1 40M mrt  9 09:57 initrd.img-5.13.0-21-generic
-rw-r--r-- 1 44M mrt 30 17:46 initrd.img-5.13.0-22-generic
-rw-r--r-- 1 40M mrt  9 09:56 initrd.img-5.13.0-23-generic
-rw-r--r-- 1 40M mrt  9 09:56 initrd.img-5.13.0-25-generic
-rw-r--r-- 1 40M mrt  9 09:56 initrd.img-5.13.0-27-generic
-rw-r--r-- 1 40M mrt  9 09:55 initrd.img-5.13.0-28-generic
-rw-r--r-- 1 40M mrt  9 09:55 initrd.img-5.13.0-30-generic
-rw-r--r-- 1 45M mrt  9 12:02 initrd.img-5.13.0-35-generic
-rw-r--r-- 1 45M mrt 24 23:17 initrd.img-5.13.0-37-generic
-rw-r--r-- 1 45M mrt 30 17:49 initrd.img-5.13.0-39-generic
-rw-r--r-- 1 39M mrt  9 09:54 initrd.img-5.13.1-051301-generic
-rw-r--r-- 1 39M mrt  9 09:54 initrd.img-5.13.19-051319-generic
-rw-r--r-- 1 37M mrt  9 09:53 initrd.img-5.13.19-ubuntu-5.13.0-22.22
-rw-r--r-- 1 37M mrt  9 09:53 initrd.img-5.13.19-ubuntu-5.13.0-22.22-0-g3ab15e228151
-rw-r--r-- 1 37M mrt  9 09:52 initrd.img-5.13.19-ubuntu-5.13.0-22.22-317-g398351230dab
-rw-r--r-- 1 37M mrt  9 09:52 initrd.img-5.13.19-ubuntu-5.13.0-22.22-356-g8ac4e2604dae
-rw-r--r-- 1 37M mrt  9 09:52 initrd.img-5.13.19-ubuntu-5.13.0-22.22-376-gfab6fb5e61e1
-rw-r--r-- 1 37M mrt  9 09:51 initrd.img-5.13.19-ubuntu-5.13.0-22.22-386-gce5ff9b36bc3
-rw-r--r-- 1 37M mrt  9 09:51 initrd.img-5.13.19-ubuntu-5.13.0-22.22-387-g0fc979747dec
-rw-r--r-- 1 37M mrt  9 09:50 initrd.img-5.13.19-ubuntu-5.13.0-22.22-388-g20210d51e24a
-rw-r--r-- 1 37M mrt  9 09:50 initrd.img-5.13.19-ubuntu-5.13.0-22.22-388-gab2802ea6621
-rw-r--r-- 1 37M mrt  9 09:50 initrd.img-5.13.19-ubuntu-5.13.0-22.22-391-ge24e59fa409c
-rw-r--r-- 1 37M mrt  9 09:49 initrd.img-5.13.19-ubuntu-5.13.0-22.22-396-gc3d35f3acc3a
-rw-r--r-- 1 37M mrt  9 09:49 initrd.img-5.13.19-ubuntu-5.13.0-22.22-475-g79b62d0bba89
-rw-r--r-- 1 37M mrt  9 09:48 initrd.img-5.13.19-ubuntu-5.13.0-23.23
-rw-r--r-- 1 40M mrt  9 09:48 initrd.img-5.14.0-051400-generic
-rw-r--r-- 1 40M mrt  9 10:31 initrd.img-5.14.21-051421-generic
-rw-r--r-- 1 44M mrt  9 12:39 initrd.img-5.15.0-051500-generic
-rw-r--r-- 1 46M mrt  9 12:16 initrd.img-5.15.0-22-generic
-rw-r--r-- 1 46M mrt 28 23:27 initrd.img-5.15.32-051532-generic
-rw-r--r-- 1 46M mrt 17 21:12 initrd.img-5.16.0-051600-generic
-rw-r--r-- 1 48M mrt 28 23:19 initrd.img-5.16.16-051616-generic
-rw-r--r-- 1 45M mrt 28 23:11 initrd.img-5.17.0-051700-generic
-rw-r--r-- 1 46M apr  8 17:02 initrd.img-5.17.2-051702-generic
37
  • Versions 5.xx.yy-zz-generic are installed with apt.
  • Versions 5.xx.yy-05xxyy-generic are installed with the Ubuntu Mainline Kernel Installer.
  • Versions 5.xx.yy-ubuntu-5.13.0-zz.zz-nnn-g<commithash> are compiled from source, where <commithash> is the commit of the kernel repository that I compiled.

These are the kernels where something unexpected happens with my USB devices:

  • Ubuntu kernels 5.13.23 and up – including 5.15 kernels of Ubuntu 22.04 LTS (Jammy Jellyfish).
  • Ubuntu compiled kernels, starting 387 commits after kernel 5.13.22.
  • Mainline kernels 5.15.xx.

When Ubuntu finally bases their kernel on mainline 5.16 or higher, then the USB bug will be solved.


This may be a controversial opinion…

… but you don’t need --- at the start of a YAML file in Ansible.

What does the Ansible documentation say?

I know, I know, if you look at the official documentation on docs.ansible.com, then all of the examples start with ---. And if the official examples do it, then everyone should just blindly copy that without thinking, right?

Wrong! The Ansible documentation on YAML syntax says:

There’s another small quirk to YAML. All YAML files (regardless of their association with Ansible or not) can optionally begin with --- and end with .... This is part of the YAML format and indicates the start and end of a document.

© Copyright Ansible project contributors.

I’ve added the emphasis: optionally. They then continue with one example with --- at the start and ... at the end. The funny thing is, that’s about the only example on the Ansible documentation site (that I could find) that ends with .... So the end marker ... is clearly optional. What about the start marker ---?

What does the YAML specification say?

Ansible uses version 1.2 of the YAML specification and unless you are doing something really exotic, that’s the only version you should care about. Revision 1.2.0 was published in July 2009 and revision 1.2.2 in October 2021. That last revision doesn’t make any changes to the specification, it only corrects some errors and adds clarity.

Chapter 9 of the YAML spec introduces two concepts: documents and streams.

A stream can contain zero or more documents. It’s called a (character) stream because it can be something else than a file on your hard disk, for example some data that’s sent over a network connection. So your Ansible playbook file with extension .yml or .yaml is not a YAML document, it’s a YAML stream.

A document can have several parts:

  • Document prefix: optional character encoding and optional comment lines.
    Seriously, it’s 2022, are you going to make life hard for yourself and use any other encoding than ASCII or UTF-8? The default encoding that every YAML processor, including Ansible, must support is UTF-8. So You Ain’t Gonna Need It.
    Comments can be placed anywhere, so don’t worry.
  • Document directives: these are instructions to the YAML processor and aren’t part of the data structure. The only directive I’ve occasionally seen in the wild is %YAML 1.2, to indicate the version of YAML used. That’s the default version for Ansible anyway, so You Ain’t Gonna Need It.
  • Document markers: a parser needs some way to know where directives stop and document content begins. That’s the directives end marker, ---. There is also a document end marker, ..., which tells a parser to stop looking for content and start scanning for directives again. If there are no markers and the first line doesn’t start with % (a directive), then a parser knows that everything is content. In real life you probably won’t ever have multiple documents in the same stream (file), instead you’ll organize your Ansible code in separate .yaml files, with playbooks and roles and tasks etc.
  • Document content: that’s the only really interesting stuff you care about.

YAML knows 3 types of documents:

  • Bare documents: don’t begin with directives or marker lines. Such documents are very “clean” as they contain nothing other than the content. This is the kind of YAML documents I prefer for Ansible.
  • Explicit documents: begin with an explicit directives end maker (---) but have no directives. This is the style that many people use if they just copy/paste examples from Stack Overflow.
  • Directives documents: start with some directives, followed by an explicit directives end marker. You don’t need directives for Ansible.

Configuring yamllint

I use ansible-lint and yamllint in a pre-commit hook to check the syntax of my Ansible files. This is currently my .yamllint.yml:

rules:
  document-start:
    present: false
  truthy:
    allowed-values: ['true', 'false', 'yes', 'no']

document-start makes sure that there is no --- at the start of a file. I also have opinions on truthy: an Ansible playbook is supposed to be readable both by machines and humans, and then it makes sense to allow the more human-readable values yes and no.
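
For completeness, the pre-commit wiring for both linters looks roughly like this (the rev values are placeholders; pin whatever versions you actually use):

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/adrienverge/yamllint
    rev: v1.26.3  # placeholder version
    hooks:
      - id: yamllint
  - repo: https://github.com/ansible/ansible-lint
    rev: v6.0.2  # placeholder version
    hooks:
      - id: ansible-lint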

Do you also have opinions that make you change the default configuration of your linters?


Install libssl3 on Ubuntu versions before Jammy

Ubuntu mainline kernel packages 5.15.7 and later bump a dependency from libssl1.1 (>= 1.1.0) to libssl3 (>= 3.0.0~~alpha1).

However, package libssl3 is not available for Ubuntu 21.10 Impish Indri. It's only available for Ubuntu 22.04 Jammy Jellyfish (still in beta at the time of writing) and later.

libssl3 further depends on libc6>=2.34 and debconf, but they are available in 21.10 repositories.

Here are a few different ways to resolve the dependency:

Option 1

Use apt pinning to install libssl3 from a Jammy repo, without pulling in everything else from Jammy.

This is more complicated, but it allows the libssl3 package to receive updates automatically.
Do all the following as root.

  • Create an apt config file to specify your system’s current release as the default release for installing packages, instead of simply the highest version number found. We are about to add a Jammy repo to apt, which will contain a lot of packages with higher version numbers, and we want apt to ignore them all.
$ echo 'APT::Default-Release "impish";' \
    | sudo tee /etc/apt/apt.conf.d/01ubuntu
  • Add the Jammy repository to the apt sources. If your system isn’t “impish”, change that below.
$ awk '($1$3$4=="debimpishmain"){$3="jammy" ;print}' /etc/apt/sources.list \
    | sudo tee /etc/apt/sources.list.d/jammy.list
  • Pin libssl3 to the jammy version in apt preferences. This overrides the Default-Release above, just for the libssl3 package.
$ sudo tee /etc/apt/preferences.d/libssl3 >/dev/null <<%%EOF
Package: libssl3
Pin: release n=jammy
Pin-Priority: 900
%%EOF
  • Install libssl3:
$ sudo apt update
$ sudo apt install libssl3
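
You can check at any point that the pin is being honored with apt-cache policy (version numbers here are illustrative):

$ apt-cache policy libssl3
libssl3:
  Installed: 3.0.2-0ubuntu1
  Candidate: 3.0.2-0ubuntu1
  Version table:
 *** 3.0.2-0ubuntu1 900
        900 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages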

Later, when Jammy is officially released, delete all 3 files created above:

$ sudo rm --force \
    /etc/apt/apt.conf.d/01ubuntu \
    /etc/apt/sources.list.d/jammy.list \
    /etc/apt/preferences.d/libssl3

Option 2

Download the libssl3 deb package for Jammy and install it manually with dpkg -i filename.deb.

This only works if there aren’t any additional dependencies, which you would also have to install, with a risk of breaking your system. Here Be Dragons…
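
A sketch of that manual route (the exact filename changes with every OpenSSL update, so take the current one from https://packages.ubuntu.com/jammy/libssl3):

wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl3_3.0.2-0ubuntu1_amd64.deb
sudo dpkg --install libssl3_3.0.2-0ubuntu1_amd64.deb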