Resurrecting My Windows Partition After 4 Years 🖥️🎮

Sometimes Linux life is bliss. I have my terminal, my editor, my tools, and Steam games that run natively. For nearly four years, I didn’t touch Windows once — and I didn’t miss it.

And then Fortnite happened.

My girlfriend Enya and her wife Kyra got hooked, and naturally I wanted to join them. But Fortnite refuses to run on Linux — some copy-protection magic that digs into the Windows kernel. Games that are Windows-only are rare these days, but this one was enough to shatter my Linux-only bubble. Suddenly, resurrecting Windows wasn’t a chore anymore; it was a quest for polyamorous Battle Royale glory. 🕹️

My Windows 11 partition had been hibernating since November 2021, quietly gathering dust and updates in a forgotten corner of the disk. Why did it stop working back then? I honestly don’t remember, but apparently I had blogged about it. I hadn’t cared — until now.


The Awakening – Peeking Into the UEFI Abyss 🐧

I started my journey with my usual tools: efibootmgr and update-grub on Ubuntu. I wanted to see what the firmware thought was bootable:

sudo efibootmgr

Output:

BootCurrent: 0001
Timeout: 1 seconds
BootOrder: 0001,0000
Boot0000* Windows Boot Manager ...
Boot0001* Ubuntu ...

At first glance, everything seemed fine. Ubuntu booted as usual. Windows… did not. It didn’t even show up in the GRUB boot menu. A little disappointing—but not unexpected, given that it hadn’t been touched in years. 😬

I knew the firmware knew about Windows—but the OS itself refused to wake up.


The Hidden Enemy – Why os-prober Was Disabled ⚙️

I soon learned that recent Ubuntu versions disable os-prober by default. This is partly to speed up boot and partly to avoid probing unknown partitions automatically, which could theoretically be a security risk.

I re-enabled it in /etc/default/grub:

GRUB_DISABLE_OS_PROBER=false

Then ran:

sudo update-grub

Even after this tweak, Windows still didn’t appear in the GRUB menu.
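
Before fiddling any further, it can help to run os-prober by hand and see whether it detects the Windows boot loader at all:

# os-prober prints one line per operating system it finds; no output means it found nothing
sudo os-prober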


The Manual Attempt – GRUB to the Rescue ✍️

Determined, I added a manual GRUB entry in /etc/grub.d/40_custom:

menuentry "Windows" {
    insmod part_gpt
    insmod fat
    insmod chain
    search --no-floppy --fs-uuid --set=root 99C1-B96E
    chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}

How I found the EFI partition UUID:

sudo blkid | grep EFI

Result: UUID="99C1-B96E"

Ran sudo update-grub… Windows showed up in GRUB! But selecting it? Nothing.

At this stage, Windows still wouldn’t boot. The ghost remained untouchable.


The Missing File – Hunt for bootmgfw.efi 🗂️

The culprit? bootmgfw.efi itself was gone. My chainloader had nothing to point to.

I mounted the NTFS Windows partition (at /home/amedee/windows) and searched for the missing EFI file:

sudo find /home/amedee/windows/ -type f -name "bootmgfw.efi"
/home/amedee/windows/Windows/Boot/EFI/bootmgfw.efi

The EFI file was hidden away, but thankfully intact. I copied it into the proper EFI directory:

sudo cp /home/amedee/windows/Windows/Boot/EFI/bootmgfw.efi /boot/efi/EFI/Microsoft/Boot/

After a final sudo update-grub, Windows appeared automatically in the GRUB menu. Finally, selecting the entry actually booted Windows. Victory! 🥳


Four Years of Sleeping Giants 🕰️

Booting Windows after four years was like opening a time capsule. I was greeted with thousands of updates, drivers, software installations, and of course, the installation of Fortnite itself. It took hours, but it was worth it. The old system came back to life.

Every “update complete” message was a heartbeat closer to joining Enya and Kyra in the Battle Royale.


The GRUB Disappearance – Enter Ventoy 🔧

After celebrating the Windows resurrection, I rebooted… and panic struck.

The GRUB menu had vanished. My system booted straight into Windows, leaving me without access to Linux. How could I escape?

I grabbed my trusty Ventoy USB stick (the same one I had used for performance tests months ago) and booted it in UEFI mode. Once in the live environment, I inspected the boot entries:

sudo efibootmgr -v

Output:

BootCurrent: 0002
Timeout: 1 seconds
BootOrder: 0002,0000,0001
Boot0000* Windows Boot Manager ...
Boot0001* Ubuntu ...
Boot0002* USB Ventoy ...

To restore Ubuntu to the top of the boot order:

sudo efibootmgr -o 0001,0000

Console output:

BootOrder changed from 0002,0000,0001 to 0001,0000

After rebooting, the GRUB menu reappeared, listing both Ubuntu and Windows. I could finally choose my OS again without further fiddling. 💪


A Word on Secure Boot and Signed Kernels 🔐

Since we’re talking bootloaders: Secure Boot only allows EFI binaries signed with a trusted key to execute. Ubuntu Desktop ships with signed kernels and a signed shim so it boots fine out of the box. If you build your own kernel or use unsigned modules, you’ll either need to sign them yourself or disable Secure Boot in firmware.
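
For the curious, here is a minimal sketch of checking the current state and signing a module yourself (the key paths are the ones Ubuntu’s shim-signed/DKMS tooling typically uses, so adjust them to wherever your MOK key actually lives):

# Check whether Secure Boot is currently enforced
mokutil --sb-state

# Sign an out-of-tree kernel module with your own Machine Owner Key (MOK)
sudo /usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 \
  /var/lib/shim-signed/mok/MOK.priv /var/lib/shim-signed/mok/MOK.der \
  my-module.ko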


Diagram of the Boot Flow 🖼️

Here’s a visual representation of the boot process after the fix:

flowchart TD
    UEFI["⚙️ UEFI Firmware BootOrder:<br/>0001 (Ubuntu) →<br/>0000 (Windows)<br/>(BootCurrent: 0001)"]

    subgraph UbuntuEFI["shimx64.efi"]
        GRUB["📂 GRUB menu"]
        LINUX["🐧 Ubuntu Linux<br/>kernel + initrd"]
        CHAINLOAD["🪟 Windows<br/>bootmgfw.efi"]
    end

    subgraph WindowsEFI["bootmgfw.efi"]
        WBM["🪟 Windows Boot Manager"]
        WINOS["💻 Windows 11<br/>(C:)"]
    end

    UEFI --> UbuntuEFI
    GRUB -->|boots| LINUX
    GRUB -.->|chainloads| CHAINLOAD
    UEFI --> WindowsEFI
    WBM -->|boots| WINOS

From the GRUB menu, the Windows entry chainloads bootmgfw.efi, which then points to the Windows Boot Manager, finally booting Windows itself.


First Battle Royale 🎮✨

After all the technical drama and late-night troubleshooting, I finally joined Enya and Kyra in Fortnite.

I had never played Fortnite before, but my FPS experience (Borderlands hype, anyone?) and PUBG knowledge from Viva La Dirt League on YouTube gave me a fighting chance.

We won our first Battle Royale together! 🏆💥 The sense of triumph was surreal—after resurrecting a four-year-old Windows partition, surviving driver hell, and finally joining the game, victory felt glorious.


TL;DR: Quick Repair Steps ⚡

  1. Enable os-prober in /etc/default/grub.
  2. If Windows isn’t detected, try a manual GRUB entry.
  3. If boot fails, copy bootmgfw.efi from the NTFS Windows partition to /boot/efi/EFI/Microsoft/Boot/.
  4. Run sudo update-grub.
  5. If GRUB disappears after booting Windows, boot a Live USB (UEFI mode) and adjust efibootmgr to set Ubuntu first.
  6. Reboot and enjoy both OSes. 🎉

This little adventure taught me more about GRUB, UEFI, and EFI files than I ever wanted to know, but it was worth it. Most importantly, I got to join my polycule in a Fortnite victory and prove that even a four-year-old Windows partition can rise again! 💖🎮

Dear Facebook,

We need to talk.

You and I have been together for a long time. I wrote blog posts, you provided a place to share them. For years that worked. But lately you’ve been treating my posts like spam — my own blog links! Apparently linking to an external site on my Page is now a cardinal sin unless I pay to “boost” it.
And it’s not just Facebook. Threads — another Meta platform — also keeps taking down my blog links.

So this is goodbye… at least for my Facebook Page.
I’m not deleting my personal Profile. I’ll still pop in to see what events are coming up, and to look at photos after the balfolk and festivals. But our Page-posting days are over.

Here’s why:

  • Your algorithm is a slot machine. What used to be “share and be seen” has become “share, pray, and maybe pay.” I’d rather drop coins in an actual jukebox than feed a zuckerbot just so friends can see my work.
  • Talking into a digital void. Posting to my Page now feels like performing in an empty theatre while an usher whispers “boost post?” The real conversations happen by email, on Mastodon, or — imagine — in real life.
  • Privacy, ads, and that creepy feeling. Every login is a reminder that Facebook isn’t free. I’m paying with my data to scroll past ads for things I only muttered near my phone. That’s not the backdrop I want for my writing.
  • The algorithm ate my audience. Remember when following a Page meant seeing its posts? Cute era. Now everything’s at the mercy of an opaque feed.
  • My house, my rules. I built amedee.be to be my own little corner of the web. No arbitrary takedowns, no algorithmic chokehold, no random “spam” labels. Subscribe by RSS or email and you’ll get my posts in the order I publish them — not the order an algorithm thinks you should.
  • Better energy elsewhere. Time spent arm-wrestling Facebook is time I could spend writing, playing the nyckelharpa, or dancing a Swedish polska at a balfolk. All of that beats arguing with a zuckerbot.

From now on, if people actually want to read what I write, they’ll find me at amedee.be, via RSS, email, or Mastodon. No algorithms, no takedowns, no mystery boxes.

So yes, we’ll still bump into each other when I check events or browse photos. But the part where I dutifully feed you my blog posts? That’s over.

With zero boosted posts and one very happy nyckelharpa,
Amedee

🚀 Smarter CI with GitHub Actions Cache for Ansible (aka “Stop Downloading the Internet”)

Mood: Slightly annoyed at CI pipelines 🧨
CI runs shouldn’t feel like molasses. Here’s how I got Ansible to stop downloading the internet. You’re welcome.


Let’s get one thing straight: nobody likes waiting on CI.
Not you. Not me. Not even the coffee you brewed while waiting for Galaxy roles to install — again.

So I said “nope” and made it snappy. Enter: GitHub Actions Cache + Ansible + a generous helping of grit and retries.

🧙‍♂️ Why cache your Ansible Galaxy installs?

Because time is money, and your CI shouldn’t feel like it’s stuck in dial-up hell.
If you’ve ever screamed internally watching community.general get re-downloaded for the 73rd time this month — same, buddy, same.

The fix? Cache that madness. Save your roles and collections once, and reuse like a boss.

💾 The basics: caching 101

Here’s the money snippet:

- name: Cache Ansible Galaxy dependencies
  uses: actions/cache@v4
  with:
    path: .ansible/
    key: ansible-deps-${{ hashFiles('requirements.yml') }}
    restore-keys: |
      ansible-deps-

🧠 Translation:

  • Store everything Ansible installs in .ansible/
  • Cache key changes when requirements.yml changes — nice and deterministic
  • If the exact match doesn’t exist, fall back to the latest vaguely-similar key

Result? Fast pipelines. Happy devs. Fewer rage-tweets.

🔁 Retry like you mean it

Let’s face it: ansible-galaxy has… moods.

Sometimes Galaxy API is down. Sometimes it’s just bored. So instead of throwing a tantrum, I taught it patience:

for i in {1..5}; do
  if ansible-galaxy install -vv -r requirements.yml; then
    break
  elif [ "$i" -eq 5 ]; then
    echo "Galaxy never recovered after 5 attempts. Failing the job." >&2
    exit 1
  else
    echo "Galaxy is being dramatic. Retrying in $((i * 10)) seconds…" >&2
    sleep $((i * 10))
  fi
done

That’s five attempts with increasing delays, and if Galaxy still won’t cooperate after the last one, the job fails loudly instead of pretending everything is fine.
💬 “You good now, Galaxy? You sure? Because I’ve got YAML to lint.”

⚠️ The catch (a.k.a. cache wars)

Here’s where things get spicy:

actions/cache only saves when a job finishes successfully.

So if two jobs try to save the exact same cache at the same time?
💥 Boom. Collision. One wins. The other walks away salty:

Unable to reserve cache with key ansible-deps-...,
another job may be creating this cache.

Rude.

🧊 Fix: preload the cache in a separate job

The solution is elegant:
Warm-up job. One that only does Galaxy installs and saves the cache. All your other jobs just consume it. Zero drama. Maximum speed. 💃

🪄 Tempted to symlink instead of copy?

Yeah, I thought about it too.
“But what if we symlink .ansible/ and skip the copy?”

Nah. Not worth the brainpower. Just cache the thing directly.
It works. 🧼 It’s clean. 😌 You sleep better.

🧠 Pro tips

  • Use the hash of requirements.yml as your cache key. Trust me.
  • Add a fallback prefix like ansible-deps- so you’re never left cold.
  • Don’t overthink it. Let the cache work for you, not the other way around.

✨ TL;DR

  • ✅ GitHub Actions cache = fast pipelines
  • ✅ Smart keys based on requirements.yml = consistency
  • ✅ Retry loops = less flakiness
  • ✅ Preload job = no more cache collisions
  • ❌ Re-downloading Galaxy junk every time = madness

🔥 Go forth and cache like a pro.

Got better tricks? Hit me up on Mastodon and show me your CI magic.
And remember: Friends don’t let friends wait on Galaxy.

💚 Peace, love, and fewer ansible-galaxy downloads.

🧟‍♂️ Resurrecting a Dead Commit from the GitHub Graveyard

There comes a time in every developer’s life when you just know a certain commit existed. You remember its hash: deadbeef1234. You remember what it did. You know it was important. And yet, when you go looking for it…

💥 fatal: unable to read tree <deadbeef1234>

Great. Git has ghosted you.

That was me today. All I had was a lonely commit hash. The branch that once pointed to it? Deleted. The local clone that once had it? Gone in a heroic but ill-fated attempt to save disk space. And GitHub? Pretending like it never happened. Typical.

🪦 Act I: The Naïve Clone

“Let’s just clone the repo and check out the commit,” I thought. Spoiler alert: that’s not how Git works.

git clone --no-checkout https://github.com/user/repo.git
cd repo
git fetch --all
git checkout deadbeef1234

🧨 fatal: unable to read tree 'deadbeef1234'

Thanks Git. Very cool. Apparently, if no ref points to a commit, GitHub doesn’t hand it out with the rest of the toys. It’s like showing up to a party and being told your friend never existed.

🧪 Act II: The Desperate fsck

Surely it’s still in there somewhere? Let’s dig through the guts.

git fsck --full --unreachable

Nope. Nothing but the digital equivalent of lint and old bubblegum wrappers.

🕵️ Act III: The Final Trick

Then I stumbled across a lesser-known Git dark art:

git fetch origin deadbeef1234

And lo and behold, GitHub replied with a shrug and handed it over like, “Oh, that commit? Why didn’t you just say so?”

Suddenly the commit was in my local repo, fresh as ever, ready to be inspected, praised, and perhaps even resurrected into a new branch:

git checkout -b zombie-branch deadbeef1234

Mission accomplished. The dead walk again.


☠️ Moral of the Story

If you’re ever trying to recover a commit from a deleted branch on GitHub:

  1. Cloning alone won’t save you.
  2. git fetch origin <commit> is your secret weapon.
  3. If GitHub has completely deleted the commit from its history, you’re out of luck unless:
    • You have an old local clone
    • Someone forked the repo and kept it
    • CI logs or PR diffs include your precious bits

Otherwise, it’s digital dust.
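
If you just want to know whether GitHub still has the object at all, the REST API will tell you (same placeholder repo and hash as above):

# Returns commit metadata if GitHub still knows the hash, or a 404 if it is truly gone
gh api repos/user/repo/commits/deadbeef1234 --jq '.sha'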


🧛 Bonus Tip

Once you’ve resurrected that commit, create a branch immediately. Unreferenced commits are Git’s version of vampires: they disappear without a trace when left in the shadows.

git checkout -b safe-now deadbeef1234

And there you have it. One undead commit, safely reanimated.

🧹 Tidying Up After Myself: Automatically Deleting Old GitHub Issues

At some point, I had to admit it: I’ve turned GitHub Issues into a glorified chart gallery.

Let me explain.

Over on my amedee/ansible-servers repository, I have a workflow called workflow-metrics.yml, which runs after every pipeline. It uses yykamei/github-workflows-metrics to generate beautiful charts that show how long my CI pipeline takes to run. Those charts are then posted into a GitHub Issue—one per run.

It’s neat. It’s visual. It’s entirely unnecessary to keep them forever.

The thing is: every time the workflow runs, it creates a new issue and closes the old one. So naturally, I end up with a long, trailing graveyard of “CI Metrics” issues that serve no purpose once they’re a few weeks old.

Cue the digital broom. 🧹


Enter cleanup-closed-issues.yml

To avoid hoarding useless closed issues like some kind of GitHub raccoon, I created a scheduled workflow that runs every Monday at 3:00 AM UTC and deletes the cruft:

schedule:
  - cron: '0 3 * * 1' # Every Monday at 03:00 UTC

This workflow:

  • Keeps at least 6 closed issues (just in case I want to peek at recent metrics).
  • Keeps issues that were closed less than 30 days ago.
  • Deletes everything else—quietly, efficiently, and without breaking a sweat.

It’s also configurable when triggered manually, with inputs for dry_run, days_to_keep, and min_issues_to_keep. So I can preview deletions before committing them, or tweak the retention period as needed.


📂 Complete Source Code for the Cleanup Workflow

name: 🧹 Cleanup Closed Issues

on:
  schedule:
    - cron: '0 3 * * 1' # Runs every Monday at 03:00 UTC
  workflow_dispatch:
    inputs:
      dry_run:
        description: "Enable dry run mode (preview deletions, no actual delete)"
        required: false
        default: "false"
        type: choice
        options:
          - "true"
          - "false"
      days_to_keep:
        description: "Number of days to retain closed issues"
        required: false
        default: "30"
        type: string
      min_issues_to_keep:
        description: "Minimum number of closed issues to keep"
        required: false
        default: "6"
        type: string

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

permissions:
  issues: write

jobs:
  cleanup:
    runs-on: ubuntu-latest

    steps:
      - name: Install GitHub CLI
        run: sudo apt-get install --yes gh

      - name: Delete old closed issues
        env:
          GH_TOKEN: ${{ secrets.GH_FINEGRAINED_PAT }}
          DRY_RUN: ${{ github.event.inputs.dry_run || 'false' }}
          DAYS_TO_KEEP: ${{ github.event.inputs.days_to_keep || '30' }}
          MIN_ISSUES_TO_KEEP: ${{ github.event.inputs.min_issues_to_keep || '6' }}
          REPO: ${{ github.repository }}
        run: |
          NOW=$(date -u +%s)
          THRESHOLD_DATE=$(date -u -d "${DAYS_TO_KEEP} days ago" +%s)
          echo "Only consider issues older than ${THRESHOLD_DATE}"

          echo "::group::Checking GitHub API Rate Limits..."
          RATE_LIMIT=$(gh api /rate_limit --jq '.rate.remaining')
          echo "Remaining API requests: ${RATE_LIMIT}"
          if [[ "${RATE_LIMIT}" -lt 10 ]]; then
            echo "⚠️ Low API limit detected. Sleeping for a while..."
            sleep 60
          fi
          echo "::endgroup::"

          echo "Fetching ALL closed issues from ${REPO}..."
          CLOSED_ISSUES=$(gh issue list --repo "${REPO}" --state closed --limit 1000 --json number,closedAt)

          if [ "${CLOSED_ISSUES}" = "[]" ]; then
            echo "✅ No closed issues found. Exiting."
            exit 0
          fi

          ISSUES_TO_DELETE=$(echo "${CLOSED_ISSUES}" | jq -r \
            --argjson now "${NOW}" \
            --argjson limit "${MIN_ISSUES_TO_KEEP}" \
            --argjson threshold "${THRESHOLD_DATE}" '
              .[:-(if length < $limit then 0 else $limit end)]
              | map(select(
                  (.closedAt | type == "string") and
                  ((.closedAt | fromdateiso8601) < $threshold)
                ))
              | .[].number
            ' || echo "")

          if [ -z "${ISSUES_TO_DELETE}" ]; then
            echo "✅ No issues to delete. Exiting."
            exit 0
          fi

          echo "::group::Issues to delete:"
          echo "${ISSUES_TO_DELETE}"
          echo "::endgroup::"

          if [ "${DRY_RUN}" = "true" ]; then
            echo "🛑 DRY RUN ENABLED: Issues will NOT be deleted."
            exit 0
          fi

          echo "⏳ Deleting issues..."
          echo "${ISSUES_TO_DELETE}" \
            | xargs -I {} -P 5 gh issue delete "{}" --repo "${REPO}" --yes

          DELETED_COUNT=$(echo "${ISSUES_TO_DELETE}" | wc -l)
          REMAINING_ISSUES=$(gh issue list --repo "${REPO}" --state closed --limit 100 | wc -l)

          echo "::group::✅ Issue cleanup completed!"
          echo "📌 Deleted Issues: ${DELETED_COUNT}"
          echo "📌 Remaining Closed Issues: ${REMAINING_ISSUES}"
          echo "::endgroup::"

          {
            echo "### 🗑️ GitHub Issue Cleanup Summary"
            echo "- **Deleted Issues**: ${DELETED_COUNT}"
            echo "- **Remaining Closed Issues**: ${REMAINING_ISSUES}"
          } >> "$GITHUB_STEP_SUMMARY"


🛠️ Technical Design Choices Behind the Cleanup Workflow

Cleaning up old GitHub issues may seem trivial, but doing it well requires a few careful decisions. Here’s why I built the workflow the way I did:

Why GitHub CLI (gh)?

While I could have used raw REST API calls or GraphQL, the GitHub CLI (gh) provides a nice balance of power and simplicity:

  • It handles authentication and pagination under the hood.
  • Supports JSON output and filtering directly with --json and --jq.
  • Provides convenient commands like gh issue list and gh issue delete that make the script readable.
  • Comes pre-installed on GitHub runners or can be installed easily.

Example fetching closed issues:

gh issue list --repo "$REPO" --state closed --limit 1000 --json number,closedAt

No messy headers or tokens, just straightforward commands.

Filtering with jq

I use jq to:

  • Retain a minimum number of issues to keep (min_issues_to_keep).
  • Keep issues closed more recently than the retention period (days_to_keep).
  • Parse and compare issue closed timestamps with precision.
  • Exclude pull requests from deletion by checking the presence of the pull_request field.

The jq filter looks like this:

jq -r --argjson now "$NOW" --argjson limit "$MIN_ISSUES_TO_KEEP" --argjson threshold "$THRESHOLD_DATE" '
  .[:-(if length < $limit then 0 else $limit end)]
  | map(select(
      (.closedAt | type == "string") and
      ((.closedAt | fromdateiso8601) < $threshold)
    ))
  | .[].number
'

Secure Authentication with Fine-Grained PAT

Because deleting issues is a destructive operation, the workflow uses a Fine-Grained Personal Access Token (PAT) with the narrowest possible scopes:

  • Issues: Read and Write
  • Limited to the repository in question

The token is securely stored as a GitHub Secret (GH_FINEGRAINED_PAT).
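
For completeness: the secret can also be set from the command line with the GitHub CLI, which prompts for the token value:

# Store the fine-grained PAT as a repository secret
gh secret set GH_FINEGRAINED_PAT --repo amedee/ansible-servers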

Note: Pull requests are not deleted because they are filtered out and the CLI won’t delete PRs via the issues API.

Dry Run for Safety

Before deleting anything, I can run the workflow in dry_run mode to preview what would be deleted:

inputs:
  dry_run:
    description: "Enable dry run mode (preview deletions, no actual delete)"
    default: "false"

This lets me double-check without risking accidental data loss.
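
A dry run can be kicked off straight from the terminal with the GitHub CLI, passing the same inputs the workflow_dispatch block defines:

# Kick off a dry run of the cleanup workflow without deleting anything
gh workflow run cleanup-closed-issues.yml --repo amedee/ansible-servers \
  -f dry_run=true -f days_to_keep=30 -f min_issues_to_keep=6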

Parallel Deletion

Deletion happens in parallel to speed things up:

echo "$ISSUES_TO_DELETE" | xargs -I {} -P 5 gh issue delete "{}" --repo "$REPO" --yes

Up to 5 deletions run concurrently — handy when cleaning dozens of old issues.

User-Friendly Output

The workflow uses GitHub Actions’ logging groups and step summaries to give a clean, collapsible UI:

echo "::group::Issues to delete:"
echo "$ISSUES_TO_DELETE"
echo "::endgroup::"

And a markdown summary is generated for quick reference in the Actions UI.


Why Bother?

I’m not deleting old issues because of disk space or API limits — GitHub doesn’t charge for that. It’s about:

  • Reducing clutter so my issue list stays manageable.
  • Making it easier to find recent, relevant information.
  • Automating maintenance to free my brain for other things.
  • Keeping my tooling neat and tidy, which is its own kind of joy.

Steal It, Adapt It, Use It

If you’re generating temporary issues or ephemeral data in GitHub Issues, consider using a cleanup workflow like this one.

It’s simple, secure, and effective.

Because sometimes, good housekeeping is the best feature.


🧼✨ Happy coding (and cleaning)!

📦 Auto-growing disks in Vagrant: because 10 GB is never enough

Have you ever fired up a Vagrant VM, provisioned a project, pulled some Docker images, ran a build… and ran out of disk space halfway through? Welcome to my world. Apparently, the default disk size in Vagrant is tiny—and while you can specify a bigger virtual disk, Ubuntu won’t magically use the extra space. You need to resize the partition, the physical volume, the logical volume, and the filesystem. Every. Single. Time.

Enough of that nonsense.

🛠 The setup

Here’s the relevant part of my Vagrantfile:

Vagrant.configure(2) do |config|
  config.vm.box = 'boxen/ubuntu-24.04'
  config.vm.disk :disk, size: '20GB', primary: true

  config.vm.provision 'shell', path: 'resize_disk.sh'
end

This makes sure the disk is large enough and automatically resized by the resize_disk.sh script at first boot.

✨ The script

#!/bin/bash
set -euo pipefail
LOGFILE="/var/log/resize_disk.log"
exec > >(tee -a "$LOGFILE") 2>&1
echo "[$(date)] Starting disk resize process..."

REQUIRED_TOOLS=("parted" "pvresize" "lvresize" "lvdisplay" "vgdisplay" "grep" "awk" "bc")
for tool in "${REQUIRED_TOOLS[@]}"; do
  if ! command -v "$tool" &>/dev/null; then
    echo "[$(date)] ERROR: Required tool '$tool' is missing. Exiting."
    exit 1
  fi
done

# Read current and total partition size (in sectors)
parted_output=$(parted --script /dev/sda unit s print || true)
read -r PARTITION_SIZE TOTAL_SIZE < <(echo "$parted_output" | awk '
  / 3 / {part = $4}
  /^Disk \/dev\/sda:/ {total = $3}
  END {print part, total}
')

# Trim 's' suffix
PARTITION_SIZE_NUM="${PARTITION_SIZE%s}"
TOTAL_SIZE_NUM="${TOTAL_SIZE%s}"

if [[ "$PARTITION_SIZE_NUM" -lt "$TOTAL_SIZE_NUM" ]]; then
  echo "[$(date)] Resizing partition /dev/sda3..."
  parted --fix --script /dev/sda resizepart 3 100%
else
  echo "[$(date)] Partition /dev/sda3 is already at full size. Skipping."
fi

if [[ "$(pvresize --test /dev/sda3 2>&1)" != *"successfully resized"* ]]; then
  echo "[$(date)] Resizing physical volume..."
  pvresize /dev/sda3
else
  echo "[$(date)] Physical volume is already resized. Skipping."
fi

LV_SIZE=$(lvdisplay --units M /dev/ubuntu-vg/ubuntu-lv | grep "LV Size" | awk '{print $3}' | tr -d 'MiB')
PE_SIZE=$(vgdisplay --units M /dev/ubuntu-vg | grep "PE Size" | awk '{print $3}' | tr -d 'MiB')
CURRENT_LE=$(lvdisplay /dev/ubuntu-vg/ubuntu-lv | grep "Current LE" | awk '{print $3}')

USED_SPACE=$(echo "$CURRENT_LE * $PE_SIZE" | bc)
FREE_SPACE=$(echo "$LV_SIZE - $USED_SPACE" | bc)

if (($(echo "$FREE_SPACE > 0" | bc -l))); then
  echo "[$(date)] Resizing logical volume..."
  lvresize -rl +100%FREE /dev/ubuntu-vg/ubuntu-lv
else
  echo "[$(date)] Logical volume is already fully extended. Skipping."
fi

💡 Highlights

  • ✅ Uses parted with --script to avoid prompts.
  • ✅ Automatically fixes GPT mismatch warnings with --fix.
  • ✅ Calculates exact available space using lvdisplay and vgdisplay, with bc for floating point math.
  • ✅ Extends the partition, PV, and LV only when needed.
  • ✅ Logs everything to /var/log/resize_disk.log.

🚨 Gotchas

  • Your disk must already use LVM. This script assumes you’re resizing /dev/ubuntu-vg/ubuntu-lv, the default for Ubuntu server installs.
  • You must use a Vagrant box that supports VirtualBox’s disk resizing—thankfully, boxen/ubuntu-24.04 does.
  • If your LVM setup is different, you’ll need to adapt device paths; the commands below will help you find yours.
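
If you’re not sure what your volume group and logical volume are actually called, these read-only commands will show you:

# Show block devices and LVM logical volumes so you can adjust the device paths
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT
sudo lvs -o vg_name,lv_name,lv_size,lv_path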

🔁 Automation FTW

Calling this script as a provisioner means I never have to think about disk space again during development. One less yak to shave.

Feel free to steal this setup, adapt it to your team, or improve it and send me a patch. Or better yet—don’t wait until your filesystem runs out of space at 3 AM.

🧪 GitHub Actions and Environment Variables: Static vs. Dynamic Smackdown

Let’s talk about environment variables in GitHub Actions — those little gremlins that either make your CI/CD run silky smooth or throw a wrench in your perfectly crafted YAML.

If you’ve ever squinted at your pipeline and wondered, “Where the heck should I declare this ANSIBLE_CONFIG thing so it doesn’t vanish into the void between steps?”, you’re not alone. I’ve been there. I’ve screamed at $GITHUB_ENV. I’ve misused export. I’ve over-engineered echo. But fear not, dear reader — I’ve distilled it down so you don’t have to.

In this post, we’ll look at the right ways (and a few less right ways) to set environment variables — and more importantly, when to use static vs dynamic approaches.


🧊 Static Variables: Set It and Forget It

Got a variable like ANSIBLE_STDOUT_CALLBACK=yaml that’s the same every time? Congratulations, you’ve got yourself a static variable! These are the boring, predictable, low-maintenance types that make your CI life a dream.

✅ Best Practice: Job-Level env

If your variable is static and used across multiple steps, this is the cleanest, classiest, and least shouty way to do it:

jobs:
  my-job:
    runs-on: ubuntu-latest
    env:
      ANSIBLE_CONFIG: ansible.cfg
      ANSIBLE_STDOUT_CALLBACK: yaml
    steps:
      - name: Use env vars
        run: echo "ANSIBLE_CONFIG is $ANSIBLE_CONFIG"

Why it rocks:

  • 👀 Super readable
  • 📦 Available in every step of the job
  • 🧼 Keeps your YAML clean — no extra echo commands, no nonsense

Unless you have a very specific reason not to, this should be your default.


🎩 Dynamic Variables: Born to Be Wild

Now what if your variables aren’t so chill? Maybe you calculate something in one step and need to pass it to another — a file path, a version number, an API token from a secret backend ritual…

That’s when you reach for the slightly more… creative option:

🔧 $GITHUB_ENV to the rescue

- name: Set dynamic environment vars
  run: |
    echo "BUILD_DATE=$(date +%F)" >> $GITHUB_ENV
    echo "RELEASE_TAG=v1.$(date +%s)" >> $GITHUB_ENV

- name: Use them later
  run: echo "Tag: $RELEASE_TAG built on $BUILD_DATE"

What it does:

  • Persists the variables across steps
  • Works well when values are calculated during the run
  • Makes you feel powerful

🪄 Fancy Bonus: Heredoc Style

If you like your YAML with a side of Bash wizardry:

- name: Set vars with heredoc
  run: |
    cat <<EOF >> $GITHUB_ENV
    FOO=bar
    BAZ=qux
    EOF

Because sometimes, you just want to feel fancy.


😵‍💫 What Not to Do (Unless You Really Mean It)

- name: Set env with export
  run: |
    export FOO=bar
    echo "FOO is $FOO"

This only works within that step. The minute your pipeline moves on, FOO is gone. Poof. Into the void. If that’s what you want, fine. If not, don’t say I didn’t warn you.


🧠 TL;DR – The Cheat Sheet

Scenario                                Best Method
Static variable used in all steps       env at the job level
Static variable used in one step        env at the step level
Dynamic value needed across steps       $GITHUB_ENV
Dynamic value only needed in one step   export (but don’t overdo it)
Need to show off with Bash skills       cat <<EOF >> $GITHUB_ENV 😎

🧪 My Use Case: Ansible FTW

In my setup, I wanted to use:

ANSIBLE_CONFIG=ansible.cfg
ANSIBLE_STDOUT_CALLBACK=yaml

These are rock-solid, boringly consistent values. So instead of writing this in every step:

- name: Set env
  run: |
    echo "ANSIBLE_CONFIG=ansible.cfg" >> $GITHUB_ENV

I now do this:

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      ANSIBLE_CONFIG: ansible.cfg
      ANSIBLE_STDOUT_CALLBACK: yaml
    steps:
      ...

Cleaner. Simpler. One less thing to trip over when I’m debugging at 2am.


💬 Final Thoughts

Environment variables in GitHub Actions aren’t hard — once you know the rules of the game. Use env for the boring stuff. Use $GITHUB_ENV when you need a little dynamism. And remember: if you’re writing export in step after step, something probably smells.

Got questions? Did I miss a clever trick? Want to tell me my heredoc formatting is ugly? Hit me up in the comments or toot at me on Mastodon.


✍️ Posted by Amedee, who loves YAML almost as much as dancing polskas.
💥 Because good CI is like a good dance: smooth, elegant, and nobody falls flat on their face.
🎻 Scheduled to go live on 20 August — just as Boombalfestival kicks off. Because why not celebrate great workflows and great dances at the same time?

Safer Commands with argv in Ansible: Pros, Cons, and Real Examples

When using Ansible to automate tasks, the command module is your bread and butter for executing system commands. But did you know that there’s a safer, cleaner, and more predictable way to pass arguments? Meet argv—an alternative to writing commands as strings.

In this post, I’ll explore the pros and cons of using argv, and I’ll walk through several real-world examples tailored to web servers and mail servers.


Why Use argv Instead of a Command String?

✅ Pros

  • Avoids Shell Parsing Issues: Each argument is passed exactly as intended, with no surprises from quoting or spaces.
  • More Secure: No shell = no risk of shell injection.
  • Clearer Syntax: Every argument is explicitly defined, improving readability.
  • Predictable: Behavior is consistent across different platforms and setups.

❌ Cons

  • No Shell Features: You can’t use pipes (|), redirection (>), or environment variables like $HOME.
  • More Verbose: Every argument must be a separate list item. It’s explicit, but more to type.
  • Not for Shell Built-ins: Commands like cd, export, or echo with redirection won’t work.

Real-World Examples

Let’s apply this to actual use cases.

🔧 Restarting Nginx with argv

- name: Restart Nginx using argv
  hosts: amedee.be
  become: yes
  tasks:
    - name: Restart Nginx
      ansible.builtin.command:
        argv:
          - systemctl
          - restart
          - nginx

📬 Check Mail Queue on a Mail-in-a-Box Server

- name: Check Postfix mail queue using argv
  hosts: box.vangasse.eu
  become: yes
  tasks:
    - name: Get mail queue status
      ansible.builtin.command:
        argv:
          - mailq
      register: mail_queue

    - name: Show queue
      ansible.builtin.debug:
        msg: "{{ mail_queue.stdout_lines }}"

🗃️ Back Up WordPress Database

- name: Backup WordPress database using argv
  hosts: amedee.be
  become: yes
  vars:
    db_user: wordpress_user
    db_password: wordpress_password
    db_name: wordpress_db
  tasks:
    - name: Dump database
      ansible.builtin.command:
        argv:
          - mysqldump
          - -u
          - "{{ db_user }}"
          - -p{{ db_password }}
          - "{{ db_name }}"
          - --result-file=/root/wordpress_backup.sql

⚠️ Avoid exposing credentials directly—use Ansible Vault instead.
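
A quick sketch of that, reusing the db_password variable from the playbook above: store only an encrypted value and let Ansible decrypt it at run time:

# Produce an encrypted value to paste into your vars in place of the plain-text password
ansible-vault encrypt_string 'wordpress_password' --name 'db_password'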


Using argv with Interpolation

Ansible lets you use Jinja2-style variables ({{ }}) inside argv items.

🔄 Restart a Dynamic Service

- name: Restart a service using argv and variable
  hosts: localhost
  become: yes
  vars:
    service_name: nginx
  tasks:
    - name: Restart
      ansible.builtin.command:
        argv:
          - systemctl
          - restart
          - "{{ service_name }}"

🕒 Timestamped Backups

- name: Timestamped DB backup
  hosts: localhost
  become: yes
  vars:
    db_user: wordpress_user
    db_password: wordpress_password
    db_name: wordpress_db
  tasks:
    - name: Dump with timestamp
      ansible.builtin.command:
        argv:
          - mysqldump
          - -u
          - "{{ db_user }}"
          - -p{{ db_password }}
          - "{{ db_name }}"
          - --result-file=/root/wordpress_backup_{{ ansible_date_time.iso8601 }}.sql

🧩 Dynamic Argument Lists

Avoid join(' '), which collapses the list into a single string.

❌ Wrong:

argv:
  - ls
  - "{{ args_list | join(' ') }}"  # BAD: becomes one long string

✅ Correct:

argv: ["ls"] + args_list

Or if the length is known:

argv:
  - ls
  - "{{ args_list[0] }}"
  - "{{ args_list[1] }}"

📣 Interpolation Inside Strings

- name: Greet with hostname
  hosts: localhost
  tasks:
    - name: Print message
      ansible.builtin.command:
        argv:
          - echo
          - "Hello, {{ ansible_facts['hostname'] }}!"


When to Use argv

✅ Commands with complex quoting or multiple arguments
✅ Tasks requiring safety and predictability
✅ Scripts or binaries that take arguments, but not full shell expressions

When to Avoid argv

❌ When you need pipes, redirection, or shell expansion
❌ When you’re calling shell built-ins


Final Thoughts

Using argv in Ansible may feel a bit verbose, but it offers precision and security that traditional string commands lack. When you need reliable, cross-platform automation that avoids the quirks of shell parsing, argv is the better choice.

Prefer safety? Choose argv.
Need shell magic? Use the shell module.

Have a favorite argv trick or horror story? Drop it in the comments below.

🎣 The Curious Case of the Beg Bounty Bait — or: Licence to Phish

Not every day do I get an email from a very serious security researcher, clearly a man on a mission to save the internet — one vague, copy-pasted email at a time.

Here’s the message I received:

From: Peter Hooks <peterhooks007@gmail.com>
Subject: Security Vulnerability Disclosure

Hi Team,

I’ve identified security vulnerabilities in your app that may put users at risk. I’d like to report these responsibly and help ensure they are resolved quickly.

Please advise on your disclosure protocol, or share details if you have a Bug Bounty program in place.

Looking forward to your reply.

Best regards,
Peter Hooks

Right. Let’s unpack this.


🧯”Your App” — What App?

I’m not a company. I’m not a startup. I’m not even a garage-based stealth tech bro.
I run a personal WordPress blog. That’s it.

There is no “app.” There are no “users at risk” (unless you count me, and I̷̜̓’̷̠̋m̴̪̓ ̴̹́a̸͙̽ḷ̵̿r̸͇̽ë̵͖a̶͖̋ḋ̵͓ŷ̴̼ ̴̖͂b̶̠̋é̶̻ÿ̴͇́ọ̸̒ń̸̦d̴̟̆ ̶͉͒s̶̀ͅa̶̡͗v̴͙͊i̵͖̊n̵͖̆g̸̡̔).


🕵️‍♂️ The Anatomy of a Beg Bounty Email

This little email ticks all the classic marks of what the security community affectionately calls a beg bounty — someone scanning random domains, finding trivial or non-issues, and fishing for a payout.



📮 My (Admittedly Snarky) Reply

I couldn’t resist. Here’s the reply I sent:

Hi Peter,

Thanks for your email and your keen interest in my “app” — spoiler alert: there isn’t one. Just a humble personal blog here.

Your message hits all the classic marks of a beg bounty reconnaissance email:

  • Generic “Hi Team” greeting — because who needs names?
  • Vague claims of “security vulnerabilities” with zero specifics
  • Polite inquiry about a bug bounty program (spoiler: none here, James)
  • No proof, no details, just good old-fashioned mystery
  • Friendly tone crafted to reel in easy targets
  • Email address proudly featuring “007” — very covert ops of you

Bravo. You almost had me convinced.

I’ll be featuring this charming little interaction in a blog post soon — starring you, of course. If you ever feel like upgrading from vague templates to actual evidence, I’m all ears. Until then, happy fishing!

Cheers,
Amedee


😢 No Reply

Sadly, Peter didn’t write back.

No scathing rebuttal.
No actual vulnerabilities.
No awkward attempt at pivoting.
Just… silence.


#crying
#missionfailed


🛡️ A Note for Fellow Nerds

If you’ve got a domain name, no matter how small, there’s a good chance you’ll get emails like this.

Here’s how to handle them:

  • Stay calm — most of these are low-effort probes.
  • Don’t pay — you owe nothing to random strangers on the internet.
  • Don’t panic — vague threats are just that: vague.
  • Do check your stuff occasionally for actual issues.
  • Bonus: write a blog post about it and enjoy the catharsis.



🧵 tl;dr

If your “security researcher”:

  • doesn’t say what they found,
  • doesn’t mention your actual domain or service,
  • asks for a bug bounty up front,
  • signs with a Gmail address ending in 007

…it’s probably not the start of a beautiful friendship.


Got a similar email? Want help crafting a reply that’s equally professional and petty?
Feel free to drop a comment or reach out — I’ll even throw in a checklist.

Until then: stay patched, stay skeptical, and stay snarky. 😎

Creating 10 000 Random Files & Analyzing Their Size Distribution: Because Why Not? 🧐💾

Ever wondered what it’s like to unleash 10 000 tiny little data beasts on your hard drive? No? Well, buckle up anyway — because today, we’re diving into the curious world of random file generation, and then nerding out by calculating their size distribution. Spoiler alert: it’s less fun than it sounds. 😏

Step 1: Let’s Make Some Files… Lots of Them

Our goal? Generate 10 000 files filled with random data. But not just any random sizes — we want a mean file size of roughly 68 KB and a median of about 2 KB. Sounds like a math puzzle? That’s because it kind of is.

If you just pick file sizes uniformly at random, you’ll end up with a median close to the mean — which is boring. We want a skewed distribution, where most files are small, but some are big enough to bring that average up.

The Magic Trick: Log-normal Distribution 🎩✨

Enter the log-normal distribution, a nifty way to generate lots of small numbers and a few big ones — just like real life. Using Python’s NumPy library, we generate these sizes and feed them to good old /dev/urandom to fill our files with pure randomness.
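
If you want to sanity-check the parameters before unleashing anything on your disk: for a log-normal distribution the theoretical median is exp(μ) and the mean is exp(μ + σ²/2), so a quick one-liner lets you tweak mean_log and stddev_log until the printed values land near your targets:

# Print the median and mean implied by a given mean_log (mu) and stddev_log (sigma)
python3 -c "import math; mu, sigma = 9.0, 1.5; \
print('median ~', round(math.exp(mu)), 'bytes'); \
print('mean   ~', round(math.exp(mu + sigma**2 / 2)), 'bytes')"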

Here’s the Bash script that does the heavy lifting:

#!/bin/bash

# Directory to store the random files
output_dir="random_files"
mkdir -p "$output_dir"

# Total number of files to create
file_count=10000

# Log-normal distribution parameters
mean_log=9.0  # Adjusted for ~68KB mean
stddev_log=1.5  # Adjusted for ~2KB median

# Function to generate random numbers based on log-normal distribution
generate_random_size() {
    python3 -c "import numpy as np; print(int(np.random.lognormal($mean_log, $stddev_log)))"
}

# Create files with random data
for i in $(seq 1 $file_count); do
    file_size=$(generate_random_size)
    file_path="$output_dir/file_$i.bin"
    head -c "$file_size" /dev/urandom > "$file_path"
    echo "Generated file $i with size $file_size bytes."
done

echo "Done. Files saved in $output_dir."

Easy enough, right? This creates a directory random_files and fills it with 10 000 files, most of them small, a few wildly bigger. Don’t blame me if your disk space takes a little hit! 💥

Step 2: Crunching Numbers — The File Size Distribution 📊

Okay, you’ve got the files. Now, what can we learn from their sizes? Let’s find out the:

  • Mean size: The average size across all files.
  • Median size: The middle value when sizes are sorted — because averages can lie.
  • Distribution breakdown: How many tiny files vs. giant files.

Here’s a handy Bash script that reads file sizes and spits out these stats with a bit of flair:

#!/bin/bash

# Input directory (default to "random_files" if not provided)
directory="${1:-random_files}"

# Check if directory exists
if [ ! -d "$directory" ]; then
    echo "Directory $directory does not exist."
    exit 1
fi

# Array to store file sizes
file_sizes=($(find "$directory" -type f -exec stat -c%s {} \;))

# Check if there are files in the directory
if [ ${#file_sizes[@]} -eq 0 ]; then
    echo "No files found in the directory $directory."
    exit 1
fi

# Calculate mean
total_size=0
for size in "${file_sizes[@]}"; do
    total_size=$((total_size + size))
done
mean=$((total_size / ${#file_sizes[@]}))

# Calculate median
sorted_sizes=($(printf '%s\n' "${file_sizes[@]}" | sort -n))
mid=$(( ${#sorted_sizes[@]} / 2 ))
if (( ${#sorted_sizes[@]} % 2 == 0 )); then
    median=$(( (sorted_sizes[mid-1] + sorted_sizes[mid]) / 2 ))
else
    median=${sorted_sizes[mid]}
fi

# Display file size distribution
echo "File size distribution in directory $directory:"
echo "---------------------------------------------"
echo "Number of files: ${#file_sizes[@]}"
echo "Mean size: $mean bytes"
echo "Median size: $median bytes"

# Display detailed size distribution (optional)
echo
echo "Detailed distribution (size ranges):"
awk '{
    if ($1 < 1024) bins["< 1 KB"]++;
    else if ($1 < 10240) bins["1 KB - 10 KB"]++;
    else if ($1 < 102400) bins["10 KB - 100 KB"]++;
    else bins[">= 100 KB"]++;
} END {
    for (range in bins) printf "%-15s: %d\n", range, bins[range];
}' <(printf '%s\n' "${file_sizes[@]}")

Run it, and voilà — instant nerd satisfaction.

Example Output:

File size distribution in directory random_files:
---------------------------------------------
Number of files: 10000
Mean size: 68987 bytes
Median size: 2048 bytes

Detailed distribution (size ranges):
&lt; 1 KB         : 1234
1 KB - 10 KB   : 5678
10 KB - 100 KB : 2890
>= 100 KB      : 198

Why Should You Care? 🤷‍♀️

Besides the obvious geek cred, generating files like this can help:

  • Test backup systems — can they handle weird file size distributions?
  • Stress-test storage or network performance with real-world-like data.
  • Understand your data patterns if you’re building apps that deal with files.

Wrapping Up: Big Files, Small Files, and the Chaos In Between

So there you have it. Ten thousand random files later, and we’ve peeked behind the curtain to understand their size story. It’s a bit like hosting a party and then figuring out who ate how many snacks. 🍿

Try this yourself! Tweak the distribution parameters, generate files, crunch the numbers — and impress your friends with your mad scripting skills. Or at least have a fun weekend project that makes you sound way smarter than you actually are.

Happy hacking! 🔥