
🚀 Smarter CI with GitHub Actions Cache for Ansible (aka “Stop Downloading the Internet”)

Mood: Slightly annoyed at CI pipelines 🧨
CI runs shouldn’t feel like molasses. Here’s how I got Ansible to stop downloading the internet. You’re welcome.


Let’s get one thing straight: nobody likes waiting on CI.
Not you. Not me. Not even the coffee you brewed while waiting for Galaxy roles to install — again.

So I said “nope” and made it snappy. Enter: GitHub Actions Cache + Ansible + a generous helping of grit and retries.

🧙‍♂️ Why cache your Ansible Galaxy installs?

Because time is money, and your CI shouldn’t feel like it’s stuck in dial-up hell.
If you’ve ever screamed internally watching community.general get re-downloaded for the 73rd time this month — same, buddy, same.

The fix? Cache that madness. Save your roles and collections once, and reuse like a boss.

💾 The basics: caching 101

Here’s the money snippet:

- name: Cache Ansible dependencies
  uses: actions/cache@v4
  with:
    path: .ansible/
    key: ansible-deps-${{ hashFiles('requirements.yml') }}
    restore-keys: |
      ansible-deps-

🧠 Translation:

  • Store everything Ansible installs (roles and collections) in .ansible/ (assuming Ansible is configured to install them there)
  • The cache key changes whenever requirements.yml changes, so it stays deterministic
  • If there's no exact match, fall back to the most recent cache matching the ansible-deps- prefix

Result? Fast pipelines. Happy devs. Fewer rage-tweets.

🔁 Retry like you mean it

Let’s face it: ansible-galaxy has… moods.

Sometimes the Galaxy API is down. Sometimes it's just bored. So instead of letting the job throw a tantrum, I taught it patience:

for i in {1..5}; do
  if ansible-galaxy install -vv -r requirements.yml; then
    break
  elif [ "$i" -eq 5 ]; then
    echo "Galaxy is still being dramatic after $i attempts. Giving up." >&2
    exit 1
  else
    echo "Galaxy is being dramatic. Retrying in $((i * 10)) seconds…" >&2
    sleep $((i * 10))
  fi
done

That’s five attempts, with increasing delays, and a hard fail if Galaxy never pulls itself together.
💬 “You good now, Galaxy? You sure? Because I’ve got YAML to lint.”

⚠️ The catch (a.k.a. cache wars)

Here’s where things get spicy:

actions/cache only saves when a job finishes successfully.

So if two jobs try to save the exact same cache at the same time?
💥 Boom. Collision. One wins. The other walks away salty:

Unable to reserve cache with key ansible-deps-...,
another job may be creating this cache.

Rude.

🧊 Fix: preload the cache in a separate job

The solution is elegant: a dedicated warm-up job that only does the Galaxy install and saves the cache. All your other jobs just consume it. Zero drama. Maximum speed. 💃
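
A minimal sketch of the idea (job names and the lint step are placeholders, not a verbatim copy of my workflow):

jobs:
  warm-cache:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Cache Ansible dependencies
        uses: actions/cache@v4
        with:
          path: .ansible/
          key: ansible-deps-${{ hashFiles('requirements.yml') }}
          restore-keys: |
            ansible-deps-
      - name: Install Galaxy dependencies
        run: ansible-galaxy install -r requirements.yml

  lint:
    needs: warm-cache
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Restore Ansible dependencies
        uses: actions/cache@v4
        with:
          path: .ansible/
          key: ansible-deps-${{ hashFiles('requirements.yml') }}
          restore-keys: |
            ansible-deps-
      - name: Lint playbooks
        run: ansible-lint

Because lint waits on warm-cache (via needs), it gets an exact hit on the primary key, and actions/cache skips its save step on exact hits. No save, no race, no salty error.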

🪄 Tempted to symlink instead of copy?

Yeah, I thought about it too.
“But what if we symlink .ansible/ and skip the copy?”

Nah. Not worth the brainpower. Just cache the thing directly.
It works. 🧼 It’s clean. 😌 You sleep better.

🧠 Pro tips

  • Use the hash of requirements.yml as your cache key. Trust me.
  • Add a fallback prefix like ansible-deps- so you’re never left cold.
  • Don’t overthink it. Let the cache work for you, not the other way around.

✨ TL;DR

  • ✅ GitHub Actions cache = fast pipelines
  • ✅ Smart keys based on requirements.yml = consistency
  • ✅ Retry loops = less flakiness
  • ✅ Preload job = no more cache collisions
  • ❌ Re-downloading Galaxy junk every time = madness

🔥 Go forth and cache like a pro.

Got better tricks? Hit me up on Mastodon and show me your CI magic.
And remember: Friends don’t let friends wait on Galaxy.

💚 Peace, love, and fewer ansible-galaxy downloads.

Automating My Server Management with Ansible and GitHub Actions

Managing multiple servers can be a daunting task, especially when striving for consistency and efficiency. To tackle this challenge, I developed a robust automation system using Ansible, GitHub Actions, and Vagrant. This setup not only streamlines server configuration but also ensures that deployments are repeatable and maintainable.

A Bit of History: How It All Started

This project began out of necessity. I was maintaining a handful of Ubuntu servers — one for email, another for a website, and a few for experiments — and I quickly realized that logging into each one to make manual changes was both tedious and error-prone. My first step toward automation was a collection of shell scripts. They worked, but as the infrastructure grew, they became hard to manage and lacked the modularity I needed.

That is when I discovered Ansible. I created the ansible-servers repository in early 2024 as a way to centralize and standardize my infrastructure automation. Initially, it only contained a basic playbook for setting up users and updating packages. But over time, it evolved to include multiple roles, structured inventories, and eventually CI/CD integration through GitHub Actions.

Every addition was born out of a real-world need. When I got tired of testing changes manually, I added Vagrant to simulate my environments locally. When I wanted to be sure my configurations stayed consistent after every push, I integrated GitHub Actions to automate deployments. When I noticed the repo growing, I introduced linting and security checks to maintain quality.

The repository has grown steadily and organically, each commit reflecting a small lesson learned or a new challenge overcome.

The Foundation: Ansible Playbooks

At the core of my automation strategy are Ansible playbooks, which define the desired state of my servers. These playbooks handle tasks such as installing necessary packages, configuring services, and setting up user accounts. By codifying these configurations, I can apply them consistently across different environments.

To manage these playbooks, I maintain a structured repository that includes:

  • Inventory Files: Located in the inventory directory, these YAML files specify the hosts and groups for deployment targets (a minimal example follows this list).
  • Roles: Under the roles directory, I define reusable components that encapsulate specific functionalities, such as setting up a web server or configuring a database.
  • Configuration File: The ansible.cfg file sets important defaults, like enabling fact caching and specifying the inventory path, to optimize Ansible’s behavior.
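
For illustration, a minimal inventory file in that layout could look like this (group and host names are placeholders, not my actual machines):

# inventory/production.yml (hypothetical example)
all:
  children:
    webservers:
      hosts:
        web01.example.com:
    mailservers:
      hosts:
        mail01.example.com: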

Seamless Deployments with GitHub Actions

To automate the deployment process, I leverage GitHub Actions. This integration allows me to trigger Ansible playbooks automatically upon code changes, ensuring that my servers are always up-to-date with the latest configurations.

One of the key workflows is Deploy to Production, which executes the main playbook against the production inventory. This workflow is defined in the ansible-deploy.yml file and is triggered on specific events, such as pushes to the main branch.
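
The trigger section of such a workflow is short; a sketch of the relevant part:

on:
  push:
    branches:
      - main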

Additionally, I have set up other workflows to maintain code quality and security:

  • Super-Linter: Automatically checks the codebase for syntax errors and adherence to best practices (a sketch follows this list).
  • Codacy Security Scan: Analyzes the code for potential security vulnerabilities.
  • Dependabot Updates: Keeps dependencies up-to-date by automatically creating pull requests for new versions.
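
As an example of how little glue these checks need, a Super-Linter workflow is essentially a checkout plus one step (the version tag below is an assumption; pin whatever is current):

jobs:
  super-linter:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history so the linter can diff changed files
      - name: Run Super-Linter
        uses: super-linter/super-linter@v7
        env:
          DEFAULT_BRANCH: main
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}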

Local Testing with Vagrant

Before deploying changes to production, it is crucial to test them in a controlled environment. For this purpose, I use Vagrant to spin up virtual machines that mirror my production servers.

The deploy_to_staging.sh script automates this process by:

  1. Starting the Vagrant environment and provisioning it.
  2. Installing required Ansible roles specified in requirements.yml.
  3. Running the Ansible playbook against the staging inventory.

This approach allows me to validate changes in a safe environment before applying them to live servers.
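
Condensed, deploy_to_staging.sh boils down to something like this (the playbook and inventory file names are assumptions, not the literal script):

#!/usr/bin/env bash
set -euo pipefail

# 1. Start the Vagrant environment and provision it
vagrant up --provision

# 2. Install required Ansible roles and collections
ansible-galaxy install -r requirements.yml

# 3. Run the playbook against the staging inventory
ansible-playbook -i inventory/staging.yml site.yml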

Embracing Open Source and Continuous Improvement

Transparency and collaboration are vital in the open-source community. By hosting my automation setup on GitHub, I invite others to review, suggest improvements, and adapt the configurations for their own use cases.

The repository is licensed under the MIT License, encouraging reuse and modification. Moreover, I actively monitor issues and welcome contributions to enhance the system further.


In summary, by combining Ansible, GitHub Actions, and Vagrant, I have created a powerful and flexible automation framework for managing my servers. This setup not only reduces manual effort but also increases reliability and scalability. I encourage others to explore this approach and adapt it to their own infrastructure needs. What began as a few basic scripts has now evolved into a reliable automation pipeline I rely on every day.

If you are managing servers and find yourself repeating the same configuration steps, I invite you to check out the ansible-servers repository on GitHub. Clone it, explore the structure, try it in your own environment — and if you have ideas or improvements, feel free to open a pull request or start a discussion. Automation has made a huge difference for me, and I hope it can do the same for you.