
Automating My Server Management with Ansible and GitHub Actions

Managing multiple servers can be a daunting task, especially when striving for consistency and efficiency. To tackle this challenge, I developed a robust automation system using Ansible, GitHub Actions, and Vagrant. This setup not only streamlines server configuration but also ensures that deployments are repeatable and maintainable.

A Bit of History: How It All Started

This project began out of necessity. I was maintaining a handful of Ubuntu servers — one for email, another for a website, and a few for experiments — and I quickly realized that logging into each one to make manual changes was both tedious and error-prone. My first step toward automation was a collection of shell scripts. They worked, but as the infrastructure grew, they became hard to manage and lacked the modularity I needed.

That is when I discovered Ansible. I created the ansible-servers repository in early 2024 as a way to centralize and standardize my infrastructure automation. Initially, it only contained a basic playbook for setting up users and updating packages. But over time, it evolved to include multiple roles, structured inventories, and eventually CI/CD integration through GitHub Actions.

Every addition was born out of a real-world need. When I got tired of testing changes manually, I added Vagrant to simulate my environments locally. When I wanted to be sure my configurations stayed consistent after every push, I integrated GitHub Actions to automate deployments. When I noticed the repo growing, I introduced linting and security checks to maintain quality.

The repository has grown steadily and organically, each commit reflecting a small lesson learned or a new challenge overcome.

The Foundation: Ansible Playbooks

At the core of my automation strategy are Ansible playbooks, which define the desired state of my servers. These playbooks handle tasks such as installing necessary packages, configuring services, and setting up user accounts. By codifying these configurations, I can apply them consistently across different environments.

To manage these playbooks, I maintain a structured repository that includes:

  • Inventory Files: Located in the inventory directory, these YAML files specify the hosts and groups for deployment targets.
  • Roles: Under the roles directory, I define reusable components that encapsulate specific functionalities, such as setting up a web server or configuring a database.
  • Configuration File: The ansible.cfg file sets important defaults, like enabling fact caching and specifying the inventory path, to optimize Ansible’s behavior.
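To give an idea of what that last piece looks like, here is a minimal ansible.cfg sketch; the exact paths and cache settings are illustrative rather than copied from the repository:

[defaults]
# Default inventory path, so -i does not have to be passed on every run
inventory = inventory/production.yml
# Cache gathered facts between runs to speed up repeated playbook executions
gathering = smart
fact_caching = jsonfile
fact_caching_connection = .ansible_facts_cache
fact_caching_timeout = 86400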

Seamless Deployments with GitHub Actions

To automate the deployment process, I leverage GitHub Actions. This integration allows me to trigger Ansible playbooks automatically upon code changes, ensuring that my servers are always up-to-date with the latest configurations.

One of the key workflows is Deploy to Production, which executes the main playbook against the production inventory. This workflow is defined in the ansible-deploy.yml file and is triggered on specific events, such as pushes to the main branch.
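The workflow itself boils down to checking out the repository, installing Ansible, and running the playbook. A simplified sketch is below; the playbook and inventory names are placeholders, and the real workflow also handles SSH keys and secrets, which I have left out here:

name: Deploy to Production

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Ansible
        run: pip install ansible
      - name: Run playbook against production
        run: ansible-playbook -i inventory/production.yml site.yml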

Additionally, I have set up other workflows to maintain code quality and security:

  • Super-Linter: Automatically checks the codebase for syntax errors and adherence to best practices; a minimal example of this workflow is shown after this list.
  • Codacy Security Scan: Analyzes the code for potential security vulnerabilities.
  • Dependabot Updates: Keeps dependencies up-to-date by automatically creating pull requests for new versions.
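These workflows are lightweight to set up. The Super-Linter one, for example, can be as small as this (versions and branch name are illustrative):

name: Lint Code Base

on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: github/super-linter@v4
        env:
          DEFAULT_BRANCH: main
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}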

Local Testing with Vagrant

Before deploying changes to production, it is crucial to test them in a controlled environment. For this purpose, I use Vagrant to spin up virtual machines that mirror my production servers.

The deploy_to_staging.sh script automates this process by:

  1. Starting the Vagrant environment and provisioning it.
  2. Installing required Ansible roles specified in requirements.yml.
  3. Running the Ansible playbook against the staging inventory.

This approach allows me to validate changes in a safe environment before applying them to live servers.
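The script itself is short. A trimmed-down sketch of those three steps (the inventory and playbook file names are placeholders, not necessarily the ones in the repository):

#!/usr/bin/env bash
set -euo pipefail

# 1. Bring up and provision the Vagrant boxes
vagrant up --provision

# 2. Install the Ansible roles listed in requirements.yml
ansible-galaxy install -r requirements.yml

# 3. Run the playbook against the staging inventory
ansible-playbook -i inventory/staging.yml site.yml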

Embracing Open Source and Continuous Improvement

Transparency and collaboration are vital in the open-source community. By hosting my automation setup on GitHub, I invite others to review, suggest improvements, and adapt the configurations for their own use cases.

The repository is licensed under the MIT License, encouraging reuse and modification. Moreover, I actively monitor issues and welcome contributions to enhance the system further.


In summary, by combining Ansible, GitHub Actions, and Vagrant, I have created a powerful and flexible automation framework for managing my servers. This setup not only reduces manual effort but also increases reliability and scalability. I encourage others to explore this approach and adapt it to their own infrastructure needs. What began as a few basic scripts has now evolved into a reliable automation pipeline I rely on every day.

If you are managing servers and find yourself repeating the same configuration steps, I invite you to check out the ansible-servers repository on GitHub. Clone it, explore the structure, try it in your own environment — and if you have ideas or improvements, feel free to open a pull request or start a discussion. Automation has made a huge difference for me, and I hope it can do the same for you.


Advent of Code

I’m starting with Advent of Code (again)

From the AI-generated Wikipedia summary for a 10-year-old:

The Advent of Code is an exciting annual computer programming event that takes place during the holiday season. It’s a fun challenge for programmers of all levels!

Every day in December leading up to Christmas, a new coding puzzle is released on the Advent of Code website. These puzzles are designed to test your problem-solving skills and help you improve your coding abilities.

You can participate by solving each puzzle using any programming language you’re comfortable with. The puzzles start off easy and gradually become more challenging as the days go by. You’ll get to explore different concepts like algorithms, data structures, and logical thinking while having lots of fun!

Not only will you have the opportunity to learn and practice coding, but there’s also a friendly community of fellow participants who share their solutions and discuss strategies on forums or social media platforms.

So if you enjoy coding or want to give it a try, the Advent of Code is a fantastic event for you! It’s a great way to sharpen your programming skills while enjoying the festive spirit during the holiday season.

Back in 2018 I created a GitHub repository with the good intention of working through all the puzzles, starting from the first year, 2015. Well, guess what never happened? ¯\_(ツ)_/¯

This year I’m starting again. I do not promise that I will work on a puzzle every day. Maybe I’ll spend more time procrastinating by setting up GitHub Actions instead. We’ll see…

My take on the Gilded Rose kata

The Gilded Rose Kata by Emily Bache is a staple in refactoring exercises. It offers a deceptively simple problem: refactor an existing codebase while preserving its behavior. I recently worked through the TypeScript version of the kata, and this post documents the transformation from a legacy mess into clean, testable code—with examples along the way.

But before diving into the code, I should mention: this was my very first encounter with TypeScript. I had never written a single line in the language before this exercise. That added an extra layer of learning—on top of refactoring legacy code, I was also picking up TypeScript’s type system, syntax, and tooling from scratch.


🧪 Development Workflow

Pre-Commit Hooks

pre-commit.com is a framework for managing and maintaining multi-language pre-commit hooks. It allows you to define a set of checks (such as code formatting, linting, or security scans) that automatically run before every commit, helping ensure code quality and consistency across a team. Hooks are easily configured in a .pre-commit-config.yaml file and can be reused from popular repositories or custom scripts. It integrates seamlessly with Git and supports many languages and tools out of the box.

I added eslint and gitlint:

- repo: https://github.com/pre-commit/mirrors-eslint
  hooks:
    - id: eslint

- repo: https://github.com/jorisroovers/gitlint
  hooks:
    - id: gitlint
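With the configuration in place, the hooks are activated once per clone and can also be run against the whole repository on demand:

pre-commit install
pre-commit run --all-files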

GitHub Actions

GitHub Actions was used to automate the testing workflow, ensuring that every push runs the full test suite. This provides immediate feedback when changes break functionality, which was especially important while refactoring the legacy Gilded Rose code. The setup installs yarn via npm, then uses yarn to install dependencies, compile, and run the tests, giving consistent results across environments and the confidence to refactor freely while learning TypeScript.

name: Build

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [12.x]

    steps:
      - uses: actions/checkout@v2
      - name: Node.js
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm install -g yarn
        working-directory: ./TypeScript
      - name: yarn install, compile and test
        run: |
          yarn
          yarn compile
          yarn test
        working-directory: ./TypeScript

🔍 Starting Point: Legacy Logic

Originally, everything was handled in a massive updateQuality() function using nested if statements like this:

if (item.name !== 'Aged Brie' && item.name !== 'Backstage passes') {
    if (item.quality > 0) {
        item.quality--;
    }
} else {
    if (item.quality < 50) {
        item.quality++;
    }
}

The function mixed different concerns and was painful to extend.


🧪 Building Safety Nets

Golden master tests are a technique used to protect legacy code during refactoring by capturing the current behavior of the system and comparing it against future runs. In this project, I recorded the output of the original updateQuality() function across many item variations. As changes were made to clean up and restructure the logic, the tests ensured that the external behavior remained identical. This approach was especially useful when the codebase was poorly understood or untested, offering a reliable safety net while improving internal structure.

expect(goldenMasterOutput).toEqual(currentOutput);
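Concretely, a golden master test can be as simple as running the shop for a number of days and comparing the log against a stored snapshot. A sketch, assuming the kata's usual Item and GildedRose classes (the import path, item list, and day count here are illustrative; my actual test covered more variations):

import { Item, GildedRose } from '../app/gilded-rose';

it('matches the recorded golden master', () => {
  const shop = new GildedRose([
    new Item('+5 Dexterity Vest', 10, 20),
    new Item('Aged Brie', 2, 0),
    new Item('Sulfuras', 0, 80),
    new Item('Backstage passes', 15, 20),
  ]);

  const runs: string[] = [];
  for (let day = 0; day < 30; day++) {
    shop.updateQuality();
    runs.push(shop.items.map((i) => `${i.name} ${i.sellIn} ${i.quality}`).join(', '));
  }

  // The stored snapshot plays the role of the golden master recording
  expect(runs).toMatchSnapshot();
});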

🧹 Refactoring: Toward Structure and Simplicity

1. Extracting Logic

I moved logic to a separate method:

private doUpdateQuality(item: Item) {
    // clean, focused logic
}

This isolated the business rules from boilerplate iteration.
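The public updateQuality() then shrinks to plain iteration, something along these lines:

updateQuality(): Item[] {
  for (const item of this.items) {
    this.doUpdateQuality(item);
  }
  return this.items;
}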

2. Replacing Conditionals with a switch

Using a switch statement instead of multiple if/else if blocks makes the code cleaner, more readable, and easier to maintain—especially when checking a single variable (like item.name) against several known values. It clearly separates each case, making it easier to scan and reason about the logic. In the Gilded Rose project, switching to switch also made it easier to later refactor into specialized handlers or classes for each item type, as each case represented a clear and distinct behavior to isolate.

switch (item.name) {
    case 'Aged Brie':
        this.updateBrie(item);
        break;
    case 'Sulfuras':
        break; // no-op
    case 'Backstage passes':
        this.updateBackstage(item);
        break;
    default:
        this.updateNormal(item);
}

This increased clarity and prepared the ground for polymorphism or factory patterns later.


🛠 Polishing the Code

Constants and Math Utilities

Instead of magic strings and numbers, I introduced constants:

const MAX_QUALITY = 50;
const MIN_QUALITY = 0;

I replaced verbose checks with:

item.quality = Math.min(MAX_QUALITY, item.quality + 1);
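The same pattern works in the other direction, clamping at the lower bound:

item.quality = Math.max(MIN_QUALITY, item.quality - 1);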

Factory Pattern

The factory pattern is a design pattern that creates objects without exposing the exact class or construction logic to the code that uses them. Instead of instantiating classes directly with new, a factory function or class decides which subclass to return based on input—like item names in the Gilded Rose kata. This makes it easy to add new behaviors (e.g., “Conjured” items) without changing existing logic, supporting the Open/Closed Principle and keeping the code modular and easier to test or extend.

switch (true) {
    case /^Conjured/.test(item.name):
        return new ConjuredItem(item);
    case item.name === 'Sulfuras':
        return new SulfurasItem(item);
    // ...
}
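Each case returns a small updater object, and those updaters share a base class. A sketch of what that base might look like, with helper names chosen to line up with the ConjuredItem example below (the actual classes in my repository may differ slightly):

abstract class ItemUpdater {
  constructor(protected item: Item) {}

  abstract update(): void;

  protected increaseQuality(amount = 1): void {
    this.item.quality = Math.min(MAX_QUALITY, this.item.quality + amount);
  }

  protected decreaseQuality(amount = 1): void {
    this.item.quality = Math.max(MIN_QUALITY, this.item.quality - amount);
  }

  protected decreaseSellIn(): void {
    this.item.sellIn--;
  }
}

The main update loop then simply asks the factory for an updater and calls update() on it.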

🌟 Feature Additions

With structure in place, adding Conjured Items was straightforward:

class ConjuredItem extends ItemUpdater {
    update() {
        this.decreaseQuality(2);
        this.decreaseSellIn();
    }
}

A corresponding test was added to confirm behavior.
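It pins down the doubled decay, roughly like this (using the kata's classic 'Conjured Mana Cake' item):

it('degrades Conjured items twice as fast', () => {
  const shop = new GildedRose([new Item('Conjured Mana Cake', 3, 6)]);
  shop.updateQuality();

  expect(shop.items[0].quality).toBe(4);
  expect(shop.items[0].sellIn).toBe(2);
});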


🎯 Conclusion

The journey from legacy to clean architecture was iterative and rewarding. Key takeaways:

  • Set up CI and hooks early to enforce consistency.
  • Use golden master tests for safety.
  • Start small with extractions and switch statements.
  • Add structure gradually—factories, constants, classes.
  • With a clean base, adding features like “Conjured” is trivial.

All this while learning TypeScript for the first time!

You can explore the full codebase and history here:
📦 Gilded Rose Refactoring Kata — TypeScript branch

Curious to try it yourself, also in other languages?
Fork Emily Bache’s repo here: GildedRose-Refactoring-Kata on GitHub