Internal DevOps Platform
November 2021 – April 2024
TL;DR
- Solo: designed, built, and operated the platform across 17 projects
- 3,000+ deployments in the first six months of operation
- Per-branch isolated environments with a shared Vault instance for hierarchical config
- The platform managed its own deployments through the same pipeline
How it started
I joined WITH Madrid as a full-stack developer in mid-2020, building data-intensive client platforms with Django and Vue. Alongside that work, I inherited responsibility for the Ansible-based deployment system: the mechanism that got code from a developer’s branch into production for every project the company ran.
The script itself wasn’t slow. The problem was everything around it: any developer could run it from their own machine, using credentials shared across every environment. There was no central record of who deployed what, when, or why. Debugging a failed deployment meant asking around. Rollbacks were manual and stressful.
It worked… until it didn’t, and when it didn’t, it was hard to know where to start.
I made the case for replacing it entirely.
The platform
The company’s brief was straightforward: build an internal deployment platform. That name, and the problem it implied, were essentially the full spec. Scope, design, priorities: mine to define.
I designed and built it from scratch, as the sole person responsible for the platform, moving deployments off developer laptops and into a controlled, observable pipeline.
The core of the system:
- A Python orchestration layer coordinating every step of the deployment lifecycle: build, test, image publication, deployment, and health verification.
- GitHub Actions as the entry point. A push to the right branch triggered the process automatically, with a full audit trail attached to every run.
- Containerised workflows with Docker for full reproducibility across development, staging, and production.
- Per-project, per-environment configuration that eliminated duplicated logic and made onboarding a new project a matter of minutes.
- No shared credentials: each environment had its own access controls, and no developer needed production secrets on their laptop. Configuration was managed centrally through a HashiCorp Vault instance: each project’s settings were stored hierarchically, so a feature branch that lacked its own config would automatically inherit from the develop environment. Production was always its own isolated set.
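The inheritance rule in that last point can be sketched in a few lines. This is an illustrative model only, not the platform's actual code: the names (`SECRET_TREE`, `resolve_config`, the `myproject` paths) are hypothetical, and the real system read from Vault rather than a dict.

```python
# Illustrative model of the hierarchical config lookup. In the real
# platform these values lived in Vault; here a dict stands in for it.
SECRET_TREE = {
    "myproject/develop": {"DB_HOST": "db.dev", "DEBUG": "1"},
    "myproject/feature-login": {"DEBUG": "0"},  # partial override
    "myproject/production": {"DB_HOST": "db.prod", "DEBUG": "0"},
}

def resolve_config(project: str, environment: str) -> dict:
    """Merge a branch environment's config over the develop baseline.

    Production never inherits: it is always read as its own isolated set.
    """
    if environment == "production":
        return dict(SECRET_TREE.get(f"{project}/production", {}))
    merged = dict(SECRET_TREE.get(f"{project}/develop", {}))
    merged.update(SECRET_TREE.get(f"{project}/{environment}", {}))
    return merged
```

A feature branch with only a partial config thus sees its own overrides on top of develop's values, while production stays fully isolated.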
This included the platform’s own deployments: any update to it went through the same pipeline it operated for every other project.
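The orchestration layer's core loop can be sketched as follows. Everything here is illustrative, assuming step names from the lifecycle described above; the real steps were full build/test/deploy actions, not stubs, and the actual function signatures are hypothetical.

```python
# Hedged sketch of the deployment lifecycle loop: run each step in
# order, record an audit entry per step, and stop at the first failure.
from typing import Callable

Step = tuple[str, Callable[[], bool]]

def run_pipeline(steps: list[Step]) -> list[tuple[str, str]]:
    """Run lifecycle steps in order; halt on the first failure.

    Returns the audit trail for the run as (step, outcome) pairs.
    """
    trail: list[tuple[str, str]] = []
    for name, action in steps:
        ok = action()
        trail.append((name, "ok" if ok else "failed"))
        if not ok:
            break
    return trail

# Example with stubbed steps: deploy fails, so health verification
# never runs and the trail records exactly where things stopped.
trail = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: True),
    ("publish-image", lambda: True),
    ("deploy", lambda: False),
    ("verify-health", lambda: True),
])
```

The point of the structure is the audit trail: every run, including the platform deploying itself, produces the same step-by-step record.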
I maintained and evolved the legacy Ansible system in parallel during the transition, until the new platform replaced it entirely.
The outcome
- 3,000+ deployments in the first six months of operation. Each feature branch got its own fully isolated environment: a copy of the development database, independent of every other branch and the live site. Production was only touched when a tag was deliberately cut.
- A full audit trail for every deployment: who triggered it, what changed, what the result was.
- Developers retained deployment autonomy without carrying the operational risk.
- Rollbacks went from a stressful manual process to a single action.
- Operational knowledge moved from people’s heads and individual laptops into the platform.
Beyond building the system, I acted as a technical mentor and bridge between development and operations, driving adoption and resolving the friction that comes with changing how people work.