$ building cloud infrastructure & CI/CD pipelines_
Chemistry graduate from King's College London transitioning into DevOps. Built production-style cloud infrastructure projects using AWS, Docker, Kubernetes and GitHub Actions — with hands-on debugging of networking, CI/CD, and container orchestration issues.
I'm a graduate DevOps engineer with a Chemistry degree from King's College London. I build and operate cloud infrastructure on AWS. I treat infrastructure as code: version-controlled, reproducible, and deployed through automation.
I've built and debugged real systems. That includes container networking failures, Route53 DNS misconfiguration, security group blocks, and non-deterministic CI pipeline outputs. I know what broken looks like and how to trace it.
My projects cover AWS static hosting with S3, CloudFront, and Route53; a containerised FastAPI service deployed via Kubernetes and GitHub Actions; and EC2 web infrastructure with NGINX and VPC networking. I'm currently building depth in Terraform and observability tooling.
Chemistry instilled rigorous analytical thinking and a precision-first approach to problem solving. Debugging infrastructure requires the same mindset as debugging an experiment: isolate variables, form a hypothesis, test systematically, document results.
A FastAPI-based Retrieval-Augmented Generation service built end-to-end: containerised, published, and deployed via an automated pipeline.
// WHY
Chose Kubernetes over plain Docker to surface real orchestration concerns — readiness probes, service discovery, and replica management. Kept app logic minimal deliberately to focus on infrastructure behaviour.
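The orchestration concerns above can be sketched in a Deployment manifest like this (names, image tag, port, and probe path are illustrative placeholders, not the project's actual values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rag-api                # illustrative name
spec:
  replicas: 3                  # replica management handled by the Deployment controller
  selector:
    matchLabels:
      app: rag-api
  template:
    metadata:
      labels:
        app: rag-api
    spec:
      containers:
        - name: api
          image: ghcr.io/example/rag-api:0.1.0    # placeholder image, pinned tag
          readinessProbe:      # the Service only routes traffic to a pod once this passes
            httpGet:
              path: /healthz   # illustrative health endpoint
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 10
```

The readiness probe is what surfaces real orchestration behaviour: a replica that starts slowly simply receives no traffic until it reports ready, instead of returning errors.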
Hands-on AWS infrastructure: EC2 provisioning, NGINX web server setup, VPC networking, and systematic DNS and connectivity debugging.
// WHY
Built on EC2 rather than a managed service to force direct engagement with networking primitives: security groups, VPC routing, and DNS resolution. Understanding these layers is essential before abstracting them away.
A multi-container system built to surface real Docker failure modes: container networking, state persistence, configuration drift, and horizontal scaling with a reverse proxy.
- Mounted a volume at /data to persist the visit counter across container restarts
- Injected REDIS_HOST and REDIS_PORT via Compose environment variables — decoupled config from the image
- Scaled with --scale web=3; resolved the resulting host-port conflict by switching Flask to expose and routing traffic through NGINX
- Debugged with docker logs, docker exec, and docker volume inspect
// WHY
Used Compose over Kubernetes to keep complexity proportional to scope. Chose Redis as a stateful dependency deliberately — it surfaces persistence, networking, and ephemerality issues early. NGINX was introduced only after scaling exposed real port-binding limitations, not pre-emptively.
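A minimal sketch of that Compose topology (image tags are illustrative; the /data path and REDIS_* variables follow the write-up above; the NGINX proxy config itself is omitted for brevity):

```yaml
services:
  web:
    build: .
    expose:
      - "5000"            # expose, not ports:, so --scale web=3 doesn't contend for host ports
    environment:
      REDIS_HOST: redis   # config injected via environment, decoupled from the image
      REDIS_PORT: "6379"
    depends_on:
      - redis
  nginx:
    image: nginx:1.27     # illustrative tag; reverse-proxies across the scaled web replicas
    ports:
      - "80:80"
  redis:
    image: redis:7        # illustrative tag
    volumes:
      - redis-data:/data  # persists the visit counter across container restarts
volumes:
  redis-data:
```

Only NGINX binds a host port; the web replicas stay on the internal Compose network, which is what makes horizontal scaling work.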
Designed and deployed a full AWS static hosting stack. Configured CDN, custom domain, HTTPS termination, and cache management. Debugged real infrastructure issues from stale cache to DNS propagation failures.
- Ran CloudFront cache invalidations (/*) via AWS CLI post-deploy to flush stale edge-cached content
- Debugged a stale index.html after redeployment — root cause was the 24hr default TTL; resolved with a targeted invalidation
- Traced DNS propagation with dig +trace
// WHY
Chose S3 + CloudFront over EC2 hosting because the site is static — no reason to manage a server. Using CloudFront in front of S3 enforces HTTPS, adds edge caching, and keeps the S3 bucket private. Route53 was used to own the full DNS stack end-to-end.
Debugged DNS routing issues between Route53 and an EC2 instance. Used dig and nslookup to trace propagation delays and to identify an A record still pointing to a stale IP after an instance restart.
Diagnosed container networking failures during Docker builds where services couldn't communicate. Traced the issue to a missing network bridge configuration and incorrect port binding between containers.
Handled CI pipeline failures caused by inconsistent AI model outputs at YCX. Implemented JSON schema validation and output sanity checks to make the pipeline predictable and stable.
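A minimal sketch of that kind of output gate, with illustrative keys and bounds rather than the actual schema used at YCX:

```shell
# Reject malformed or out-of-range model output before it reaches later pipeline stages.
model_output='{"label": "positive", "score": 0.93}'   # stand-in for the real model response

status=$(echo "$model_output" | python3 -c '
import json, sys
try:
    data = json.load(sys.stdin)                        # must be valid JSON at all
    assert isinstance(data.get("label"), str)          # required keys with expected types
    assert isinstance(data.get("score"), (int, float))
    assert 0.0 <= data["score"] <= 1.0                 # sanity bound on the value
    print("valid")
except Exception:
    print("invalid")
')
echo "pipeline gate: $status"
```

Failing fast on an invalid payload turns a non-deterministic downstream crash into one predictable, debuggable pipeline step.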
Investigated builds that succeeded locally but failed in CI due to unpinned base image tags and floating dependency versions. Fixed by pinning exact versions and validating across fresh CI environments.
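A sketch of the pinning fix (the specific versions shown are illustrative examples, not the project's actual dependencies):

```dockerfile
# Before: FROM python:3  — a floating tag that resolves to different images over time.
FROM python:3.12.4-slim          # exact tag, so CI and local builds pull the same base

COPY requirements.txt .
# requirements.txt pins exact versions (e.g. fastapi==0.111.0), not ranges like fastapi>=0.100
RUN pip install --no-cache-dir -r requirements.txt
```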
Troubleshot an EC2 web server unreachable from the internet despite NGINX running correctly. Root cause: missing inbound rule on port 80 in the security group. Traced using curl and EC2 Instance Connect.
Recovered commits lost after an accidental hard reset by using git reflog to identify the correct commit hash and restore the branch — without losing any work.
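The recovery pattern can be reproduced in a throwaway repo (file names and commit messages are illustrative):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name you

echo v1 > notes.txt && git add notes.txt && git commit -qm "initial commit"
echo v2 >> notes.txt && git commit -aqm "important work"
lost=$(git rev-parse HEAD)

git reset -q --hard HEAD~1      # the accidental hard reset: "important work" vanishes from the branch
git reflog -n 2                 # reflog still lists the pre-reset HEAD and its hash
git reset -q --hard "$lost"     # in practice, paste the hash shown by reflog
recovered=$(git rev-parse HEAD)
echo "recovered: $recovered"
```

The key point: a hard reset moves the branch pointer but does not delete the commits, and the reflog keeps a local history of where HEAD has been.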
A consistent, layer-by-layer debugging process is how infrastructure failures get diagnosed accurately and fixed permanently. This is the framework I apply to every infrastructure problem.
One active focus. Not four things labelled "in progress" with nothing to show.
Provisioning AWS infrastructure declaratively using Terraform. Moving beyond console-click infrastructure toward version-controlled, reproducible resource definitions. Currently managing VPCs, EC2 instances, S3 buckets, and IAM roles via terraform apply.
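A minimal sketch of what that looks like in HCL (region, AMI ID, and resource names are placeholders):

```hcl
provider "aws" {
  region = "eu-west-2"                # illustrative region
}

resource "aws_security_group" "web" {
  name = "web-sg"
  ingress {                           # the inbound rule whose absence broke the NGINX server above
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  ami                    = "ami-00000000000000000"   # placeholder AMI ID
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.web.id]
}
```

Encoding the security group in code means the port-80 rule can never silently go missing again: it is reviewed in a diff and restored by terraform apply.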
I'm actively seeking DevOps, Cloud Infrastructure, and Platform Engineering opportunities where I can contribute and grow.
Currently building production-style cloud infrastructure projects and documenting debugging workflows on GitHub.