GitOps on Bare Metal: ArgoCD, GitLab, and Self-Hosted Delivery
GitOps has a branding problem. Most content about it assumes you're running GKE with GitHub and a cloud-hosted ArgoCD instance. The principles are universal, but the implementation details diverge sharply when everything -- the Git server, the CI runner, the delivery engine, and the cluster itself -- runs on hardware you own.
This guide covers implementing GitOps on a self-hosted bare-metal Kubernetes cluster. GitLab provides the Git repository and CI pipeline. ArgoCD provides continuous delivery. Nothing leaves your network.
What GitOps actually means
Strip away the marketing and GitOps is a simple idea: Git is the single source of truth for your infrastructure state. An agent in the cluster continuously reconciles actual state against desired state in Git.
That's it. When you want to change something -- deploy a new version, scale a service, update a configuration -- you change a file in Git. The agent (ArgoCD) detects the change and applies it to the cluster.
What you gain:
- Audit trail -- Every change is a Git commit with an author, timestamp, and message. `git log` is your change management system.
- Rollback -- Revert a commit and ArgoCD applies the previous state. No need to remember what changed or run manual rollback commands.
- Drift detection -- If someone `kubectl apply`s something by hand, ArgoCD detects the drift and either alerts or auto-corrects.
- Reproducibility -- The cluster state is defined in files. Lose the cluster, rebuild it from the same repo.
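The rollback mechanic above is plain Git. A minimal sketch in a throwaway local repository (paths and image names are made up; in the real setup the revert commit is pushed and ArgoCD syncs the old state back):

```shell
# Demonstrate rollback-as-a-revert in a temporary repo
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" config user.email demo@example.com
git -C "$repo" config user.name demo

# "Deploy" v1, then v2
echo "image: registry.example.com/app:v1" > "$repo/values.yaml"
git -C "$repo" add values.yaml
git -C "$repo" commit -qm "deploy v1"
echo "image: registry.example.com/app:v2" > "$repo/values.yaml"
git -C "$repo" commit -qam "deploy v2"

# Roll back: one revert commit instead of a manual kubectl rollback
git -C "$repo" revert --no-edit HEAD
grep "app:v1" "$repo/values.yaml"
```

The revert restores the v1 image tag; no one has to remember what the previous state was, because Git does.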
What you lose:
- Speed of ad-hoc changes -- You can't just `kubectl edit` a deployment and move on. Every change goes through Git. This is a feature, but it feels slow when you're debugging at 2 AM.
- Simplicity -- GitOps adds infrastructure (ArgoCD, Git server, CI runners). For a 3-pod hobby cluster, it's overkill. For production, it's essential.
Architecture
The self-hosted stack
The full pipeline:
- GitLab -- Git repository hosting, CI/CD pipelines, container registry (optional if using Harbor)
- GitLab Runner -- Executes CI jobs inside the cluster (builds, tests, image pushes)
- Harbor -- Container registry with vulnerability scanning (Trivy)
- ArgoCD -- Continuous delivery. Watches Git repos, syncs desired state to the cluster
All four run inside the same Kubernetes cluster they manage. Yes, this is a chicken-and-egg problem. We'll address it.
Repository structure
A clean GitOps repository structure matters more than people think. Here's what works:
infrastructure/
├── apps/
│   ├── grafana/
│   │   ├── namespace.yaml
│   │   ├── helmrelease.yaml
│   │   └── values.yaml
│   ├── authentik/
│   │   ├── namespace.yaml
│   │   ├── helmrelease.yaml
│   │   └── values.yaml
│   └── harbor/
│       ├── namespace.yaml
│       ├── helmrelease.yaml
│       └── values.yaml
├── platform/
│   ├── cert-manager/
│   ├── metallb/
│   ├── traefik/
│   └── ceph-csi/
├── argocd/
│   ├── apps.yaml          # App of Apps
│   └── projects.yaml
└── clusters/
    └── production/
        ├── kustomization.yaml
        └── secrets/       # Sealed secrets or SOPS-encrypted
The split matters:
- `platform/` contains infrastructure services that the cluster needs to function: CNI, ingress, storage drivers, cert management. These are deployed first and rarely change.
- `apps/` contains application workloads: monitoring, identity, CI/CD, user-facing services. These change frequently.
- `argocd/` contains ArgoCD's own configuration, including the "App of Apps" pattern.
- `clusters/` contains environment-specific overrides.
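The secrets/ directory deserves a note: plaintext Kubernetes Secrets must never land in Git. With Bitnami's sealed-secrets, the workflow looks roughly like this (the controller namespace and the example password are assumptions; the secret name matches the GitLab values later in this guide):

```shell
# Encrypt the GitLab database password so the ciphertext can live in Git.
# Assumes the sealed-secrets controller runs in kube-system.
kubectl create secret generic gitlab-postgresql-secret \
  --namespace gitlab \
  --from-literal=password='CHANGE-ME' \
  --dry-run=client -o yaml \
  | kubeseal --controller-namespace kube-system --format yaml \
  > clusters/production/secrets/gitlab-postgresql-secret.yaml
```

Only the SealedSecret ciphertext is committed; the controller decrypts it in-cluster, so the repo stays safe to mirror and back up.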
App of Apps pattern
ArgoCD's "App of Apps" pattern solves the bootstrap problem. You create a single ArgoCD Application that points to a directory of Application manifests. ArgoCD reads the directory, creates each child Application, and each child Application manages its own set of resources.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.yourdomain.com/infrastructure/gitops.git
    targetRevision: main
    path: argocd
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
This single Application creates and manages all other Applications. Add a new YAML file to the argocd/ directory, push to Git, and ArgoCD creates the new application automatically.
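A child Application in that directory looks like any other Application manifest. A sketch for the Grafana app from the repository layout above (repo URL and paths follow this guide's conventions):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: grafana
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.yourdomain.com/infrastructure/gitops.git
    targetRevision: main
    path: apps/grafana
  destination:
    server: https://kubernetes.default.svc
    namespace: grafana
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```

The root Application creates this one; this one manages everything under apps/grafana.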
Deploying GitLab
GitLab is the heaviest component in this stack. A production deployment includes:
- Webservice (Rails application, 2+ replicas)
- Gitaly (Git storage backend)
- GitLab Shell (SSH access)
- Sidekiq (background job processor)
- KAS (Kubernetes Agent Server)
- PostgreSQL (HA, 3-node via CloudNativePG)
- Redis
- Registry (optional if using Harbor)
- Runner (CI/CD job execution)
- MinIO (object storage for uploads, artifacts, LFS)
Deploy via the official GitLab Helm chart:
helm repo add gitlab https://charts.gitlab.io/
helm install gitlab gitlab/gitlab \
  --namespace gitlab \
  --create-namespace \
  -f gitlab-values.yaml
Key values to configure:
global:
  hosts:
    domain: yourdomain.com
    gitlab:
      name: gitlab.yourdomain.com
  ingress:
    class: traefik
    configureCertmanager: false
    tls:
      enabled: true
  # Use external PostgreSQL (CloudNativePG)
  psql:
    host: gitlab-postgresql-rw.gitlab.svc
    password:
      secret: gitlab-postgresql-secret
      key: password

# Disable built-in PostgreSQL -- we use CloudNativePG
postgresql:
  install: false

# GitLab Runner for CI/CD
gitlab-runner:
  install: true
  runners:
    privileged: false
    config: |
      [[runners]]
        [runners.kubernetes]
          namespace = "gitlab"
          image = "ubuntu:24.04"
GitLab Runner security
The runner executes user-defined CI jobs. In a shared cluster, this is a security boundary. Don't run privileged runners unless you absolutely need Docker-in-Docker builds. For container image builds, use Kaniko or Buildah instead -- they build OCI images without requiring Docker socket access or privileged mode.
Deploying ArgoCD
ArgoCD is lighter than GitLab. A production deployment:
helm repo add argo https://argoproj.github.io/argo-helm
helm install argocd argo/argo-cd \
  --namespace argocd \
  --create-namespace \
  -f argocd-values.yaml
Key configuration:
server:
  ingress:
    enabled: true
    ingressClassName: traefik
    hosts:
      - argocd.yourdomain.com
    tls:
      - secretName: argocd-tls
        hosts:
          - argocd.yourdomain.com

# Connect to GitLab
configs:
  repositories:
    infrastructure:
      url: https://gitlab.yourdomain.com/infrastructure/gitops.git
      type: git
      passwordSecret:
        name: argocd-gitlab-credentials
        key: password
      usernameSecret:
        name: argocd-gitlab-credentials
        key: username
SSO integration
If you're running Authentik (and you should be), configure ArgoCD to use OIDC:
configs:
  cm:
    url: https://argocd.yourdomain.com
    oidc.config: |
      name: Authentik
      issuer: https://auth.yourdomain.com/application/o/argocd/
      clientID: <client-id>
      clientSecret: $oidc.authentik.clientSecret
      requestedScopes:
        - openid
        - profile
        - email
Now ArgoCD login goes through Authentik. One identity provider for everything.
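OIDC handles authentication; authorization still goes through ArgoCD's RBAC. A sketch mapping an Authentik group to ArgoCD's admin role via the Helm values (the group name `argocd-admins` is an assumption, and the `groups` scope must also be added to `requestedScopes` for the claim to arrive in the token):

```yaml
configs:
  rbac:
    # Evaluate group claims from the OIDC token
    scopes: "[groups]"
    # Everyone who isn't matched below gets read-only access
    policy.default: role:readonly
    policy.csv: |
      g, argocd-admins, role:admin
```

Manage group membership in Authentik, and access to ArgoCD follows automatically.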
The chicken-and-egg problem
ArgoCD manages applications in the cluster. But ArgoCD itself runs in the cluster. GitLab provides the Git source. But GitLab runs in the cluster. How do you bootstrap?
The answer: manual bootstrap, then hand off to GitOps.
- Install RKE2 and Cilium (manual)
- Install ArgoCD via Helm (manual)
- Install GitLab via Helm (manual)
- Push all manifests to GitLab
- Configure ArgoCD to watch the GitLab repo
- ArgoCD now manages everything -- including itself
From this point forward, changes go through Git. ArgoCD manages its own Helm release. GitLab manages its own Helm release. Updates to either are committed to Git and ArgoCD applies them.
What if ArgoCD breaks itself? This is why you keep the Helm values files and the helm install commands documented. If ArgoCD's self-managed update goes wrong, you can manually reinstall it from the command line, reconnect it to the Git repo, and it will re-sync everything.
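That recovery path is short enough to keep in a runbook. A sketch, assuming the values file and root Application manifest from this guide are checked out locally:

```shell
# Reinstall ArgoCD from the documented values file
helm upgrade --install argocd argo/argo-cd \
  --namespace argocd --create-namespace \
  -f argocd-values.yaml

# Re-apply the root App of Apps; ArgoCD recreates every child Application
kubectl apply -n argocd -f argocd/apps.yaml
```

Because all state lives in Git, a broken ArgoCD is an inconvenience, not a disaster: reinstall, point it back at the repo, and it converges.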
CI/CD pipeline: from commit to deployment
A typical pipeline for an application running in this stack:
# .gitlab-ci.yml
stages:
  - build
  - deploy

build:
  stage: build
  image:
    # The :debug tag includes a shell, which GitLab CI needs to run script steps
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context $CI_PROJECT_DIR
      --dockerfile Dockerfile
      --destination registry.yourdomain.com/$CI_PROJECT_PATH:$CI_COMMIT_SHA
      --destination registry.yourdomain.com/$CI_PROJECT_PATH:latest

deploy:
  stage: deploy
  image: alpine/git
  script:
    # Update the image tag in the GitOps repo (assumes a token with write access)
    - git clone https://gitlab.yourdomain.com/infrastructure/gitops.git
    - cd gitops
    - git config user.email "ci@yourdomain.com"
    - git config user.name "GitLab CI"
    # Double quotes around the sed expression so the CI variables actually expand
    - 'sed -i "s|image: registry.yourdomain.com/.*|image: registry.yourdomain.com/$CI_PROJECT_PATH:$CI_COMMIT_SHA|" apps/$APP_NAME/values.yaml'
    - git add .
    - 'git commit -m "deploy: $CI_PROJECT_NAME $CI_COMMIT_SHA"'
    - git push
The pipeline:
- Builds the container image using Kaniko (no Docker daemon needed)
- Pushes to Harbor
- Updates the image tag in the GitOps repo
- ArgoCD detects the change and deploys
The developer pushed code. GitLab built it. Harbor stored it. ArgoCD deployed it. No one ran kubectl manually.
Monitoring the pipeline
ArgoCD exposes Prometheus metrics out of the box. Key metrics to watch:
- `argocd_app_info` -- Application sync status. Alert on any app in `OutOfSync` or `Degraded` state.
- `argocd_app_sync_total` -- Sync operations over time. A sudden spike may indicate a problem.
- `argocd_git_request_total` -- Git fetch operations. If this drops to zero, ArgoCD has lost contact with GitLab.
In Grafana, the ArgoCD community dashboard gives you visibility into every application, its sync status, health, and recent sync history.
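The OutOfSync alert above can be expressed as a PrometheusRule. A sketch, assuming the Prometheus Operator is installed and scraping ArgoCD's metrics endpoints (rule name and thresholds are choices, not from this guide):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: argocd-alerts
  namespace: argocd
spec:
  groups:
    - name: argocd
      rules:
        - alert: ArgoAppOutOfSync
          # Fires when an application stays OutOfSync past one sync cycle
          expr: argocd_app_info{sync_status="OutOfSync"} == 1
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "ArgoCD app {{ $labels.name }} has been OutOfSync for 15 minutes"
```

The `for: 15m` window avoids paging on the brief OutOfSync state every deploy passes through.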
The discipline tax
GitOps requires discipline. Every change through Git. Every secret encrypted before committing. Every manual fix followed by a commit that codifies it.
When someone `kubectl apply`s a quick fix at 2 AM and doesn't commit it, ArgoCD will revert it on the next sync. This feels painful in the moment. But the alternative -- undocumented changes accumulating until nobody knows the actual state of the cluster -- is worse.
The discipline tax is the price of reproducibility. Pay it.