Introduction
GitOps uses Git as the single source of truth for both application code and infrastructure. You can automate your deployment pipeline with traceability and resilience by integrating GitHub Actions, GitHub Container Registry (GHCR), Kubernetes, and Argo CD. This post outlines setting up a GitOps-driven CI/CD workflow for a React Vite app, from code push to production deployment.
GitHub Actions Workflow
This workflow automates the CI and triggers CD via Argo CD.
Step 1: Build the React Application
Purpose: Compile and bundle the React application.
- Checks out the source code from the repository.
- Sets up a Node.js 18 environment.
- Installs pnpm and the project dependencies.
- Executes the build command: pnpm run build.
Note: Using pnpm instead of npm or yarn reduced the build time by approximately 40%.
Step 2: Build and Push Docker Image
Purpose: Containerize the application and push the image to a container registry.
- Uses Docker Buildx for optimized multi-stage builds.
- Authenticates with GitHub Container Registry (GHCR).
- Builds the Docker image and tags it using the Git commit SHA.
- Pushes the image to GHCR for versioned storage and deployment.
Step 3: Update Kubernetes Manifest and Trigger Deployment
Purpose: Update the Kubernetes deployment with the new image version and trigger a GitOps-based deployment.
- Checks out the gitops-infra repository containing the Kubernetes manifests.
- Uses sed to update the image tag in deployment.yaml.
- Commits and pushes the updated manifest if any changes are detected.
- Argo CD automatically detects the commit and syncs the changes to the Kubernetes cluster, completing the deployment pipeline (a sample Argo CD Application manifest is sketched after the Kubernetes manifests below).
# .github/workflows/ci-cd.yaml
name: Build Step

on:
  push:
    branches: ['master']

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18.x]
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install pnpm
        run: npm install -g pnpm
      - name: Cache pnpm and node_modules
        uses: actions/cache@v3
        with:
          path: |
            ~/.pnpm-store
            node_modules
          key: ${{ runner.os }}-pnpm-${{ hashFiles('**/pnpm-lock.yaml', '**/package.json') }}
      - name: Install Dependencies
        run: pnpm install --frozen-lockfile
      - name: Build Project
        run: pnpm run build

  build-and-push-docker-image:
    name: Build Docker Image and Push to GHCR
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v3
        name: Checkout Code
      - uses: docker/setup-buildx-action@v2
        name: Set up Docker Buildx
      - uses: docker/login-action@v2
        name: GHCR Login
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.G_TOKEN }}
      - uses: docker/build-push-action@v2
        name: Build and Push Docker Image
        with:
          context: ./
          push: ${{ github.ref == 'refs/heads/master' }}
          tags: |
            ghcr.io/repo_name/gitops-01:${{ github.sha }}
          cache-from: type=registry,ref=ghcr.io/repo_name/gitops-01:latest
          cache-to: type=inline

  update-manifest-stage:
    runs-on: ubuntu-latest
    needs: build-and-push-docker-image
    steps:
      - uses: actions/checkout@v3
        with:
          repository: repo_name/gitops-infra
          ref: 'master'
          token: ${{ secrets.G_TOKEN }}
      - name: Configure Git
        run: |
          git config --global user.email "demo@gmail.com"
          git config --global user.name "demo"
      - name: Update Deployment Image
        run: |
          sed -i "s#image: .*#image: ghcr.io/repo_name/gitops-01:${{ github.sha }}#g" deployment.yaml
      - name: Detect Changes
        id: check_changes
        run: |
          git diff --quiet || echo "changes_detected=true" >> $GITHUB_OUTPUT
      - name: Commit & Push if Changed
        if: steps.check_changes.outputs.changes_detected == 'true'
        run: |
          git add deployment.yaml
          git commit -m "Update image to ghcr.io/repo_name/gitops-01:${{ github.sha }}"
          git push origin master
Multi-Stage Dockerfile
First stage
- Based on node:18-alpine. Installs dependencies and runs npm run build to generate /app/dist.
Second stage
- Based on a minimal nginx:1.23-alpine. Copies the static files from the build stage into Nginx’s default web root. Exposes port 80. The final image is small, containing no build tools or Node runtime.
# Stage 1: Build the React application
FROM node:18-alpine AS build-step
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Serve the built app using Nginx
FROM nginx:1.23-alpine
COPY --from=build-step /app/dist /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
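Before wiring this into CI, you can verify the image locally. The commands below are a minimal sketch; the tag gitops-01:local is only a placeholder for local testing.

# Build the image with the multi-stage Dockerfile
docker build -t gitops-01:local .

# Run it and map the container's port 80 to localhost:8080
docker run --rm -p 8080:80 gitops-01:local

The app should then be reachable at http://localhost:8080.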
Kubernetes Manifests
These manifests define how the application runs in Kubernetes.
deployment.yaml
This Deployment runs 4 replicas of a stateless React app served via Nginx, using an image pinned to a specific Git commit SHA in GitHub Container Registry (GHCR). It authenticates to the registry with an imagePullSecret and exposes port 80 for HTTP traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: react-app
  template:
    metadata:
      labels:
        app: react-app
    spec:
      imagePullSecrets:
        - name: ghcr-secret
      containers:
        - name: react-app
          image: ghcr.io/username/gitops-01:1fd1b2ca4f29d1fe3b49bdb03a2956c5a5e68513
          imagePullPolicy: Always
          ports:
            - containerPort: 80
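The ghcr-secret referenced above must exist in the target namespace before the pods can pull the image from GHCR. A minimal sketch of creating it with kubectl, assuming a GitHub personal access token with the read:packages scope (the username, token, and email values are placeholders):

# Create a registry pull secret for GHCR in the app's namespace
kubectl create secret docker-registry ghcr-secret \
  --docker-server=ghcr.io \
  --docker-username=<github-username> \
  --docker-password=<personal-access-token> \
  --docker-email=<email>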
service.yaml
Exposes the app via NodePort on port 31000.
apiVersion: v1
kind: Service
metadata:
  name: react-app
spec:
  type: NodePort
  selector:
    app: react-app
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 31000
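Argo CD Application (example)
The pipeline assumes an Argo CD Application is already watching the gitops-infra repository and auto-syncing it to the cluster. A minimal sketch of what that Application could look like; the repository URL, path, and namespaces are assumptions and should be adjusted to your own setup.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: react-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/repo_name/gitops-infra.git
    targetRevision: master
    path: .    # directory containing deployment.yaml and service.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

With automated sync enabled, every commit pushed by the workflow’s update-manifest-stage is rolled out without any manual sync.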
Conclusion
This setup enhances developer productivity, ensures versioned deployments, and brings traceable, auditable automation to your delivery workflow, making it ideal for modern cloud-native teams.
Drop a query if you have any questions regarding Argo CD, and we will get back to you quickly.
FAQs
1. How do I roll back a bad deployment?
ANS: – Simply revert the Git commit that updated the image tag in your Kubernetes manifest. Once pushed, Argo CD will detect the change and automatically redeploy the previous version to the cluster; no manual intervention is required.
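For example, from the gitops-infra repository (a sketch; the commit reference is a placeholder):

# Revert the commit that bumped the image tag in deployment.yaml
git revert <bad-commit-sha>
git push origin master

Argo CD detects the new commit and syncs the previous image back to the cluster.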
2. Why use a multistage Dockerfile?
ANS: – To produce lean and secure Docker images. The first stage handles building the application, while the second serves it using a minimal Nginx image. This approach removes unnecessary build tools and dependencies from the final image, reducing its size and attack surface for vulnerabilities.
WRITTEN BY Karthick S