Deploying a Simple E-Commerce Website on AKS


Starting a journey into the world of cloud technologies can be as daunting as it is thrilling, especially when it comes to navigating the complexities of Kubernetes, the open-source system for automating deployment, scaling, and management of containerized applications. This March, I planned to learn Kubernetes through Kodekloud and coincidentally discovered the Kubernetes Resume Challenge by Forrest Brazeal. Since I was already on the path to learning Kubernetes, and the challenge enticed me to deploy a simple e-commerce site on the cloud, fostering learning through building, I embraced it. This challenge, a Kubernetes-flavored follow-up to the Cloud Resume Challenge, is not just a test of one's technical prowess but a rite of passage for those looking to cement their place in the cloud computing domain. In this blog, I recount my experiences, learnings, and occasional stumbles as I worked through it, from setting up my initial environment to deploying my first containerized application. This narrative aims not only to share my journey but also to serve as a beacon for those who might follow in these footsteps, shedding light on the intricacies of Kubernetes and cloud architecture. Join me as I unravel the layers of this challenge, offering insights, tips, and reflections on this pivotal experience.

Objective of the Challenge

This project highlights proficiency in Kubernetes and containerization, demonstrating the ability to deploy, scale, and manage web applications efficiently in a K8s environment, underscoring cloud-native deployment skills.

Step 1: Certification

As I mentioned earlier, I've just started learning Kubernetes from Kodekloud. So, I don't have any certifications yet, but I'm hoping to earn one this year :)

Step 2: Containerize Your E-Commerce Website and Database

A. Web Application Containerization

If you want to deploy your application, the first step is to containerize it, which means building an image and pushing it to Docker Hub. So I created a Dockerfile at the root of the application with the following contents:

# Base image with PHP 7.4 and Apache pre-installed
FROM php:7.4-apache

# Install the mysqli extension so PHP can talk to the database
RUN apt-get update && \
    docker-php-ext-install mysqli

# Copy the application source into Apache's document root
COPY /app /var/www/html

# Apache serves the site on port 80
EXPOSE 80

Now build the image by running docker build -t yourusername/imagename:tag . , replacing the placeholders with your actual values.
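Before pushing, it can be worth a quick local smoke test to confirm Apache actually serves the app. A sketch, using the same placeholder image name as above (this needs a local Docker daemon):

```shell
# Run the freshly built image, mapping host port 8080 to port 80 in the container
docker run --rm -d -p 8080:80 --name ecom-test yourusername/imagename:tag

# The site should answer with an HTTP response on localhost:8080
curl -I http://localhost:8080

# Clean up the test container
docker stop ecom-test
```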

Once the image is built, you can push it to Docker Hub using the docker push command:

docker push yourusername/imagename:tag

Ensure that the name and tag in the docker push command match exactly what you used in the docker build command.

Now our web application Docker image is available on Docker Hub.

B. Database Containerization

For our database component, we won't need to create a custom Docker image; instead, we'll simply pull an image from the public Docker Hub registry.

Notice that there is a db-load-script.sql script; it's essential to understand what it does before proceeding. In short, it seeds the database with the schema and sample product data the site expects.

Step 3: Set Up Kubernetes on a Public Cloud Provider

Now it's time to set up our Kubernetes cluster on a public cloud provider. I've chosen AKS because I already have a free tier activated there, but you can choose any provider you like.

I just followed this guide. After completing all the steps, it took only a few minutes to create the cluster.
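Once the cluster exists, you can fetch its credentials and confirm kubectl can reach it. The resource group and cluster names below are placeholders, so substitute your own:

```shell
# Merge the AKS cluster's credentials into your local kubeconfig
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# If the nodes show up as Ready, the cluster is good to go
kubectl get nodes
```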

Step 4: Deploy Your Website to Kubernetes

In this step we will deploy our website to Kubernetes. For this we'll need a Kubernetes resource called a Deployment, which will use the Docker image we previously pushed to our Docker Hub repository.

In this step, we've also set up another Deployment resource to host our mariadb image. However, configuring this resource involves additional steps, such as specifying the ROOT PASSWORD for the database and setting up the db-load-script.sql.
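The root password shouldn't sit in plain text in the manifest, so I stored the credentials in a Secret. A quick imperative sketch (the literal values here are placeholders, and the Secret name matches the one the Deployment below references):

```shell
# Create the Secret the mariadb Deployment reads its credentials from
kubectl create secret generic mariadb-secret \
  --from-literal=username=ecomuser \
  --from-literal=password='S0meStr0ngP@ss'
```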

As for the db-load-script.sql, we've created a ConfigMap Kubernetes resource to store its data. ConfigMaps are great for storing non-confidential data in key-value pairs.
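Rather than pasting the SQL into a YAML file by hand, the ConfigMap can be created straight from the script file; the name matches the one mounted in the Deployment shown later:

```shell
# Store db-load-script.sql in a ConfigMap so it can be mounted
# into the container's /docker-entrypoint-initdb.d directory
kubectl create configmap mariadb-initdb-config \
  --from-file=db-load-script.sql
```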

To create these Kubernetes objects, we have to write definition files in YAML (which I don't love; outside an IDE, it's easy to make indentation errors). A useful trick is to use the imperative approach to generate the YAML for you. This can save a lot of time, especially when you're taking exams, and it's less error-prone.

For instance, if you wish to create a Deployment definition file, you can execute the command:

k create deploy site-deploy --image=busybox --dry-run=client -o yaml > site-deploy.yaml

Now lets break down this command:

  • kubectl: This is the command-line tool for interacting with the Kubernetes API. It allows you to deploy applications, inspect and manage cluster resources, and view logs. In the above command I've used k instead of kubectl because I've set an alias for it to save more time. You can also do that by running the command alias k=kubectl .

  • create deploy: This is a shorthand for create deployment. A deployment in Kubernetes is a resource that can manage a set of replicas of a pod. Deployments are used to create and update instances of your application.

  • site-deploy: This is the name given to the deployment being created. It's a user-defined identifier that you can use to refer to this deployment within Kubernetes commands.

  • --image=busybox: This specifies the container image to be used for the pods managed by this deployment. In this case, it's using busybox, which is a lightweight Linux distribution useful for many tasks. Each pod in this deployment will start with this container image.

  • --dry-run=client: This option tells kubectl to simulate the command execution without actually performing any actions on the cluster. It's useful for generating configuration files or testing commands to ensure they're correctly formed without risking changes to your cluster's state. The client mode performs the dry-run operation on the client side, without sending anything to the server.

  • -o yaml: This flag specifies the output format for the command. In this case, it's set to yaml, which is a human-readable data serialization standard, commonly used for configuration files. Kubernetes extensively uses YAML for defining resources.

  • > site-deploy.yaml: This part of the command redirects the output of the kubectl command into a file named site-deploy.yaml. Instead of displaying the generated YAML on the screen, it saves it to this file. This YAML file contains the deployment configuration based on the parameters provided in the command.

The resulting site-deploy.yaml file contains a YAML representation of a Kubernetes deployment resource configured to use the busybox image. This file can be edited further if needed and then applied to a Kubernetes cluster with kubectl apply -f site-deploy.yaml, creating the deployment as configured.

Note: you can also create Pods, Services, and other resources with this same technique.

Configuring the DB:

Generally, when I work on an application, I always try to finish the backend first, because without data there's nothing to see in the frontend, right? So we'll deploy the database first (just use the imperative approach shown above and edit the fields as needed).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb:latest
        resources: {}
        ports:
        - containerPort: 3306
        env:
        - name: MARIADB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: mariadb-secret
        - name: MARIADB_USER
          valueFrom:
            secretKeyRef:
              key: username
              name: mariadb-secret
        volumeMounts:
          - name: mariadb-initdb
            mountPath: /docker-entrypoint-initdb.d
          - name: pvc-storage
            mountPath: /var/lib/mysql
      volumes:
      - name: mariadb-initdb
        configMap:
          name: mariadb-initdb-config
      - name: pvc-storage
        persistentVolumeClaim:
          claimName: mariadb-pvc

This snippet is the final version from the challenge, which is why you can already see the persistent volume mounted here.

After the successful deployment you can remotely connect to the Pod.

Once connected, log in to the database.

Check whether the databases were created, specifically ecomdb. There should also be a products table in the ecomdb database. All of this predefined data is created by the db-load-script.sql.
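As a rough sketch of that verification (the Pod name is whatever `kubectl get pods` shows for the mariadb Deployment):

```shell
# Open a shell inside the running mariadb Pod
kubectl exec -it <mariadb-pod-name> -- bash

# Inside the container: log in as root (you'll be prompted for the password)
mariadb -u root -p

# Then, at the SQL prompt:
#   SHOW DATABASES;              -- ecomdb should be listed
#   USE ecomdb;
#   SHOW TABLES;                 -- the products table should exist
#   SELECT COUNT(*) FROM products;
```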

Once everything is verified, create a separate user and grant it access to the database. It's best practice not to use the root user for every task.

CREATE USER '<new_user>'@'%' IDENTIFIED BY '<password>';
GRANT ALL PRIVILEGES ON <database_name>.* TO '<new_user>'@'%';

FLUSH PRIVILEGES;

Take note of the username and password we've given this user, as the website Pod will use it to connect to the database.

Configuring the Website:

With our database ready, our website now possesses the requisite variables for authenticating with the database.

However, before proceeding, we need to make adjustments in the app/index.php file to ensure our PHP application can fetch the database connection strings that we will provide via Environment Variables.

These environment variables will then be defined in our Deployment definition file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: site-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: site-deployment
  template:
    metadata:
      labels:
        app: site-deployment
    spec:
      containers:
      - name: ecom-site
        image: bishal2469/my_php_app:v1
        env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              key: DB_HOST
              name: site-configmap
        - name: DB_USER
          valueFrom:
            configMapKeyRef:
              key: DB_USER
              name: site-configmap
        - name: DB_NAME
          valueFrom:
            configMapKeyRef:
              key: DB_NAME
              name: site-configmap
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: site-secret
        - name: FEATURE_DARK_MODE
          valueFrom:
            configMapKeyRef:
              key: FEATURE_DARK_MODE
              name: feature-toggle-config
        livenessProbe:
          httpGet:
            path: /healthcheck.php
            port: 80
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 15
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /db_healthcheck.php
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        ports:
        - containerPort: 80

Step 5: Expose Your Website

For me, this is the most interesting part, because after this step we should be able to access our website through a link. Now, it's time to set up a Kubernetes Service to make our Deployment accessible to external users.

apiVersion: v1
kind: Service
metadata:
  name: site-service
spec:
  type: LoadBalancer
  selector:
    app: site-deployment
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80

It's crucial to ensure the selector section in your definition file contains the correct values (in our case, the labels from our Deployment's Pod template). When creating a Service for standalone Pods, use the Pods' labels here instead.

Once the website Service is established, it exposes an endpoint through which we can access the web application. We can find this endpoint by retrieving the details of the Service, for example with kubectl describe svc site-service.
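A quicker way to grab just the endpoint (the EXTERNAL-IP column may show `<pending>` for a minute while Azure provisions the load balancer):

```shell
# The EXTERNAL-IP column is the public address of the site
kubectl get svc site-service

# Or pull the IP out directly with a JSONPath query
kubectl get svc site-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```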

You can access my website with this link until 3rd April 2024, because after that my free subscription will end.

I had to delete the cluster early because it was draining my credits too fast. Sorry.

Step 6: Implement Configuration Management

In this step, we'll create a ConfigMap with the data of FEATURE_DARK_MODE set to true. Subsequently, we'll need to adjust our app/ code to accommodate this configuration by adapting to the value of the Environment Variable. Finally, we'll modify our website Deployment to include the ConfigMap.

We'll implement the Dark Mode feature by creating a separate .css file specifically for this mode. Our approach involves checking whether the environment variable FEATURE_DARK_MODE is set to true. Once confirmed, we'll render the dark mode style by linking the appropriate .css file in our application.

I don't come from a web dev background, so this step was a bit challenging for me, but a fellow challenger Edward Allen Mercado's blog helped me solve it. So huge thanks to him :).

Step 7: Scale Your Application

Now we'll scale our application to handle a larger amount of traffic. You can scale your application by running the following command:

 kubectl scale deployment/yoursitedeploymentname --replicas=6

This command increases the number of Pods in your Deployment to 6. Remember, though, that it doesn't update the replica count in your Deployment definition file on disk. You can watch the new Pods come up by running:

kubectl get po -w

Once the scaling is done, check that the newly created Pods are in the Running state, then visit the website endpoint and verify that it behaves as expected, without any errors, despite the increased load.

Step 8: Perform a Rolling Update

In this step our task is to update the website's body with a promotional banner. To accomplish this, we'll need to modify the code in the app/ directory accordingly. Once this is done, we'll have to rebuild and push the image to Docker Hub. Don't forget to tag the image with v2 or something like that.

After updating the Docker image, we'll have to make sure that our website Deployment is using the latest version of the image. We can do this either by deleting and recreating the Deployment or by using the following command:

kubectl set image deployment/<web_deployment_name> <container_name>=<your_docker_repo>:<new_tag>

Remember, like with scaling, this change updates the live state in the cluster but does not modify the original deployment YAML file.

Use kubectl rollout status deployment/deploymentname to watch the rolling update process.

Step 9: Roll Back a Deployment

Oops! Looks like the banner we deployed in the previous step introduced a bug. To fix this bug, we need to roll back the deployment to the previous state. We can do so easily by applying the following command:

kubectl rollout undo deployment/<website_deployment_name>

Once the rollout completes successfully, verify that the website has returned to its previous state, without the banner, ensuring the bug has been resolved.

Step 10: Autoscale Your Application

It's time to implement autoscaling to ensure optimal performance under varying workloads. To achieve this, we'll utilize the Horizontal Pod Autoscaler resource. Note that CPU-based autoscaling only works if your container spec defines CPU resource requests and the cluster is running a metrics server (AKS ships one by default).

Simply execute the following command to implement autoscaling:

kubectl autoscale deployment <website_deployment_name> --cpu-percent=50 --min=2 --max=10

You can monitor the behavior of the Horizontal Pod Autoscaler and your Pods by executing the following commands:

kubectl get hpa -w
kubectl get pod -w

This allows you to observe how the autoscaler adjusts the number of Pods based on the generated load, ensuring optimal resource utilization.
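To actually see the HPA react, you need load. A common trick from the Kubernetes docs is a throwaway busybox Pod hammering the Service in a loop (the Service name below assumes the site-service we created earlier):

```shell
# Run a temporary Pod that continuously requests the site,
# driving CPU usage up so the HPA scales the Deployment out;
# Ctrl+C and the --rm flag clean it up afterwards
kubectl run load-generator --image=busybox --restart=Never -it --rm -- \
  /bin/sh -c "while true; do wget -q -O- http://site-service; done"
```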

Step 11: Implement Liveness and Readiness Probes

In this phase, we'll enhance the reliability of our website by adding liveness and readiness probes. A readiness probe ensures a Pod only receives traffic once it's verified as working, while a liveness probe restarts the container if it stops responding during its lifecycle.

To achieve this, we'll modify the Deployment definition to include these probes and then recreate our Deployment.

In my case, I've used dedicated paths backed by separate PHP files, /healthcheck.php and /db_healthcheck.php, to verify the web server and the database connection respectively.

If you'd like to try this on my website endpoint, simply append the mentioned path to the URL.
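You can poke the same endpoints the probes use from outside the cluster, for example with curl (substitute your Service's external IP):

```shell
# 200 OK means the web server is alive
curl -i http://<external-ip>/healthcheck.php

# This one also exercises the database connection
curl -i http://<external-ip>/db_healthcheck.php
```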

This implementation ensures that our website is always responsive and maintains its availability, contributing to a seamless user experience.

Step 12: Utilize ConfigMaps and Secrets

In this step, we are instructed to revisit the implementation of our database and website to ensure the secure management of database connection strings and feature toggles, avoiding hardcoding them in the application. I had already implemented this step earlier, knowing that it is best practice to use ConfigMaps for storing non-confidential data and Secrets for confidential data.

This approach ensures that our application's sensitive information remains protected, contributing to a more robust and secure deployment.

It is important to remember that never pushing Kubernetes Secrets to GitHub (or any version control system) and enabling encryption at rest for Secrets are both considered best practices for managing sensitive information in Kubernetes environments.

Bonus Credit

Package Everything in Helm

I was not familiar with Helm, so when I saw that I had to use it, I first completed the Helm course from Kodekloud to gain insights about it. In summary, Helm simplifies Kubernetes application deployment and management by packaging applications and their dependencies into charts, which can be easily deployed, updated, and shared. It manages Kubernetes resources through templates, allowing for reusable and customizable deployments. Helm also tracks the version history of deployments, making it easier to roll back to previous versions if necessary, thereby enhancing the manageability and deployability of complex applications within Kubernetes clusters.

You can use the following command to create a helm chart with template:

helm create chartname

To deploy your application on the Kubernetes cluster, run:

helm install releasename pathtochart

Tip: use helm lint <chart_path> to validate a Helm chart. This command checks the chart for possible issues or mistakes without deploying it.

To perform a dry run of a Helm chart installation (seeing what would be installed without actually deploying anything), use the --dry-run flag with the helm install command like this:

helm install releasename pathtochart --dry-run --debug

Implement Persistent Storage

Persistent storage in Kubernetes offers a way to store data that persists beyond the lifecycle of individual pods. This is crucial for stateful applications, such as databases, that need to save data permanently, even when pods are destroyed or recreated.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pvc
spec:
  resources:
    requests:
      storage: 250M
  accessModes:
    - ReadWriteOnce
  storageClassName: default

Implement Basic CI/CD Pipeline

Automating the build and deployment process using GitHub Actions offers several benefits that streamline development workflows, improve productivity, and ensure consistency. Here are some of the key advantages:

  1. Continuous Integration/Continuous Deployment (CI/CD): GitHub Actions facilitate CI/CD practices by allowing you to automatically run your build, test, and deployment scripts whenever a specific event occurs in your repository, such as a push to a particular branch or a pull request. This ensures that your code is always in a deployable state and reduces the manual effort involved in deploying applications.

  2. Improved Code Quality: By automating testing as part of your workflows, you can catch bugs and issues early in the development process. Automated tests run on every commit or pull request, ensuring that changes are vetted for quality before they are merged into the main branch.

  3. Efficiency and Speed: Automation reduces the time it takes to go from code commit to deployment, enabling faster iteration and feedback cycles. This is particularly beneficial in agile development environments where speed and efficiency are key.

  4. Consistency and Reliability: Automated workflows ensure that the build and deployment processes are performed in a consistent manner, reducing the likelihood of errors that can occur with manual processes. This consistency helps in maintaining a reliable deployment process, especially when deploying across different environments.

  5. Scalability: As your project grows, manual processes can become a bottleneck. Automation with GitHub Actions allows your build and deployment processes to scale with your project without increasing the manual workload on your team.

Here's the code snippet which I've used to make the CI/CD pipeline:

name: Build and Deploy to AKS

on:
  push:
    branches:
      - master

jobs:
  build-and-push-docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        name: Checkout source code
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ secrets.DOCKER_USERNAME }}/my_php_app:latest

  deploy-to-aks:
    needs: build-and-push-docker
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Kubectl
        uses: azure/setup-kubectl@v3
      - name: Set up Helm
        uses: azure/setup-helm@v3
      - name: Decode AKS credentials
        run: |
          mkdir -p $HOME/.kube
          echo "${{ secrets.KUBE_CONFIG }}" | base64 -d > $HOME/.kube/config
      - name: Helm Upgrade and Deploy
        run: helm upgrade --install ecomsite ./ecomsite-chart --set image.repository=bishal2469/my_php_app,image.tag=latest

This is my first time implementing a pipeline, so I'm not entirely sure whether handling the kubeconfig this way is best practice. Nonetheless, I'm leaving it like this for now and will learn the best practices soon.
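For reference, the KUBE_CONFIG secret is just the kubeconfig file base64-encoded into a single line, which the workflow's base64 -d step reverses. A sketch, assuming your kubeconfig lives at the default path:

```shell
# Encode the kubeconfig as one line, suitable for pasting
# into a GitHub Actions secret named KUBE_CONFIG
base64 -w0 "$HOME/.kube/config" > kubeconfig.b64

# Sanity check: decoding must reproduce the original file exactly
base64 -d kubeconfig.b64 | diff - "$HOME/.kube/config" && echo "round-trip OK"
```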

The ecomsite: click here

The repo: click here

Conclusion

As I reflect on the completion of the Kubernetes Challenge, I'm filled with a sense of accomplishment and a deeper appreciation for the complexities and capabilities of Kubernetes within cloud infrastructure. This journey, though challenging, has been incredibly rewarding, providing me with practical experience and a solid foundation in cloud-native technologies. It was a great start to my learning journey, and now I also have a new project to show for it. If you have any queries, please leave them in the comments and I'll be happy to answer them.