Pre-requisites
Ubuntu OS (Xenial or later)
sudo privileges
Internet access
t2.medium instance type or higher
Setup Master and Worker Node
Run the following commands on both the master and worker nodes to prepare them for kubeadm.
sudo su
apt update -y
apt install docker.io -y
systemctl start docker
systemctl enable docker
curl -fsSL "https://packages.cloud.google.com/apt/doc/apt-key.gpg" | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg
echo 'deb https://packages.cloud.google.com/apt kubernetes-xenial main' > /etc/apt/sources.list.d/kubernetes.list
apt update -y
apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y
Master Node
Initialize the Kubernetes master node.
sudo su
kubeadm init
Set up local kubeconfig (both for the root user and normal user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Apply Weave network:
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
Generate a token for worker nodes to join:
kubeadm token create --print-join-command
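The output is a join command similar to the following (the IP, token, and hash here are placeholders; use the exact command printed by your cluster):
kubeadm join <MASTER_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>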
Worker Node
Run the following commands on the worker node.
sudo su
kubeadm reset
This clears any previous kubeadm state on the node (kubeadm reset runs its own pre-flight checks before cleaning up).
Paste the join command you got from the master node and append --v=5 at the end.
Installation
To install and run the application on your Kubernetes cluster, follow these steps:
Clone this repository to your local machine.
Navigate to the project folder.
Create the manifest files described below.
Deployment manifest:
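The repository's manifest isn't reproduced in this extract; the following is a minimal sketch consistent with the field descriptions below, assuming the file is named taskmaster-deployment.yaml and using a placeholder image name (replace it with your actual Docker image):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: taskmaster
  labels:
    app: taskmaster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: taskmaster
  template:
    metadata:
      labels:
        app: taskmaster
    spec:
      containers:
        - name: taskmaster
          image: <your-dockerhub-user>/taskmaster:latest   # placeholder; use your image
          ports:
            - containerPort: 5000
          imagePullPolicy: Always
```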
apiVersion and kind: These fields specify the API version and the resource type, which is a Deployment in this case.
metadata: This section contains metadata for the Deployment, including its name and labels. Labels are useful for selecting and categorizing resources.
spec: This is where you define the desired state of the Deployment.
replicas: Specifies the number of replicas (pods) that you want to run. In this case, it's set to 1, meaning there will be one pod running.
selector: Defines how the Deployment selects which pods to manage based on labels. It matches pods with the label app: taskmaster.
template: Describes the pod template that the Deployment uses to create new pods.
metadata: Contains labels for pods created by this template.
spec: Defines the specification for the pod.
containers: Specifies the containers running in the pod. In this case, there's one container named "taskmaster" using the specified Docker image.
ports: Lists the ports that the container exposes. Port 5000 is exposed in this example.
imagePullPolicy: Specifies the policy for pulling the container image. "Always" means it will always attempt to pull the latest image.
Save the manifest to a YAML file (e.g., taskmaster-deployment.yaml) and use the kubectl apply command to create the Deployment in your Kubernetes cluster:
kubectl apply -f taskmaster-deployment.yaml
Verify that the pod is running:
kubectl get pods
You can also check on the worker node whether the app container is running. The docker ps command lists the running Docker containers on that node:
docker ps
To scale the application to 3 replicas, run the command below on the master node (where kubectl is configured).
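A hedged example, assuming the Deployment is named taskmaster as in the sketch above:
kubectl scale deployment taskmaster --replicas=3
Running kubectl get pods again should then show three taskmaster pods.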
Now that the app is running, we need to expose it externally. For that, we will create a service.yml file on the master node.
The Kubernetes Service manifest creates a Service named "taskmaster-svc" with the following configuration:
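A minimal sketch of such a manifest, assuming it is saved as taskmaster-service.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: taskmaster-svc
spec:
  type: NodePort
  selector:
    app: taskmaster          # must match the Deployment's pod labels
  ports:
    - port: 80               # port the Service listens on inside the cluster
      targetPort: 5000       # port the taskmaster container listens on
      nodePort: 30007        # port exposed on every node
```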
metadata: This section contains metadata for the Service, including its name.
spec: This is where you define the specifications for the Service.
selector: Specifies how the Service should select pods to route traffic to. In this case, it selects pods with the label app: taskmaster. This label should match the label used in your Deployment or pods.
ports: Defines the ports that the Service should listen on and forward traffic to.
port: Specifies the port on which the Service will listen within the cluster. In this case, it's set to port 80, which means that the Service will listen on port 80 within the cluster.
targetPort: Specifies the port to which traffic should be forwarded inside the pods. It's set to port 5000, which is the port your "taskmaster" application is running on inside the pods.
nodePort: This field specifies a port on the node itself (in this case, 30007) to expose the Service externally. When using the NodePort type, the Service will be accessible on all cluster nodes at this port.
type: Sets the type of the Service. In this case, it's set to NodePort, which means the Service will be accessible from outside the cluster at the specified nodePort.
To deploy this Service, save it to a YAML file (e.g., taskmaster-service.yaml) and use the kubectl apply command to create the Service in your Kubernetes cluster:
kubectl apply -f taskmaster-service.yaml
Once applied, you can access your "taskmaster" application externally by connecting to any node in your cluster on the specified nodePort, which in this case is 30007. For example, if your cluster's node has an IP address of <NODE_IP>, you can access your service at <NODE_IP>:30007.
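For example, assuming the node is reachable and its firewall/security group allows traffic on port 30007:
curl http://<NODE_IP>:30007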
Keep in mind that using NodePort for external access may not be suitable for production environments, as it exposes your service on a static port on every node, which may not scale well.
The application is now deployed and accessible externally.
Let's move ahead and deploy MongoDB.
First, we need to create a Persistent Volume.
Create a Persistent Volume (PV): You can define a Persistent Volume that represents a piece of storage in your cluster. PVs can be provisioned statically or dynamically, depending on your needs.
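The manifest itself isn't shown in this extract; here is a minimal sketch matching the field descriptions below (the name mongo-pv is an assumption):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv             # assumed name
spec:
  capacity:
    storage: 256Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/db            # host directory; suitable for dev/test only
```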
apiVersion and kind: These fields specify the API version and resource type, which is a Persistent Volume (v1 and PersistentVolume, respectively).
metadata: This section contains metadata for the Persistent Volume, including its name.
spec: This is where you define the specifications for the Persistent Volume.
capacity: Specifies the storage capacity of the PV. In this case, it's set to 256Mi (256 mebibytes).
accessModes: Describes how the PV can be accessed by pods. In this example, it's set to "ReadWriteOnce," which means the PV can be mounted as read-write by a single pod at a time.
hostPath: Specifies that the storage for this PV will be provided by a directory on the host machine (/tmp/db in this case). This is often used for development and testing purposes, but it's not suitable for production deployments because it doesn't provide data persistence across nodes or cluster failures.
Create a Persistent Volume Claim (PVC): PVCs are requests for storage by pods. They are used to claim storage from available PVs.
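A minimal sketch, using the name mongo-pvc that the MongoDB Deployment below refers to:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 256Mi
```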
This PVC definition is requesting storage resources with read-write access and a capacity of 256Mi.
To use this PVC, you would typically include it in a pod definition as a volume claim.
The YAML manifest defines a Kubernetes Deployment for a MongoDB containerized application that uses a Persistent Volume Claim (PVC) for data storage.
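A minimal sketch consistent with the breakdown that follows, assuming the file is named mongo.yaml and the Deployment is named mongo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo                # assumed name
  labels:
    app: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo             # official MongoDB image
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: storage
              mountPath: /data/db
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: mongo-pvc
```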
Let's break down the key components of this Deployment:
apiVersion and kind: These fields specify the API version and resource type, which is a Deployment (apps/v1 and Deployment, respectively).
metadata: This section contains metadata for the Deployment, including its name and labels.
spec: This is where you define the desired state of the Deployment.
selector: Specifies how the Deployment selects pods to manage based on labels. In this case, it selects pods with the label app: mongo.
template: Describes the pod template that the Deployment uses to create new pods.
metadata: Contains labels for pods created by this template. In this case, pods created by this Deployment will have the label app: mongo.
spec: Defines the specification for the pod.
containers: Specifies the containers running in the pod. There's one container named "mongo" using the official MongoDB image. It exposes port 27017, which is the default MongoDB port.
volumeMounts: This section defines where the PVC will be mounted inside the pod. It mounts the volume named "storage" at the path /data/db.
volumes: Defines the volumes that can be used by the pods created by this template.
name: Specifies the name of the volume, which is "storage" in this case.
persistentVolumeClaim: This section specifies that the volume is backed by a Persistent Volume Claim (PVC) named "mongo-pvc."
This Deployment is designed to create pods running the MongoDB container with access to persistent storage. The "storage" volume is backed by the "mongo-pvc" PVC, ensuring that data stored in the /data/db path inside the pod persists across pod restarts and reschedules.
kubectl apply -f mongo.yaml
Kubernetes will create the Deployment, and the pods it manages will automatically have access to the specified PVC for data storage.
You can verify that the database is running by accessing the worker node and listing its containers.
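For example, on the worker node (this setup uses the Docker runtime installed earlier):
docker ps | grep mongo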
Now expose MongoDB externally with the below service.yml file.
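The file isn't reproduced in this extract; a minimal sketch, assuming a NodePort Service on MongoDB's default port 27017 (the Service name and the nodePort value are assumptions; any free port in the 30000-32767 range works):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc            # assumed name
spec:
  type: NodePort
  selector:
    app: mongo               # matches the MongoDB Deployment's pod label
  ports:
    - port: 27017
      targetPort: 27017
      nodePort: 32017        # assumed; pick any free NodePort
```

Apply it with kubectl apply -f service.yml, then connect to <NODE_IP>:32017 with a MongoDB client.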
Happy Learning!