Welcome to the second post on challenge 4! In the last post, we deployed our application (app & dashboard) on a local Kubernetes cluster using KinD (Kubernetes in Docker). Since the instructions also suggested running the application on a remote cluster, I'm now going to show you how to do that using Amazon EKS.
You can review the Challenge 4 instructions from Salaboy for more context.
Here you have two options:
- If you’ve already completed challenge 4 by deploying to KinD, you can skip steps 1 to 6 and follow along with this blog from where I start.
- If you want to skip the local KinD deployment and only deploy the application on EKS, start from step 1 and skip step 5, then continue with this blog.
In this post, I’ll be using the Docker image I built previously and the files already in the Challenge 4 directory.
Pre-requisites
- Ensure you have an AWS account set up with an IAM user or role with enough permissions to create an EKS cluster: LINK
- To access your AWS account via the Command Line Interface (CLI), make sure AWS CLI is installed on your machine. Follow the installation guide here: LINK
- Configure environment variables for your IAM user to enable CLI access to your AWS account: LINK
- Install the kubectl command-line tool to manage Kubernetes clusters: LINK or LINK
- Install the eksctl CLI to manage EKS clusters: LINK
- Ensure Helm is installed for Kubernetes package management: LINK or LINK
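Before creating the cluster, it's worth a quick sanity check that the tools are installed and that the AWS CLI can actually authenticate with your account. A minimal check, nothing here is specific to this challenge:

aws --version
kubectl version --client
eksctl version
helm version
# Confirms which IAM identity the CLI is using
aws sts get-caller-identity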
Step 1. Create EKS cluster 🔗
Create your Kubernetes cluster with eksctl using the following command:
eksctl create cluster \
--name eks-wp \
--region us-east-1 \
--zones us-east-1a,us-east-1b \
--managed
Once your cluster is deployed, you can check the connectivity with the command: kubectl cluster-info
To view the nodes: kubectl get nodes
To list all running pods: kubectl get pods -A
To get more detailed information on the nodes (instances) hosting the pods: kubectl get pods -o wide -A
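eksctl also adds a new context for the cluster to your kubeconfig. If kubectl seems to be talking to the wrong cluster, the following commands (assuming the cluster name and region from the command above) help confirm what you're connected to:

kubectl config current-context
eksctl get cluster --region us-east-1
eksctl get nodegroup --cluster eks-wp --region us-east-1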
Step 2. Deploy the App 🔗
Here we already have the manifests created for the KinD deployment, so we only need to apply all the YAML files to our AWS EKS cluster using:
kubectl apply -f k8s/
This will deploy all the necessary services and pods for the app, dashboard, and database.
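To confirm everything came up, you can watch the deployments and pods roll out. The deployment name in the last command is an assumption based on how the challenge manifests are named, so adjust it to match your files:

kubectl get deployments
kubectl get pods
kubectl get services
# Wait for a specific deployment to finish rolling out
kubectl rollout status deployment/app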
Step 3. Access the App 🔗
Now comes the tricky part. Since our services are of type NodePort, we should be able to access them using the public IP addresses of the nodes. With NodePort, the services are exposed on the node’s IP addresses at specific ports.
- App Service: The app service is exposed on NodePort 30000.
- Dashboard Service: The dashboard service is exposed on NodePort 30001.
Here’s how we should be able to access the services externally:
- Find the public IP of the EKS worker nodes (EC2 instances). We can get these from the AWS console under the EC2 section or use the AWS CLI to retrieve them:
aws ec2 describe-instances --region us-east-1 --query 'Reservations[*].Instances[*].[PublicIpAddress]' --output text
- Once we have the public IP of the nodes, we can access the services by navigating to the following URLs in the browser (see the quick check after this list):
http://<public-ip>:30000 for the app
http://<public-ip>:30001 for the dashboard
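Before heading to the browser, it can help to confirm that the services really expose those NodePorts and to test the endpoint from the command line. A quick sketch, with <public-ip> standing in for one of the node IPs retrieved above:

# Shows the NodePort assigned to each service
kubectl get svc
# Test the app endpoint directly
curl http://<public-ip>:30000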
Debugging Access Issues 🔗
Unfortunately, this didn’t work for me. I suspected that the issue was with the security group settings on the EKS nodes. By default, security groups might block traffic on the NodePort range (30000-32767).
To fix that, I looked up the security group associated with my EKS worker nodes: aws eks describe-cluster --name eks-wp --region us-east-1 --query 'cluster.resourcesVpcConfig.securityGroupIds' --output text
And added an inbound rule to allow traffic on the NodePort range (30000-32767) from any IP (0.0.0.0/0).
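If you prefer the CLI over the console for that change, something like the following should add the rule; the security group ID is a placeholder for the one returned by the previous command:

aws ec2 authorize-security-group-ingress \
  --region us-east-1 \
  --group-id <security-group-id> \
  --protocol tcp \
  --port 30000-32767 \
  --cidr 0.0.0.0/0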
However, even after fixing the security group, I still couldn’t access the services!
Using a LoadBalancer Instead of NodePort 🔗
Next I tried changing the service type from NodePort to LoadBalancer. This will automatically create an external load balancer (an AWS ELB) and assign a public DNS name that we can use to access the services externally.
To do so I had to update the app and dashboard service YAML files:
app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
    - port: 80
      targetPort: 3000
  type: LoadBalancer
dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: dashboard
spec:
  selector:
    app: dashboard
  ports:
    - port: 80
      targetPort: 3001
  type: LoadBalancer
Then apply the new configurations:
kubectl apply -f k8s/app-service.yaml
kubectl apply -f k8s/dashboard-service.yaml
After applying the changes, Kubernetes will create an AWS LoadBalancer. We can check the external IP by running: kubectl get services
Once provisioning finishes, the EXTERNAL-IP field is populated with a public DNS name for each service. We can copy and paste those addresses into the web browser and see our app and dashboard.
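If you'd rather grab just the address instead of scanning the table, a jsonpath query works too. ELBs typically report a hostname rather than an IP, and the service names are the ones defined in the YAML above:

kubectl get service app -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
kubectl get service dashboard -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'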
Debugging Connectivity Issues 🔗
Even after accessing the services, I ran into ANOTHER issue: the dashboard wasn’t showing any data from the app. This usually points to a problem with how the dashboard connects to the app.
Honestly I was a little bit tired mentally so I asked Salaboy for help. Upon inspecting the developer tools console, he found that the dashboard was still trying to connect to localhost:3001
, which won’t work in a remote environment because now it should connect to the external IP address.
The issue was in the index.html
of the dashboard, where the socket connection was hardcoded (const socket = io('http://localhost:3001');
)
How To Fix It
We need to parameterize that address instead of hardcoding it. Remember that the HTML is sent from the server to the client, so one option is an environment variable the server injects into the page; another is to let the browser detect the URL where the page is hosted. Either way, the server address in the HTML has to be set dynamically rather than hardcoded to localhost, because the load balancer address can change, and the connection should always use whatever address the page was loaded from.
Here’s how we could approach it:
- Environment Variables: Create an environment variable that stores the app's address and inject it into the HTML at runtime. This allows flexibility across environments, but it means the server has to template the page before sending it.
- Browser-Based Detection: Alternatively, use JavaScript in the browser to detect the address the page was served from and connect to that. The simplest form uses the page's origin:
const socket = io(window.location.origin); // Dynamically use the host where the app is deployed
which is equivalent to spelling it out explicitly:
const socket = io(`${window.location.protocol}//${window.location.hostname}:${window.location.port}`);
Now once we change the code, remember that we need to (see the sketch after this list):
- Create a new multi-architecture Docker image
- Push it to Docker Hub
- Update the dashboard-deployment.yaml with the new Docker image name
- Apply the file: kubectl apply -f dashboard-deployment.yaml
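Here's a rough sketch of that rebuild-and-redeploy loop. The image name and tag are placeholders for your own Docker Hub repository, and docker buildx needs to be configured for multi-architecture builds:

# Build for amd64 and arm64 and push to Docker Hub in one step (image name is a placeholder)
docker buildx build --platform linux/amd64,linux/arm64 \
  -t <your-dockerhub-user>/dashboard:v2 --push .
# After updating the image name in dashboard-deployment.yaml, apply it
kubectl apply -f dashboard-deployment.yaml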
If you follow all these steps you should get your dashboard working!!
Step 4. Clean up your environment 🔗
To clean up your environment, follow these steps:
- Delete the Kubernetes resources:
kubectl delete -f k8s/
- Use eksctl to delete your EKS cluster and all of its dependencies via CloudFormation:
eksctl delete cluster --name eks-wp --region us-east-1
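One thing to watch out for: delete the Kubernetes resources (and with them the LoadBalancer services) before deleting the cluster, otherwise the ELBs they created can be left behind. To double-check that the load balancers and the cluster are really gone, you can run:

# Any remaining classic load balancers in the region
aws elb describe-load-balancers --region us-east-1
# Confirm the cluster is gone
eksctl get cluster --region us-east-1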
Questions (answers coming soon) 🔗
- Am I missing anything between Step 1 and Step 2? Should I have created an IAM OIDC provider for the cluster, added an IAM service account, and installed the Amazon EBS CSI driver on the cluster, like I did here: https://www.juliafmorgado.com/posts/easily-deploy-wordpress-and-mysql-on-amazon-eks/? What is the purpose of those steps?
- How should I manage the database secrets? In this blog post, I put the DB credentials directly in the deployment.yaml file, but in the other post (mentioned above), I created a kustomization file with the DB secret. Which approach is better? What are the best practices?
If you liked this article, follow me on Twitter (where I share my tech journey daily), connect with me on LinkedIn, check out my IG, and make sure to subscribe to my Youtube channel for more amazing content!!