My Kubernetes CKA Exam Experience & Concepts Explained
I took the CKA exam with The Linux Foundation last week. I had been preparing for this exam for about 3 months total, though I think I could have cut that down to two comfortably: one month of coursework, one of studying. This isn’t going to be one of those articles about how I passed the exam in 48 hours or anything like that. My k8s experience prior to taking a course was minimal, with a little bit of Docker. I was a total newbie, but I knew this was the cert I wanted. I love using Kubernetes. It feels like taking a model home from the hobby store, carefully placing each piece, and ending up with something you can be a little proud of when everything comes together. It’s going to be great seeing its evolution and the improvements to how we create resources.
I started with ACG’s CKA course. I completed the coursework and did all of the practice tests until I could do everything from memory. The k8s playgrounds are excellent. I’d look something up in the docs that I hadn’t tried yet, then I’d build it, see how I could break it, fix it, destroy it. Over and over. I purchased the exam and set a date a week out. A big thing to remember is that the test comes with two free practice exam sessions from killer.sh, which I would consider the best tool to prepare. I took those tests, which are very similar to the real exam, and was failing miserably. I was frustrated. I started watching some YouTube videos on CKA preparation, which all told me I had done the wrong course. ACG’s CKA course is a great course, but it should really act as an intro to k8s; do not expect to be exam ready. I hear that KodeKloud’s course is much better for test prep. Another tool that really helped was Killercoda from killer.sh: free environments that get you comfortable with some of the test scenarios. It really helped me with understanding roles/rolebindings, for example. These are available free 24/7; the practice exams you only have access to for 36 hours, but you can take them in that time as many times as you want. Solutions and explanations are also provided. They say the questions are harder than the actual exam, but I found them to be about equal.
The first time I took the test was a bit of a worst-case scenario. The proctor said they couldn’t make out my ID, and I spent an hour with tech support just for them to review it. Then I was allowed into the exam. I had to switch to a laptop at the last minute to fix the camera issue, but this laptop didn’t have an Ethernet port, WiFi only. The input lag was horrendous: type a command, count to 3, and the text starts appearing on the command line. I was so thrown off by this, and so stressed out trying to get into the test, that it took me quite a while to find my groove. I knew as soon as the test ended that I had failed. I immediately grabbed a pad of paper and wrote down the scenarios I struggled with. I received my test results 24 hours later: 57, when 66 is a pass. I didn’t want to just pass, I wanted a complete understanding of the source material. I set my next exam date and gave myself four days. Back to studying. “Why is this section here in the yaml file? What can I do with this? What power does it have?” were the questions I would ask myself. I’d imagine being in a position later down the road that required k8s knowledge, and not being able to deliver when people were depending on me. I knew where my pain points were and set out to correct them. Next, I used the other practice test session and worked on it non-stop until it expired, then purchased an extra one to activate the day before my exam. I like to set my exams for about 10–10:30 a.m. At that point in the morning I’ve had my coffee, and at about 8 a.m. I start the 2-hour practice test, completely finish it, and examine the solutions again and again. Then I take a little break until test time so I’m not burned out. I took the test with expectations set and nerves a bit lower. Pass. The first test failure was really discouraging, but it motivated me to learn more.
Here are a few examples that fell in the gap between my test prep material and the real exam. Hopefully these will help others be more prepared.
ETCD Backup & Restore
The first: etcd backup and restore. Easy enough, right? This really threw me off the first time because I was expecting to execute all the commands from the master node. Sometimes you may try to execute an etcdctl command and get an “etcdctl not found” error. An easy way to ground yourself and find out where you’re at:
$ whereis etcdctl
If you get a path back, go ahead and execute your commands. In test prep I always executed these commands from the master node, but you may need to execute them as root. Also, remember to include the data directory on your restore, and to edit your etcd manifest (/etc/kubernetes/manifests/etcd.yaml) with that path as well.
$ ETCDCTL_API=3 etcdctl --endpoints=x.x.x.x \
--cacert=/path/to/certs \
--cert=/path/to/certs \
--key=/path/to/key \
--data-dir=/path/to/data/ \
snapshot restore /path/to/snapshot
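Restore gets the attention, but you’ll usually be asked to take the backup first. A sketch of the save command; the cert paths below are the usual kubeadm defaults, so verify yours against the etcd manifest before trusting them:
$ ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /path/to/snapshot
You can sanity-check the resulting file with etcdctl snapshot status /path/to/snapshot before moving on.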
Exposing Services
When you finish the exam and receive the email with your score, they let you know which areas you were weakest in. Mine was networking initially, but not anymore. I went over the concepts and different networking scenarios repeatedly. It can be confusing which way to approach network policies. You may be asked to create an ingress exposing a service on a path using a specific port. Network policies are not created through kubectl create; you need to find a template in the docs. Ingress + path means you can search “ingress” in the docs, then search the page for “path:” to go right to it. The docs bring up the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
Change the service name and path, and you’ve completed the question in under a minute.
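Since network policies have no imperative shortcut, it’s worth knowing the shape of the docs template before exam day. A minimal sketch, where all names and labels are placeholders:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80
The podSelector picks which pods the policy applies to, and the ingress rules say who may reach them and on which ports.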
Another tip — if you’re looking for the right template, searching “kind:” on the pages will bring you immediately to them.
They may ask you to simply expose a pod on a port. DO NOT go to the network policy docs and try to paste the template into a new yaml file. It’s easy with kubectl:
$ kubectl expose pod pod-name -n namespace --name=name-of-service --type=NodePort --port=80
If you’re exposing a port of an existing container in a deployment, it’s as easy as:
$ kubectl edit deployment deployment-name -n namespace
Add a container port in the deployment.
spec:
  containers:
  - name: my-nginx
    image: nginx
    ports:
    - containerPort: 80
      name: name-of-port
then run your expose command:
$ kubectl expose deployment name-of-deployment -n namespace --name=name-of-service --port=80 --type=NodePort --protocol=TCP
to create the service exposing the port on the deployment.
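A quick sanity check that the service actually picked up the deployment’s pods is to look at its endpoints (names here are the same placeholders as above):
$ kubectl get endpoints name-of-service -n namespace
If the ENDPOINTS column comes back empty, the selector or the container port doesn’t match, and the service won’t route traffic.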
Learning kubectl run and kubectl create
Know the difference between imperative and declarative with regard to k8s.
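As a quick illustration, both of these can produce the same deployment; the first tells the cluster what to do, the second hands it a desired state from a file (names are placeholders):
$ kubectl create deployment my-dep --image=nginx
$ kubectl apply -f my-dep.yaml
The imperative form is faster to type under time pressure, which is exactly why it matters on the exam.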
The biggest time saver you can have is creating everything you can from the command line. DO NOT edit templates for pods, services, etc.
$ kubectl create -h
$ kubectl run -h
These will show you examples of what’s possible with each command. As I mentioned earlier, some things you will need templates for, like network policies.
Adding the dry-run and output-yaml flags will not create the resource, but will output it into the file of your choosing for easy editing:
--dry-run=client -o yaml > name_of.yaml
You can set these up as exports, but I noticed that when switching contexts in the test they wouldn’t always be recognized, and the command would create the resources. I’d sometimes waste time deleting them, which I didn’t like.
For dry run and output to yaml:
$ export do="--dry-run=client -o yaml"
And for force-deleting resources:
$ export now="--force --grace-period 0"
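With those exports in place, the commands shorten to something like this (pod name is a placeholder):
$ kubectl run name-of-pod --image=nginx $do > pod.yaml
$ kubectl delete pod name-of-pod $now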
For instance, you may be asked to create a pod with two containers. We’ll use nginx and busybox, naming them container1 and container2 respectively. Do not go to the docs for templates. Run a command similar to this:
$ kubectl run name-of-pod -n namespace --image=nginx --dry-run=client -o yaml > pod.yaml
It creates a yaml file like this:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: name-of-pod
  name: name-of-pod
  namespace: name-of-namespace
spec:
  containers:
  - image: nginx
    name: container1 #change
  - image: busybox #add
    name: container2 #add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
Again, in under a minute you’ve completed the question just by changing the container name and adding the additional busybox container. This will save you huge amounts of time. When I completed the retake, I finished with 60 minutes to spare, which gave me time to check my configurations and files twice. A minute here or there really adds up.
Hopefully a few of these tips will get you over the edge with a passing score. Not everyone will pass the first try and nothing worthwhile is easy. Best of luck!