Docker Kubernetes sample interview questions
- liveness and readiness probes, startup probes
Liveness probe checks container health the way we configure it; if the liveness probe fails, the kubelet restarts the container. It basically checks whether the application is up or not
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 3
      periodSeconds: 5
Readiness probe checks if application is ready to accept traffic or not
In some cases we would like our application to be alive but not serve traffic until some conditions are met, e.g. populating a dataset or waiting for some other service to come up. In such cases we use a readiness probe. Only when the readiness probe passes is the pod added to the service endpoints and allowed to serve traffic.
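A minimal readiness-probe sketch; the /ready path, port, and names below are illustrative assumptions, not from the notes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo          # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx                # stand-in image
    readinessProbe:
      httpGet:
        path: /ready            # assumed health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```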
crash loop backoff (CrashLoopBackOff):-
the container keeps crashing right after start, e.g. a bad command, a missing dependency, or the image not present in the location you mentioned
the kubelet restarts it with an exponential back-off delay between attempts
restart policy: Always (the default)
restart policy: Never avoids CrashLoopBackOff entirely; note that a Deployment only allows restartPolicy Always, since the Deployment controller itself takes care of replacing failed pod replicas — Never/OnFailure apply to bare pods and Jobs
multi container
init containers :- run to completion before the main containers start; if an init container keeps failing, the pod's main containers never start and the rollout is stuck
sidecar containers
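A sketch of a multi-container pod combining an init container with a sidecar; the names, images, and the wait-for-db/log-shipping tasks are made-up examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-demo              # hypothetical name
spec:
  volumes:
  - name: logs
    emptyDir: {}                # shared between app and sidecar
  initContainers:
  - name: wait-for-db           # assumed init task: block until DNS for "db" resolves
    image: busybox
    command: ['sh', '-c', 'until nslookup db; do sleep 2; done']
  containers:
  - name: app                   # main container (stand-in)
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper           # sidecar: tails the app's logs from the shared volume
    image: busybox
    command: ['sh', '-c', 'tail -F /var/log/nginx/access.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
```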
we have used prometheus and grafana with alertmanager and nodeexporter plugins
prometheus only scrapes metrics exposed in its own text format; in our case I needed to scrape an HTTPS endpoint whose output is JSON, which prometheus does not understand natively
The blackbox exporter allows blackbox probing of endpoints over HTTP, HTTPS, DNS, TCP and ICMP.
for https based applications there will be endpoints like /metrics or /health which can be monitored; if those endpoints are also secured, we need to send bearer tokens or JWT tokens with every request in order to scrape the data
json exporter takes JSON as input and emits data in a format prometheus understands, which can then be queried with PromQL
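A sketch of a Prometheus scrape config for a secured HTTPS endpoint; the job name, token path, and target host are placeholders, and the token file would typically be mounted from a Secret:

```yaml
scrape_configs:
- job_name: secure-app                        # hypothetical job name
  scheme: https
  metrics_path: /metrics
  authorization:
    type: Bearer
    credentials_file: /etc/prometheus/token   # assumed token location (mounted Secret)
  static_configs:
  - targets: ['app.example.com:443']          # placeholder target
```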
operators vs helm
operators use CRDs and leverage Kubernetes API resource capabilities to run their own custom controller logic
an operator is also a kind of package, like helm, consisting of all the kubernetes manifest files — deploy, svc, pv, pvc, cm, secrets, etc.
using operators, ongoing management is a little easier compared to helm, but I feel we should first be able to understand the operator framework
currently we are using helm in our project
Prometheus is an open-source tool for collecting metrics and sending alerts. It was developed by SoundCloud. It has the following primary components: The core Prometheus app – This is responsible for scraping and storing metrics in an internal time series database.
Prometheus expects metrics to be available on targets on a path of /metrics . So this default job is scraping via the URL: localhost:9090/metrics.
prometheus supports only 4 types of metrics:
Counters.
Gauges.
Histograms.
Summaries.
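A sketch of how a counter and a gauge appear in the /metrics exposition format (the metric names and values here are made up):

```
# HELP http_requests_total Total HTTP requests served
# TYPE http_requests_total counter
http_requests_total{method="get"} 1027
# HELP queue_depth Current items in the queue
# TYPE queue_depth gauge
queue_depth 42
```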
default deployment strategy used by Kubernetes is
RollingUpdate (the other built-in strategy is Recreate); canary and blue-green are done on top of it with labels/services or external tooling
canary :- less downtime but less cost effective, since both versions run side by side; you choose which users access the latest deployment and which users stay on the older one
rolling update :- newer-version pods come up incrementally while old instances are deleted; you are replacing older instances with newer instances of pods in batches, so every user is eventually affected
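A sketch of tuning the rolling update with maxSurge/maxUnavailable; the name, image, and numbers are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                   # hypothetical name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1             # at most 1 extra pod during the rollout
      maxUnavailable: 1       # at most 1 pod down at a time
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25     # stand-in image/tag
```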
resource quotas constrain resources per namespace; for example, if you are deploying prod, dev, and stage in a single cluster, limit the resources of dev and stage and allocate all remaining resources to the prod env
cpu 10 cores , mem 20 gb
can we restrict a single pod? A resource quota only caps the namespace total — any pod in that namespace can utilize as much as possible within it. If we add a limit range, we can restrict each pod's (and container's) usage
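A sketch combining the two, using the 10-core / 20 GB figures above; the object names, namespace, and per-container limits are assumptions:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota             # hypothetical name
  namespace: dev
spec:
  hard:
    requests.cpu: "10"        # namespace-wide cap: 10 cores
    requests.memory: 20Gi     # namespace-wide cap: 20 GB
---
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev
spec:
  limits:
  - type: Container
    default:                  # applied when a container sets no limits
      cpu: 500m
      memory: 512Mi
    max:                      # hard ceiling per container
      cpu: "2"
      memory: 2Gi
```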
helm 3 version
helm create <chart-name>
helm install <release-name> <chart>
if somebody changes the manifests directly and applies them with kubectl commands, will helm consider that when you upgrade? YES, it will consider the live state of the cluster. Helm 3 uses a three-way strategic merge patch: the older manifest files, the newer manifest files, and the live state of the cluster. The correct patch is generated by considering all three
docker ARG for pulling docker base image
https://www.jeffgeerling.com/blog/2017/use-arg-dockerfile-dynamic-image-specification
To use an ARG in your Dockerfile's FROM:
    ARG MYAPP_IMAGE=myorg/myapp:latest
    FROM $MYAPP_IMAGE
    ...
Then if you want to use a different image/tag, you can provide it at runtime:
docker build -t container_tag --build-arg MYAPP_IMAGE=localimage:latest .
kubernetes 1.22
port forwarding in Kubernetes
we can use it for testing a pod by forwarding a particular port onto the host machine
kubectl port-forward TYPE/NAME [options] LOCAL_PORT:REMOTE_PORT
e.g. kubectl port-forward svc/{{your service name}} 7000:80
use of labels and annotations in Kubernetes :- labels are key/value pairs used to identify and select objects (selectors in services, deployments); annotations store non-identifying metadata for tools and humans
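A sketch showing both on one pod; the names, label values, and annotation content are made up:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelled-pod                 # hypothetical name
  labels:
    app: web                         # selectable: services/deployments match on these
    env: dev
  annotations:
    team-contact: "ops@example.com"  # free-form metadata, not used for selection
spec:
  containers:
  - name: web
    image: nginx                     # stand-in image
```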
why do we need container orchestration
scaling your replicas, monitoring, bringing failed pods back up
why Kubernetes over Docker swarm
Docker Swarm is a basic form of container orchestration tool; it lacks a few things like RBAC, flexible ways of exposing containers through services, and an in-built dashboard giving a complete view of the cluster
K8s architecture
master node, plus worker nodes that run the container workloads
master components :- api server, kube-scheduler, controllers (deployment controller, DaemonSet controller etc.), etcd database
the api server acts as the front end to the complete cluster; we can communicate with the cluster only via the api server, using the kubectl utility
what if the master node goes down :- existing pods keep running on the workers, but no new scheduling, self-healing, or kubectl/API access until the control plane recovers
how to see resources in cluster
we have a kubeconfig for every cluster; it's a config file holding kubernetes contexts and user details
$HOME/.kube/config
set the context (kubectl config use-context <name>) and connect to that particular cluster
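The rough shape of a kubeconfig file; every name, server address, and credential below is a placeholder:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: dev-cluster                  # placeholder cluster name
  cluster:
    server: https://1.2.3.4:6443     # placeholder API server address
users:
- name: dev-user                     # placeholder user entry
  user:
    token: <redacted>                # placeholder credential
contexts:
- name: dev                          # a context ties a cluster to a user
  context:
    cluster: dev-cluster
    user: dev-user
current-context: dev                 # what `kubectl config use-context` switches
```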
list of objects u have worked in kubernetes
kubectl api-resources
stateful sets
naming: Deployment pods are deployname-hashval (the hashval changes if the pod restarts); StatefulSet pods are name-0, name-1, ... and keep the same name across restarts
database pods should have stable, standard naming conventions; if one restarts, it should come back up with the same name
scaling in: the last (highest-ordinal) instances are deleted first, not a random one as with Deployments
kubectl explain pod.spec - documentation of the attributes of api objects
headless svc for writes (stable per-pod DNS lets clients target a specific pod, e.g. the primary), ClusterIP svc for reads (load-balanced across replicas)
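A minimal sketch of a StatefulSet with its headless service; the names, port, replica count, and postgres image are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db                    # hypothetical name
spec:
  clusterIP: None             # headless: gives each pod a stable DNS name (db-0.db, ...)
  selector:
    app: db
  ports:
  - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db             # must reference the headless service
  replicas: 3                 # creates pods db-0, db-1, db-2 in order
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:15    # stand-in image
```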