Scheduling pods on Nodes

Node Selectors:-

nodeSelector is the simplest way to constrain scheduling: Kubernetes only schedules the Pod onto nodes that have each of the labels you specify. It supports equality-based selectors.

kubectl label nodes <node-name> disktype=ssd

kubectl get nodes --show-labels
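
After labeling the node, you reference the label under the nodeSelector field in the Pod spec. A minimal sketch, assuming the disktype=ssd label applied above (the Pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd          # placeholder name
spec:
  containers:
  - name: nginx
    image: nginx           # placeholder image
  nodeSelector:
    disktype: ssd          # must equal the label applied to the node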

Node Affinity:-

Node affinity is conceptually similar to nodeSelector, allowing you to constrain which nodes your Pod can be scheduled on based on node labels. There are two types of node affinity available, plus a third that is planned:

  • requiredDuringSchedulingIgnoredDuringExecution: the Pod is not scheduled unless the rules match (it stays in the Pending state); Pods that are already running are ignored, even if the node's labels change so that the rules no longer match.

  • preferredDuringSchedulingIgnoredDuringExecution: the scheduler prefers nodes that match the rules, but schedules the Pod onto an available node even if none match; Pods that are already running are ignored.

  • requiredDuringSchedulingRequiredDuringExecution: the Pod is not scheduled unless the rules match (it stays Pending), and running Pods are evicted if the rules stop matching. This type is planned but not yet available in Kubernetes.

  • Node affinity supports set-based selectors.

  • Node affinity can be used when you want to schedule Pods onto particular nodes, for example nodes with a specific disk type attached or nodes that meet particular hardware requirements (see the sketch after this list).
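
For illustration, a Pod spec combining both available types might look like the sketch below; the disktype and zone label keys and their values are assumptions, not required names:

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:    # hard rule: must match
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In           # set-based operator
            values:
            - ssd
      preferredDuringSchedulingIgnoredDuringExecution:   # soft rule: preferred
      - weight: 1
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - zone-a
  containers:
  - name: nginx
    image: nginx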

Pod Affinity & Anti-Affinity:-

Inter-Pod affinity and anti-affinity let you constrain which nodes your Pod can be scheduled on based on the labels of Pods already running on those nodes, rather than on the labels of the nodes themselves.

As with node affinity, there are two types of Pod affinity and anti-affinity:

  • requiredDuringSchedulingIgnoredDuringExecution

  • preferredDuringSchedulingIgnoredDuringExecution

Both support set-based selectors: you can use the In, NotIn, Exists and DoesNotExist operators in the operator field for Pod affinity and anti-affinity.

  • Pod affinity tells the scheduler to place a new Pod on the same node as existing Pods whose labels match the new Pod's label selector.

  • Pod anti-affinity prevents the scheduler from placing a new Pod on the same node as existing Pods whose labels match the new Pod's label selector.

Pod affinity can be used when you want two Pods scheduled onto the same node because they need to communicate with each other constantly, as in the sketch below.
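
As an illustrative sketch, the Pod below must be co-located with Pods labeled app=cache, and prefers to avoid nodes already running Pods labeled app=web (the app label and its values are assumptions). topologyKey: kubernetes.io/hostname is what makes "same node" the affinity domain:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - cache
        topologyKey: kubernetes.io/hostname    # co-locate at node granularity
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - web
          topologyKey: kubernetes.io/hostname  # prefer spreading web Pods apart
  containers:
  - name: nginx
    image: nginx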

PS :- Node Affinity ensures that Pods are hosted on particular nodes. Pod Affinity ensures that two Pods are co-located on a single node.

Taints & Tolerations:-

  • A combination of Taints & Tolerations and Node Affinity rules can be used together to completely dedicate nodes to specific Pods: Taints & Tolerations prevent other Pods from being scheduled on the desired nodes, and Node Affinity then ensures our Pods are scheduled onto those nodes (see the sketch after this list).

  • A taint should be used when you want to mark a node as unavailable for certain Pods. For example, you can use a taint to mark a node as "maintenance" and prevent Pods from being scheduled on it while it is undergoing maintenance.

  • Node affinity should be used when you want to specify which nodes a pod should or should not be scheduled on based on node labels. Node affinity provides more fine-grained control over pod scheduling compared to node selector and allows you to specify complex rules for pod scheduling based on multiple node labels.

  • Pod affinity should be used when you want to specify which pods a pod should or should not be scheduled with based on labels. Pod affinity can be used to ensure that certain pods are co-located on the same node or to ensure that certain pods are separated from each other.
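
A minimal sketch of the dedicated-node pattern from the first bullet, assuming a made-up app=blue taint and label:

# Taint the node so ordinary Pods are repelled, and label it for affinity
kubectl taint nodes <node-name> app=blue:NoSchedule
kubectl label nodes <node-name> app=blue

apiVersion: v1
kind: Pod
metadata:
  name: blue-pod
spec:
  tolerations:                   # lets this Pod onto the tainted node
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:                # forces this Pod onto the labeled node
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: app
            operator: In
            values:
            - blue
  containers:
  - name: nginx
    image: nginx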

Why can't we schedule any of our pods on the master node?

When a Kubernetes cluster is first set up (for example, with kubeadm), a taint is placed on the master (control-plane) node. This automatically prevents any Pods without a matching toleration from being scheduled on that node. You can inspect this taint, and remove it if required (see below); however, a best practice is not to deploy application workloads on a master node.
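
On clusters bootstrapped with kubeadm, the taint typically looks like this (the key is node-role.kubernetes.io/control-plane on current versions; older versions used node-role.kubernetes.io/master):

kubectl describe node <master-node-name> | grep Taints
# Taints: node-role.kubernetes.io/control-plane:NoSchedule

# Removing the taint (note the trailing "-") allows scheduling, but is not recommended
kubectl taint nodes <master-node-name> node-role.kubernetes.io/control-plane:NoSchedule-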