
Kubernetes Ultimate Notes

This document serves as a comprehensive guide on Kubernetes, covering various components such as etcd, kube-apiserver, kubelet, and kube-scheduler, along with their configurations and commands. It also discusses concepts like namespaces, resource quotas, deployments, and secrets, while providing imperative commands for managing Kubernetes resources. Additionally, it highlights important topics like upgrading Kubernetes clusters, authentication methods, and backup strategies for etcd data.


Kubernetes ultimate notes: )

ETCDCTL_API=3
--listen-client-urls=http://127.0.0.1:2379
--advertise-client-urls=http://127.0.0.1:2379

--initial-cluster=node1=http://127.0.0.1:2380 (other members on ports 3380, 4380, ...)


--listen-peer-urls=http://127.0.0.1:2380
./etcdctl get --prefix --keys-only /registry/pods/<namespace>
./etcdctl get --prefix --keys-only /registry
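On a kubeadm cluster etcd serves over TLS, so the get commands above usually need the certificate flags as well (the paths below are the kubeadm defaults; adjust if your cluster differs):

ETCDCTL_API=3 etcdctl get /registry --prefix --keys-only \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key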

kube-apiserver:
Authenticates the user
Validates the request
Retrieves data (for GET requests)
Updates etcd (for POST/PUT requests)
Communicates with the scheduler and kubelet
 Let's look at --authorization-mode=Node,RBAC in the upcoming
explanation
--service-account-key-file=/var/lib/kubernetes/service-account.pem
(key used to verify service account tokens; the signing key is --service-account-signing-key-file)
K create token <sa name>
--service-cluster-ip-range=10.32.0.0/24

--service-node-port-range=30000-32767
Note:
Manifest files for static pods are stored in
/etc/kubernetes/manifests/
Normally certificates are stored in /etc/kubernetes/pki
Systemd service files are stored in /etc/systemd/system
ps aux | grep kube-apiserver
kubeControllerManager:
--cluster-cidr=
--cluster-name=
KubeScheduler:
Whenever you think of the scheduler you must know these concepts:
taints and tolerations,
node selector, node affinity, pod affinity; there may be anti-affinity
concepts as well.
It filters the nodes, ranks them, and schedules the pods onto the
nodes.
Note: Every component that needs to communicate with the kube-
apiserver needs a kubeconfig file (let's evaluate this).

Kubelet:
It registers the node, creates the pods and monitors the node.
It has some important parameters to remember
--kubeconfig
--container-runtime-endpoint:
--network-plugin:
Kubeproxy:

--config: <contains path to config file of kube proxy>


which specifies which proxier (iptables or ipvs) it uses. Let's re-evaluate in the
upcoming sessions.
Note: kube-proxy is deployed as a DaemonSet on worker nodes by
kubeadm.

There is another important component called CoreDNS. We will see it
in the upcoming lectures.

Services:
Nothing to note but will learn about load balancer/ Ingress in the
upcoming sessions.

Note:
Init-container-like behaviour in Docker is achieved by linking containers:
docker run --link <container name> <image id>

Replicaset
Nothing new, but there is also the older ReplicationController; the difference is the selector.
K get/delete replicaset
K scale --replicas=2 replicaset <replicaset name>
Or go change the yaml file and
K replace -f <replicasetname.yaml>
Deployments:
Upgrading the pods and rolling back to the previous version is its
nature, apart from replicating and scaling.
K set image deploy <deploy name> <container name>=<new image>
K rollout history deploy <deploy name>
K rollout undo deploy <deploy name>
K rollout status deploy <deploy name>
K rollout undo deploy <deploy name> --to-revision=<revision
number>
To see the image of a specific revision:
K rollout history deploy <deploy name> --revision 2

Namespaces:
How the objects address each other:
<service name>.<namespace>.svc.cluster.local
<pod IP with dots replaced by dashes>.<namespace>.pod.cluster.local
We will learn more in CoreDNS networking.
The root domain cluster.local is defined in the CoreDNS configuration.
ResourceQuota
We can set a ResourceQuota for a namespace specifying the
requests and limits (aggregated across the entire namespace)
and even the number of pods that can run in a
namespace (a hard setting).
Yes, it is the aggregated resource usage of all the pods, and we can even restrict
the count of other objects like DaemonSets by specifying it in the ResourceQuota.
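A minimal ResourceQuota sketch (names and numbers are placeholders):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 4Gi
    limits.cpu: "8"
    limits.memory: 8Gi
    count/daemonsets.apps: "2"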

Imperative[ The important ]


K run <pod name> --image=
K create deploy <deploy name> --image=
K expose deploy <deploy name> --port=
K scale deploy <deploy name> --replicas=
K set image deploy <deploy name> <container name>=<new image>
K create -f <filename>
K edit
K replace -f
K replace --force -f <filename> # will delete and recreate the
resource if necessary

K delete

Scheduling:
nodeName
nodeAffinity:
nodeSelector:

tying a pod to a node matching a specific label.


K get nodes --show-labels
K label nodes <node name> <key>=<value>
We can deploy our own scheduler if we want and specify the
scheduler in the pod yaml file:
spec:
  schedulerName: <custom scheduler name>
  containers:
  ...
NetworkPolicy:
By default, every pod is accessible to every other pod in Kubernetes.
We can restrict the access by creating a NetworkPolicy for the pod.
Note:
If the selectors are under a single "-" (one from entry), it is an intersection (AND).
If they are separate "-" entries, it is a union (OR).
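A rough sketch of the difference (the labels are made up):

ingress:
- from:
  - namespaceSelector:        # single "-": namespaceSelector AND podSelector
      matchLabels:
        env: prod
    podSelector:
      matchLabels:
        app: api

ingress:
- from:
  - namespaceSelector:        # separate "-": namespaceSelector OR podSelector
      matchLabels:
        env: prod
  - podSelector:
      matchLabels:
        app: api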

Ingress:
Nothing new to learn about ingress. It is a single point of contact for
our k8s workloads which offers load balancing, host-based/path-
based routing, and SSL termination (which I don't know yet).

The ingress controller and its family (pods and services) does all the
routing in the end. Be careful with its configuration, which can be a
ConfigMap if it is deployed as a pod (an nginx.conf file or similar that is
generated when the Ingress object is deployed and merged into the
controller's configuration).

It comes down to
rules:
- host:
  http:
    paths:
    ...
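A fuller sketch of what that usually looks like (host, service name and port are made up):

rules:
- host: app.example.com
  http:
    paths:
    - path: /api
      pathType: Prefix
      backend:
        service:
          name: api-svc
          port:
            number: 80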

When we move forward we should know about


 path
 pathType
 rewrite-target

For now I am skipping.

Taints and Tolerations:


Nothing new to learn but remember these commands.
kubectl taint nodes <node name> key=value:Effect
Effect > NoSchedule, NoExecute, PreferNoSchedule
kubectl describe nodes <node name> | grep Taints
spec:
  containers:
  ...
  tolerations:
  - key:
    operator:
    value:
    effect:
Note:
nodeSelector strictly chooses a node that matches a particular
label. nodeAffinity is a more relaxed way of selecting, giving a range of
labels for the pod to choose between.
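A small nodeAffinity sketch for reference (the label key and values are made up):

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - large
            - medium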
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: "Deployment"
    name: "my-deployment"
  updatePolicy:
    updateMode: "Off"

Static Pods:

/etc/kubernetes/manifests. It is used to deploy control plane
components because at that point we do not have a scheduler to schedule the
pods.

--pod-manifest-path=/etc/kubernetes/manifests is an option of the kubelet service,
because the kubelet is the one which takes care of static pods; no controller is involved.
We can still see the static pods when getting them, but we cannot
modify the live (mirror) object.
Note:
A static pod has the node name appended to its name.
Note:
An OOMKilled error in a pod means a container ran out of memory. It
happens when the memory limit set on the container is not
sufficient for it to run.
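The limit in question is set per container, roughly like this (the numbers are placeholders):

spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "500m"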

HPA is a concept, we will learn in future.


In a nutshell, the metrics-server collects metrics from the kubelets
and exposes them via the kube-apiserver, where they can be consumed by the
HPA.
The exposed metrics are seen with the
kubectl top node
kubectl top pods commands.
Note:
kubectl create -f .
will create objects from all the yaml files in the current directory.
CMD/ENTRYPOINT vs ARGS/COMMAND
CMD/ENTRYPOINT are Docker concepts.
CMD: ["sleep", "10"]  # default command at the start of the container
docker run ubuntu sleep 20  # possible, the CMD is overridden
ENTRYPOINT: ["sleep"]  # not overridden
CMD: ["10"]  # default argument, can be overridden
docker run ubuntu-sleeper 10  # possible
args/command are pod concepts.
docker run ubuntu sleep 20 can be achieved by
spec:
  containers:
  - image: ubuntu
    args: ["sleep", "20"]

docker run --name ubuntu-sleeper --entrypoint sleep ubuntu 20 can
be achieved by
command: ["sleep"]
args: ["20"]
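Put together in a full pod manifest, it looks something like this (the name is arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep"]   # overrides the image ENTRYPOINT
    args: ["20"]         # overrides the image CMD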

Environment variables:
You can perform something called exporting when we talk about
env variables:
export KUBECONFIG=<path to kubeconfig file>
export ETCDCTL_API=3
Environment variables can be set in a pod spec as:

spec:
  containers:
  - name:
    image:
    env:
    - name:
      value:
…………
    - name:
      valueFrom:
        configMapKeyRef:
          name:
          key:
…………
ConfigMap
K create configmap <name> --from-literal=key=value /
--from-file=<path to file>
………
    envFrom:
    - configMapRef:
        name:
……..
  volumes:
  - name:
    configMap:
      name:
Secrets:
K create secret generic <secretname> --from-literal=key=value /
--from-file=<path to file>
All the data in the created secret is base64 encoded.
Important Linux commands:
echo -n "pooja" | base64
echo -n "durga" | base64
echo -n "value" | base64 --decode

Note:
There are different types of secrets like "generic". I am not going into
the details of it; will see if that is necessary going forward.
Apart from that, everything is very similar to a ConfigMap: creation, using it
as an env variable or using it as a volume.
 While revising I came to know another type of secret apart from
generic: service-account-token.
 One point to add here: we create a secret of type service-
account-token to create a non-expiring token for a service
account (while creating the secret, the service account name
should be mentioned in the annotations) and every value in the
secret is base64 encoded (needs validation).
Note:
Init containers are not shown in k get po command.
Another important note is that every container that a pod creates
should be given a name.
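For reference, a minimal init container sketch (the image and command are arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: wait-for-db       # runs to completion before the app container starts
    image: busybox
    command: ["sh", "-c", "until nslookup db-svc; do sleep 2; done"]
  containers:
  - name: app
    image: nginx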

Cordon/uncordon vs drain
K cordon <node name>    # marks the node unschedulable
K uncordon <node name>  # makes it schedulable again
K drain <node name>     # evicts the pods on the node and marks it
unschedulable
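In practice drain usually needs a couple of extra flags (a sketch; use with care):

K drain <node name> --ignore-daemonsets --delete-emptydir-data
# --force is additionally needed for pods that are not managed by a controller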

Another important topic, which I haven't touched much, is…

"Upgrading the Kubernetes cluster"

 K get nodes -- you will see the version over there, which is the
cluster version.
 Points to remember:
 The k8s community supports only the 2 minor versions below the latest
release.
 The version of the nodes is really the version of the kubelets
running on the nodes.
 Taking the kube-apiserver version as the base version (x),
 the allowed cluster component versions are:
 scheduler, controller-manager (x / x-1)
 kubelet, kube-proxy (x / x-1 / x-2)
 kubectl (x+1 / x / x-1)
 CoreDNS / etcd are third-party components whose versions
are independent of the k8s version.
Procedure involved in upgrading the Kubernetes cluster.
The bible here is the k8s documentation; follow it closely.
The typical steps involve…

v1.26.7 …. 1 > major version, 26 > minor version, 7 > patch version

Find what version of Linux distribution you are running:

cat /etc/*release*

Update your apt package index and see what versions of kubeadm are
available:
apt update
apt-cache madison kubeadm

Upgrade the kubeadm and see that it is of the version that you are
interested in.
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.27.x-00 && \
apt-mark hold kubeadm

See the plan that kubeadm offers and apply the respective version
plan:
kubeadm upgrade plan
kubeadm upgrade apply <version>

Before upgrading the node components (kubelet or
kubectl, which are not deployed by kubeadm), drain the node, be
it a control plane node or a worker node.
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.27.x-00 kubectl=1.27.x-
00 && \
apt-mark hold kubelet kubectl

after that restart the services and uncordon the nodes.

On worker nodes, before cordoning and upgrading the kubelets, here
is the simple command you should run:
sudo kubeadm upgrade node
Another important concept after upgrading the Kubernetes cluster is
backing up the Kubernetes cluster.
One thing you can do is store all the object manifest files in a secure
and safe location.
The other is backing up the etcd cluster, since it is the source of
truth for the cluster.
export ETCDCTL_API=3 … required so etcdctl uses the v3 API
etcdctl snapshot save <snapshotfile>
Note:
The etcd server has a --data-dir option where all the etcd data is
stored.
etcdctl snapshot restore <snapshotfile> --data-dir <dir path to which the saved data
needs to be restored>
After that, change the --data-dir value and start the etcd server.
Maybe follow the documentation on how and what to do.
Note:
We cannot drain a node (without --force) if it has pods that are not part of a ReplicaSet.
Note:
Never fail to update apt and refresh the apt cache before
looking for the latest kubeadm version available.

Ah! That took a very long time to finish the test. There are many
important things that I have learned from it.
The etcd server itself uses a --data-dir which is added as a volume
mount to the container.

When we perform etcdctl snapshot save <filename>, you need to
mention where the snapshot needs to be stored.
etcdctl --data-dir <new path> snapshot restore <snapshotfile>
>> new path is the path to which you wish to restore the data from
the snapshot
>> SO… there is a difference between the snapshot path and the data path
though : \

Note:
To switch between the clusters just like that:
kubectl config use-context <context name>

Note:
If etcd is running on the same node as the control plane, then it is
set up as stacked etcd.
Ahaaaaaaaa, again I have taken a good amount of time to take this
test. There are a lot of points that I have learnt.

When etcd is running on an external server…

Whenever we take a backup, it is good to move it to another
secured server.
scp file.snapshot student-server:
This moves the file from the current server to the remote user's home directory on student-server.

Always check the paths to the certificates in the etcd service
arguments.
When a new data dir has been created, the etcd service user may not
have access to the data directory. We need to change its ownership
using the command
chown -R etcd:etcd <path to the data dir>
The value of the argument needs to be changed in the service unit
file (/etc/systemd/system/), then
systemctl daemon-reload
systemctl restart etcd
are necessary.
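Putting the restore flow together (paths and names are illustrative only):

# on the etcd server
ETCDCTL_API=3 etcdctl snapshot restore /opt/backup/etcd.snapshot \
  --data-dir /var/lib/etcd-from-backup
chown -R etcd:etcd /var/lib/etcd-from-backup
# point --data-dir in /etc/systemd/system/etcd.service at the new directory, then:
systemctl daemon-reload
systemctl restart etcd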
Authentication:
How an entity can authenticate to the k8s API server.
Static files: user+password, user+token.
Where are these files stored? (Probably on the control plane only, and
given as --basic-auth-file or --token-auth-file to the kube-apiserver.) But
who issues the token? (We will see in the upcoming lectures.)
Certificates, service accounts and third-party authentication services
like LDAP.

A typical static .csv file looks like

password1,user1,userID1,group1
password2,user2,userID2,group2
curl -v -k https://<controlplane IP>:6443/api/v1/pods -u "user1:password1"
curl -v -k https://<controlplane IP>:6443/api/v1/pods --header
"Authorization: Bearer <token>"
This is not at all recommended; if needed, these static files are
given as volume mounts to the kube-apiserver pod.
Who issues the certificates?
For all the components in the k8s cluster, corresponding client and
server certificates are created.
For a normal external user, how the certificates are created is listed
below.
User: generates a key pair, then creates a .csr file containing the common
name and group name.
The content of the generated .csr file is base64 encoded and put into a
CSR (CertificateSigningRequest) object.
The admin approves the CSR, base64 decodes the signed certificate from it and
gives it to the user for authentication.
The catch here is how the admin signs the CSR and how certificates are
processed!
Anyone who has access to the CA key and CA cert can sign
certificates. That access is given to the certificate controllers (here
packaged inside the controller manager) via the
--cluster-signing-cert-file / --cluster-signing-key-file parameters of the controller manager service.
The controller manager processes the CSR objects.
Note:
The CSR object is defined under the certificates.k8s.io API group.
Enough with the theory, now jump to the important commands.
Note:
To view a certificate:
openssl x509 -in <path to the certificate> -text -noout
Case study:
Let's say "myuser" needs to authenticate to the cluster.
openssl genrsa -out myuser.key 2048
openssl req -new -key myuser.key -out myuser.csr
cat myuser.csr | base64 | tr -d "\n"
Create a CSR object, mentioning the request value and specifying what type of
auth it is: client auth or server auth.
kubectl get csr
kubectl certificate approve myuser
kubectl certificate deny myuser
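The CSR object itself looks roughly like this (the request field holds the base64-encoded .csr; expirationSeconds is optional):

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: myuser
spec:
  request: <base64-encoded contents of myuser.csr>
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400          # optional, one day
  usages:
  - client auth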
Note:
After approval, a certificate field is populated in the CSR status, which needs to
be base64 decoded and given to myuser.
After creating the necessary roles and rolebindings, myuser can use the
certificate in his kubeconfig file or in his API query to access the kube-
apiserver directly.
Notes:
Fields to check in any certificate: CN, alternate names, organization (group)
name, expiration date, issuer name.
Points to remember about the kubeconfig file:
K config view
K config view --kubeconfig=<path to the kubeconfig file>
Clusters, users and contexts are the contents of the kubeconfig file.
You can specify the namespace option in the context fields, so you will be
in that namespace when you, as that user, access the cluster.
K config use-context <context name>
K config current-context
Sometimes we see the whole key inline instead of a file path. That is because the
*-data properties (e.g. client-certificate-data) are used instead of the plain file variants.
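A stripped-down kubeconfig sketch (all names are placeholders):

apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://controlplane:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: myuser
  user:
    client-certificate: /home/myuser/myuser.crt
    client-key: /home/myuser/myuser.key
contexts:
- name: myuser@my-cluster
  context:
    cluster: my-cluster
    user: myuser
    namespace: dev
current-context: myuser@my-cluster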

Points to remember about API groups.

The kube-apiserver offers different API groups. Different resources are
defined and accessed within different API groups.
Broadly classified as the core API (v1, which defines pods) and the named
APIs.
Under named api, we have
Apps ( deployments/replicasets etc..)
Authentication api
Certificates api
Storage api
Networking api
Extensions api
Note:
All these apis allow us to use different verbs like create/delete etc..
Note:
I see --authorization-mode mentioned in the kube-apiserver config as
RBAC.
There are other authorization modes as well:
Node (the Node authorization module authorizes cluster components like the
kubelet by looking at the user name and the group they belong to,
like system:nodes, and grants the respective permissions.)
ABAC (Attribute-based access control module; it lets us create a
"policy" object for each user and specify what resources he can
access and what actions he can do. A tedious task.)
RBAC (Role-based access control module; we know about it.)
Webhook (we can route the authorization decision to a third-party
service)
AlwaysAllow
AlwaysDeny (needs validation)

--authorization-mode=Node,RBAC,Webhook — authorization is attempted in
the same order.
Role and RoleBinding:
Role: we need to look at the apiGroups, resources like pods,
resourceNames like a pod name, and verbs like create, delete, list, get
etc., and maybe the namespace as well.
RoleBinding: we need to look at the subjects (who the user is and of what
kind — need verification) and the roleRef (which role he needs to be assigned), etc.
A compact sketch of both is below.
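(Namespace and user names are made up.)

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: myuser
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io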
Note:
ClusterRole and ClusterRoleBinding are very similar, except that
they are applicable to cluster-level resources.
Example: nodes, PVs, clusterroles, clusterrolebindings, CSRs, storage classes
etc.
Note: Roles and RoleBindings apply to resources in the
namespace they are deployed in.
It is not necessary for a ClusterRole to cover only cluster-
wide resources. It can apply to namespaced
resources as well, except that the ClusterRole then covers that resource in all
the namespaces.
Note:
kubectl auth can-i delete po --as <username>
kubectl auth can-i delete po   # checks the privileges of the current user

Service Account:
This is a topic you have to pay attention to because it has
undergone changes in recent releases (around v1.22–v1.24).
The old way of working:
Roles and RoleBindings are for users.
ServiceAccounts are for the services that access the cluster.
K create sa <name>
There used to be a secret created for the sa.
Describe the secret to fetch the token and give it to the service.
If the services are inside the cluster, the token is mounted as a
volume.
Note:
When a cluster is brought up, a default sa is created in each namespace, and this
sa is given to every pod; it has limited privileges. If you want to assign
your own service account:
spec:
  serviceAccountName: <sa name>
  containers:
  ...
The token is then mounted as a volume, typically under
/var/run/secrets/kubernetes.io/serviceaccount.
This poses a security risk because the token is the same for all pods
using that service account and it has no expiry date. Also, there is additional effort in
creating secrets to get the token.

New way of handling Service account:


Now the service account is “subject bound”, “object bound” and
“time bound”.

What remains the same is that a default sa is created for every
namespace.
But there is the TokenRequest API which creates a token for a specified SA
that is time bound (by default 1 hr from the creation of the token, but a
--duration option can be given to set the expiration).
Also, there is the "ServiceAccountAdmissionController" which adds
the created token as a projected volume onto the container.
In-depth detail is not required in general…
The projected volume contains the crt files, the token, and downward API
info, which passes all of this to the container without forcefully pushing the
data into the container.
You can still create a token the old, secret-bound way, but
at your own risk.

Note:
spec:
  serviceAccountName: <sa name>
  automountServiceAccountToken: false
It is also possible to add our own serviceAccount while disabling the
automount of the default token.

Concept of image pulling in Kubernetes:

image: nginx
defaults to…
docker.io/library/nginx
docker.io is the DNS name of Docker Hub; if no account has been mentioned,
"library", the official Docker Hub account, is assumed, and the nginx
image is pulled.
image: Pujitha/nginx
docker.io/Pujitha/nginx is also possible.
What if we want to use a private registry,
like image: qvantel.io/Pujitha/nginx?
The k8s cluster needs to log in to the private registry, for which it requires
credentials.
For this, a "secret of type docker-registry" needs to be created:
K create secret docker-registry <secret name> \
--docker-server= \
--docker-username= \
--docker-password= \
--docker-email=
This secret should then be referenced as:
spec:
  containers:
  ...
  imagePullSecrets:
  - name: <secret name>
Security in running a container: Docker first, followed by Kubernetes.

By default Docker runs the container as the root user, which is
dangerous because the process in the container can then modify a lot on
the host. To overcome this, we have different options like:
docker run --user 1000 <image name>
or specify the user in the Dockerfile itself;
--cap-add or --cap-drop the different Linux capabilities like MAC_ADMIN, KILL,
or give all the privileges. This can be done with:
docker run --cap-add MAC_ADMIN <image name>
docker run --cap-drop KILL <image name>
docker run --privileged <image name>
A similar kind of security is arranged in Kubernetes pods themselves using
"security contexts":
spec:
  securityContext:        # pod level: applies to all the containers in the pod
    runAsUser: 1000
  containers:
  ...

or
spec:
  containers:
  - securityContext:      # container level
      runAsUser: 1000
      capabilities:
        add: ["MAC_ADMIN"]
NetworkPolicies:
Nothing new about network policies but there are few points to
remember.

The ingress rule is configured for the request; we need not create an
egress rule for the reply. We only need to configure an egress rule if
the pod makes a request of its own in turn.
policyTypes:
- Ingress
- Egress
It works this way:
ingress:
- from:
  - ipBlock:
  ports:
  - protocol:
    port:

egress:
- to:
This is the structure that you should remember.

kubectx is a useful tool to switch between contexts.

kubectx                  # lists all the contexts
kubectx <context-name>   # switch to the context
kubectx -                # switch back to the previous context
kubectx -c               # shows the current context

Note:
Another beautiful command pair that helps to debug is
crictl ps -a
crictl logs --previous <container id>
If it is a problem with a connection, it is normally the expiration date of
the certificate or certificates not being mounted correctly.

Note:
While creating a CSR, the "signerName" should be mentioned. Look for the
default signer names in the documentation.
"usages" also needs to be mentioned. Look for the allowable values in the
yaml file, like "client auth" or "server auth".

Another trick with CSRs: if we want to see what groups the user
is requesting access for, that can only be seen in the yaml output; the describe
command will not show these details.
Note:
You can only check auth for the namespaced resources.
If you are stuck on what resource name to use, use the one from the get
command.
K explain storageclass  # will give all the details along with its api-
group
verbs: ["*"]  # possible
Note:
It is always better to look for the info in yaml format when we can’t
see it in describe command like serviceaccounts used by the pod.
Note:
The RoleBinding subject kind can be "ServiceAccount" for an sa object.
A token can be created manually, without the saAdmissionController, using
the TokenRequest API:
K create token <sa name>

Note:
automountServiceAccountToken disables the automatic mounting of the
token into the pod by the ServiceAccountAdmissionController, but not the sa
itself.
securityContext also takes the capabilities field.
Beautiful, you can just get the manifest file structure by
K get networkpolicy -o yaml
Note:
Carefully look at the indentations when creating networkPolicies.
Storage:
Nothing new about PVs (hostPath / awsElasticBlockStore (fsType))
apart from an understanding of access modes.
ReadWriteOnce (volume can be read and written by a single node;
however, all the pods on the same node can read and write to it)
ReadWriteMany (volume can be read and written by many nodes)
ReadOnlyMany (volume can be mounted read-only by many
nodes)
ReadWriteOncePod (only one pod can read and write to it, no other
pod on any other node)
This is the order in which a PVC is checked for binding to a PV:
sufficient capacity
access modes
volume modes

storage class
selector
Note: Not sure about volume modes and selector. Will get to know.
Selector: a PVC can select a PV based on the labels of the PV.
The volume modes are Filesystem and Block.

Note: If the criteria don't match, the PVC will be in Pending state.
So, for a PV, the capacity, access modes and the type are important.
There is something called persistentVolumeReclaimPolicy (Retain/Delete/Recycle).
Available, Bound, Released, Failed — these are PV states.
The "Recycle" reclaim policy is deprecated, but we can create a
custom pod which deletes the data in the volume.
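A minimal PV + PVC sketch (sizes and paths are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/log/app
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-log
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi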

Note:
StatefulSets are out of CKA. They are important. You can learn.
StorageClasses:
When a PV is created that reserves storage from Google
Cloud, first the Google Cloud disk needs to be created, then you can
create a PV specifying the disk name. This is called static
provisioning.
But a StorageClass (defining a provisioner) will create the disk
via the provisioning plugin whenever a PVC claims storage mentioning
that storageClassName; it then automatically creates a PV and binds it
(dynamic provisioning). A sketch follows below.
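(The GCE provisioner shown here is just an example; use whatever your environment provides.)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-sc
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-dynamic
spec:
  storageClassName: gce-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi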
Note:
persistentVolumeReclaimPolicy: Retain
We can't delete the PVC if a pod is using the claim.
We need to first delete the pod and then the PVC.
The PV will come to the Released state only after the PVC is deleted.

When creating a PV and PVC, volume mode compatibility is also
necessary.

Difference between a DNS server and a DNS resolver:

DNS server > the server which holds the DNS entries, served by
some software such as CoreDNS.
DNS resolver > the DNS client software which sends the DNS
queries to the DNS server (it can be on the same client system or on a
completely different server) (needs validation).

 When we talk about DNS, the two helpful commands to get
DNS details are:
nslookup <name or IP address>
dig <name or IP address>
 When we talk about CoreDNS, the important config file we should
know is the Corefile, which has info on where to look up
DNS entries.
/etc/hosts

Basic networking commands:

Switches (to connect systems in a LAN)
ip link
ip addr
ip addr add <IP address/prefix> dev eth0 (what type of interface,
whether dev or bridge, is important)

Routers (to connect different LANs)

ip route
ip route add <destination network> via <IP of the router/gateway of
the LAN>

Note:
Even another server can act as a router. That said, data transfer
between two of its interfaces is possible only if
/proc/sys/net/ipv4/ip_forward = 1, or in /etc/sysctl.conf,
net.ipv4.ip_forward = 1

/etc/resolv.conf
nameserver <ip address>
search <web.com> <cloud.web.com>
DNS entries:
A type
AAAA type
CNAME (canonical name) entries
NAT tables:
Network Address Translation.
It allows the hosts in your private network to reach the public
network, using a NAT router (which can be another Linux machine as well).
How?
There is a connection tracking table in /proc/net/nf_conntrack,
which defines what the source address, the destination
address, the source port and the destination port are —
what we call NAT entries.
iptables -t nat -A POSTROUTING -s <ip address> -j MASQUERADE
will add an entry of type MASQUERADE to the NAT table.
iptables -t nat -L -v -n
# will list the entries in the NAT table
There are other types of entries as well: SNAT, DNAT…
Note:
A LAN can be divided into different VLANs ( like diff subnets) to
improve efficiency, security and performance.

To better understand virtual networks, we should know about
namespaces.
ip netns  # will list the available namespaces
Note:
Out of nowhere, I got this info: in order to capture the traffic on a
particular interface,
tcpdump -i <interface>
The loopback interface is a virtual interface with a specific IP address;
unlike a physical interface, whose IP address can be changed, the lo IP
address can't be changed.
Any packet given to the loopback interface is looped back to the sender
itself (mostly used for testing).
Note:
Ipnet > IP address of the interface, 128.0.0.1/24 is the subnet, that is,
the range of IP addresses that are forwarded to the ethernet.
Brd > broadcast IP, using this IP we can send request to all the range
of IP addresses.
For every sending request received on the interface, it looks into
routing table, if the gateway is 0.0.0.0, it means that no gateway
needed and the IP address is reachable on the network in which the
interface is connected.
If the gateway is some IP address, then it is typically a router which
routes the traffic to another network( all the NAT, publicly available
IP, comes into play)
In between there is something called, sending an arp to the
destination IP and getting its MAC address, these are some advanced
conecpts.
Concept of setting up the network between two namespaces.

ip link
ip netns add red
ip netns add blue
ip link add veth-red type veth peer name veth-blue
# setting up a virtual cable (veth pair)
ip link set veth-red netns red
ip link set veth-blue netns blue
ip netns exec red ip addr add <ip address> dev veth-red
ip netns exec blue ip addr add <ip address> dev veth-blue
# attaching each end to a namespace and giving it an address
ip -n blue link set veth-blue up
ip -n red link set veth-red up
# bringing the interfaces up
If there are more than two network namespaces, a bridge
network is configured like this:

Create veth pairs and attach one end to each namespace and the
other ends to a common switch (a bridge, which is considered an interface
of the host).
ip link add veth-red type veth peer name veth-red-brd
ip link set veth-red netns red
ip link set veth-red-brd master v-net-0
ip -n red addr add <ip address> dev veth-red
Since v-net-0 is an interface of the host, how do we add its
address?
ip addr add <address> dev v-net-0
How does the traffic on the virtual interface reach the physical switch?
It reaches the physical switch by using the host's bridge interface as a
gateway:
ip route add <destination network> via <bridge IP on the host>
To transfer a packet from the physical interface to the virtual interface, IP
forwarding should be enabled.
When the traffic goes out of the physical interface, rewrite the source
address to the host's IP address (masquerading), so that the reply traffic knows
where to come back.
The IP rewriting is done in the NAT table:
iptables -t nat -A POSTROUTING -s <namespace subnet> -j
MASQUERADE

Note:
ip link del veth-red  # deleting one end of a veth pair removes both
There is a concept of port forwarding in Docker:
docker run -p 8080:80 <image name>
It means that any request to host port 8080 is forwarded to
container port 80.
How does Docker do this?
Docker creates a DNAT rule to replace the destination address with the
container IP address:
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT
--to-destination <container address>:80
Note:
All these instructions can be packed behind a standard set of rules, and that is
how a container network interface (CNI) plugin is developed.
docker run --network none <image name>
bridge add <cont id> /var/run/netns/<cont id>
Note:
Docker doesn't support CNI because it was built well before it. There was a
workaround in k8s to use Docker as a container runtime (dockershim), but it is
no longer supported (containerd is typical now, with crictl as the CLI).
So the "container runtime" creates the container and the "container network
interface" creates the networking for the container.

Where to find what type of proxy is used by kube-proxy:

k logs <kube-proxy-pod-id> | grep proxy

--------------------------------

Where to check the cluster-level service IP range:

inspect the kube-apiserver manifest (--service-cluster-ip-range)
---------------------

How to find the interface created by weave:

ip addr | grep bridge
ip addr | grep weave

------------
What is the default gateway configured for the pods to reach the
other pods?

Go to the worker node and look at the routing table.

Pods try to connect to the other pods through the weave interface; try to
find the answer from that.
Display all the sockets along with their program names:
netstat -anp | grep etcd
Note:
It is important that, whatever cni plugin is configured, the
corresponding network solution should be installed.
Note:
The default gateway for the pods on any node is the bridge interface
created by the cni service.

This is a nice study guide..


https://devopscube.com/cka-exam-study-guide/

Note:
Why do we need to mention all the available etcd servers to the kube-
apiserver?
Etcd is a distributed system; we can access it through any of the
available servers, and they themselves elect the leader that updates the
database.
Some points to know about etcd:
Etcd uses the Raft protocol and works in leader-election mode.
The minimum number of nodes required for HA functionality,
what is called the quorum, is
floor(n/2) + 1.
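For example:
nodes (n):  1  2  3  4  5  6  7
quorum:     1  2  2  3  3  4  4
So 3 or 5 nodes are the usual choices; an even node count does not improve fault tolerance.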
Just a note:
Vagrant tool is a VM provisioning tool.
The process of setting up the Kubernetes cluster goes like this:
Bringing up the VMs using vagrant.
Install the container runtime.
Install the kubeadm
Initialize master
Set up the pod network
Join Workers.

In-detail steps:
After bringing up the VMs…
Before installing the container runtime, know what Linux distribution
you are running.
Set the ip_forward value to 1 and load all the necessary kernel
modules required for the network tracking mechanisms.
[Follow the documentation]

After that, install the container runtime. The easy way is to use the apt-get
package tool. Try to install only the container runtime and no other
components.
[Follow the documentation, which takes you to the GitHub repository.]

Before going forward to installing kubeadm…

A note about cgroup drivers:
Cgroups are the Linux mechanism that puts constraints on
resource usage.
The container runtime and the kubelet both need to interact with the
cgroup functionality, so the matching cgroup driver needs to be
configured.
[Look in the documentation for how to configure the related cgroup
driver. Some files need to be updated by clearing out the basic
configuration.]
ps -p 1  # will show the default init system Linux is using
After this, install kubeadm.
Note: All these steps need to be done on both the master node and the
worker nodes.
Now initialize the master node. There are certain parameters you
need to give:
--control-plane-endpoint  # in an HA cluster (not required now)
--pod-network-cidr  # looking at the bridge interface created by
containerd will help
--cri-socket  # kubeadm will identify well-known CRIs by itself (not
required now)
--apiserver-advertise-address  # IP address of the master node
Now, before going on to install the network plugin, make sure the
kubelet is configured with the same cgroup driver.
If not, you need to manually create a config file and pass it as the --config
parameter to the kubeadm init command.

The output of the master initialization needs to be saved. It contains
the commands that we need to run further, both on the master nodes
and on the worker nodes.
Run them accordingly and you will see the control plane components
come up and the worker nodes joining the cluster and communicating with
the master : )
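A typical init sequence looks something like this (the address and CIDR are placeholders):

kubeadm init \
  --apiserver-advertise-address=192.168.56.11 \
  --pod-network-cidr=10.244.0.0/16
# then, as printed in the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# and on each worker:
kubeadm join 192.168.56.11:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>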

Very important points when performing a cluster installation:

Check what version of Kubernetes you are installing and point to the
correct version of the documentation.

Some VM provisioning scripts like Vagrant use a default NAT interface,
which Flannel also picks by default, and that may not be correct.
Look for keywords like "flannel interface" in the official documentation.

(--iface=eth0 — add this as an argument to the flannel pod.)


Note:
Troubleshooting is a vast topic. Let me put down a few points where I
have gone wrong.

Application failure:
Check that names match: services, endpoints, ports, environment variables.
Control plane failure:
Mostly look at the logs of the containers for detailed output:
wrong executable, wrong mounts, etc.
Worker node failure:
Mostly an issue with the kubelet.
Carefully look at the journalctl -u kubelet logs.
Go one by one; carefully look at all the kubelet config files.

Network failure:
Issues with the CNI, like the CNI plugin not being installed.
Kube-proxy issues: carefully look at the kube-proxy DaemonSet; the values in the
ConfigMap should match what the DaemonSet manifest references (unlike volumes,
where we check if the files exist or not, here it is about the value).
Note:
The root element in a JSONPath query is $.
$.car.color
$.vehicles.color
Any output of a JSONPath query is a list [].
If this is a list:
[
  "pooja",
  "durga"
]
$[0]  # first element
$[?(@.model == "rear right")]  # filter expression
$[?(@ > 5)]  # prints all the values in the list greater than 5
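In kubectl this looks like (the fields depend on what you are querying, of course):

kubectl get pods -o jsonpath='{.items[*].metadata.name}'
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}'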

kubectl create namespace development
kubectl create serviceaccount sa -n development
kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods
kubectl create clusterrolebinding pod-reader --clusterrole=pod-reader \
  --serviceaccount=development:sa

To allow traffic from all sources, the ingress "from" section can be left
empty.
During troubleshooting, it's good to start checking from the names.
We can't debug if the container itself is not created.
Why is the initContainer going into the CrashLoopBackOff error?
From version 1.28, a sidecar container is implemented as an init
container by setting its restartPolicy to Always.
Before 1.28, it was implemented as a regular container with all the values,
so when looking at the docs, look at the correct version of the docs.

How do we test whether a pod is reachable from another pod?

kubectl run test --image=nginx --rm -it --restart=Never -- nslookup
<serviceName>
kubectl get deploy --show-labels
Note:
When downloading etcd and etcdctl, add the binaries to one of
the directories in the PATH variable.
subPath
subPath mounts only a specific path (or key) from the volume at the mountPath
instead of replacing the whole directory.
We can also use the same volume for another volume mount; we can
mount different parts of it at different mount path directories at the same
time.

volumes:
- name: v1
  configMap:
    name: <configmap name>
volumeMounts:
- name: v1
  mountPath: <mount path 1>

- name: v1
  mountPath: <mount path 2>

It works like..

/var/lib/kubelet/pods/<pod ID>/volumes/mountpath/value
/var/lib/kubelet/pods/<pod ID>/mountpath/value
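A sketch with subPath actually shown (the key names are made up):

spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: config-vol
      mountPath: /etc/app/app.conf
      subPath: app.conf          # only this key from the ConfigMap lands here
    - name: config-vol
      mountPath: /etc/app/extra.conf
      subPath: extra.conf
  volumes:
  - name: config-vol
    configMap:
      name: app-config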

journalctl -u kubelet --since "30 min ago" | grep "Error:"

When the kubelet service is not running, try running start kubelet instead
of restart kubelet:
systemctl start kubelet
K get po --show-labels
k get pods -o custom-columns='POD_NAME:.metadata.name,IP_ADDRESS:.status.podIP' \
  --sort-by=.status.podIP
To test the availability of a pod:
K run test --image=nginx --rm -it --restart=Never -- nslookup
<serviceName>

You might also like