Interview Questions
1. Installing Plugins
2. Installation of a Child Module
3. Initialization of the backend
What is a Terraform provider?
Answer: Terraform is a tool for building, changing, and managing infrastructure resources
such as physical machines, virtual machines (VMs), network switches, containers, and more.
A provider is responsible for the API interactions with such a platform and exposes its
resources to Terraform. Terraform supports a wide range of cloud providers.
What is terraform.d?
Answer: terraform.d is the plugin directory used on most operating systems, including Windows.
By default, terraform init searches the following directories for plugins.
How will you upgrade plugins on Terraform?
Answer: Run 'terraform init' with the '-upgrade' option. This command rechecks
releases.hashicorp.com for newer acceptable provider versions and downloads the available
provider versions. ".terraform/plugins/<OS>_<ARCH>" is the automatic download directory.
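For example, assuming the working directory has already been initialized once, upgrading the
providers is simply:
    terraform init -upgrade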
terraform init: In order to prepare the working directory for use with
Terraform, the terraform init command performs Backend Initialization, Child
Module Installation, and Plugin Installation.
terraform apply: The terraform apply command executes the actions
proposed in a Terraform plan
terraform apply –auto-approve: Skips interactive approval of plan before
applying.
terraform destroy: The terraform destroy command is a convenient way to
destroy all remote objects managed by a particular Terraform configuration.
terraform fmt: The terraform fmt command is used to rewrite Terraform
configuration files to a canonical format and style
terraform show: The terraform show command is used to provide human-
readable output from a state or plan file.
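As an illustration, a typical end-to-end workflow built from the commands above might look like
the following sketch (the plan file name is only an example):
    terraform init                  # install plugins, child modules, and the backend
    terraform fmt                   # rewrite configuration to the canonical format
    terraform plan -out=tfplan      # preview the proposed changes and save the plan
    terraform apply tfplan          # execute the saved plan
    terraform show                  # inspect the resulting state
    terraform destroy               # tear down the managed objects when no longer needed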
ANSIBLE
1. What is Ansible?
Ansible is a software tool developed in Python. It is useful for deploying applications over SSH
without any downtime. Using this tool, one can manage and configure software applications
very easily.
Key advantages of Ansible:
1. Agentless
2. Very low overhead
3. Good performance

Ansible vs Puppet: Ansible is the simpler technology, while Puppet is the more complex one.
There are many similar automation tools available, like Puppet, Capistrano, Chef, Salt, Space Walk,
etc., but Ansible categorizes machines into two types of servers: controlling machines and nodes.
The controlling machine is where Ansible is installed, and the nodes are managed by this controlling
machine over SSH. The location of the nodes is specified by the controlling machine through its
inventory.
The controlling machine (Ansible) deploys modules to nodes using SSH protocol and these
modules are stored temporarily on remote nodes and communicate with the Ansible machine
through a JSON connection over the standard output.
Ansible is agentless, which means there is no need to install any agent on the remote nodes; no
background daemons or programs are executing for Ansible when it is not managing any nodes.
Ansible can handle hundreds of nodes from a single system over an SSH connection, and an entire
operation can be handled and executed with the single command 'ansible'. But in some cases,
where you are required to execute multiple commands for a deployment, you can build playbooks.
Playbooks are a bunch of commands which can perform multiple tasks, and each playbook is
written in YAML file format.
Ansible can be used in IT infrastructure to manage and deploy software applications to remote
nodes. For example, let's say you need to deploy a single piece of software or multiple pieces of
software to hundreds of nodes with a single command; here Ansible comes into the picture. With
the help of Ansible you can deploy as many applications to as many nodes as you want with one
single command, but you must have a little programming knowledge to understand the Ansible
scripts.
We've compiled a series on Ansible, titled 'Preparation for the Deployment of your IT Infrastructure
with Ansible IT Automation Tool', across parts 1-4, which covers the following topics.
Modules: Ansible works by connecting to nodes and pushing out small programs called
"Ansible modules". They help to manage packages, system resources, files, libraries, etc.
Inventories: These are the lists of nodes or hosts, containing their databases, servers, IP
addresses, etc.
APIs: These are used for communicating with public or private cloud services.
Plugins: Plugins augment Ansible's core functionality. They also offer extensions and options
for the core features of Ansible, such as transforming data, connecting to inventory, and logging.
Playbooks: These describe the tasks that need to be executed. They are simple code files
written in YAML format and can be used to declare configurations, automate tasks, etc.
Hosts: Hosts are the node systems that are automated by Ansible, on any machine such as Linux
or Windows.
Networking: Ansible can be used to automate multiple networks and services. It uses a simple,
agentless automation framework for this.
Cloud: A system of remote servers that allows you to store, manage, and process data, rather
than using a local server.
CI/CD is one of the best software development practices to implement and develop code
effectively. CI stands for Continuous Integration, and CD stands for continuous delivery.
Continuous Integration is a collection of practices that drive developers to implement and check in
code to version control repositories. Continuous delivery picks up where continuous integration
ends. This process builds software in such a way that it can be released into production at any
given time.
Ansible is an excellent tool for CI/CD processes, as it provides a stable infrastructure to provision
the target environment and then deploys the application to it.
Yes, Ansible has the concept of roles that helps to create reusable content. To create a role, you
need to follow Ansible's conventions of structuring directories and naming files.
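For instance, a role skeleton following those conventions can be generated with ansible-galaxy
(the role name 'webserver' here is just an example):
    ansible-galaxy init webserver
    # creates webserver/ with tasks/, handlers/, templates/, files/, vars/, defaults/ and meta/
    # sub-directories, each containing a main.yml where appropriate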
Configuration management is the practice of handling updates and managing the consistency of a
product's performance over a period of time. Ansible is an open-source IT configuration
management tool which automates a wide variety of challenges in complex, multi-tier IT
application environments.
11. What are the differences between variable names and environment variables?
Variable names: Use the IPv4 address for variable names.
Environment variables: Use {{ ansible_env.SOME_VARIABLE }} for remote environment variables.
To create an empty file, Ansible uses the file module. For this, we need to set two parameters.
1. Path - This represents the location where the file gets created, given as either a relative or
an absolute path. The name of the file is also included here.
2. State - For creating a new file, this parameter should be set to touch.
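A minimal task using these two parameters might look like this (the path is only an example):
    - name: Create an empty file
      file:
        path: /tmp/example.txt
        state: touch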
13. How will you set the environment variable or any path for a task or entire
playbook?
To set the environment variables, we use the environment keyword. We'll use it at the task or
other levels in the play:
environment:
PATH: "{{ ansible_env.PATH }}:/thingy/bin"
SOME: value
14. How would you describe yourself in terms of what you do and how you’d like
to be remembered?
Obviously, I’d like to be remembered as a master of prose who forever changed the face of
literature as we know it, but I’m going to have to settle for being remembered as a science fiction
writer (and, more and more, critic) who wrote the occasional funny line and picked up a few
awards.
The incurable addiction had begun. Meanwhile, science classes just seemed to be the part of a
school that made the most sense, and I fell in love with Pelican pop-maths titles – especially
Kasner’s and Newman’s ‘Mathematics and the Imagination’ and all those books of Martin
Gardner’s ‘Scientific American’ columns.
16. Tell us about your software company and what sort of software it
produced(s).
This goes back to the 1980s and the Apricot home computers, the early, pretty, and non-PC-
compatible ones. My pal Chris Priest and I both used them for word processing, and he persuaded
me to put together a disk of utilities to improve the bundled ‘SuperWriter’ w/p, mostly written in
Borland Turbo Pascal 3 and later 4: two-column printing, automated book index preparation,
cleaning the crap out of the spellcheck dictionary, patching SuperWriter to produce dates in UK
format, and so on.
Then I redid the indexing software (‘AnsibleIndex’) in CP/M for the Amstrad PCW and its Locoscript
word processors. When the Apricot market collapsed, I wrote an Apricot emulator in assembler so
that people could keep using their horrible but familiar old software on a PC. Eventually, in a fit of
nostalgia, I collected all my columns for ‘Apricot File’ and various Amstrad PCW magazines as
books unoriginally titled ‘The Apricot Files’ and ‘The Limbo Files’. (That’s probably enough self-
promotion, but there’s lots more at https://ansible.uk/.)
17. Describe your newsletter Ansible and who it’s aimed at.
It appears monthly and has been called the ‘Private Eye’ of science fiction, but isn’t as cruel and
doesn’t (I hope) recycle old jokes quite as relentlessly. Though I feel a certain duty to list some
bread-and-butter material like conventions, award winners, and deaths in the field, ‘Ansible’ skips
the most boring SF news – the long lists of books acquired, books published, book sales figures,
major new remainders – in favor of quirkier items and poking fun at SF notables. The most popular
departments quote terrible lines from published SF/fantasy and bizarre things said about SF by
outsiders (‘As Others See Us’). All the back issues of ‘Ansible’ since it started in 1979 can be read
online.
Within the market, there are many automation tools like Puppet, Capistrano, Chef, Salt, Space
Walk, etc.
When it comes to Ansible, this tool is categorized into two types of servers:
1. Controlling machines
2. Nodes.
Ansible is agentless, so no agent needs to be installed on the nodes, and there are no background
programs executing while it is not managing any nodes.
Ansible is able to handle a lot of nodes from a single system over an SSH connection.
Playbooks are defined as a bunch of commands that are capable of performing multiple tasks, and
they are written in YAML format.
No, it is not possible to manage Windows Nano Server using Ansible as it doesn't have full access
to the .Net framework, which is primarily used by internal components and modules.
Ansible Tower is a web-based solution which makes Ansible very easy to use. It is considered to
be, or acts like, a hub for all of your automation tasks. The Tower is free for usage for up to
10 nodes.
Ansible Dashboard
Real-time job status updates
Multi-playbook workflows
Who Ran What Job When
Scale capacity with tower clusters
Integrated notifications
Schedule ansible jobs
Manage and track inventory
Remote command execution
REST API & Tower CLI Tool.
Usually, the documentation is kept in the main project folder in the git repository. Complete
instructions on this are available in the docs.
If you are just looking to access the existing variables then you can use the “env” lookup plugin.
For example:
local_home:”{{lookup(‘env’,’HOME’)}}”
It is not advised to manage a group of EC2 machines from your laptop. The best way is to connect
to a management node inside EC2 first and then execute Ansible from there.
27. Is it possible to increase the Ansible reboot module to more than 600
seconds?
Yes, it is possible to increase the Ansible reboot module to specific values using the below syntax:
reboot:
reboot_timeout: 1000
29. Explain how you will copy files recursively onto a target host?
The copy module in Ansible has a recursive parameter. However, if you have to copy a large
number of files, the synchronize module is the best choice for it.
- synchronize:
src: /first/absolute/path
dest: /second/absolute/path
delegate_to: "{{ inventory_hostname }}"
If cowsay is installed, then Ansible will use it when executing your playbooks.
If you would rather work in a professional, cow-free environment, then you have two options:
1. Uninstall cowsay
2. Set the environment variable, like below:
export ANSIBLE_NOCOWS=1
By default, Ansible gathers facts about the machines under management, and these facts can be
accessed in playbooks and in templates. The best way to view a list of all the facts that are
available for a machine is to run the setup module as an ad-hoc command:
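For example, targeting every host in the inventory:
    ansible all -m setup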
Once this command is executed, it will print out a dictionary of all the facts that are available for
that particular host. This is the best way to access the list of Ansible variables.
32. How can you see all the variables specific to my host?
To see all the host-specific variables, which include all facts and other sources:
{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}
'inventory_hostname' is a variable that represents the present host you are looping over.
34. How to configure a jump host for accessing servers that have no direct
access?
For example,
[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2
You can then set ansible_ssh_common_args for this group, so that when connecting to any host in
the group gatewayed, Ansible will append these arguments to the SSH command line.
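A common way to do this, as a sketch (the jump host and user names here are placeholders), is to
set the variable for the group in the inventory or in group_vars:
    [gatewayed:vars]
    ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q user@jumphost.example.com"'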
35. Explain how you can generate encrypted passwords for the user module?
The mkpasswd utility available on the Linux systems is also the best option:
mkpasswd --method=sha-512
Yes. If there is any task whose output you want to keep secret in the playbook when using -v
(verbose) mode, the following playbook attribute will be helpful:
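For example (the command here is only a placeholder):
    - name: Run a task with hidden output
      command: /usr/bin/do-something-secret
      no_log: True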
It hides sensitive information from others while still providing the verbose output.
Idempotence is an essential feature of Ansible, which helps you to execute one or more tasks on a
server as many times as needed, but without changing the result beyond the initial application.
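As a sketch of what this looks like in practice, the following task (the package name is chosen only
for illustration) can be run repeatedly and will report a change only on the first run:
    - name: Ensure nginx is installed
      yum:
        name: nginx
        state: present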
Yes, using the 'ansible-vault create' command, we can create encrypted files
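For example (the file name is just an example):
    ansible-vault create secrets.yml
    # prompts for a vault password and opens an editor for the new encrypted file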
40. How will you get access to the ansible_host variable when you delegate a task?
We can access it through hostvars, and this even works for all the overridden variables like
ansible_port, ansible_user, etc.
A tag is an attribute that you can set on Ansible structures (plays, tasks, roles). When there is an
extensive playbook, it is often more useful to run just a part of it as opposed to the entire thing.
That is where tags are required.
In Ansible, handlers are just like normal tasks in a playbook, but they run only when a task includes
the notify directive and indicates that it changed something. A handler runs only once, after all the
tasks in a particular play have executed. Handlers in a role are automatically loaded from
roles/<role_name>/handlers/main.yaml.
They are used to trigger the status of a service, such as restarting or stopping a service.
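A minimal sketch of the notify/handler mechanism (the service and template names are
placeholders):
    tasks:
      - name: Update the nginx configuration
        template:
          src: nginx.conf.j2
          dest: /etc/nginx/nginx.conf
        notify: Restart nginx

    handlers:
      - name: Restart nginx
        service:
          name: nginx
          state: restarted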
Using the command "sudo pip install ansible==<version-number>", we can easily upgrade
Ansible.
Ansible vs Chef: Ansible uses YAML (and is written in Python) for managing configurations,
whereas Chef uses a DSL (Ruby) for managing configurations.
There are several reasons for not shipping in X format. In general, it comes down to
maintainability. Within the market, there are tons of different ways to ship software, and it is very
tedious to support all of them.
47. What can Ansible do?
1. Configuration management
2. Application deployment
3. Task automation
4. IT orchestration.
Ansible Galaxy refers to the Galaxy website, where users are able to share roles, and to a CLI
(command-line interface) tool through which the installation, creation, and management of roles
happen.
49. Can you explain how to handle various machines requiring different user
accounts or ports to log in?
Just by setting inventories in the inventory file, we can handle various machines requiring
different user accounts or ports to log in.
For example, the following hosts have different ports and usernames:
[webservers]
asdf.example.com ansible_port=5000 ansible_user=alice
jkl.example.com ansible_port=5001 ansible_user=bob
[testcluster]
localhost ansible_connection=local
/path/to/chroot1 ansible_connection=chroot
foo.example.com ansible_connection=paramiko
Ansible and Ansible Tower by Red Hat are both end-to-end, complete automation platforms
which are capable of providing the following features or functionalities:
Provisioning
Deploying applications
Orchestrating workflows
Manage IT systems
Configuration of IT systems
Networks
Applications.
All of these activities are handled by Ansible, which can help a business solve real-time business
problems.
Yes, Ansible is an open-source tool; it is powerful automation software that anyone can use.
53. Why should you learn Ansible?
Ansible is primarily a tool for servers, but does it have anything for networking? If you look closely,
there is some support available in the market for networking devices. Using this tool will give you
an overall view of your environment as well as knowledge of how it works when it comes to
network automation.
You need to have a virtual machine with Linux installed, which has Python version 2.6 or higher.
yum update
Once Ansible is installed and the basic setup has been completed, an inventory is created. This
would be the base, and one can start testing Ansible. To connect to a different device, you have to
use the ping module. This can be used as a simple connection test.
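For example, a simple connection test against every host in an inventory file might look like this
(the inventory path shown is just the common default):
    ansible all -i /etc/ansible/hosts -m ping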
It is an open-source tool that primarily works on Python. If you are good at programming in
Python, you can start creating your own modules in a few hours from scratch, and you don't need
to have any prior knowledge of the same.
After completing the basic setup, one should look at the module called "setup". Using this setup
module, you will be able to find out a lot of information about the managed hosts.
The term "facts" is commonly used in an Ansible environment. Facts are described in the playbook
areas where they display known and discovered variables about the system. Facts are used to
implement conditional execution and also to get ad-hoc information about the system.
So if you want to extract only a certain part of the information then you can use the “setup”
module where you will have an option to filter out the output and just get hold of the fact that you
are in need of.
60. What is ask_pass in ansible?
This controls whether ansible-playbook prompts for a password by default. Usually, the default
behavior is no:
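In ansible.cfg this corresponds to the following setting (shown here flipped to True for
illustration):
    [defaults]
    ask_pass = True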
If you are using SSH keys for authentication purposes then you really don’t have to change this
setting at all.
The ask_sudo_pass control determines whether ansible-playbook prompts for a sudo password.
Usually, the default behavior is no:
ask_sudo_pass = True
One has to make sure to change this setting where sudo passwords are enabled most of the
time.
Using this control, we can determine whether Ansible Playbook should prompt for the vault
password by default. As usual, the default behavior is no:
ask_vault_pass = True
Callbacks are pieces of code in Ansible environments that are used to hook into specific events
and permit notifications.
This is more of a developer-related feature; it allows low-level extensions around Ansible so that
they can be loaded from different locations without any problem.
Ansible provides a wide variety of module utilities that help developers while developing their
own modules. basic.py is the module utility that provides the main entry point for accessing the
Ansible library, and using those basics one can start working.
Unit tests for all the modules are available in ./test/units/modules. First you have to set up your
testing environment.
Well, ad-hoc commands are nothing but commands which are used to do something quickly; they
are more of a one-time use. In contrast, a playbook is used for repeated actions, which is
something that is very useful in the Ansible environment. But there might be scenarios where we
want to use an ad-hoc command to simply do the required, non-repetitive activity.
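For example, a one-off check of uptime across a host group (the group name is only an example)
looks like this:
    ansible webservers -m shell -a 'uptime'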
KUBERNETES
Containers are a technology for packaging the compiled code of an application together with the
dependencies it needs at run-time. Each container allows you to run repeatable, standard
dependencies and get the same behavior whenever the container runs. It decouples the application
from the underlying host infrastructure to make deployment much easier on cloud or OS platforms.
A node is a worker machine or VM depending on the cluster. Each node contains services to run
the pods and the pods are managed by the master components.
Q4. What are the services that run on a node, and what are their responsibilities?
Container run-time
Kubelet
Kube-proxy
The container runtime is responsible for starting and managing the containers. The kubelet is
responsible for maintaining the state of each node; it receives commands from the master and acts
on them, and it is also responsible for the metric collection of pods. The kube-proxy is a component
that manages the subnets and makes services available to all the other components.
A master node is a node that controls and manages the set of worker nodes and, together with
them, forms a cluster in Kubernetes.
The main components of the master node that help to manage worker nodes are as follows:
Kube API server: It acts as the front end of the cluster and communicates with the cluster through
the API.
Kube controller manager: It implements governance across the cluster and runs the set of
controllers.
Kube scheduler: It schedules the activities of the nodes and assigns pods to nodes based on the
available node resources.
Pods are used in two main ways:
1. Pods that run a single container
2. Pods that run multiple containers when the containers are required to work together
There are three different types of multi-container pods. They are as follows:
Sidecar: The sidecar pattern is a single-node pattern made of two containers. The first contains
the core logic of the application, and the sidecar container extends it, for example by sending the
application's log files to a bucket.
Adapter: It is used to standardize and normalize the output of the application or monitoring data
for aggregation. It performs restructuring and reformatting so that the output is written in the
correct format.
Ambassador: It is a proxy pattern that allows other containers to connect to a port on the
localhost.
Q9. What is a namespace? How many namespaces are there in Kubernetes?
A namespace is used when multiple teams or projects are spread across a cluster. It is used to
divide the cluster resources between multiple users.
1. Default: the namespace that the cluster comes with out of the box, when no other namespaces
exist
2. Kube-system: the namespace for objects created by the Kubernetes system itself
3. Kube-public: the namespace that is created automatically and is visible and readable publicly
throughout the whole cluster; the public aspect of this namespace is mostly reserved for cluster
usage
Docker provides the lifecycle management of a container, and a Docker image builds the run-time
of a container. The containers run on multiple hosts through a link and are orchestrated using
Kubernetes. Docker builds these containers, and Kubernetes helps them communicate across
multiple hosts.
Auto-scaling: Docker cannot do auto-scaling, whereas Kubernetes can do auto-scaling.
Logging and monitoring: Docker relies on third-party tools such as the ELK stack for logging and
monitoring, whereas Kubernetes has in-built tools for logging and monitoring.
Container orchestration is used to communicate with several micro-services that are placed inside
a single container of an application to perform various tasks.
There are many Container orchestration tools that provide a framework for managing
microservices and containers at scale. The popular tools for container orchestration are as
follows:
Kubernetes
Docker swarm
Apache Mesos
Q15. What are the major operations of Kubelet as a node service component in
Kubernetes?
The kubelet is the node agent that communicates with the master components and works on all
the parts of the node.
It merges the available CPU, memory, and disk of a node into a large Kubernetes cluster.
It provides access to the controller to check and report the status of the cluster.
Pods
Deployments
Distinctive identities
Stateful sets
Daemon sets
Q17. What is the difference between the pod and the container?
Pods are collections of containers and are used as the unit of replication in Kubernetes. Containers
hold the packaged code that runs inside a pod of the application. Containers can communicate
with the other containers in the same pod.
Ans: A StatefulSet is a workload API object used to manage stateful applications. It is used to
manage deployments and to scale sets of pods. The state information and other resilient data of
stateful pods are stored and maintained in the disk storage that is connected with the StatefulSet.
Replication controllers act as supervisors for all long-running pods. They ensure that the specified
number of pods is running at run-time and also that a pod or a set of pods is homogeneous in
nature. The controller maintains the desired number of pods: if there are more pods than desired,
it will terminate the extra pods, and if a pod fails, the controller will automatically replace the
failed pod.
It provides an automated and advanced scheduler to launch containers on the cluster.
It replaces, reschedules, and restarts containers that have failed.
It supports rollouts and rollbacks for the desired state of the containerized application.
Kubectl is the command-line tool used to control the Kubernetes clusters. It provides the CLI to
run the command against clusters to create and manage the Kubernetes components.
The Google Container Engine (GKE) is a managed environment for Docker containers and clusters.
This Kubernetes-based container engine supports only the clusters that run within Google's public
cloud services.
Cluster IP: It exposes the service on the cluster's internal IP and makes the service reachable only
from within the cluster.
Node port: It exposes the service on each node's IP at a static port.
Load balancer: It exposes the service externally using a cloud provider's load balancer.
External name: It maps the service to the contents of the externalName field by returning a
CNAME record.
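As an illustration of one of these types, a minimal NodePort service might look like the following
sketch (names, labels, and port numbers are placeholders):
    apiVersion: v1
    kind: Service
    metadata:
      name: example-service
    spec:
      type: NodePort
      selector:
        app: example-app
      ports:
        - port: 80          # service port inside the cluster
          targetPort: 8080  # container port on the pods
          nodePort: 30080   # static port exposed on each node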
Grafana
Heapster
CAdvisor
InfluxDB
Prometheus
Heapster is a performance monitoring and metric collection system. It provides cluster-wide data
aggregation by running with a kubelet on each node. It allows for the collection of metrics, pods,
workloads, containers, and other signals that are generated by the clusters.
A DaemonSet ensures that all the eligible nodes run a copy of the pod, and that the pod runs only
once on a host. It is created and scheduled by the DaemonSet controller. A daemon is a process
that runs in the background and does not produce any visible output.
It runs log-collection daemons, such as fluentd or filebeat, on every node.
It runs node monitoring on every node.
A ReplicaSet is used to maintain a stable set of replica pods. It is used to specify the number of
identical pods that should be available. It is also sometimes considered a replacement for the
replication controller.
ETCD is the distributed key-value store. It stores and replicates the configuring data of the
Kubernetes cluster.
Kubernetes FAQs
An ingress controller is a pod that acts as an inbound traffic handler. It is responsible for reading
the ingress resource information and processing the data accordingly.
Q32. Which kind of selector is used in the replication controller?
The replication controller uses an equality-based selector that allows filtering by label keys and
values. It only looks for pods which have the same values as the label.
The load balancer is a way of distributing the loads, which is easy to implement at the dispatch
level. Each load balancer sits between the client devices and the backend servers. It receives and
distributes the incoming requests to all available servers.
There are two different load balancers: the internal load balancer, which balances the load and
allocates it to the pods automatically with the required configuration, and the external load
balancer, which directs the traffic from external sources to the backend pods.
Minikube is a type of tool that helps to run Kubernetes locally. It runs on a single-node
Kubernetes cluster inside a Virtual machine (VM).
Prometheus is an open-source toolkit that is used for metric-based monitoring and alerting for
applications. It provides a data model and a query language and can provide details and actions on
metrics. It supports instrumenting applications in many languages through client libraries. The
Prometheus Operator provides easy monitoring for deployments and k8s services, alongside
Alertmanager and Grafana.
Kubernetes allows desired-state management through cluster services and a specified
configuration. These cluster services run the configuration in the infrastructure. The steps involved
in this process are as follows:
The deployment file contains all the configuration that is fed into the cluster.
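A minimal sketch of such a deployment file, with placeholder names and image, is shown below;
feeding it to the cluster with 'kubectl apply -f deployment.yaml' asks Kubernetes to maintain the
declared state:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-deployment
    spec:
      replicas: 3                 # desired state: three identical pods
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
            - name: example-container
              image: nginx:1.25   # placeholder image
              ports:
                - containerPort: 80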
ClusterIP is the default Kubernetes service type. It provides a link between the pods by mapping
the container port and the host ports. It provides the service within the cluster and gives access to
other apps which are inside the same cluster.
The Different types of controller managers that can run on the master node are as follows:
Endpoints controller
Namespace controller
Replication controller
Node controller
Token controller
The Kubernetes architecture provides a flexible, loosely coupled mechanism for services. It consists
of one master node and multiple worker nodes. The master node is responsible for managing the
cluster, the API, and the scheduling of pods. Each node runs a container runtime, such as Docker or
rkt, along with an agent that communicates with the master.
Master node
Worker node
The kube-apiserver is the front end of the master node and exposes all the components through
the API server. It provides communication between the Kubernetes nodes and the master
components.
Q44. What are the advantages of Kubernetes?
1. Kubernetes Terminology
Cluster: It can be thought of as a group of physical or virtual servers where Kubernetes is installed.
Nodes: There are two types of nodes:
1. A master node is a physical or virtual server that is used to control the Kubernetes cluster.
2. A worker node is a physical or virtual server where the workload runs in the given container
technology.
Pods: The group of containers that shares the same network namespaces.
Labels: These are the key-value pairs defined by the user and associated with pods.
Master: The control plane components that provide access points for admins to manage the
cluster workloads.
Service: It can be viewed as an abstraction that serves as a proxy for a group of pods performing a
"service".
2. Kubernetes Commands
1. Nodes:
Shortcode = no
kubectl get node - To list down all worker nodes.
kubectl delete node <node_name> - To delete the given node in the cluster.
kubectl top node - To show metrics for a given node.
kubectl describe nodes | grep ALLOCATED -A 5 - To describe all the nodes in verbose.
kubectl get pods -o wide | grep <node_name> - To list all pods in the current namespace, with more details.
kubectl get no -o wide - To list all the nodes with more details.
kubectl describe no - To describe the given node in verbose.
kubectl annotate node <node_name> - To add an annotation for the given node.
kubectl uncordon node <node_name> - To mark my-node as schedulable.
kubectl label node - To add a label to the given node.
2. Pods
Shortcode = po
kubectl get po - To list the available pods in the default namespace.
kubectl describe pod <pod_name> - To list the detailed description of a pod.
kubectl delete pod <pod_name> - To delete a pod with the given name.
kubectl run <pod_name> --image=<image> - To create a pod with the given name.
kubectl get pod -n <name_space> - To list all the pods in a namespace.
kubectl run <pod_name> --image=<image> -n <name_space> - To create a pod with the given name in a namespace.
3. Namespaces
Shortcode = ns
kubectl create namespace <namespace_name> - To create a namespace with the given name.
kubectl get namespace - To list the current namespaces in a cluster.
kubectl describe namespace <namespace_name> - To display the detailed state of one or more namespaces.
kubectl delete namespace <namespace_name> - To delete a namespace.
kubectl edit namespace <namespace_name> - To edit and update the definition of a namespace.
4. Services
Shortcode = svc
kubectl get services - To list one or more services.
kubectl describe services <services_name> - To list the detailed display of a service.
kubectl delete services --all - To delete all the services.
kubectl delete service <service_name> - To delete a particular service.
5. Deployments
kubectl create deployment <deployment_name> - To create a new deployment.
kubectl get deployment - To list one or more deployments.
kubectl describe deployment <deployment_name> - To list the detailed state of one or more deployments.
kubectl delete deployment <deployment_name> - To delete a deployment.
6. DaemonSets
kubectl get ds - To list out all the daemon sets.
kubectl get ds --all-namespaces - To list out the daemon sets across all namespaces.
kubectl describe ds <daemonset_name> -n <namespace_name> - To list out the detailed information for a daemon set inside a namespace.
7. Events
kubectl get events - To list down the recent events for all the resources in the system.
kubectl get events --field-selector involvedObject.kind!=Pod - To list down all the events except the pod events.
kubectl get events --field-selector type!=Normal - To filter out normal events from a list of events.
8. Logs
Logs are useful when debugging problems and monitoring cluster activity. They help to understand
what is happening inside the application.
kubectl logs <pod_name> - To display the logs for a pod with the given name.
kubectl logs --since=1h <pod_name> - To display the logs of the last 1 hour for the pod with the given name.
kubectl logs --tail=20 <pod_name> - To display the most recent 20 lines of logs.
kubectl logs -c <container_name> <pod_name> - To display the logs for a container in a pod with the given names.
kubectl logs <pod_name> > pod.log - To save the logs into a file named pod.log.
9. ReplicaSets
kubectl get replicasets - To list down the ReplicaSets.
kubectl describe replicasets <replicaset_name> - To list down the detailed state of one or more ReplicaSets.
kubectl scale --replicas=[x] rs/<replicaset_name> - To scale a ReplicaSet.
10. Service Accounts
kubectl get serviceaccounts - To list service accounts.
kubectl describe serviceaccounts - To list the detailed state of one or more service accounts.
kubectl replace serviceaccounts - To replace a service account.
kubectl delete serviceaccounts <name> - To delete a service account.
kubectl taint <node_name> <taint_name> - To update the taints on one or more nodes.
kubectl label pod <pod_name> - To add or update the label of a pod.
kubectl version - To get the information related to the version.
kubectl cluster-info - To get the information related to the cluster.
kubectl config view - To get the configuration details.
kubectl describe node <node_name> - To get the information about a node.
kubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir - To copy the /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace.
kubectl cp /tmp/foo my-pod:/tmp/bar -c my-container - To copy the /tmp/foo local file to /tmp/bar in a specific container of a remote pod.
kubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar - To copy the /tmp/foo local file to /tmp/bar in a remote pod in the namespace my-namespace.
kubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar - To copy /tmp/foo from a remote pod to /tmp/bar locally.
DOCKER
2. What is Docker?
Docker can be defined as a Containerization platform that packs all your applications, and all the
necessary dependencies combined to form containers. This will not only ensure the applications
work seamlessly given any environment but also provides better efficiency to your Production-
ready applications. Docker wraps up bits and pieces of software with all the needed filesystems
containing everything that needs to run the code, provide the runtime, system tools/libraries. This
will ensure that the software is always run and executed the same, regardless of the environment.
Containers run on the same machine, sharing the same operating system kernel, which makes them
fast: starting the application is the only time required to start your Docker container (remember
that the OS kernel is already up and running and uses the least RAM possible).
Docker is lightweight and more efficient in terms of resource uses because it uses the host
underlying kernel rather than creating its own hypervisor.
To start with, Docker is a relatively fresh and upcoming project. Since its inception happened in the
cloud era, it has been way better than many of the competing container technologies which ruled
until Docker came into existence. There is an active community working towards the betterment of
Docker, and it has also started extending its support to Windows and Mac OS X environments in
recent times. Other than these, below are the best possible reasons to highlight Docker as one of
the better options to choose over the existing container technologies.
A Docker image can be understood as a template from which Docker containers can be created as
many as we want out of that single Docker image. Having said that, to put it in layman's terms,
Docker containers are created out of Docker images. Docker images are created with the build
command, and this produces a container that starts when it is run. Docker images are stored in
a Docker registry such as the public Docker registry (registry.hub.docker.com); they are designed
to be composed of layers of other images, so that only a minimal amount of data is sent over the
network.
This is a very important question so just make sure you don’t deviate from the topic and I will
advise you to follow the below mentioned format:
Docker containers include the application and all of its dependencies, but share the kernel
with other containers, running as isolated processes in user space on the host operating
system. Docker containers are not tied to any specific infrastructure: they run on any
computer, on any infrastructure, and in any cloud.
Now explain how to create a Docker container: Docker containers can be created by either
creating a Docker image and then running it, or you can use Docker images that are present
on Docker Hub.
Docker hub is a cloud-based registry service that allows you to link to code repositories, build
your images and test them, store manually pushed images, and link to the Docker cloud so you
can deploy images to your hosts. It provides a centralized resource for container image discovery,
distribution and change management, user and team collaboration, and workflow automation
throughout the development pipeline.
Docker Swarm can be best understood as the native way of Clustering implementation for Docker
itself. Docker Swarm turns a pool of Docker hosts into a single and virtual Docker host. It serves
the standard Docker API or any other tool that can already communicate with a Docker daemon
and can make use of Docker Swarm to scale in a transparent way to multiple hosts. Following is a
list of some of the supported tools that will be helpful in achieving what we have discussed just
now.
Dokku
Docker Compose
Docker Machine
Jenkins.
Dockerfile is nothing but a set of instructions that have to be passed on to Docker itself so that it
can build images automatically by reading these instructions from that specified Dockerfile. A
Dockerfile is a text document that contains all the commands a user could call on the command
line to assemble an image. Using docker build users can create an automated build that executes
several command-line instructions in succession.
10. Can I use JSON instead of YAML for my compose file in Docker?
YES, you can very comfortably use JSON instead of the default YAML for your Docker Compose file.
In order to use a JSON file with Compose, you need to specify the filename as follows:
docker-compose -f docker-compose.json up
11. Tell us how you have used Docker in your past position?
This is a question that you could bring upon your whole experience with Docker and if you have
used any other Container technologies before Docker. You could also explain the ease that this
technology has brought in the automation of the development to production lifecycle
management. You can also discuss any other integrations that you might have worked on along
with Docker such as Puppet, Chef, or even the most popular of all technologies – Jenkins. If you do
not have any experience with Docker itself but similar tools from this space, you could convey the
same and also show your interest in learning this leading containerization technology.
You can create a Docker container out of any specific Docker image of your choice, and this can be
achieved using the docker run command.
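A typical invocation, with placeholder image and container names, would be:
    docker run -it -d --name <container_name> <image_name>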
The command above will create the container and also start it for you. In order to check whether
the Docker container was created and whether it is running or not, you can make use of the
following command. This command will list out all the Docker containers along with their statuses
on the host that the Docker containers run on.
docker ps -a
The following command can be used to stop a certain Docker container with the container id as
CONTAINER_ID:
docker stop CONTAINER_ID
The following command can be used to restart a certain Docker container with the container id as
CONTAINER_ID:
docker restart CONTAINER_ID
The best examples among web deployments, like Google and Twitter, and among platform
providers, like Heroku and dotCloud, run on Docker, which can scale from hundreds of thousands to
millions of containers running in parallel, given the condition that the OS and the memory don't run
out on the hosts which run all these innumerable containers hosting your applications.
Docker is currently available on a number of platforms and Linux vendors, and it is also able to run
on the following cloud environments:
Amazon EC2
Google Compute Engine
Microsoft Azure
Rackspace
Docker is extending its support to Windows and Mac OSX environments and support on Windows
has been on the growth in a very drastic manner.
There is no loss of data when any of your Docker containers exits, as any data that your application
writes to the disk is preserved. This holds until the container is explicitly deleted: the file system for
the Docker container persists even after the Docker container is halted.
17. What, in your opinion, is the most exciting potential use for Docker?
The most exciting potential use of Docker that I can think of is its build pipeline. Most of the
Docker professionals are seen using hyper-scaling with containers, and indeed get a lot of
containers on the host that it actually runs on. These are also known to be blatantly fast. Most of
the development–test build pipeline is completely automated using the Docker framework.
18. Why is Docker the new craze in virtualization and cloud computing?
Docker is the newest and the latest craze in the world of Virtualization and also Cloud computing
because it is an ultra-lightweight containerization app that is brimming with potential to prove its
mettle.
A docker-compose stop will attempt to stop a specific Docker container by sending a SIGTERM
message. Once this message is delivered, it waits for the default timeout period of 10 seconds
and once the timeout period is crossed, it then sends out a SIGKILL message to the container – in
order to kill it forcefully. If you are actually waiting for the timeout period, then it means that the
containers are not shutting down on receiving SIGTERM signals/messages.
In an attempt to solve this issue, the following is what you can do:
Ensure that you are using the JSON form of the CMD and also the ENTRYPOINT in your Dockerfile.
The string form makes Docker run the process using bash, which can't handle signals properly.
If it is possible, modify the application which you intend to run by adding an explicit signal handler.
Also, set the stop_signal to a proper signal that the application can understand and knows how to
handle.
20. How do I run multiple copies of a Compose file on the same host?
Docker Compose makes use of the project name to create unique identifiers for all of the
project's containers and resources. In order to run multiple copies of the same project, you will
need to set a custom project name using the -p command-line option, or you could use
the COMPOSE_PROJECT_NAME environment variable for this purpose.
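For example, either of the following starts a second copy under a different project name (the name
itself is just an example):
    docker-compose -p my_second_copy up -d
    COMPOSE_PROJECT_NAME=my_second_copy docker-compose up -d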
Using the docker-compose run command, we are able to run one-off or ad-hoc tasks that are
required as per the business needs and requirements. This requires the service name to be
provided, and based on that, it will only start those containers for the services that the running
service depends on. Using the run command, you can run your tests or perform any administrative
tasks, like removing/adding data to the data volume container. It is also very similar to the
docker run -ti command, in that it opens up an interactive terminal to the container and returns
an exit status that matches the exit status of the process in the container.
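A hypothetical example, assuming a Compose service named 'web' that ships a Django manage.py
script:
    docker-compose run web python manage.py migrate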
Using the docker-compose start command, you can only restart the containers that were
previously created and were stopped. This command never creates any new Docker containers on
its own.
Dockerizing enterprise environments helps teams to leverage the Docker containers to form a
service platform like CaaS (Container as a Service). It gives teams the necessary agility, and
portability and also lets them control staying within their own network/environment.
Most of the developers opt to use Docker and Docker alone because of the flexibility and also the
ability that it provides to quickly build and ship applications to the rest of the world. Docker
containers are portable and these can run in any environment without making any additional
changes when the application developers have to move between Developer, Staging, and
Production environments. This whole process is seamlessly implemented without the need of
performing any recoding activities for any of the environments. These not only help reduce the
time between these lifecycle states but also ensures that the whole process is performed with
utmost efficiency. There is every possibility for the Developers to debug any certain issue, fix it
and also update the application with it and propagate this fix to the higher environments with the
utmost ease.
The operations teams can handle the security of the environments while also allowing the
developers to build and ship the applications in an independent manner. The CaaS platform that is
provided by the Docker framework can deploy on-premise and is also loaded with full of
enterprise-level security features such as role-based access control, integration with LDAP or any
Active Directory, image signing and etc. Operations teams have heavily relied on the scalability
provided by Docker and can also leverage the Dockerized applications across any environment.
Docker containers are so portable that it allows teams to migrate workloads that run on an
Amazon’s AWS environment to Microsoft Azure without even having to change its code and also
with no downtime at all. Docker allows teams to migrate these workloads from their cloud
environments to their physical datacenters and vice versa. This also enables organizations to focus
less on the infrastructure, gaining advantages both monetarily and in self-reliability thanks to
Docker. The lightweight nature of Docker containers compared to traditional tools like
virtualization, combined with the ability for Docker containers to run within VMs, allows teams to
optimize their infrastructure by 20X and save money in the process.
Depending on the environment where Docker is going to host the containers, there can be as
many containers as the environment supports. The application size, and available resources (like
CPU, and memory) will decide on the number of containers that can run on an environment.
Though containers do not create new CPUs or memory on their own, they can definitely provide
efficient ways of utilizing the resources. The containers themselves are super lightweight and only
last as long as the process they are running.
Having said that, you might come across situations where you’ll need to use both approaches. You
can have the image include the code using a COPY, and use a volume in your Compose file to
include the code from the host during development. The volume overrides the directory contents
of the image.
Docker containers are gaining popularity each passing day and definitely will be a quintessential
part of any professional Continuous Integration / Continuous Development pipelines. Having said
that there is equal responsibility on all the key stakeholders at each Organization to take up the
challenge of weighing the risks and gains on adopting technologies that are budding up on a daily
basis. In my humble opinion, Docker will be extremely effective in Organizations that appreciate
the consequences of Containerization.
We can identify the status of a Docker container by running the command 'docker ps -a', which
will in turn list down all the available Docker containers with their corresponding statuses on the
host. From there we can easily identify the container of interest and check its status
correspondingly.
27. What are the differences between the ‘docker run’ and the ‘docker create’?
The most important difference that can be noted is that, by using the ‘docker create’ command we
can create a Docker container in the Stopped state. We can also provide it with an ID that can be
stored for later usages as well.
This can be achieved by using the command 'docker run' with the option --cidfile FILE_NAME, like
this:
'docker run --cidfile FILE_NAME'
28. What are the various states that a Docker container can be in at any given point in
time?
There are four states that a Docker container can be in, at any given point in time. Those states
are as given as follows:
• Running
• Paused
• Restarting
• Exited
To answer this question blatantly, no, it is not possible to remove a container from Docker that is
just paused. It is a must that a container be in the stopped state before it can be removed from the
Docker host.
30. Is there a possibility that a container can restart all by itself in Docker?
To answer this question blatantly, no, it is not possible. The default --restart flag is set to 'no', so a
container never restarts on its own. If you want to tweak this, then you may give it a try.
31. What is the preferred way of removing containers - ‘docker rm -f’ or ‘docker stop’
then followed by a ‘docker rm’?
The best and preferred way of removing containers from Docker is to use 'docker stop', as it sends
a SIGTERM signal to the container's main process, giving it the time required to perform all the
finalization and cleanup tasks. Once this activity is completed, we can then comfortably remove the
container using the 'docker rm' command and thereby remove it from the host as well.
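In practice the preferred sequence looks like this (CONTAINER_ID is a placeholder):
    docker stop CONTAINER_ID     # sends SIGTERM and waits for the container to shut down cleanly
    docker rm CONTAINER_ID       # removes the stopped container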
A Docker image doesn't have a state, and its state never changes, as it is just a set of files, whereas
a Docker container has its execution state.