Interview Questions

What is Terraform?

Answer: Terraform is an open-source infrastructure-as-code software tool
created by HashiCorp. It is a tool for building, changing, and versioning
infrastructure safely and efficiently. Terraform can manage existing and
popular service providers as well as custom in-house solutions.
What do you mean by Terraform init?
Answer: Terraform initializes the working directory with the command terraform
init. This command sets up the working directory containing Terraform
configuration files. It is safe to run this command multiple times.
You can use the init command for:

1. Installing Plugins
2. Installation of a Child Module
3. Initialization of the backend
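
For example, a first run in a new configuration directory produces output
roughly like this (abbreviated):

$ terraform init

Initializing the backend...
Initializing provider plugins...

Terraform has been initialized successfully!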
What is Terraform provider?
Answer: Terraform is a tool for managing and provisioning infrastructure
resources such as physical machines, virtual machines (VMs), network switches,
containers, and more. A provider is responsible for understanding API
interactions and exposing resources. Terraform is compatible with a wide range
of cloud providers.
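
As a minimal sketch, a provider is declared in configuration like this (the
AWS provider, version constraint, and region are only examples):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}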

How does Terraform work?


Answer: Terraform creates an execution plan, defines what it will do to reach
the desired state, and then executes it to build the infrastructure described.
As the configuration changes, Terraform determines what changed and generates
incremental execution plans that can be applied.

What are the key features of Terraform?


Answer: Following are the key features of Terraform:
 Infrastructure as Code: Terraform’s high-level configuration language is used
to define your infrastructure in human-readable declarative configuration
files.
 You can create an editable, shareable, and reusable blueprint of your
infrastructure.
 Terraform generates an execution plan that specifies what it will do and asks
for your approval before making any infrastructure alterations, so you can
review the modifications before Terraform creates, updates, or destroys
infrastructure.
 Terraform builds a resource graph and creates or alters non-dependent
resources in parallel. This lets Terraform build resources as efficiently as
possible while giving you more insight into your infrastructure.
 Terraform’s change automation allows you to apply complex changesets to your
infrastructure with little to no human interaction.

What are the most useful Terraform commands?

Answer: Common commands:

 terraform init: Prepare your working directory for other commands


 terraform plan: Show changes required by the current configuration
 terraform apply: Create or update infrastructure
 terraform destroy: Destroy previously-created infrastructure

What do the following commands do?

Answer:
 terraform -version – shows the installed version of Terraform.
 terraform fmt – rewrites configuration files to a canonical style and format.
 terraform providers – shows information about the providers used in the
current configuration.

How would you recover from a failed apply in Terraform?
Answer: Save your configuration in version control and commit it before making
any changes; then use the features of your version control system to revert to
an earlier configuration if necessary. Recommit the previous version of the
code so that it becomes the new head revision in the version control system.
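
Assuming the configuration lives in Git, a recovery might look like this (the
commit reference is hypothetical):

$ git log --oneline          # find the last known-good commit
$ git revert <bad_commit>    # create a new commit that undoes the change
$ terraform plan             # review what Terraform will now change
$ terraform apply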

What do you mean by Terragrunt? List some of its use cases.
Answer: Terragrunt is a lightweight wrapper that adds tools for maintaining DRY
configurations, working with multiple Terraform modules, and managing remote
states.
Use cases:

 Keep your Terraform code DRY


 Maintain a DRY remote state configuration.
 Keep your CLI flags DRY
 Run Terraform commands on multiple modules at the same time.
 Use multiple AWS accounts.

What is State File Locking?


Answer: State file locking is a Terraform mechanism that prevents multiple
users from performing operations on the same state file at the same time. Once
one user releases the lock, any other user can take a lock on that state file
and operate on it. This helps prevent state file corruption. Acquiring a lock
on a state file is a backend operation, and if acquiring the lock takes longer
than expected, you will receive a status message as output.
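
For example, with the S3 backend, locking is commonly enabled through a
DynamoDB table (the bucket, key, region, and table names below are
placeholders):

terraform {
  backend "s3" {
    bucket         = "my-tf-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tf-state-lock" # enables state locking
  }
}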

What is a Remote Backend in Terraform?


Answer: A Terraform remote backend stores Terraform’s state and can also run
operations in Terraform Cloud. Many Terraform commands, such as init, plan,
apply, destroy (Terraform version >= v0.11.12), get, output, providers, state
(sub-commands: list, mv, pull, push, rm, show), taint, untaint, and validate,
are available via the remote backend. It is compatible with a single remote
Terraform Cloud workspace or multiple workspaces, and you can use Terraform
Cloud’s run environment to run remote operations such as terraform plan or
terraform apply.
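
A minimal sketch of a remote backend configuration (the organization and
workspace names are placeholders):

terraform {
  backend "remote" {
    organization = "my-org"

    workspaces {
      name = "my-workspace"
    }
  }
}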

What is Terraform D?
Answer: terraform.d is a plugin directory used on most operating systems and
on Windows. By default, terraform init searches these directories for plugins.
How will you upgrade plugins on Terraform?
Answer: Run terraform init with the -upgrade option. This command rechecks
releases.hashicorp.com for new acceptable provider versions and downloads the
available provider versions. ".terraform/plugins/<OS>_<ARCH>" is the automatic
download directory.

 terraform init: In order to prepare the working directory for use with
Terraform, the terraform init command performs Backend Initialization, Child
Module Installation, and Plugin Installation.
 terraform apply: The terraform apply command executes the actions
proposed in a Terraform plan
 terraform apply --auto-approve: Skips interactive approval of the plan before
applying.
 terraform destroy: The terraform destroy command is a convenient way to
destroy all remote objects managed by a particular Terraform configuration.
 terraform fmt: The terraform fmt command is used to rewrite Terraform
configuration files to a canonical format and style
 terraform show: The terraform show command is used to provide human-
readable output from a state or plan file.

ANSIBLE

1. What is Ansible?

Ansible is a software tool developed in Python. It is useful for deploying
applications over SSH without any downtime. Using this tool, one can manage
and configure software applications very easily.

2. Ansible Playbooks vs Roles

 Roles are reusable subsets of a play; playbooks contain plays.
 A role is a set of tasks for accomplishing a certain function; a playbook
maps hosts to roles.
 Role examples: common, webservers. Playbook examples: site.yml,
fooservers.yml, webservers.yml.

3. What are the advantages of using Ansible?

The three main advantages of using Ansible are:

1. Agentless
2. Very low overhead
3. Good performance.

4. Compare Ansible VS Puppet

 Ansible is the simpler technology; Puppet is more complex.
 Ansible playbooks are written in YAML; Puppet is written in Ruby.
 Ansible provides automated workflows for Continuous Delivery; Puppet focuses
on visualization and reporting.
 Ansible offers agentless install and deploy; Puppet offers an easy install.
 Ansible has no support for Windows; Puppet supports all major OSes.
 Ansible's GUI is a work in progress; Puppet has a good GUI.
 Ansible's CLI accepts commands in almost any language; with Puppet you must
learn the Puppet DSL.

5. How does Ansible Works?

There are many similar automation tools available like Puppet, Capistrano, Chef, Salt, Space Walk,
etc, but Ansible categorizes into two types of servers: controlling machines and nodes.

The controlling machine, where Ansible is installed and Nodes are managed by this controlling
machine over SSH. The location of nodes is specified by the controlling machine through its
inventory.

The controlling machine (Ansible) deploys modules to nodes using SSH protocol and these
modules are stored temporarily on remote nodes and communicate with the Ansible machine
through a JSON connection over the standard output.

Ansible is agent-less, which means no need for any agent installation on remote nodes, so it
means there are no background daemons or programs executing for Ansible when it’s not
managing any nodes.

Ansible can handle hundreds of nodes from a single system over an SSH
connection, and an entire operation can be handled and executed by the single
command ‘ansible’. But in cases where you are required to execute multiple
commands for a deployment, you can build playbooks.

Playbooks are a set of commands that can perform multiple tasks, and each
playbook is in YAML file format.
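
A minimal playbook sketch (the host group and package name are illustrative):

- name: Install and start nginx
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      yum:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started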

6. What’s the Use of Ansible?

Ansible can be used in IT infrastructure to manage and deploy software
applications to remote nodes. For example, say you need to deploy one or more
applications to hundreds of nodes with a single command; this is where Ansible
comes into the picture. With Ansible you can deploy as many applications to as
many nodes as needed with one single command, though you must have a little
programming knowledge to understand the Ansible scripts.


7. Explain Ansible architecture?


Ansible automation engine is the main component of Ansible, which interacts directly with the
configuration management database, cloud services, and various users who write playbooks to
execute it.


The following are the components of the Ansible Automation engine:

 Modules: Ansible works effectively by connecting to nodes and pushing out
scripts called "Ansible modules". Modules help to manage packages, system
resources, files, libraries, etc.
 Inventories: These are the lists of nodes or hosts, containing their
databases, servers, IP addresses, etc.
 APIs: These are used for communicating with public or private cloud services.
 Plugins: Plugins augment Ansible's core functionality and offer extensions
and options for the core features of Ansible - transforming data, connecting
to inventory, logging output, and more.
 Playbooks: Describe the tasks that need to be executed. They are simple code
files written in YAML format and can be used to declare configurations,
automate tasks, etc.
 Hosts: Hosts are the node systems, such as Linux, RedHat, or Windows
machines, that are automated by Ansible.
 Networking: Ansible can be used to automate multiple networks and services.
It uses a secure and simple automation framework for IT operations and
development.
 Cloud: A system of remote servers that allows you to store, manage, and
process data, rather than a local server.
 CMDB: A type of repository that acts as a data warehouse for IT
installations.

8. What is CI/CD? And how Ansible is related to it?

CI/CD is one of the best software development practices for implementing and
developing code effectively. CI stands for Continuous Integration, and CD
stands for continuous delivery. Continuous Integration is a collection of
practices that drive developers to implement and check in code to version
control repositories. Continuous delivery picks up where continuous
integration ends: it builds software in such a way that it can be released
into production at any given time.
Ansible is an excellent tool for CI/CD processes: it provides a stable
infrastructure for provisioning the target environment and then deploys the
application to it.

9. Can you create reusable content with Ansible?

Yes, Ansible has the concept of roles that helps to create reusable content.
To create a role, you need to follow Ansible's conventions for structuring
directories and naming files, as sketched below.
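
A typical role directory layout (the role name is illustrative):

roles/
  common/
    tasks/main.yml
    handlers/main.yml
    templates/
    files/
    vars/main.yml
    defaults/main.yml
    meta/main.yml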

10. Is Ansible a Configuration management tool?

Configuration management is the practice of handling updates and maintaining
the consistency of a product's performance over a period of time. Ansible is
an open-source IT configuration management tool that automates a wide variety
of challenges in complex, multi-tier IT application environments.

11. What are the differences between the variable name and environment
variables?

Variable names:
 Variable names can be built by adding strings together.
 Variable names support appending more strings.
 Use the IPv4 address for variable names.

Environment variables:
 Environment variables are accessed through existing variables.
 The advanced playbooks section sets the environment variables.
 Use {{ ansible_env.SOME_VARIABLE }} to access remote environment variables.

12. How to create an empty file with Ansible?

To create an empty file, Ansible uses the file module. For this, we need to
set two parameters, as sketched below.

1. path - The location where the file gets created, given as either a relative
or an absolute path; the name of the file is also included here.
2. state - For creating a new file, this parameter should be set to touch.
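
A minimal task sketch (the path is illustrative):

- name: Create an empty file
  file:
    path: /tmp/example.txt
    state: touch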

13. How will you set the environment variable or any path for a task or entire
playbook?

To set the environment variables, we use the environment keyword. We'll use it at the task or
other levels in the play:

environment:
  PATH: "{{ ansible_env.PATH }}:/thingy/bin"
  SOME: value

14. How would you describe yourself in terms of what you do and how you’d like
to be remembered?

Obviously, I’d like to be remembered as a master of prose who forever changed the face of
literature as we know it, but I’m going to have to settle for being remembered as a science fiction
writer (and, more and more, critic) who wrote the occasional funny line and picked up a few
awards.

15. Why are you attracted to science and science fiction?


Early imprinting, maybe, for science fiction. When I was quite small a family friend let me read his
1950s run of ‘Galaxy’ magazine. My favorite aunt pressed John Wyndham’s ‘The Day of the
Triffids’ on me; a more terrifying great-aunt gave me G.K. Chesterton’s fantastic novels; and so
on.

The incurable addiction had begun. Meanwhile, science classes just seemed to be the part of a
school that made the most sense, and I fell in love with Pelican pop-maths titles – especially
Kasner’s and Newman’s ‘Mathematics and the Imagination’ and all those books of Martin
Gardner’s ‘Scientific American’ columns.

16. Tell us about your software company and what sort of software it
produced(s).

This goes back to the 1980s and the Apricot home computers, the early, pretty, and non-PC-
compatible ones. My pal Chris Priest and I both used them for word processing, and he persuaded
me to put together a disk of utilities to improve the bundled ‘SuperWriter’ w/p, mostly written in
Borland Turbo Pascal 3 and later 4: two-column printing, automated book index preparation,
cleaning the crap out of the spellcheck dictionary, patching SuperWriter to produce dates in UK
format, and so on.

Then I redid the indexing software (‘AnsibleIndex’) in CP/M for the Amstrad PCW and its Locoscript
word processors. When the Apricot market collapsed, I wrote an Apricot emulator in assembler so
that people could keep using their horrible but familiar old software on a PC. Eventually, in a fit of
nostalgia, I collected all my columns for ‘Apricot File’ and various Amstrad PCW magazines as
books unoriginally titled ‘The Apricot Files’ and ‘The Limbo Files’. (That’s probably enough self-
promotion, but there’s lots more at https://ansible.uk/.)

17. Describe your newsletter Ansible and who it’s aimed at.

It appears monthly and has been called the ‘Private Eye’ of science fiction, but isn’t as cruel and
doesn’t (I hope) recycle old jokes quite as relentlessly. Though I feel a certain duty to list some
bread-and-butter material like conventions, award winners, and deaths in the field, ‘Ansible’ skips
the most boring SF news – the long lists of books acquired, books published, book sales figures,
major new remainders – in favor of quirkier items and poking fun at SF notables. The most popular
departments quote terrible lines from published SF/fantasy and bizarre things said about SF by
outsiders (‘As Others See Us’). All the back issues of ‘Ansible’ since it started in 1979 can be read
online.

18. So how does Ansible work? Please explain in detail.

In the market, there are many automation tools like Puppet, Capistrano, Chef,
Salt, Spacewalk, etc.

 When it comes to Ansible, this tool is categorized into two types of servers:

1. Controlling machines

2. Nodes.

 Ansible is an agentless tool, so it doesn't require any mandatory
installations on remote nodes; there are no background programs executing
while it is managing nodes.
 Ansible is able to handle a lot of nodes from a single system over an SSH
connection.
 Playbooks are defined as a set of commands that are capable of performing
multiple tasks, and they are in YAML file format.

Ansible Scenario-Based Interview Questions

19. What does Ansible offer?


Ansible offers:

 Security and Compliance policy integration


 Automated workflow for Continuous Delivery
 Simplified orchestration
 App deployment
 Configuration management
 Streamlined provisioning.

20. Can we manage Windows Nano Server using Ansible?

No, it is not possible to manage Windows Nano Server using Ansible as it doesn't have full access
to the .Net framework, which is primarily used by internal components and modules.

21. Do we have any web interface / REST API, etc. for Ansible?

Yes, Ansible Tower provides a web interface and REST API, and it is easy to
use.

22. What is Ansible Tower?

Ansible Tower is a web-based solution that makes Ansible very easy to use. It
acts as a hub for all of your automation tasks. Tower is free for up to 10
nodes.


23. What are the features of the Ansible Tower?

Features of the Ansible Tower are:

 Ansible Dashboard
 Real-time job status updates
 Multi-playbook workflows
 Who Ran What Job When
 Scale capacity with tower clusters
 Integrated notifications
 Schedule ansible jobs
 Manage and track inventory
 Remote command execution
 REST API & Tower CLI Tool.

24. How do you change the documentation and submit it?

Usually, the documentation is kept in the main project folder in the git
repository. Complete instructions on this are available in the docs.

25. How do you access Shell Environment Variables?

If you are just looking to access the existing variables then you can use the “env” lookup plugin.

For example:

Accessing the value of Home environment variable on the management machine:

local_home: "{{ lookup('env', 'HOME') }}"

26. How can you speed up management inside EC2?

It is not advised to manage a group of EC2 machines from your laptop. The best
way is to connect to a management node inside EC2 first and then execute
Ansible from there.

27. Is it possible to increase the Ansible reboot module to more than 600
seconds?
Yes, it is possible to increase the Ansible reboot module to specific values using the below syntax:

- name: Reboot a Linux system
  reboot:
    reboot_timeout: 1000

28. How can you use docker modules in Ansible?

Docker modules require docker-py installed on the host running Ansible.

$ pip install 'docker-py>=1.7.0'

The docker_service module also requires docker-compose

$ pip install 'docker-compose>=1.7.0'

29. Explain how you will copy files recursively onto a target host?

The copy module in Ansible has a recursive parameter. However, if you have to
copy a large number of files, the synchronize module is the best choice:

- synchronize:
src: /first/absolute/path
dest: /second/absolute/path
delegate_to: "{{ inventory_hostname }}"

30. How can you disable cowsay?

If cowsay is installed, then your playbook output within Ansible is decorated
by it.

If you decide that you want to work in a professional, cow-free environment,
you have two options:

1. Uninstall cowsay
2. Set an environment variable, like below:

export ANSIBLE_NOCOWS=1

31. How can you access a list of Ansible variables?

By default, Ansible gathers facts about the machines under management, and
these facts are accessed in playbooks and in templates. The best way to view a
list of all the facts available on a machine is to run the setup module in
ad-hoc mode:

ansible -m setup hostname

Once this statement is executed, it will print out a dictionary of all the
facts that are available for that particular host. This is the best way to
access the list of Ansible variables.

32. How can you see all the variables specific to a host?

To see all the host-specific variables, which include all facts and other
sources:

ansible -m debug -a "var=hostvars['hostname']" localhost

33. How do you access a variable name programmatically?


By adding strings together, variable names are built programmatically, as in
the format below:

{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}

'inventory_hostname' is a variable that represents the present host you are looping over.

34. How to configure a jump host for accessing servers that have no direct
access?

We should set a ProxyCommand in the ansible_ssh_common_args inventory
variable. Any arguments defined in this variable are added to the scp/ssh/sftp
command line when connecting to the relevant host.

For example,

[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2

With the following contents, create the group_vars/gatewayed.yml

ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q [email protected]"'

When connecting to any hosts in the group gatewayed, Ansible will append these arguments to
the command line.

35. Explain how you can generate encrypted passwords for the user module?

Ansible ad-hoc command is the easiest option:

ansible all -i localhost, -m debug -a "msg={{ 'mypassword' | password_hash('sha512', 'mysecret') }}"

The mkpasswd utility available on Linux systems is another good option:

mkpasswd --method=sha-512

36. Can you keep data secret in the playbook?

Yes. If you want to keep a task secret in the playbook when using -v (verbose)
mode, the following playbook attribute is helpful:

- name: secret task
  shell: /usr/bin/do_something --value={{ secret_value }}
  no_log: True

This hides sensitive information from others, even in verbose output.

37. What is idempotency?

Idempotence is an essential feature of Ansible, which helps you to execute one or more tasks on a
server as many times as needed, but without changing the result beyond the initial application.
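
For instance, the task below (the package name is illustrative) can be run any
number of times; after the first run it reports "ok" instead of "changed"
because the package is already present:

- name: Ensure git is installed
  package:
    name: git
    state: present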

38. Can you create encrypted files with Ansible?

Yes, using the 'ansible-vault create' command, we can create encrypted files

$ ansible-vault create filename.yaml

39. What is the difference between a playbook and a play?


A playbook is a list of plays. A play is a set of tasks and roles that run on
one or more managed hosts, and each play includes one or more tasks, as
sketched below.
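
A minimal sketch of a playbook containing two plays (host groups, packages,
and tasks are illustrative):

- name: Configure web servers        # play 1
  hosts: webservers
  tasks:
    - name: Install nginx
      yum:
        name: nginx
        state: present

- name: Configure database servers   # play 2
  hosts: dbservers
  tasks:
    - name: Install mariadb
      yum:
        name: mariadb-server
        state: present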

Ansible Advanced Interview Questions

40. How will you get access to the Ansible host when you delegate a task?

We can access it through host variables; this even works for all the
overridden variables like ansible_port, ansible_user, etc.

original_host: "{{ hostvars[inventory_hostname]['ansible_host'] }}"

41. Explain the Ansible Tag's usage?

A tag is an attribute that you can set on Ansible structures (plays, tasks,
roles). When there's an extensive playbook, it's often more useful to run just
a part of it as opposed to the entire thing. That's where tags are required.

42. How can you filter out tasks in tags?

 Use the --tags or --skip-tags options on the command line, as sketched below.
 If you are in Ansible configuration settings, use the TAGS_RUN and TAGS_SKIP
options.
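
A minimal sketch of a tagged task and how to select it on the command line
(the tag, file, and package names are illustrative):

- name: Install web packages
  yum:
    name: httpd
    state: present
  tags: packages

# Run only the tagged tasks, or skip them:
# ansible-playbook site.yml --tags packages
# ansible-playbook site.yml --skip-tags packages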

43. What are handlers?

In Ansible, handlers are just like normal tasks in a playbook, but they run
only when a task includes the notify directive and indicates that it changed
something. A handler runs only once, after all the tasks have executed in a
particular play. Handlers are automatically loaded through
roles/<role_name>/handlers/main.yaml, and they are used to trigger a change in
the status of a service, such as restarting or stopping a service, as sketched
below.
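
A minimal sketch of a task notifying a handler (file names and the service are
illustrative):

tasks:
  - name: Update nginx configuration
    template:
      src: nginx.conf.j2
      dest: /etc/nginx/nginx.conf
    notify: Restart nginx

handlers:
  - name: Restart nginx
    service:
      name: nginx
      state: restarted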

44. How will you upgrade Ansible?

Using the command "sudo pip install ansible==<version-number>", we can easily upgrade
Ansible.

45. Ansible vs Chef?

 Setup: Ansible is easier to set up and provides faster performance; compared
to Ansible, Chef is not very easy to set up.
 Language: Ansible uses YAML (Python) for managing configurations; Chef uses a
DSL (Ruby).
 Scalability: Both are highly scalable.
 Pricing: Ansible charges $10,000 annually; Chef Automate charges an annual
fee of $13,700.


46. Why don’t you ship in X format?

There are several reasons for not shipping in X format. In general, it comes
down to maintainability. There are tons of different ways to ship software,
and it is very tedious to support all of them.

47. What can Ansible do?

Ansible can do the following for us:

1. Configuration management
2. Application deployment
3. Task automation
4. IT orchestration.

48. Please define what is Ansible Galaxy.

Ansible Galaxy refers to the Galaxy website, where users share roles, and to a
command-line tool for installing, creating, and managing roles.

49. Can you explain how to handle various machines requiring different user
accounts or ports to log in?

By setting host variables in the inventory file, we can handle various
machines requiring different user accounts or ports to log in.

For example, the following hosts have different ports and usernames:

[webservers]
asdf.example.com ansible_port=5000 ansible_user=alice
jkl.example.com ansible_port=5001 ansible_user=bob

You can specify the connection type to be used by:

[testcluster]
localhost ansible_connection=local
/path/to/chroot1 ansible_connection=chroot
foo.example.com ansible_connection=paramiko

You can also file these settings in a group_vars/<group-name> file.


50. Do you know what language Ansible is written in?

Ansible is written in Python and PowerShell.

51. Please explain what is Red Hat Ansible.

Ansible and Ansible Tower by Red Hat are both end-to-end, complete automation
platforms that are capable of providing the following features or
functionalities:

 Provisioning
 Deploying applications
 Orchestrating workflows
 Managing IT systems
 Configuring IT systems
 Networks
 Applications

All of these activities are handled by Ansible, which can help the business
solve real-time business problems.

52. Is Ansible an open-source tool?

Yes, Ansible is an open-source, powerful automation software tool that anyone
can use.
53. Why should you learn Ansible?

Ansible is primarily a tool for servers, but does it have anything for
networking? If you look closely, there is some support available in the market
for networking devices. Using this tool will give you an overall view of your
environment, and also knowledge of how it works when it comes to network
automation.

It is one of those tools that is considered good to explore.

54. What are Ansible server requirements?

You need to have a virtual machine with Linux installed, which has Python 2.6 version or higher.

55. How to install Ansible on CentOS?

Step 1: Update your Control Node

yum update

Step 2: Install the EPEL Repository

yum install epel-release

Step 3: Install Ansible

yum install ansible

56. How can you connect to other devices within Ansible?

Once Ansible is installed and the basic setup has been completed, an inventory
is created. This is the base, and one can start testing Ansible. To connect to
a different device, use the ping module; this can be used as a simple
connection test:

ansible all -m ping

57. Can you build your own modules with Ansible?

Yes, we can create our own modules within Ansible.

Ansible is an open-source tool that primarily works on Python. If you are good
at programming in Python, you can start creating your own modules from scratch
in a few hours, and you don't need any prior knowledge of the same.

58. How can you find information in Ansible?

After completing the basic setup, make sure to look at the module called
"setup". Using this setup module, you will be able to find out a lot of
information about a system.

59. What does Fact mean in Ansible?

The term “facts” is commonly used in an Ansible environment. Facts are the
known and discovered variables about a system, and they can be displayed in
playbooks. Facts are used to implement conditional execution and also for
getting ad-hoc information about a system.

You can see all the facts via:

$ ansible all -m setup

If you want to extract only a certain part of the information, you can use the
“setup” module, which has an option to filter the output and get hold of just
the fact you need.
60. What is ask_pass in ansible?

The ask_pass setting is controlled in the Ansible playbook.

It controls whether ansible-playbook prompts for a password. The default
behavior is no; to enable the prompt, set:

ask_pass = True

If you are using SSH keys for authentication purposes, then you really don’t
have to change this setting at all.

61. Explain What is ask_sudo_pass?

This control is very similar to ask_pass.

The ask_sudo_pass setting controls whether the Ansible playbook prompts for a
sudo password. The default behavior is no; to enable the prompt, set:

ask_sudo_pass = True

Make sure to change this setting where sudo passwords are enabled most of the
time.

62. Explain what is ask_vault_pass?

This control determines whether the Ansible playbook should prompt for the
vault password. As usual, the default behavior is no; to enable the prompt,
set:

ask_vault_pass = True

63. Explain Callback_plugin in Ansible?

Callbacks are pieces of code in Ansible environments that are called on
specific events to permit notifications.

This is more of a developer-related feature and allows low-level extensions
around Ansible to be loaded from different locations without any problem.

64. Explain Module utilities in Ansible?

Ansible provides a wide variety of module utilities that help developers while
developing their own modules. For example, basic.py is a module utility that
provides the main entry point for accessing the Ansible library; using those
basics, one can start working.

65. Where are the unit tests available in Ansible?

Unit tests for all the modules are available in test/units/modules. First, you
have to set up your testing environment.

66. Explain in detail ad-hoc commands?

Ad-hoc commands are commands used to do something quickly, more or less for
one-time use. A playbook, by contrast, is used for repeated actions, which is
something very useful in the Ansible environment. But there might be scenarios
where we want to use an ad-hoc command to simply perform a required,
non-repetitive activity.

KUBERNETES

Q1. What is Kubernetes?


Kubernetes is an open-source container orchestration system for deploying,
scaling, and managing containerized applications. It offers an excellent
community and works with all cloud providers. Hence, it is a multi-container
management solution.


Q2. What is a container?

Containers are a technology for packaging an application's compiled code
together with the dependencies it requires at run-time. Each container allows
you to run the application with repeatable, standard dependencies and the same
behavior whenever the container runs. It separates the application from the
underlying host infrastructure to make deployment much easier in cloud or OS
platforms.

Q3. What are the nodes that run inside Kubernetes?

A node is a worker machine or VM depending on the cluster. Each node contains services to run
the pods and the pods are managed by the master components.

Q4. What are the services that a node gives and its responsibilities?

The services included in a node are as follows:

 Container run-time

 Kubelet

 Kube-proxy

The container run-time is responsible for starting and managing the
containers. The kubelet is responsible for maintaining the state of each node;
it receives commands from the master to work on, and it is also responsible
for the metric collection of pods. The kube-proxy is a component that manages
the subnets and makes services available to all other components.

Q5. What is a master node in Kubernetes?

A master node is a node that controls and manages the set of worker nodes,
together forming a cluster in Kubernetes.


Q6. What are the main components of the master node?

The main components of the master node that help to manage worker nodes are as
follows:

 Kube-apiserver: It acts as the front end of the cluster; all communication
with the cluster goes through the API server.
 Kube-controller-manager: It implements governance across the cluster and runs
the set of controllers for the running cluster.
 Kube-scheduler: It schedules the activities of the nodes and holds the node
resource information to determine the proper action for triggering events.

Q7. What is a pod and what does it do?


A pod is a group of containers that are deployed together on the same host. It
is the basic execution unit of a Kubernetes application and the smallest unit
that Kubernetes creates or deploys in its object model.

Kubernetes pods can be used in two ways. They are as follows:

1. Pods that run a single container

2. Pods that run multiple containers, when those containers need to work
together
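
A minimal single-container Pod manifest sketch (the names and image are
illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80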

Q8. What are the different types of multiple-container pods?

There are three different types of multi-container pods. They are as follows:

 Sidecar: The sidecar pattern is a single-node pattern made of two containers.
The first container contains the core logic of the application, while the
sidecar (for example, a log-shipping container) sends the application's log
files to a bucket.
 Adapter: It is used to standardize and normalize output or monitoring data
for aggregation. It performs restructuring and reformatting, and can write the
correctly formatted output for the application.
 Ambassador: It is a proxy pattern that allows other containers to connect to
a port on localhost.

Q9. What is the Namespace? How many namespaces are there in Kubernetes?

A namespace is used to work with multiple teams or projects spread across. It is used to divide
the cluster resources for multiple users.

Q10. Mention different kinds of Namespaces in Kubernetes.

The namespaces are of three kinds. They are:

1. default: The namespace the cluster comes with out of the box, when no other
namespaces exist.

2. kube-system: The namespace for objects created by the Kubernetes system.

3. kube-public: The namespace that is created automatically and is visible and
readable publicly throughout the whole cluster. The public aspect of this
namespace is only a convention, and it is reserved for cluster usage.

Q11. How are Kubernetes related to docker?

Docker provides the lifecycle management of containers, and a Docker image
builds the run-time of a container. The containers run on multiple hosts
through a link and are orchestrated using Kubernetes. Docker builds these
containers and helps them communicate across multiple hosts through
Kubernetes.

Q12. Mention the difference between Kubernetes and a docker?

 Installation and cluster configuration: In Kubernetes, the installation
process is very complicated, but once it has been done the cluster is robust;
in Docker, the installation is very simple, but it does not have a robust
cluster.
 Auto-scaling: Kubernetes can do auto-scaling; Docker cannot.
 Data volumes: Kubernetes can share storage volumes only with other containers
in the same pod; Docker can share storage volumes with any other container.
 Logging and monitoring: Kubernetes uses third-party tools such as the ELK
stack for logging and monitoring; Docker has built-in tools for logging and
monitoring.

Q13. Why do we need Container orchestration in Kubernetes?

Container orchestration is used to communicate with several micro-services that are placed inside
a single container of an application to perform various tasks.

The use of container orchestration is as follows:

 It controls and automates various tasks such as deployment, scaling, etc.,

 Reduces the complexity of running time

 Scaling becomes easy

 It is used to deploy and manage complex containerized applications

 Reduces manual setting up services

Q14. What are the tools of container orchestration?

There are many Container orchestration tools that provide a framework for managing
microservices and containers at scale. The popular tools for container orchestration are as
follows:

 Kubernetes

 Docker swarm

 Apache Mesos

Q15. What are the major operations of Kubelet as a node service component in
Kubernetes?

The major operations that the kubelet performs are as follows:

 The kubelet is the node agent that communicates with the master components on
behalf of all parts of the Kubernetes cluster.
 It merges the available CPU, memory, and disk of a node into the larger
Kubernetes cluster.
 It provides access for the controller to check and report the status of the
cluster.
 It is responsible for the collection of pod metrics.


Kubernetes Interview Questions For Experienced

Q16. Mention the list of objects of Kubernetes.

The following is the list of objects used to define the workloads.

 Pods

 Replication sets and controllers

 Deployments

 Distinctive identities

 Stateful sets

 Daemon sets

 Jobs and cron jobs

Q17. What is the difference between the pod and the container?

Pods are the collection of containers used as the unit of replication in Kubernetes. Containers are
the set of codes to compile in a pod of the application. Containers can communicate with other
containers in the same pod.

Q18. Explain Stateful sets in Kubernetes.

A stateful set is a workload API object used to manage stateful applications.
It is used to manage deployments and scale sets of pods. The state information
and other resilient data of stateful pods are stored and maintained in the
disk storage that connects with the stateful set.

Q19. How to determine the status of deployment?

To determine the status of a deployment, use the command below:

kubectl rollout status deployment/<deployment_name>

If the output reports a successful rollout, then the deployment is complete.

Q20. Explain Replication controllers.

Replication controllers act as supervisors for all long-running pods. A
replication controller ensures that the specified number of pods are running
at run-time and that a pod or a set of pods is homogeneous in nature. It
maintains the desired number of pods: if there are extra pods, it will
terminate them, and if a pod fails, the controller will automatically replace
it.


Q21. What are the features of Kubernetes?

The features of Kubernetes are as follows:

 It provides an automated and advanced scheduler to launch the containers on the cluster

 Replacing, rescheduling, and restarting containers that have failed

 It supports rollouts and rollback for the desired state of the containerized application

 It can scale up and scale down as per the requirements.


Q22. What is kubectl?

Kubectl is the command-line tool used to control the Kubernetes clusters. It provides the CLI to
run the command against clusters to create and manage the Kubernetes components.

Q23. What is the Google container engine?

Google Kubernetes Engine (GKE) is a managed environment for running Docker
containers and clusters. This Kubernetes-based container engine supports only
clusters that run within Google's public cloud services.

Q24. What are the different types of services in Kubernetes?

The different types of services that Kubernetes supports are as follows:

 Cluster IP: It exposes the services on the cluster's internal IP and makes the services

reachable within the cluster only.

 Node port: It exposes the services on each node’s IP at the static port.

 Load balancer: It provides services externally using a cloud provider’s load balancer. It

creates the service to route the external load balancer automatically.

 External name: It navigates the service to the contents of the external field by returning

the CNAME record by its value.
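
A minimal Service manifest sketch (the names, ports, and NodePort type are
illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080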

Q25. Mention the various container resource monitoring tools.

The various container monitoring tools are as follows:

 Grafana

 Heapster

 CAdvisor

 InfluxDB

 Prometheus

Q26. What is Heapster?

Heapster is a performance monitoring and metric collection system. It provides cluster-wide data
aggregation by running with a kubelet on each node. It allows for the collection of metrics, pods,
workloads, containers, and other signals that are generated by the clusters.

Q27. Explain Daemon sets.

A daemon set ensures that all eligible nodes run a copy of a pod, with the pod
running only once on each host. It is created and scheduled by the daemon set
controller. It is a process that runs in the background and does not produce
any visible output.

Q28. What are the uses of Daemon sets?

The uses of Daemon sets are as follows:

 It runs cluster storage daemons, such as ceph and glusterd, on each node.
 It runs log-collection daemons, such as fluentd or filebeat, on every node.
 It runs node-monitoring daemons on every node.

Q29. Explain the Replica set.

A replica set is used to maintain a stable set of replica pods. It is used to
specify the number of identical pods that should be available. It is sometimes
considered a replacement for the replication controller, as sketched below.
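
A minimal ReplicaSet manifest sketch (the labels and image are illustrative):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25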

Q30. What is ETCD in Kubernetes?

ETCD is a distributed key-value store. It stores and replicates the
configuration data of the Kubernetes cluster.

Kubernetes FAQs

Q31. Explain the Ingress controller.

An ingress controller is a pod that acts as an inbound traffic handler. It is responsible for reading
the ingress resource information and processing the data accordingly.

Q32. Which selector is used in the replication controller?

The replication controller uses the equality-based selector, which allows
filtering by label keys and values. It only looks for pods which have the same
values as those of the label.

Q33. Explain the Load balancer in Kubernetes.

The load balancer is a way of distributing the loads, which is easy to implement at the dispatch
level. Each load balancer sits between the client devices and the backend servers. It receives and
distributes the incoming requests to all available servers.

Q34. Explain the two different types of load balancers.

The two different load balancers are: the internal load balancer, which
balances the load and allocates the pods automatically with the required
configuration; and the external load balancer, which directs the traffic from
external requests to the backend pods.

Q35. What is Minikube?

Minikube is a tool that helps to run Kubernetes locally. It runs a single-node
Kubernetes cluster inside a virtual machine (VM).

Q36. What are the uses of the Google Kubernetes Engine?

The uses of Google Kubernetes Engine are as follows:

 It creates the Docker container cluster

 It resizes the application controllers

 It creates the containers pods, load balancer, services, replication controller

 It updates and upgrades the container cluster

 It helps to debug the container cluster

Q37. Explain Prometheus in Kubernetes.

Prometheus is an open-source toolkit used for metric-based monitoring and
alerting for applications. It provides a data model and a query language and
can provide details and actions on metrics. It supports instrumenting
applications in many languages. The Prometheus Operator provides easy
monitoring for deployments and Kubernetes services, alongside Alertmanager and
Grafana.

Q38. What is the role of clusters in Kubernetes?

Kubernetes provides desired-state management through cluster services and a
specified configuration. These cluster services run the configurations in the
infrastructure. The following steps are involved in this process:

 The deployment file contains all the configuration that is fed into the
cluster
 These deployments are fed to the API server
 The cluster services schedule the pods in the environment
 The cluster services also ensure that the right number of pods are running

Q39. What is the Cluster IP?

ClusterIP is the default Kubernetes service type. It provides a link between
the pods, or maps container ports to host ports. It provides services within
the cluster and gives access to other apps inside the same cluster.

Q40. What are the types of controller managers?

The Different types of controller managers that can run on the master node are as follows:

 Endpoints controller

 Namespace controller

 Service account controller

 Replication controller

 Node controller

 Token controller

Q41. What is Kubernetes architecture?

The Kubernetes architecture provides a flexible, loosely coupled mechanism for
the service. It consists of one master node and multiple worker nodes running
containers. The master node is responsible for managing the cluster, the API,
and scheduling the pods. Each node runs a container runtime, such as Docker or
rkt, along with a node agent that communicates with the master.

Q42. What are the main components of Kubernetes architecture?

The two main components of the Kubernetes architecture are as follows:

 Master node

 Worker node

Each node contains the individual components in it

Q43. Define Kube-api server?

The Kube-apiserver is the front end of the master node that exposes all the
components of the API server. It provides communication between the Kubernetes
nodes and the master components.
Q44. What are the advantages of Kubernetes?

The advantages of Kubernetes are as follows:

 Kubernetes is open-source and free

 It is highly scalable and runs on any operating system

 It provides more concepts and is more powerful than Docker swarm

 It provides a scheduler, auto-scaling, rolling upgrades, and health checks

 It has a flat network space and customized functionalities

 It is easy to make effective CI/CD pipelines

 It can improve productivity

Q45. What are the disadvantages of Kubernetes?

The disadvantages of Kubernetes are as follows:

 The installation process and configuration is highly difficult

 It is not easy to manage the services

 It takes a lot of time to run and compile

 It is more expensive than the other alternatives

 It can be overkill for simple applications

1. Kubernetes Terminology

Terms that you should be familiar with before starting off with Kubernetes are
listed below:

 Cluster: It can be thought of as a group of physical or virtual servers where
Kubernetes is installed.
 Nodes: There are two types of nodes: the master node, a physical or virtual
server used to control the Kubernetes cluster; and the worker node, the
physical or virtual server where the workload runs in the given container
technology.
 Pods: A group of containers that share the same network namespace.
 Labels: Key-value pairs defined by the user and associated with pods.
 Master: The control plane components that provide access points for admins to
manage the cluster workloads.
 Service: It can be viewed as an abstraction that serves as a proxy for a
group of pods performing a "service".

Since now we have a fair understanding of what Kubernetes is, let's jump to
the cheat sheet.

2. Kubernetes Commands

Viewing Resource Information:

1. Nodes:
Shortcode = no

A node is a worker machine in Kubernetes and may be either a virtual or a
physical machine, depending on the cluster. Each node is managed by the
control plane. A node can have multiple pods, and the Kubernetes control plane
automatically handles scheduling the pods across the nodes in the cluster.

 kubectl get nodes – list all worker nodes.
 kubectl delete node <node_name> – delete the given node from the cluster.
 kubectl top node – show metrics for a given node.
 kubectl describe nodes | grep ALLOCATED -A 5 – describe all the nodes in
verbose form.
 kubectl get pods -o wide | grep <node_name> – list all pods in the current
namespace, with more details.
 kubectl get no -o wide – list all the nodes with more details.
 kubectl describe node <node_name> – describe the given node in verbose form.
 kubectl annotate node <node_name> <key>=<value> – add an annotation to the
given node.
 kubectl uncordon <node_name> – mark the node as schedulable.
 kubectl label node <node_name> <key>=<value> – add a label to the given node.

2. Pods
Shortcode = po

Pods are the smallest deployable units of computing that you can create and
manage in Kubernetes.

 kubectl get po – list the available pods in the default namespace.
 kubectl describe pod <pod_name> – show a detailed description of a pod.
 kubectl delete pod <pod_name> – delete a pod by name.
 kubectl run <pod_name> --image=<image> – create and run a pod with the given
name (pods are created with kubectl run or from a manifest with kubectl create
-f, as there is no "kubectl create pod" subcommand).
 kubectl get pod -n <namespace> – list all the pods in a namespace.
 kubectl run <pod_name> --image=<image> -n <namespace> – create and run a pod
with the given name in a namespace.

3. Namespaces
Shortcode = ns

In Kubernetes, namespaces provide a mechanism for isolating groups of
resources within a single cluster. Names of resources need to be unique within
a namespace, but not across namespaces.

 kubectl create namespace <namespace_name> – create a namespace with the given
name.
 kubectl get namespace – list the current namespaces in a cluster.
 kubectl describe namespace <namespace_name> – display the detailed state of
one or more namespaces.
 kubectl delete namespace <namespace_name> – delete a namespace.
 kubectl edit namespace <namespace_name> – edit and update the definition of a
namespace.

4. Services
Shortcode = svc

In Kubernetes, a Service is an abstraction which defines a logical set of pods
and a policy by which to access them (sometimes this pattern is called a
micro-service).

 kubectl get services – list one or more services.
 kubectl describe services <service_name> – show a detailed description of a
service.
 kubectl delete services --all – delete all the services.
 kubectl delete service <service_name> – delete a particular service.

5. Deployments

A Deployment provides declarative updates for pods and ReplicaSets. The
typical use cases of deployments are to create a deployment to roll out a
ReplicaSet, to declare the new state of the pods, and to roll back to an
earlier deployment revision.

 kubectl create deployment <deployment_name> --image=<image> – create a new
deployment.
 kubectl get deployment – list one or more deployments.
 kubectl describe deployment <deployment_name> – show the detailed state of
one or more deployments.
 kubectl delete deployment <deployment_name> – delete a deployment.

6. DaemonSets

A DaemonSet ensures that all (or some) nodes run a copy of a pod. As nodes are
added to the cluster, pods are added to them. As nodes are removed from the
cluster, those pods are garbage collected. Deleting a DaemonSet will clean up
the pods it created.

 kubectl get ds – list all the daemon sets.
 kubectl get ds --all-namespaces – list the daemon sets across all namespaces.
 kubectl describe ds <daemonset_name> -n <namespace_name> – show detailed
information for a daemon set inside a namespace.

7. Events

Kubernetes events allow us to paint a performative picture of the clusters.

 kubectl get events – list the recent events for all the resources in the
system.
 kubectl get events --field-selector involvedObject.kind!=Pod – list all the
events except the pod events.
 kubectl get events --field-selector type!=Normal – filter out normal events
from a list of events.

8. Logs

Logs are useful when debugging problems and monitoring cluster activity. They
help you understand what is happening inside the application.

 kubectl logs <pod_name> – display the logs for a pod with the given name.
 kubectl logs --since=1h <pod_name> – display the last hour of logs for the
pod with the given name.
 kubectl logs --tail=20 <pod_name> – display the most recent 20 lines of logs.
 kubectl logs -c <container_name> <pod_name> – display the logs for a
container in a pod with the given names.
 kubectl logs <pod_name> > pod.log – save the logs into a file named pod.log.

9. ReplicaSets

A ReplicaSet's purpose is to maintain a stable set of replica pods running at
any given time. As such, it is often used to guarantee the availability of a
specified number of identical pods.

 kubectl get replicasets – list the ReplicaSets.
 kubectl describe replicasets <replicaset_name> – show the detailed state of
one or more ReplicaSets.
 kubectl scale --replicas=<x> replicaset <replicaset_name> – scale a replica
set.

10. Service Accounts

A service account provides an identity for processes that run in a pod.

 kubectl get serviceaccounts – list service accounts.
 kubectl describe serviceaccounts – show the detailed state of one or more
service accounts.
 kubectl replace -f <file> – replace a service account defined in a manifest
file.
 kubectl delete serviceaccount <name> – delete a service account.

3. Changing Resource Attributes

Taints: They ensure that pods are not scheduled onto inappropriate nodes.

 kubectl taint nodes <node_name> <key>=<value>:<effect> – update the taints on
one or more nodes.

Labels: They are used to identify pods.

 kubectl label pod <pod_name> <key>=<value> – add or update a label on a pod.

4. For Cluster Introspection

 kubectl version – get version information.
 kubectl cluster-info – get information about the cluster.
 kubectl config view – get the configuration details.
 kubectl describe node <node_name> – get information about a node.

5. Interacting with Deployments and Services

 kubectl logs deploy/my-deployment – dump pod logs for a deployment
(single-container case).
 kubectl logs deploy/my-deployment -c my-container – dump pod logs for a
deployment (multi-container case).
 kubectl port-forward svc/my-service 5000 – listen on local port 5000 and
forward to port 5000 on the Service backend.
 kubectl port-forward deploy/my-deployment 5000:6000 – listen on local port
5000 and forward to port 6000 on a pod created by <my-deployment>.
 kubectl exec deploy/my-deployment -- ls – run a command in the first pod and
first container of the deployment (single- or multi-container cases).

6. Copy files and directories to and from containers

 kubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir – copy the local directory
/tmp/foo_dir to /tmp/bar_dir in a remote pod in the current namespace.
 kubectl cp /tmp/foo my-pod:/tmp/bar -c my-container – copy the local file
/tmp/foo to /tmp/bar in a specific container of a remote pod.
 kubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar – copy the local file
/tmp/foo to /tmp/bar in a remote pod in a specific namespace.
 kubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar – copy /tmp/foo from a
remote pod to /tmp/bar locally.

DOCKER

1. Docker vs VM (Virtual Machine)

 Resources: Virtual machines need more resources; Docker containers use fewer
resources.
 Process isolation: VMs isolate processes at the hardware level; Docker
isolates processes at the operating-system level.
 Operating system: Each VM has a separate operating system; operating system
resources can be shared within Docker.
 Customization: VMs can be customized; custom container setup is easy.
 Creation time: It takes time to create a virtual machine; the creation of a
Docker container is very quick.
 Boot time: VMs take minutes to boot; Docker containers boot within seconds.

2. What is Docker?

Docker can be defined as a containerization platform that packs your application and all of its necessary dependencies together into containers. This not only ensures that the application works seamlessly in any environment but also provides better efficiency for your production-ready applications. Docker wraps up the software with everything needed to run the code: the filesystem, the runtime, and system tools/libraries. This ensures that the software always runs and executes the same way, regardless of the environment.

Containers run on the same machine and share the same operating-system kernel, which makes them fast: starting the application is the only time needed to start a Docker container (the OS kernel is already up and running, and uses as little RAM as possible).

3. What is the advantage of Docker over hypervisors?

Docker is lightweight and more efficient in terms of resource usage because it uses the host's underlying kernel rather than running a full guest operating system on its own hypervisor.

4. How is Docker different from other container technologies?

To start with, Docker is a comparatively fresh project. Since it was conceived in the cloud era, it improves on many of the competing container technologies that preceded it. There is an active community driving Docker's development, and it has also extended its support to Windows and Mac OS X environments in recent years. Beyond that, the following are the strongest reasons to choose Docker over existing container technologies.

 There is no limitation on where you run Docker: the underlying infrastructure can be your laptop or your organization's public/private cloud space.
 Docker, with its Docker Hub, forms the repository of all the containers that you will ever work with, use, and download. Sharing the containers you create is possible as well.
 Docker is one of the best-documented technologies available in the Containerization space.

5. What is a Docker image?

A Docker image can be understood as a template from which as many Docker containers as we want can be created. To put it in layman's terms, Docker containers are created out of Docker images. Docker images are created with the build command, and a container starts when the image is run. Docker images are stored in a Docker registry, such as the public registry (registry.hub.docker.com), and are designed to be composed of layers of other images, so that only a minimal amount of data needs to be sent over the network.

6. What is a Docker container?

This is a very important question, so make sure you don't deviate from the topic; I would advise you to follow the format below:

 Docker containers include the application and all of its dependencies, but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud.

 Now explain how to create a Docker container: containers can be created either by building a Docker image and then running it, or by using Docker images that are already present on Docker Hub.

 Docker containers are basically runtime instances of Docker images.

7. What is Docker Hub?

Docker Hub is a cloud-based registry service that allows you to link to code repositories, build
your images and test them, store manually pushed images, and link to the Docker cloud so you
can deploy images to your hosts. It provides a centralized resource for container image discovery,
distribution and change management, user and team collaboration, and workflow automation
throughout the development pipeline.

8. What is Docker Swarm?

Docker Swarm can be best understood as Docker's native clustering implementation. Docker Swarm turns a pool of Docker hosts into a single, virtual Docker host. It serves the standard Docker API, so any tool that already communicates with a Docker daemon can use Docker Swarm to scale transparently to multiple hosts. The following are some of the supported tools that help achieve what we have just discussed.

 Dokku
 Docker Compose
 Docker Machine
 Jenkins.


9. What is Dockerfile used for?

A Dockerfile is a set of instructions that is passed to Docker so that it can build images automatically by reading those instructions. In other words, a Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession.
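As a minimal sketch (the base image, file names, and port are illustrative assumptions, not prescribed by Docker), a Dockerfile for a small Python web application might look like this:

# Start from an assumed Python base image
FROM python:3.11-slim
WORKDIR /app
# Copy and install dependencies first so this layer can be cached
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the rest of the application code
COPY . .
EXPOSE 8000
# Exec (JSON) form so the process receives signals directly
CMD ["python", "app.py"]

Building it with docker build -t my-app . produces an image that can then be started with docker run my-app.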

10. Can I use JSON instead of YAML for my compose file in Docker?

Yes, you can very comfortably use JSON instead of the default YAML for your Docker Compose file. In order to use a JSON file with Compose, you need to specify the filename, as follows:
docker-compose -f docker-compose.json up
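For illustration, a minimal docker-compose.json might look like the following (the service and image names are hypothetical):

{
  "version": "3",
  "services": {
    "web": {
      "image": "nginx",
      "ports": ["8080:80"]
    }
  }
}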

11. Tell us how you have used Docker in your past position?
This is a question where you can bring up your whole experience with Docker, and with any other container technologies you used before Docker. You could also explain the ease this technology has brought to automating development-to-production lifecycle management. You can also discuss any other integrations that you might have worked on alongside Docker, such as Puppet, Chef, or the most popular of all, Jenkins. If you do not have any experience with Docker itself but have used similar tools from this space, you could convey that and also show your interest in learning this leading containerization technology.

Docker Advanced Interview Questions


12. How to create a Docker container?

You can create a Docker container from any specific Docker image of your choice, which can be achieved using the command given below:

docker run -t -i <image_name> <command>

The command above creates the container and also starts it for you. To check whether the Docker container was created and whether it is running, you can use the following command, which lists all the Docker containers along with their statuses on the host:
docker ps -a
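For example, assuming the public ubuntu image (the image name is illustrative), the following creates and starts a container with an interactive shell:

docker run -t -i ubuntu /bin/bash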

13. How to stop and restart the Docker container?

The following command can be used to stop a certain Docker container with the container ID CONTAINER_ID:
docker stop CONTAINER_ID

The following command can be used to restart a certain Docker container with the container ID CONTAINER_ID:
docker restart CONTAINER_ID

14. How far do Docker containers scale?

Large web deployments like Google and Twitter, and platform providers like Heroku and dotCloud, run on container technology that scales from hundreds of thousands to millions of containers running in parallel, provided that the OS and memory of the hosts running all these containers do not run out.

15. What platforms does Docker run on?

Docker is currently available on the following Linux distributions:

 Ubuntu 12.04, 13.04


 Fedora 19/20+
 RHEL 6.5+
 CentOS 6+
 Gentoo
 ArchLinux
 openSUSE 12.3+
 CRUX 3.0+

Docker is also able to run on the following cloud environments:

 Amazon EC2
 Google Compute Engine
 Microsoft Azure
 Rackspace

Docker is extending its support to Windows and Mac OS X environments, and support on Windows has been growing rapidly.

16. Do I lose my data when the Docker container exits?

There is no loss of data when any of your Docker containers exits: any data that your application writes to disk is preserved until the container is explicitly deleted. The file system for the Docker container persists even after the Docker container is halted.

17. What, in your opinion, is the most exciting potential use for Docker?

The most exciting potential use of Docker that I can think of is its build pipeline. Most Docker professionals use hyper-scaling with containers and run a great number of containers on a single host, and these are blazingly fast. Most of the development-test build pipeline can be completely automated using the Docker framework.

18. Why is Docker the new craze in virtualization and cloud computing?

Docker is the latest craze in the world of virtualization and cloud computing because it is an ultra-lightweight containerization platform that is brimming with potential to prove its mettle.

19. Why do my services take 10 seconds to recreate or stop?

A docker-compose stop attempts to stop a specific Docker container by sending a SIGTERM signal. Once this signal is delivered, Compose waits for a default timeout period of 10 seconds, and once the timeout is crossed, it sends a SIGKILL signal to the container in order to kill it forcefully. If you are actually hitting the timeout period, it means that the containers are not shutting down on receiving the SIGTERM signal.

In an attempt to solve this issue, the following is what you can do:

 Ensure that you are using the JSON (exec) form of CMD and ENTRYPOINT in your Dockerfile.

 Use ["program", "argument1", "argument2"] instead of sending it as a plain string like "program argument1 argument2".

 The string form makes Docker run the process via a shell, which cannot pass signals through properly. Compose always uses the JSON form.

 If possible, modify the application you intend to run by adding an explicit signal handler for the SIGTERM signal.

 Also, set stop_signal to a signal that the application understands and knows how to handle, as shown in the sketch after this list.
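As a minimal sketch (the program name, service name, and signal are illustrative assumptions, not from the original answer), the relevant lines would look like this:

# Dockerfile: exec (JSON) form, so the process runs as PID 1 and receives signals directly
ENTRYPOINT ["python", "server.py"]

# docker-compose.yml: override the stop signal if the application handles a different one
services:
  web:
    build: .
    stop_signal: SIGINT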

20. How do I run multiple copies of a Compose file on the same host?

Docker Compose uses the project name to create unique identifiers for all of a project's containers and resources. In order to run multiple copies of the same project, you need to set a custom project name using the -p command-line option, or you can use the COMPOSE_PROJECT_NAME environment variable for this purpose.
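For example (the project names here are assumptions), the following run two independent copies of the same Compose file on one host:

docker-compose -p copy_one up -d
COMPOSE_PROJECT_NAME=copy_two docker-compose up -d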

21. What’s the difference between up, run, and start?


In most scenarios, you would want docker-compose up. Using up, you can start or restart all the services defined in a docker-compose.yml file. In the "attached" mode, which is the default, you see the log output from all the containers. In the "detached" mode, Compose exits after starting the containers, which continue to run in the background, showing nothing in the foreground.

Using the docker-compose run command, you can run the one-off or ad-hoc tasks that are required by business needs. This requires the name of the service you want to run, and it only starts containers for the services that the running service depends on. Using run, you can run your tests or perform administrative tasks like removing or adding data to a data volume container. It is very similar to docker run -ti, in that it opens an interactive terminal to the container and returns an exit status that matches the exit status of the process in the container.

Using the docker-compose start command, you can only restart containers that were previously created and then stopped. This command never creates new Docker containers on its own.
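For illustration (the service name web is an assumption), the three commands differ as follows:

docker-compose up -d        # create and start all services in the background
docker-compose run web sh   # one-off task: start web's dependencies, then open a shell
docker-compose start        # restart previously created, stopped containers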

22. What’s the benefit of “Dockerizing?”

Dockerizing enterprise environments helps teams leverage Docker containers to form a service platform such as CaaS (Containers as a Service). It gives teams the necessary agility and portability, and also lets them stay in control within their own network/environment.

Most developers opt to use Docker because of the flexibility and speed it provides for building and shipping applications. Docker containers are portable and can run in any environment without additional changes when application developers move between development, staging, and production environments. The whole process is implemented seamlessly, without any recoding for any of the environments. This not only reduces the time spent between these lifecycle states but also ensures that the whole process is performed with utmost efficiency. Developers can debug an issue, fix it, update the application, and propagate the fix to the higher environments with ease.

The operations teams can handle the security of the environments while also allowing developers to build and ship applications independently. The CaaS platform provided by the Docker framework can be deployed on-premises and is loaded with enterprise-level security features such as role-based access control, integration with LDAP or Active Directory, image signing, and so on. Operations teams can rely on the scalability provided by Docker and leverage Dockerized applications across any environment.

Docker containers are so portable that teams can migrate workloads running in an Amazon AWS environment to Microsoft Azure without having to change code, and with no downtime at all. Docker also allows teams to migrate these workloads from their cloud environments to their physical datacenters and vice versa. This lets organizations spend less effort on infrastructure, gaining both cost savings and self-reliance through Docker. The lightweight nature of Docker containers compared to traditional virtualization, combined with the ability of Docker containers to run within VMs, allows teams to optimize their infrastructure by 20X and save money in the process.

Docker Interview Questions For Experienced


23. How many containers can run per host?

Depending on the environment where Docker will host the containers, there can be as many containers as the environment supports. The application size and the available resources (such as CPU and memory) decide the number of containers that can run in an environment. Containers do not create new CPUs on their own, but they do provide efficient ways of utilizing the available resources. The containers themselves are super lightweight and last only as long as the process they are running.

24. Is it possible to include specific code with COPY/ADD or a volume?


This can easily be achieved by adding either the COPY or the ADD directive to your Dockerfile. This is useful if you want to move your code along with your Docker images, for example, when promoting your code one environment up the ladder: from development to staging, or from staging to production.

Having said that, you might come across situations where you need both approaches. You can have the image include the code using COPY, and use a volume in your Compose file to include the code from the host during development. The volume overrides the directory contents of the image.
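As a minimal sketch (the paths and service name are illustrative assumptions), the two approaches combined would look like this:

# Dockerfile: bake the code into the image
COPY . /app

# docker-compose.yml: overlay the host's code on top during development
services:
  web:
    build: .
    volumes:
      - .:/app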

25. Will cloud automation overtake containerization anytime soon?

Docker containers are gaining popularity with each passing day and will definitely be a quintessential part of professional Continuous Integration / Continuous Delivery pipelines. Having said that, there is equal responsibility on the key stakeholders at each organization to take up the challenge of weighing the risks and gains of adopting technologies that emerge daily. In my humble opinion, Docker will be extremely effective in organizations that appreciate the consequences of containerization.

26. Is there a way to identify the status of a Docker container?

We can identify the status of a Docker container by running the command 'docker ps -a', which lists all available Docker containers with their corresponding statuses on the host. From there, we can easily identify the container of interest and check its status.

27. What are the differences between 'docker run' and 'docker create'?

The most important difference is that, using the 'docker create' command, we can create a Docker container in the Stopped state. We can also have its container ID stored in a file for later use, which is achieved with the --cidfile option, like this:
'docker create --cidfile FILE_NAME <image>'

28. What are the various states that a Docker container can be in at any given point in
time?

There are four states that a Docker container can be in at any given point in time. These states are as follows:

• Running
• Paused
• Restarting
• Exited

29. Can you remove a paused container from Docker?

To answer this question bluntly: no, it is not possible to remove a paused container from Docker. A container must be in the stopped state before it can be removed from the Docker host.

30. Is there a possibility that a container can restart all by itself in Docker?

To answer this question bluntly: no, not by default. The default --restart policy is 'no', meaning containers never restart on their own. If you want this behavior, you can set a restart policy such as on-failure, unless-stopped, or always.
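For example (the nginx image is illustrative), the following starts a container that the Docker daemon restarts automatically unless it is explicitly stopped:

docker run -d --restart unless-stopped nginx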

31. What is the preferred way of removing containers - ‘docker rm -f’ or ‘docker stop’
then followed by a ‘docker rm’?

The best and preferred way of removing containers from Docker is to use 'docker stop' first, as it sends a SIGTERM signal to the container's process, giving it the time required to perform all its finalization and cleanup tasks. Once this activity is complete, we can then comfortably remove the container using the 'docker rm' command.
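For example, for a container named my_container (an assumed name):

docker stop my_container
docker rm my_container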

32. Difference between Docker Image and container?

A Docker container is the runtime instance of a Docker image.

A Docker image doesn't have a state, and its state never changes, as it is just a set of files, whereas a Docker container has its own execution state.

 docker version – Echoes Client’s and Server’s Version of Docker


 docker images – List all Docker images
 docker build <image> – Builds an image from a Dockerfile
 docker save <path> <image> – Saves Docker image to .tar file specified by
path
 docker run – Runs a command in a new container.
 docker start – Starts one or more stopped containers
 docker stop <container_id> – Stops container
 docker rmi <image> – Removes Docker image
 docker rm <container_id> – Removes Container
 docker pull – Pulls an image or a repository from a registry
 docker push – Pushes an image or a repository to a registry
 docker export – Exports a container’s filesystem as a tar archive
 docker exec – Runs a command in a running container
 docker ps – Show running containers
 docker ps -a – Show all containers
 docker ps -l – Show latest created container
 docker search – Searches the Docker Hub for images
 docker attach – Attaches to a running container
 docker commit – Creates a new image from a container’s changes
