Tech Interview Playbook: Terraform, DevOps, K8s, Azure
Terraform
1. State Management in CI/CD
Terraform uses a state file to track real-world resources and map them to your configuration. In production environments, it is best practice to store the state file remotely with locking enabled. For example, you might configure a remote backend such as an Azure Storage Account, AWS S3 (often with DynamoDB for locking), or HashiCorp's Terraform Cloud. In a CI/CD pipeline, you would:
• Configure the backend in your Terraform configuration so that state is stored securely off the build agent.
• Ensure proper access controls are set up so that the pipeline account has the necessary credentials to read/write the state.
• Integrate state management tasks (like terraform init, plan, and apply) into your pipeline YAML definitions, ensuring that error handling and rollbacks are configured appropriately.
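A minimal sketch of an azurerm backend block, assuming hypothetical resource names (Azure Blob Storage handles state locking automatically via blob leases):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"  # hypothetical names throughout
    storage_account_name = "sttfstateexample"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"  # state blob per environment
  }
}
```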
2. Importing Existing Resources
When you have resources that were created manually (e.g., via a cloud provider's portal) but you want to manage them with Terraform, you use the terraform import command. This command brings the existing resource into your Terraform state file. The steps are:
• Write the resource configuration in your Terraform files as if you were creating it.
• Run terraform import with the resource address and the real resource's ID to map it into state.
• After import, run terraform plan to verify that no further changes are pending, ensuring that Terraform now acknowledges and manages the external resource.
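For illustration, a hypothetical import of a manually created Azure resource group (names and subscription ID are placeholders):

```hcl
# Resource block written to match the existing, manually created resource
resource "azurerm_resource_group" "legacy" {
  name     = "rg-legacy-app"
  location = "westeurope"
}
```

```sh
# Map the real resource to the address above, then verify convergence
terraform import azurerm_resource_group.legacy \
  "/subscriptions/<subscription-id>/resourceGroups/rg-legacy-app"
terraform plan   # should report no pending changes if the config matches
```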
3. terraform init vs. terraform refresh
• Terraform Init: This command initializes a working directory. It downloads provider plugins, sets up the backend configuration, and prepares your Terraform environment for subsequent commands.
• Terraform Refresh: This command updates the state file with the latest real-world resource status. It ensures that the state reflects any manual changes or drift by querying the current infrastructure.
While init is setup-related, refresh ensures state accuracy and can be particularly useful before planning or applying changes.
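A typical sequence (on recent Terraform versions, `terraform plan -refresh-only` is the preferred way to inspect drift; the standalone `terraform refresh` command still works but is deprecated):

```sh
terraform init                 # download providers, configure the backend
terraform plan -refresh-only   # preview drift between state and real infrastructure
```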
4. Key Terraform Commands
• terraform plan: Creates an execution plan, showing which actions Terraform will
take to reach the desired state.
• terraform apply: Applies the changes required to reach the desired state of the configuration.
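In a pipeline these are usually chained so that apply executes exactly the plan that was reviewed:

```sh
terraform plan -out=tfplan   # save the reviewed plan to a file
terraform apply tfplan       # apply exactly that plan, nothing else
```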
5. Terraform Modules
Terraform modules are containers for multiple resources that are used together. They allow you to:
• Encapsulate configuration for reusability, which makes it easier to manage repeated patterns.
• Share and version modules internally and externally, improving collaboration and reducing duplicated efforts in infrastructure management.
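A sketch of a versioned module call; the registry source, module name, and inputs are hypothetical:

```hcl
module "network" {
  source  = "app.terraform.io/example-org/network/azurerm"  # or a git/local path
  version = "1.2.0"                                         # pinned module version

  vnet_name     = "vnet-prod"
  address_space = ["10.0.0.0/16"]
}
```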
DevOps and CI/CD
1. Branching Strategies
A well-defined branching strategy is crucial for managing code and deployments. Common strategies include:
• Gitflow: Uses feature branches, develop and master branches, and supports parallel development.
• Environment promotion: Using separate pipelines or environments in your CI/CD tool to automate builds, tests, and deployments across different stages.
2. Integrating Pipelines with Azure
Both Azure DevOps and GitHub Actions can integrate with Azure through:
• Using tasks such as the Azure CLI, ARM deployment tasks, or pre-built GitHub Actions to interact with Azure resources.
• Establishing secure authentication and authorization measures so that pipelines can deploy infrastructure and applications directly into Azure.
3. Pipeline Structure
A pipeline definition is typically organized into:
• Stages: Logical groupings such as build, test, staging, and production deployments.
• Jobs: Units of work within a stage, each running on an agent.
• Steps/Tasks: Individual commands or scripts that perform actions (e.g., compiling code, running tests, deploying artifacts).
Each pipeline is written using YAML (or a visual designer) where you specify the sequence, dependencies, environment variables, and conditions under which certain jobs execute.
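A minimal Azure Pipelines sketch showing all three levels; stage, job, environment names, and the build commands are hypothetical:

```yaml
trigger:
  branches:
    include: [main]

stages:
- stage: Build
  jobs:
  - job: BuildAndTest
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: make build        # compile the application
    - script: make test         # run the test suite
- stage: Deploy
  dependsOn: Build
  condition: succeeded()        # only deploy if Build succeeded
  jobs:
  - deployment: DeployStaging
    environment: staging        # hypothetical environment name
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploying artifacts"
```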
4. Secrets Management
Secrets should never be hard-coded in your pipeline or code. Best practices include:
• Using secured secret stores like Azure Key Vault, or secret management provided by your CI/CD system (e.g., GitHub Secrets, Azure Pipelines Library).
• Using RBAC and encryption to ensure that only authorized processes and users have access to these secrets.
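For example, the AzureKeyVault task can pull secrets into pipeline variables at runtime; the service connection, vault, and secret names below are assumptions:

```yaml
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'my-service-connection'  # hypothetical service connection
    KeyVaultName: 'kv-example'
    SecretsFilter: 'DbPassword'                 # or '*' for all secrets
- script: ./deploy.sh
  env:
    DB_PASSWORD: $(DbPassword)   # secret variables must be mapped explicitly
```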
5. Manual Approvals and Gates
For deployments to staging or production, manual approvals ensure quality control and risk mitigation. You can configure:
• Pre-deployment gates that pause the pipeline until an authorized person approves the change.
• Release pipelines with explicit approval steps that trigger deployments only after validation checks are passed.
• Notifications and audit trails so every approval is tracked and logged.
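In YAML pipelines this is commonly expressed by targeting an environment; a sketch, assuming approvals and checks are configured on a hypothetical 'production' environment in the Azure DevOps portal:

```yaml
- stage: Production
  jobs:
  - deployment: DeployProd
    environment: production   # pipeline pauses here until approvers sign off
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "released after approval"
```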
6. Azure Boards
Azure Boards is an agile project management tool integrated with Azure DevOps. It provides work items, backlogs, boards, and sprint planning, helping teams plan, track, and discuss work across the entire development cycle.
7. Ticketing Systems
In many DevOps environments, ticketing systems such as Jira, ServiceNow, or Azure Boards are used to manage work. Experience with these systems involves:
• Integrating issue trackers with source code management and CI/CD pipelines to maintain traceability from commit to deployment.
Kubernetes
1. Application Deployment Using Kubernetes
Applications in Kubernetes are typically deployed using declarative manifests (YAML files) such as Deployments, Services, and ConfigMaps. This can be done via:
• Applying manifests directly with kubectl apply -f.
• Using package managers like Helm to template and manage complex deployments.
2. What is a Deployment?
A Deployment is a controller that:
• Manages ReplicaSets, ensuring that a specified number of pod replicas are running at any given time.
• Provides declarative rolling updates and easy rollbacks for the pods it manages.
3. Ingress and Traffic Routing
Ingress resources define rules for routing external traffic to services inside the cluster. An Ingress:
• Works in conjunction with an Ingress Controller (like Nginx or Traefik) that enforces the rules and configurations.
• In some projects, integration with service meshes (e.g., Istio) has been implemented for advanced traffic management and observability.
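A minimal Deployment manifest tying these pieces together; the name, labels, and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired number of pod replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0.0
        ports:
        - containerPort: 8080
```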
4. Managing Secrets in Kubernetes
Kubernetes provides a native way to handle secrets via Secret objects. Best practices include:
• Enabling encryption at rest for the secret data in the etcd cluster.
• Limiting access using RBAC to ensure only the necessary pods or services have access.
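A sketch of a Secret and how a container consumes it; the names and value are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: change-me   # stored base64-encoded by the API server
---
# In the Deployment's container spec, reference it as an env var:
#   env:
#   - name: DB_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: db-credentials
#         key: password
```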
5. Debugging a Failing Pod
When a pod fails, several tools and commands come into play:
• Run kubectl describe pod <pod-name> for event and error details.
• Check logs using kubectl logs <pod-name> to troubleshoot the specific error.
• Often, centralized logging solutions (like the ELK stack or Fluentd) and monitoring systems (like Prometheus/Grafana) are set up for real-time alerting and historical analysis.
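A typical triage sequence:

```sh
kubectl describe pod <pod-name>              # events: scheduling, image pull, OOM kills
kubectl logs <pod-name> --previous           # logs from the last crashed container
kubectl get events --sort-by=.lastTimestamp  # recent cluster events in time order
```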
Azure Services
1. Working on an ADF Project
An ADF (Azure Data Factory) project involves orchestrating data workflows across various data sources. A typical configuration includes:
• Defining pipelines that orchestrate the data movement and transformation processes.
• Using activities that perform data extraction, transformation, and loading (ETL).
• Integrating with both on-premises and cloud data sources through self-hosted or Azure integration runtimes.
2. Integration Runtimes
ADF pipelines may involve both self-hosted and Azure cloud sources:
• The Self-Hosted Integration Runtime runs on your own infrastructure and enables secure access to on-premises or private-network data sources.
• The Azure Integration Runtime facilitates data movement and transformation using cloud-based compute resources.
The selection depends on where the source data resides and the security requirements.
3. Linked Services vs. Datasets
• Linked Services: These are connection strings or configuration details required for ADF to connect to a data source (e.g., a SQL Server, Blob Storage, etc.).
• Datasets: These describe the data structure and location within the data source that ADF will work with.
In short, linked services provide connectivity, while datasets provide the schema and details about the actual data.
4. Pipeline Parameters
• Pipeline parameters allow you to pass different values into pipelines dynamically at runtime.
• They can be used in activities, linked services, and datasets, enabling a single pipeline definition to work across multiple scenarios.
• The output of parameterized runs can be fed into subsequent activities for further processing or decision-making.
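A fragment of ADF JSON showing how a property consumes a parameter via an expression; the sourceFolder parameter name is hypothetical:

```json
{
  "folderPath": {
    "value": "@pipeline().parameters.sourceFolder",
    "type": "Expression"
  }
}
```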
5. Securing Secrets with Azure Key Vault
Azure Key Vault is the recommended way to store and manage sensitive configuration details. To access key vault secrets securely:
• Use Managed Identities to give Azure services an identity and permissions to access the Key Vault without hardcoding credentials.
• Reference key vault secrets in your pipelines or resource configurations so that they are pulled in securely at runtime.
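A CLI sketch, assuming hypothetical names and a Key Vault using the Azure RBAC permission model (rather than access policies):

```sh
# Give the web app a system-assigned identity
az webapp identity assign --name app-example --resource-group rg-example

# Grant that identity read access to secrets in the vault
az role assignment create \
  --assignee <principal-id-from-previous-output> \
  --role "Key Vault Secrets User" \
  --scope "/subscriptions/<sub-id>/resourceGroups/rg-example/providers/Microsoft.KeyVault/vaults/kv-example"
```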
6. Designing a Highly Available Web App
For a highly available web app, Azure App Service is a common choice. It offers:
• Built-in auto-scaling
• Regional redundancy
• Integration with Azure Front Door or Traffic Manager to route users to the healthiest available endpoint
To connect the app securely to an Azure SQL database, combine this with:
• Managed Identity authentication so that the web app can securely connect to SQL without storing credentials.
• Encryption (both at rest and in transit) and auditing to ensure data security.
7. Exposing a Web App Across Regions
To securely expose a web app across multiple regions, you could use a global entry point such as Azure Front Door or Traffic Manager, together with:
• Proper endpoint configurations and DNS management to ensure connectivity while enforcing security policies.
8. Managed Identities
Managed Identities provide an automatically managed identity for Azure resources. They work by:
• Allowing services to authenticate to Azure Key Vault, SQL databases, and other resources without managing credentials explicitly.
• Being granted permissions through Azure RBAC so that they only get the exact permissions required.
9. Restricting Access with RBAC
Using Azure's RBAC, you can assign roles like the built-in Reader role to limit permissions. This ensures that a resource (or identity) can only view resources without making any modifications. You can scope these roles at the resource group, subscription, or individual resource level.
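For instance, scoping the Reader role to a single resource group (names and IDs are placeholders):

```sh
az role assignment create \
  --assignee <principal-or-object-id> \
  --role "Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/rg-example"
```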
10. Network Security Groups (NSGs)
NSGs are used to filter network traffic to and from Azure resources. They function by:
• Defining inbound and outbound rules based on source/destination IP addresses, ports, and protocols.
• Being associated with subnets or individual network interfaces, thereby providing granular control over the traffic flow.
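A sketch of one such rule via the CLI; the NSG and rule names are hypothetical:

```sh
# Allow inbound HTTPS from the internet, evaluated at priority 100
az network nsg rule create \
  --resource-group rg-example --nsg-name nsg-web \
  --name AllowHttpsInbound --priority 100 \
  --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes Internet --destination-port-ranges 443
```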
11. Provisioning an AKS Cluster
Provisioning and configuring AKS typically involves:
• Using the Azure CLI, ARM templates, Terraform, or the Azure Portal to configure parameters like node count, node size, and Kubernetes version.
• Setting up integration with Azure Active Directory for authentication and RBAC for authorization.
• Configuring additional features, such as network policies, monitoring (using Azure Monitor), and auto-scaling.
• Following security best practices, including private cluster configuration and integrating with Azure Key Vault for secret management.
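A CLI sketch covering several of these points; names are hypothetical and flag availability varies by CLI version:

```sh
az aks create \
  --resource-group rg-example --name aks-example \
  --node-count 3 --node-vm-size Standard_DS2_v2 \
  --enable-managed-identity \
  --enable-aad --enable-azure-rbac \
  --network-plugin azure --network-policy azure \
  --generate-ssh-keys
```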