Terraform
Nuthan Gatla
What is Terraform, and how does it work?
"If AWS infrastructure was created manually and we now need Terraform to manage it without recreating resources, we use the terraform import command. This allows us to bring existing AWS resources under Terraform's management."
2. Step-by-Step Process
provider "aws" {
  region = "us-east-1"
}
Command:
terraform init
"Next, we find the resource details using AWS CLI or AWS Console. For example, if we
are importing an EC2 instance, we get its Instance ID:"
aws ec2 describe-instances --query "Reservations[*].Instances[*].InstanceId"
"We write a Terraform configuration matching the resource but without unnecessary
attributes."
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}
"Now, we use the terraform import command to sync the AWS resource with
Terraform’s state."
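The import command itself maps the live resource to the resource block above; the instance ID here is a placeholder taken from the describe-instances output:

```shell
# Map the existing EC2 instance to aws_instance.example in state
# (i-0abcd1234efgh5678 is a placeholder ID)
terraform import aws_instance.example i-0abcd1234efgh5678
```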
This does not modify the resource but allows Terraform to track it.
"After importing, we run terraform show to see the current state and update main.tf if
needed."
"Finally, we verify everything using terraform plan and apply changes safely."
terraform plan
terraform apply
You have multiple environments (dev, stage, prod) for your application, and you want to use the same code for all of them. How can you do that?
Terraform workspaces allow you to use the same configuration but keep different states
for each environment.
terraform init
Terraform provides a default workspace called default, but we can create new workspaces
for each environment:
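Creating and switching workspaces is done with the workspace subcommands; a typical sequence:

```shell
terraform workspace new dev
terraform workspace new stage
terraform workspace new prod
terraform workspace select dev   # switch to the dev state
terraform workspace list         # * marks the current workspace
```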
provider "aws" {
  region = "us-east-1"
}

# Bucket name varies per workspace (resource name "app" is illustrative)
resource "aws_s3_bucket" "app" {
  bucket = "my-app-${terraform.workspace}"
}
• my-app-dev
• my-app-stage
• my-app-prod
terraform apply
Each workspace has a different Terraform state file, ensuring isolation between
environments.
Approach 2: Using Variable Files (.tfvars) for Each Environment
If your environments have different settings (e.g., instance size, region), use separate
variable files.
Example:
dev.tfvars
instance_type = "t2.micro"
env_name = "dev"
stage.tfvars
instance_type = "t3.small"
env_name = "stage"
prod.tfvars
instance_type = "t3.large"
env_name = "prod"
variable "instance_type" {}
variable "env_name" {}

resource "aws_instance" "app" {
  ami           = "ami-0c55b159cbfafe1f0"   # example AMI; resource name "app" is illustrative
  instance_type = var.instance_type

  tags = {
    Name = var.env_name
  }
}
This ensures that each environment has its own customized configuration.
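Each environment is then deployed by pointing Terraform at the matching variable file:

```shell
terraform plan -var-file="dev.tfvars"
terraform apply -var-file="dev.tfvars"

# stage and prod use their own files
terraform apply -var-file="stage.tfvars"
terraform apply -var-file="prod.tfvars"
```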
What is the Terraform state file, why is it important, and what are some best practices?
The Terraform state file is a JSON file that stores the current state of the managed infrastructure. The state file is like a blueprint that records information about the infrastructure you manage.
Why is it important:
Terraform uses a state file (terraform.tfstate) to track and manage resources efficiently. It prevents duplication, improves performance, detects drift, and allows safe modifications.
Best practices:
While Terraform stores the state file locally by default, in a team environment, we use
remote backends like AWS S3 with state locking (DynamoDB) to prevent conflicts.
We follow best practices like encryption, backup, and using terraform state commands instead of manual edits. If the state file is deleted, Terraform loses track of resources, so remote state management is critical.
"For example, if I create an EC2 instance using Terraform, the state file will store its
instance ID. When I modify the instance type, Terraform will compare the new configuration
with the state file and apply only the necessary changes instead of creating a new
instance."
A Jr DevOps Engineer accidentally deleted the state file. What steps should we take to resolve this?
1. Recover Backup: If available, restore the state file from a recent backup (e.g., S3 versioning on the remote backend).
2. Rebuild State: If no backup exists, re-import the existing resources into a fresh state file using terraform import.
3. Review and Prevent: Analyze the incident, implement preventive measures, and educate team members on best practices to avoid similar incidents in the future.
Your team is adopting a multi-cloud strategy, and you need to manage resources on both AWS and Azure using Terraform. How do you structure your Terraform code to handle this?
Final Answer for Interview
"To manage AWS and Azure resources with Terraform, I structure the code using modules
for each cloud provider, ensuring separation of concerns. I use remote state storage (S3 for
AWS, Azure Blob for Azure) to avoid conflicts. Each environment (dev, stage, prod) has its
own variable files (.tfvars) for flexibility. Terraform providers are configured separately, and I
use CI/CD pipelines for automated deployment. This approach ensures modularity,
scalability, and consistency in multi-cloud infrastructure management."
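A minimal sketch of the layout described above; the module paths are illustrative:

```hcl
# providers.tf — both providers configured side by side
provider "aws" {
  region = "us-east-1"
}

provider "azurerm" {
  features {}
}

# Each cloud gets its own module tree for separation of concerns
module "aws_network" {
  source = "./modules/aws/network"   # illustrative path
}

module "azure_network" {
  source = "./modules/azure/network" # illustrative path
}
```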
There are some bash scripts that you want to run after creating your resources with
terraform so how would you achieve this?
resource "aws_instance" "example" {
  # ... ami, instance_type, and other arguments ...

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",            # example commands; substitute your script
      "sudo apt-get install -y nginx",
    ]

    connection {
      type        = "ssh"
      user        = "your-ssh-user"
      private_key = file("/path/to/your/private-key.pem")
      host        = self.public_ip   # self refers to this instance
    }
  }
}
Best Practices for remote-exec
Use remote-exec only when necessary (Avoid it if User Data or Ansible can do the
job).
Ensure SSH access is set up (Use correct key pairs and security group rules).
Use inline scripts for small tasks, and upload larger scripts using file provisioner.
Avoid hardcoding sensitive data in scripts.
For example, I have used remote-exec to install Nginx on EC2 instances after provisioning. However, in production, I prefer User Data for instance initialization and Ansible for configuration management, as they are more scalable and reliable.
Your company is looking for ways to enable HA. How can you perform blue-green deployments using Terraform?
Blue-Green Deployment is a strategy to achieve High Availability (HA) and zero downtime during deployments. It involves having two identical environments: Blue (the currently active environment) and Green (the new version).
Once the Green environment is tested, we switch traffic from Blue to Green, minimizing
downtime and rollback risks.
Why Use Blue-Green Deployment?
Approach 1: Using Load Balancer (Best for HA & AWS, Azure, GCP)
We deploy two environments (Blue & Green) behind an Application Load Balancer
(ALB).
Terraform updates the ALB Target Group to switch traffic from Blue to Green.
Approach 2: Using DNS Switching
Instead of ALB, we can use DNS (e.g., Route 53, Azure DNS, Cloud DNS) to switch traffic between Blue & Green.
Ensure Green is fully tested before activation.
Monitor logs & rollback quickly if needed.
Use feature flags for gradual rollouts.
To enable high availability (HA) and perform blue-green deployments using Terraform,
we create two identical environments, Blue (active) and Green (new).
I prefer using an Application Load Balancer (ALB) with target groups to seamlessly
switch traffic from Blue to Green without downtime. Once the Green environment is
validated, we update the ALB to forward traffic to it.
Alternatively, for multi-cloud setups, we can use DNS (Route 53, Azure DNS) to update
records and redirect users from Blue to Green. This method ensures zero downtime,
quick rollback, and safe deployments.
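The ALB-based switch described above comes down to repointing the listener's default target group; a sketch that assumes aws_lb.main and the blue/green target groups are defined elsewhere:

```hcl
resource "aws_lb_listener" "web" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "forward"
    # Changing this reference from blue to green and running
    # `terraform apply` flips all traffic to the Green environment.
    target_group_arn = aws_lb_target_group.green.arn
  }
}
```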
Your company wants to automate Terraform through CI/CD pipelines. How can you integrate Terraform with CI/CD pipelines?
Tools for Terraform CI/CD : Jenkins, GitHub Actions, GitLab CI/CD, Azure DevOps,
CircleCI.
Best Practices for Terraform CI/CD
Use remote state storage (AWS S3, Terraform Cloud, Azure Blob) for collaboration.
Implement security scanning (Checkov, tfsec) to detect misconfigurations.
Use variables and modules to make Terraform code reusable.
Restrict terraform apply to only run on the main branch (avoid accidental changes).
Enable approvals before applying infrastructure changes in production.
To automate Terraform using a CI/CD pipeline, I would integrate Terraform with a tool like
GitHub Actions, GitLab CI, or Jenkins.
In a typical setup, when a developer pushes Terraform code to Git, the pipeline automatically runs terraform fmt, validate, and plan to check for errors. After a review or approval, it executes terraform apply to deploy the infrastructure.
For state management, we use remote backends like AWS S3 to ensure consistency.
Additionally, we add security checks using tfsec or Checkov to detect misconfigurations.
This ensures a reliable, automated, and secure Terraform workflow.
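Whatever CI tool is used, the pipeline stages reduce to roughly this command sequence (tool-agnostic sketch):

```shell
terraform fmt -check         # fail the build on unformatted code
terraform init -input=false
terraform validate
terraform plan -out=tfplan
# --- manual approval gate on the main branch ---
terraform apply -input=false tfplan
```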
Describe how you can use Terraform with infrastructure deployment tools like Ansible
or Chef.
Terraform is used for infrastructure provisioning, while Ansible and Chef are used for
configuration management. We can integrate Terraform with these tools to fully automate
infrastructure deployment and configuration.
Why Use Terraform with Ansible or Chef?
Combining them allows:
End-to-End Automation – From provisioning to configuration.
Idempotency – Terraform ensures infrastructure consistency, Ansible ensures
configuration consistency.
Improved Scalability – Terraform handles cloud resources, Ansible manages software
updates.
"Suppose we need to deploy an AWS EC2 instance and configure it with Apache using
Ansible."
Step 1: Terraform Configuration to Create an EC2 Instance: main.tf

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"   # example AMI
  instance_type = "t2.micro"
  key_name      = "my-key"

  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("~/.ssh/my-key.pem")
      host        = self.public_ip
    }

    inline = [
      "echo 'instance is reachable'",   # placeholder; real commands were not shown
    ]
  }

  tags = {
    Name = "Terraform-Ansible-Instance"
  }
}

output "instance_ip" {
  value = aws_instance.web.public_ip
}
Step 2: Ansible Playbook to Install Apache: ansible/playbook.yml
---
- hosts: all
  become: true
  tasks:
    - name: Install Apache
      apt:
        name: apache2
        state: present

    - name: Start Apache
      service:
        name: apache2
        state: started

Step 3: Trigger the playbook from Terraform with local-exec (the trailing comma in the inventory string makes Ansible treat it as a host list; typically placed in a null_resource that depends on the instance):

provisioner "local-exec" {
  command = <<EOT
ansible-playbook -i '${aws_instance.web.public_ip},' -u ubuntu --private-key ~/.ssh/my-key.pem ansible/playbook.yml
EOT
}
Explanation
Terraform is great for provisioning infrastructure, while tools like Ansible and Chef handle
software configuration.
In a typical setup, I use Terraform to deploy cloud resources like EC2 instances. Then, I use
Terraform’s local-exec or remote-exec provisioners to trigger an Ansible playbook or
install Chef on the instance. This ensures infrastructure is provisioned and configured
automatically.
Your infrastructure contains database passwords and other sensitive information.
How can you manage secrets and sensitive data in Terraform?
Terraform should never store secrets in plain text. Instead, use secure methods like
environment variables, remote backends, or secret management tools (AWS Secrets
Manager, Vault, etc.).
Terraform does not handle secrets natively, so I use best practices to manage them
securely.
For simple cases, I use environment variables (TF_VAR_db_password) to avoid
hardcoding.
For production, I use AWS Secrets Manager or HashiCorp Vault to retrieve secrets
dynamically.
To secure Terraform state, I store it in AWS S3 with encryption to prevent exposure.
Additionally, I use sensitive = true in Terraform variables to prevent secrets from being
printed in logs.
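A sketch of the two techniques mentioned, sensitive variables and dynamic retrieval; the secret name prod/db-password is an example:

```hcl
variable "db_password" {
  type      = string
  sensitive = true   # redacts the value in plan/apply output
}

# Fetch the secret at apply time instead of hardcoding it
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "prod/db-password"   # example secret name
}
```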
You have 20 servers created through Terraform, but you want to delete one of them. Is it possible to destroy a single resource out of multiple resources using Terraform?
Yes, you can delete a single resource from Terraform without affecting others using the
terraform destroy -target command or by removing it from the configuration and applying
changes.
Command: terraform destroy -target=aws_instance.server1
Effect: Deletes server1, but other servers remain.
Limitation: The resource still exists in the .tf files, so the next terraform apply will recreate it.
Steps (to remove it permanently): delete the server1 resource block from the .tf file so only the servers you want to keep remain, e.g.:

resource "aws_instance" "server2" {   # a remaining server; name illustrative
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

terraform apply
Yes, Terraform allows you to delete a single resource while keeping others.
If I want to delete only one resource without affecting others, I use terraform destroy -
target=<resource_name>.
If I want to permanently remove it, I delete it from the .tf file and run terraform apply.
If I want to stop managing it but keep it in the cloud, I use terraform state rm.
What are the advantages of using Terraform's "count" feature over resource
duplication?
• Code Simplicity
• Scalability
• Easier Updates
• Dynamic Resource Creation
• Reduced Errors
This makes updates easier—if I need to change an AMI or instance type, I update a single
place.
It also helps in auto-scaling environments where I can adjust the number of resources by
changing one variable.
This keeps my Terraform code DRY (Don’t Repeat Yourself), efficient, and easy to
manage at scale.
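The count pattern described above, sketched minimally (AMI and sizing are placeholders):

```hcl
variable "server_count" {
  default = 5
}

resource "aws_instance" "app" {
  count         = var.server_count          # one block, N instances
  ami           = "ami-0c55b159cbfafe1f0"   # change once, applies to all
  instance_type = "t2.micro"

  tags = {
    Name = "app-${count.index}"             # app-0, app-1, ...
  }
}
```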
What is Terraform's "module registry," and how can you leverage it?
A collection of pre-built Terraform modules for AWS, Azure, GCP, Kubernetes, etc.
Hosted by HashiCorp at Terraform Registry.
Allows teams to reuse and share infrastructure code instead of writing everything from
scratch.
• Reusability
• Time-Saving
• Consistency
• Security
• Community & Official Modules
I use it to quickly provision common infrastructure like VPCs, databases, and Kubernetes
clusters without writing everything manually.
It ensures consistency, security, and best practices in deployments.
We can use the private module registry to enforce internal standards and improve collaboration.
This approach saves time, reduces errors, and enhances infrastructure maintainability.
Registry: https://registry.terraform.io/browse/modules
How do you test your Terraform code?
Terraform testing ensures infrastructure is deployed correctly, follows best practices, and prevents misconfigurations before applying changes.
Why Is Automated Testing Important for Terraform?
I start with terraform fmt and terraform validate to check for syntax errors.
I use checkov or tfsec to scan for security misconfigurations.
For deeper testing, I use Terratest (Go) to deploy and validate infrastructure.
Finally, I integrate these tests into a CI/CD pipeline (GitHub Actions, Jenkins, GitLab CI) to
automatically test Terraform code before deployment.
This approach ensures our infrastructure is secure, error-free, and follows best practices
before it reaches production.
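The security-scanning step mentioned above, as run locally or in CI; both are real open-source scanners, shown with their basic invocations:

```shell
tfsec .        # scan the current directory for security issues
checkov -d .   # policy-as-code scan of the same directory
```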
You are tasked with migrating your existing infrastructure from Terraform version 1.7 to version 1.8. What kind of considerations and steps would you take?
Upgrading Terraform versions requires careful planning to avoid breaking changes, ensure
compatibility, and maintain infrastructure stability.
Key Considerations Before Upgrading
• Breaking Changes: check the Terraform 1.8 release notes for breaking changes.
• State File Safety: back up the Terraform state file (terraform.tfstate) before upgrading.
Test the upgrade in a sandbox environment before applying to production.
Deploy the upgrade in production after validation.
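A sketch of the upgrade sequence; the tfenv version manager is one common option, not the only one, and assumes tfenv is installed:

```shell
terraform version                           # confirm the current version
cp terraform.tfstate terraform.tfstate.bak  # back up local state (if not remote)
tfenv install 1.8.0
tfenv use 1.8.0
terraform init -upgrade                     # refresh providers for the new version
terraform plan                              # verify no unexpected changes
```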
What is a Terraform provider?
A Terraform provider is a plugin that allows Terraform to interact with cloud platforms (AWS, Azure, GCP), SaaS applications (GitHub, Datadog), and on-prem services.
A bridge between Terraform and external systems (AWS, Kubernetes, GitHub, etc.).
Providers authenticate and configure how Terraform interacts with services.
Examples:
• aws (Amazon Web Services)
• azurerm (Microsoft Azure)
• google (Google Cloud)
• kubernetes (Kubernetes)
A Terraform provider is a plugin that allows Terraform to interact with cloud platforms like
AWS, Azure, and GCP.
To use a provider, we define it in Terraform, initialize it with terraform init, and then use it to
create and manage resources. For example, I can use the AWS provider to create an S3
bucket.
Terraform also supports multi-cloud deployments, where I can define multiple providers
in the same configuration. I always ensure to pin provider versions, use environment
variables for authentication, and keep providers updated to follow best practices.
What is the difference between Terraform resources and modules?
Terraform resources are the building blocks of infrastructure, while modules are reusable collections of resources that improve organization, reusability, and scalability.
Terraform Resource:
A single block that declares one infrastructure object, such as an EC2 instance or an S3 bucket.
Terraform Module:
A reusable collection of resource blocks, grouped behind input variables and outputs, that can be called from multiple configurations.
What are Terraform variables, and how do you use them?
• Local Variables (local.*): define constants within a module, e.g. locals { name = "my-instance" }
• Environment Variables (TF_VAR_*): set variables externally, e.g. export TF_VAR_region="us-east-1"
Final Answer (How to Say It in an Interview)
This approach makes Terraform code dynamic, reusable, and easy to manage across
environments.
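A minimal input-variable sketch tying the pieces above together:

```hcl
variable "region" {
  type        = string
  description = "AWS region to deploy into"
  default     = "us-east-1"
}

provider "aws" {
  region = var.region   # overridable via -var, .tfvars files, or TF_VAR_region
}
```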
What does the terraform init command do?
The terraform init command is the first step in any Terraform project. It downloads provider plugins, initializes the backend, and prepares the working directory for Terraform operations.
For example, when using AWS S3 as a remote backend, terraform init configures the
backend and ensures Terraform can store the state remotely.
I also use terraform init -upgrade to update provider versions and terraform init -reconfigure
when changing backends. Skipping terraform init results in errors, as Terraform cannot
execute without initialization.
How do you manage remote state in Terraform?
Remote state allows Terraform to store the terraform.tfstate file in a centralized, shared location instead of locally, enabling collaboration and preventing conflicts.
Why Use Remote State?
To manage remote state in Terraform, I configure the backend to store the terraform.tfstate
file in a shared location like AWS S3, Terraform Cloud, or Azure Blob Storage.
For example, in AWS, I store the state in S3, enable encryption, and use DynamoDB for
state locking to prevent conflicts. I also ensure access is restricted using IAM policies.
Using remote state helps teams collaborate efficiently, prevent conflicts, and improve
infrastructure security.
What is the terraform apply command, and how does it differ from terraform plan?
terraform apply:
The terraform apply command executes the planned changes to provision or modify
infrastructure.
It reads Terraform configuration and applies changes to match the desired state.
terraform plan:
The terraform plan command previews changes Terraform will make without applying
them.
It helps review and verify infrastructure changes before execution.
What is the difference between count and for_each in Terraform?
Both count and for_each allow Terraform to create multiple resources dynamically, but they
have different use cases and behaviors.
count in Terraform:
count is used when creating multiple identical resources based on a numeric value.
It works with lists and numbers.
for_each in Terraform:
for_each is used when each resource needs unique attributes; it works with maps and sets of strings.
In Terraform, count is used when creating identical resources based on a number, while
for_each is used when creating resources with unique attributes from a set or map.
For example, if I need 5 identical EC2 instances, I use count. If each instance requires
different configurations (e.g., frontend, backend), I use for_each.
I prefer for_each when resource order might change because it prevents unnecessary re-
creation of resources.
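The frontend/backend example above, sketched with for_each (AMI is a placeholder):

```hcl
resource "aws_instance" "service" {
  for_each      = toset(["frontend", "backend"])
  ami           = "ami-0c55b159cbfafe1f0"   # example AMI
  instance_type = "t2.micro"

  tags = {
    Name = each.key   # addresses are keyed by name, so reordering is safe
  }
}
```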
What are Terraform provisioners, and what is the difference between local-exec and remote-exec?
Terraform provisioners are used to execute scripts or commands during resource creation.
remote-exec runs commands inside the provisioned resource (e.g., an AWS EC2
instance).
local-exec in Terraform:
Executes commands on the machine running Terraform, not inside the created
resource.
Useful for running scripts, notifications, or triggering external processes.
remote-exec in Terraform:
Executes commands inside the created resource (e.g., AWS EC2, Azure VM).
Useful for configuring servers, installing software, or running scripts.
The local-exec provisioner runs commands on the local machine where Terraform is
executed, while remote-exec runs commands inside the created resource via SSH or
WinRM.
For example, I use local-exec to log instance creation or trigger an external API, whereas
I use remote-exec to install software like Apache inside an EC2 instance.
The terraform fmt command automatically formats Terraform configuration files to follow
HashiCorp’s best practices. It improves code readability, enforces consistency, and
ensures teams maintain a clean and standardized Terraform codebase.
I use terraform fmt -check in CI/CD pipelines to prevent unformatted code from being
merged, ensuring best practices are followed across the team.
What is the difference between terraform destroy and terraform apply -destroy?
In current Terraform versions, terraform destroy is an alias for terraform apply -destroy: both build and execute a plan that removes every resource tracked in the state file.
The apply -destroy form is useful in saved-plan workflows (terraform plan -destroy -out=tfplan, then terraform apply tfplan), where the destruction plan can be reviewed before execution.
What are Terraform modules, and how do you create a reusable module?
For example, instead of writing the same EC2 configuration repeatedly, I create an EC2 module that can be reused for different instances. I also use remote modules from the Terraform Registry to simplify deployments, such as VPCs or RDS databases.
Structure:
├── main.tf
├── variables.tf
└── outputs.tf
# main.tf
resource "aws_instance" "example" {
  ami           = var.ami
  instance_type = var.instance_type
}

# variables.tf
variable "ami" {}
variable "instance_type" {
  default = "t2.micro"
}

# outputs.tf
output "instance_id" {
  value = aws_instance.example.id
}
module "example_instance" {
  source        = "./path/to/module"
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.small"
}
Terraform prevents conflicts in a team environment using state locking, remote state
backends, workspaces, and CI/CD pipelines to ensure infrastructure consistency and
collaboration.
• State locking prevents two users from running terraform apply at the same time.
Instead of running Terraform manually, teams can automate Terraform execution in CI/CD
pipelines (GitHub Actions, GitLab, Jenkins).
Pull Requests (PRs) must pass terraform plan before applying changes.
✔ Use Remote State Storage (S3, Terraform Cloud) to centralize state management.
✔ Enable State Locking (DynamoDB, Terraform Cloud) to prevent conflicts.
✔ Use Workspaces for multiple environments to avoid cross-environment issues.
✔ Automate Terraform with CI/CD to enforce code reviews and prevent manual errors.
✔ Implement Role-Based Access Control (RBAC) to restrict who can apply Terraform
changes.
State Locking ensures only one user can modify the state at a time (e.g., S3 + DynamoDB).
Workspaces allow teams to work on separate environments like dev and prod.
CI/CD Pipelines automate Terraform execution, ensuring changes go through code reviews
before deployment.
What are Terraform dynamic blocks, and how are they used?
For example, instead of manually defining multiple security group rules, I use a dynamic
block that loops over a list of ports, ensuring scalability and maintainability.
Dynamic blocks are most useful when working with nested resources, such as IAM
policies, security groups, and network ACLs, where the number of configurations may
change over time.
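The security-group example described above, sketched with a dynamic block; the ports list is illustrative:

```hcl
variable "ingress_ports" {
  default = [22, 80, 443]
}

resource "aws_security_group" "web" {
  name = "web-sg"

  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]   # open to the world; tighten in real use
    }
  }
}
```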
The terraform refresh command was used to update the Terraform state file with real-world infrastructure changes, but it has since been deprecated.
Now, I use terraform plan -refresh-only to check and update the state without modifying
infrastructure, or terraform apply if I need to sync and apply changes.
This ensures state consistency while maintaining visibility into infrastructure updates.
What are the limitations of Terraform?
State Management – The state file can become large and difficult to manage, but using
remote backends with locking helps.
No Built-in Secret Management – Secrets are stored in plaintext, so I integrate Vault or
AWS Secrets Manager.
Limited Procedural Logic – Terraform lacks advanced loops and conditionals, but I use
count, for_each, and dynamic blocks as workarounds.
No Automatic Rollback – If an apply fails, Terraform doesn’t rollback, so I rely on CI/CD
pipelines for safe deployment.
Not Ideal for Configuration Management – I use Terraform for infrastructure
provisioning and tools like Ansible for software configuration.
By following best practices, I ensure Terraform remains reliable, scalable, and secure in
team environments.
What is state locking in Terraform, and why is it important?
State locking prevents multiple users from modifying the Terraform state file (terraform.tfstate) simultaneously, avoiding conflicts and ensuring infrastructure consistency.
I typically store Terraform state in AWS S3 with state locking enabled via DynamoDB.
When Terraform runs, it locks the state in DynamoDB, preventing others from applying
changes at the same time.
For enterprise setups, I use Terraform Cloud, which provides automatic locking and
collaboration features without extra setup.
This ensures state consistency, prevents corruption, and avoids race conditions in multi-user environments.
How does Terraform support conditional resource creation?
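The standard answer to this question is a conditional count (or for_each) expression; a minimal sketch with placeholder values:

```hcl
variable "create_instance" {
  type    = bool
  default = false
}

resource "aws_instance" "optional" {
  # 1 copy when the flag is true, 0 copies (resource skipped) when false
  count         = var.create_instance ? 1 : 0
  ami           = "ami-0c55b159cbfafe1f0"   # example AMI
  instance_type = "t2.micro"
}
```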
What is the terraform taint command?
The terraform taint command was used to mark resources for recreation but was deprecated in Terraform 0.15.
Now, I use terraform apply -replace to explicitly force Terraform to destroy and recreate
a resource when needed.
This is useful for fixing corrupt infrastructure, forcing updates, or testing new
deployments, ensuring controlled and trackable changes.
What is the difference between provider and provisioner in Terraform?
Terraform Provider:
A plugin that lets Terraform communicate with an external platform and manage its resources, e.g., aws (AWS), azurerm (Azure), kubernetes (Kubernetes).
Terraform Provisioner:
A block that runs commands or scripts during resource creation, either inside the created resource (remote-exec) or on the machine running Terraform (local-exec).
For example, I use the AWS provider to create an EC2 instance and a remote-exec
provisioner to install Apache on it. However, I avoid overusing provisioners and prefer
using Ansible for post-deployment configurations.