Terraform
Ahmed Galal
MCSE, CCIE, CEH, CISSP
Network Solution Architect.
Course Content Flow
Terraform Fundamentals
What is IaC
Terraform:
• Is a tool for building, changing, and versioning infrastructure safely and
efficiently.
• Is a declarative provisioning tool based on the Infrastructure as Code
paradigm.
• Customers define a desired state and Terraform works to ensure that state is
maintained.
• Allows customers to define infrastructure through repeatable templates.
• Is open source, built and maintained by HashiCorp.
• Uses its own syntax - HCL (HashiCorp Configuration Language).
• Is written in Golang.
• Lets customers define infrastructure in config/code and enables them to
rebuild, change, and track infrastructure changes with ease.
• Is completely platform agnostic.
• Enables customers to apply coding practices such as keeping code in a
version control system and writing automated tests/tasks.
• Has a large support community.
• Is fast and efficient to operate.
• Lets customers validate changes before applying them (a dry run).
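The declarative model can be sketched with a minimal configuration; Terraform then converges real infrastructure toward this desired state via terraform init, terraform plan, and terraform apply. The region and AMI id below are illustrative values, not part of the course material:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1" # illustrative region
}

# Desired state: one small EC2 instance.
resource "aws_instance" "demo" {
  ami           = "ami-082b5a644766e0e6f" # illustrative AMI id
  instance_type = "t2.micro"
}
```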
Section Assessment
• We can define the provider either in the terraform required_providers block
or inside a provider block.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_user
tags = {
  tag-key = "tag-value"
}
provider "aws" {
  shared_config_files      = ["/Users/tf_user/.aws/conf"]
  shared_credentials_files = ["/Users/tf_user/.aws/creds"]
  profile                  = "customprofile"
}

Alternatively, credentials and region can be set as environment variables:

$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
$ export AWS_REGION="us-west-2"
1. Create an IAM user that has programmatic access with the appropriate
permissions to create EC2 instances, VPCs, and subnets. Use this user to do the
below tasks.
2. Create a new VPC and a new public subnet, use any supported CIDR blocks.
3. Create an EC2 instance of t2.micro type and assign a tag to it as follows: name =
“dolfined_instance”; ensure the instance will be assigned a public IP.
4. Ensure the EC2 instance is created inside the newly created VPC public subnet
above.
5. Finally destroy the terraform deployed infrastructure.
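A minimal sketch of these tasks, assuming the AWS provider and credentials are already configured; the CIDR blocks are arbitrary supported values and the AMI id is reused from the example below (verify it for your region):

```hcl
resource "aws_vpc" "lab_vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.lab_vpc.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true # ensures launched instances get a public IP
}

resource "aws_instance" "dolfined" {
  ami           = "ami-026b57f3c383c2eec" # illustrative; verify for your region
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.public.id
  tags = {
    name = "dolfined_instance"
  }
}
```

Running terraform destroy afterwards removes all three resources.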
ami           = "ami-026b57f3c383c2eec"
instance_type = "t2.micro"
tags = {
  Name = "dolfined_demo"
}

• To output the attributes of the created resource, we can use the output
block to display the desired resource attributes.

output "myec2_instance" {
  value = aws_instance.myec2.id
}
• The terraform output command is used to extract the value of an output variable
from the state file.
• With no additional arguments, output will display all the outputs for the root
module. If an output NAME is specified, only the value of that output is printed.
output "instance_ips" {
  value = aws_instance.web.*.public_ip
}

output "lb_address" {
  value = aws_alb.web.public_dns
}

output "password" {
  sensitive = true
  value     = var.secret_password
}
1. Create an IAM user account and assign the name “dolfined_user” to it.
2. Create a new NAT gateway and a new Elastic IP address, and assign the EIP to the
newly created NAT gateway.
3. Display the EIP public IP on your terminal; also display the dolfined_user ARN and
the NAT gateway ID on your terminal screen.
4. Finally, destroy all your Terraform-deployed infrastructure.
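One possible sketch of these tasks; the subnet reference is an assumption (it presumes a public subnet named aws_subnet.public already exists in the configuration):

```hcl
resource "aws_iam_user" "dolfined_user" {
  name = "dolfined_user"
}

resource "aws_eip" "nat_eip" {}

resource "aws_nat_gateway" "ngw" {
  allocation_id = aws_eip.nat_eip.id
  subnet_id     = aws_subnet.public.id # assumed existing public subnet
}

output "eip_public_ip" {
  value = aws_eip.nat_eip.public_ip
}

output "dolfined_user_arn" {
  value = aws_iam_user.dolfined_user.arn
}

output "nat_gateway_id" {
  value = aws_nat_gateway.ngw.id
}
```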
terraform {
# ...
}
• The special terraform configuration block type is used
to configure some behaviors of Terraform itself.
• Terraform settings are written inside the terraform block.
• The required_version setting accepts a Terraform version
constraint string which specifies which versions
of Terraform can be used with your configuration.

terraform {
  required_version = "< 0.11"
}

• The required_providers block specifies all of the
providers required by the current module, mapping
each local provider name to a source address and a
version constraint.

terraform {
  required_providers {
    aws = {
      version = ">= 2.7.0"
      source  = "hashicorp/aws"
    }
  }
}
• The dependency lock file is a file that belongs to the configuration in the working
directory of the Root Module.
• The lock file is always named .terraform.lock.hcl
• When terraform init is working on installing all of the providers needed for a
configuration, Terraform considers both the version constraints in the
configuration and the version selections recorded in the lock file.
• If a particular provider has no existing recorded selection, Terraform will select
the newest available version.
• If a particular provider already has a selection recorded in the lock file, Terraform
will always re-select that version for installation, even if a newer version has
become available.
• We can override that behavior by adding the -upgrade option when you
run terraform init.
• Terraform will also verify that each package it installs matches at least one of the
checksums it previously recorded in the lock file, if any, returning an error if none
of the checksums match.
• The new lock file entry records several pieces of information:
Ø Version
Ø Constraints
Ø Hashes
• The terraform plan command creates an execution plan, which lets you preview
the changes that Terraform plans to make to your infrastructure.
• By default, when Terraform creates a plan it:
Ø Reads the current state of any already-existing remote objects to make sure
that the Terraform state is up-to-date.
Ø Compares the current configuration to the prior state and notes any
differences.
Ø Proposes a set of change actions that should, if applied, make the remote
objects match the configuration.
• terraform plan is speculative: its proposed changes are not applied unless
you save the plan to a file and pass that file to a terraform apply command.
• In an automated Terraform pipeline, applying a saved plan file ensures the
changes are the ones expected and scoped by the execution plan.
• The terraform apply command is used to apply the changes required to reach the
desired state of the configuration, or the pre-determined set of actions generated
by a terraform plan execution plan.
• terraform apply -auto-approve - Skips interactive approval of plan before applying.
• The terraform destroy command is a convenient way to destroy all remote objects
managed by a particular Terraform configuration.
• terraform destroy with the -target flag allows us to destroy a specific resource
rather than all resources, as the main command does.
• terraform plan -destroy is used to show the behavior of any terraform destroy
command as a destroy plan.
• Prior to any operation, Terraform does a refresh to update the state with the
real infrastructure.
• This state is stored by default in a local file named "terraform.tfstate", but it
can also be stored remotely, which works better in a team-based work
environment.
• In case we have a single large state file, we can prevent Terraform from querying
the current state during operations like terraform plan.
Ø This can be achieved with the -refresh=false flag.
• By default, provisioners run when the resource they are defined within is created.
• Creation-time provisioners are only run during creation, not during updating or any
other lifecycle.
• If a Creation-time provisioner fails, the resource is marked as tainted.
Ø A tainted resource will be planned for destruction and recreation upon the
next terraform apply.
Ø You can change this behavior by setting the on_failure attribute.
• The on_failure attribute has two settings, either:
Ø continue - Ignore the error and continue with creation or destruction.
Ø fail (the default behavior) - Raise an error, stop, and taint the resource.
• If when = destroy is specified, the provisioner will run when the resource it is
defined within is destroyed.
• Destroy provisioners are run before the resource is destroyed.
Ø If they fail, Terraform will error and re-run the provisioners again on the
next terraform apply.
Ø Due to this behavior, care should be taken for destroy provisioners to be
safe to run multiple times.
1) Use Terraform to create an EC2 instance and install an apache server on it
using a Terraform remote provisioner.
2) Use a Terraform local provisioner to save the private IP address of the created
instance on your local machine.
3) After finishing the above tasks, remove any deployed infrastructure without
any interactive prompt.
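A hedged sketch combining both provisioner types; the AMI id, key pair name, and key file path are assumptions and must match your own environment:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-082b5a644766e0e6f" # illustrative AMI id
  instance_type = "t2.micro"
  key_name      = "my_keypair" # assumed existing key pair

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/my_keypair.pem") # assumed local key file
    host        = self.public_ip
  }

  # Remote provisioner: install and start apache on the instance.
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd -y",
      "sudo systemctl start httpd",
    ]
  }

  # Local provisioner: save the instance's private IP on the local machine.
  provisioner "local-exec" {
    command = "echo ${self.private_ip} >> private_ip.txt"
  }
}
```

Running terraform destroy -auto-approve afterwards removes the instance without an interactive prompt.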
• Data sources allow data to be fetched or collected for use elsewhere in Terraform
configuration.
• The data fetched can be outside terraform or it can be from another separate
Terraform configuration.
• A data source is accessed via a special kind of resource known as a data resource,
which is declared using a data block.
output "available_zones" {
value = data.aws_availability_zones.available.names[*]
}
output "current_region" {
value = data.aws_region.current.id
}
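The outputs above assume data resources declared along these lines:

```hcl
# Fetches the availability zones available in the configured region.
data "aws_availability_zones" "available" {
  state = "available"
}

# Fetches the region the provider is currently configured for.
data "aws_region" "current" {}
```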
• We may need to create multiple provider blocks with the same provider's name.
• For each additional non-default configuration, use the alias meta-argument to
provide an extra name segment.
• A provider block without an alias argument is the default configuration for that
provider
provider "aws" {
region = "us-east-1"
profile = "dev_admin"
}
# Additional provider configuration for west coast region; resources can reference this as `aws.west`.
provider "aws" {
alias = "west"
region = "us-west-2"
profile = "dev_admin"
}
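A resource selects a non-default provider configuration with the provider meta-argument; this minimal sketch uses the aws.west alias from above (the AMI id is a placeholder, not a real image):

```hcl
resource "aws_instance" "west_server" {
  provider      = aws.west                # uses the us-west-2 provider configuration
  ami           = "ami-0example123456789" # placeholder; must be valid in us-west-2
  instance_type = "t2.micro"
}
```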
What if we need to deploy resources using different AWS accounts or different AWS
users in the same configuration project?
You can add an import block to any Terraform configuration file. A common
pattern is to create an imports.tf file, or to place each import block beside
the resource block it imports into.
import {
  to = aws_instance.example
  id = "i-abcd1234"
}

resource "aws_instance" "example" {
  name = "hashi"
  # (other resource arguments...)
}

This import block defines an import of the AWS instance with the ID
"i-abcd1234" into the aws_instance.example resource in the root module.
• The terraform taint command informs Terraform that a particular object has
become degraded or damaged.
• Terraform represents this by marking the object as "tainted" in the Terraform state
and Terraform will propose to replace it in the next plan you create.
• This command will not modify the infrastructure but does modify the state file in
order to mark the resource as tainted.
• Once a resource is marked as tainted, the next plan will show that the resource will
be destroyed and recreated.
Ø The next apply will implement this change.
• A use case: manual changes have occurred outside of Terraform's management,
and you want all changes to be controlled within Terraform only.
• This command has now been replaced by the terraform apply -replace option.
• The terraform refresh command is used to reconcile the state Terraform knows
about (via its state file) with the real-world infrastructure.
• This does not modify the implemented infrastructure but does modify the state file.
• Example use case: when some resources have been changed manually outside
terraform management, this command can reconcile the state file to match with the
current implemented infrastructure.
• The command terraform apply -refresh-only provides the same behavior.
• Terraform has many levels of built-in logs that can be enabled by setting the TF_LOG
environment variable to one of the below key words.
• You can set TF_LOG to one of the log levels TRACE, DEBUG, INFO, WARN or ERROR to
change the verbosity of the logs.
• The most detailed verbose log level is TRACE.
• You can extract the log and save it to a local file using TF_LOG_PATH environment
variable when log is enabled.
export TF_LOG=TRACE
export TF_LOG_PATH=./logs.txt
1. Create a user with admin access privileges and use this user account to create
resources.
2. Using terraform, create two EC2 instances, one in us-east-1 and the other in us-
west-1 regions, respectively.
3. Using the AWS Console, create a new EC2 instance in us-east-1 region, tag it with
“manually_created”.
4. You are required to control all your resources from Terraform on your local
machine. Please do the needed configuration actions to achieve this.
5. At the end, your state file should include all the created EC2 instances; please use
your terminal to list all your resources without accessing your state file.
6. Your colleague manually installed unwanted applications on one of your EC2
instances using the AWS Console, and you want to revert it to its earlier state.
What could you do to maintain the desired state of that instance? Please do the
needful action.
7. Finally, destroy your deployed infrastructure.
© DolfinED All rights reserved
Terraform – HCL
Basics with AWS
Terraform loads variables in the following order, with later sources taking precedence over
earlier ones:
Ø Environment variables
Ø The terraform.tfvars file, if present.
Ø The terraform.tfvars.json file, if present.
Ø Any *.auto.tfvars or *.auto.tfvars.json files, processed in lexical order of their
filenames.
Ø Any -var and -var-file options on the command line, in the order they are provided.
• If the same variable is assigned multiple values, Terraform uses the last value it finds,
overriding any previous values.
Ø Note that the same variable cannot be assigned multiple values within a single
source.
Variables
• number: a numeric value. The number type can represent both whole numbers like
15 and fractional values like 6.283185.
• map (or object): a group of values identified by named labels, like {name = "Mabel",
age = 52}.
• set: the same as list, but its elements are not ordered and duplicates are not
allowed, e.g. [“us-east-1”, “us-east-2”].
• The type argument in a variable block allows you to restrict the type of value
that will be accepted as the value for a variable.
• To extract a value from a list data type, we need to reference it by index number.
• To extract a value from a map data type, we need to reference it by key name.

variable "bucket_var" {
  default = "dolfined98765412345"
  type    = string
}

variable "az_var" {
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
  type    = list(any)
}

variable "instance_types" {
  type = map(any)
  default = {
    us-east-1  = "t2.micro"
    us-west-2  = "t2.nano"
    ap-south-1 = "t2.small"
  }
}
listener {
  instance_port     = 8000
  instance_protocol = "http"
  lb_port           = 80
  lb_protocol       = "http"
}

variable "instance_types" {
  type = map(any)
  default = {
    us-east-1  = "t2.micro"
    us-west-2  = "t2.nano"
    ap-south-1 = "t2.small"
  }
}

resource "aws_instance" "web_server" {
  ami           = "ami-082b5a644766e0e6f"
  instance_type = var.instance_types["us-east-1"]
}
• String interpolation is an integral part of HCL; variables can be used via
${var_name} inside strings.
• It evaluates the expression given between the markers, converts the result to a string
if necessary, and then inserts it into the final string.
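A small sketch of interpolation; the variable and bucket name are illustrative:

```hcl
variable "env" {
  default = "dev"
}

# The bucket name evaluates to "dolfined-dev-logs".
resource "aws_s3_bucket" "logs" {
  bucket = "dolfined-${var.env}-logs"
}
```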
Hands-on Labs (HoLs)
Variables Count
Splat Expression
Conditionals
• A local value assigns a name to an expression, so you can use the name
multiple times within a module or your configuration instead of repeating
the expression.
• Local values can be declared together in a single locals block.

locals {
  info = {
    owner   = "dolfined_dev"
    service = "database"
  }
}
Local Values
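Referencing the local values above from a resource might look like this (the resource itself is illustrative):

```hcl
resource "aws_instance" "db_server" {
  ami           = "ami-082b5a644766e0e6f" # illustrative AMI id
  instance_type = "t2.micro"
  tags = {
    Owner   = local.info.owner   # "dolfined_dev"
    Service = local.info.service # "database"
  }
}
```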
• The Terraform language includes a number of built-in functions that you can call
from within expressions to transform and combine values.
• The general syntax for a function call is a function name followed by
comma-separated arguments in parentheses: function(argument1, argument2),
e.g. max(1, 2, 3).
• The Terraform language does not support user-defined functions, so only the
functions built into the language are available for use.
Functions (cont.)
• Functions are divided according to their types into many categories like:
Ø Numeric.
Ø String.
Ø Collection.
Ø Encoding.
Ø File system.
Ø Date and Time.
Ø Hash and Crypto, IP Network, Type Conversion.
Terraform Function - Examples
lookup({a="ay", b="bee"}, "a", "what?")  # returns "ay"; lookup deals with the map data type
element(["a", "b", "c"], 1)              # returns "b"; element deals with the list data type
file("${path.module}/hello.txt")         # returns the file contents, e.g. "Hello World"
timestamp()                              # e.g. "2018-05-13T07:44:12Z"
Terraform Functions
• Dynamic blocks mean that we have a repeated configuration and we want to
dynamically construct repeatable nested blocks instead of writing many
repeated blocks.
• Dynamic blocks are supported inside resource, data, provider, and
provisioner blocks.

Without a dynamic block:

egress {
  from_port   = 80
  to_port     = 80
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}
egress {
  from_port   = 8080
  to_port     = 8080
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}
egress {
  from_port   = 443
  to_port     = 443
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}

With a dynamic block:

variable "external_ports" {
  type    = list(any)
  default = ["80", "8080", "443"]
}

dynamic "egress" {
  for_each = var.external_ports
  content {
    from_port   = egress.value
    to_port     = egress.value
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
• The for_each argument provides what to iterate over.
• The iterator argument (optional) sets the name of a temporary variable that
represents the current element of the complex value.
Ø If omitted, the name of the variable defaults to the label of the dynamic
block (”egress" in the previous example).

variable "external_ports" {
  type    = list(any)
  default = ["80", "8080", "443"]
}

dynamic "egress" {
  for_each = var.external_ports
  iterator = port
  content {
    from_port   = port.value
    to_port     = port.value
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
Dynamic Blocks
• In any programming language, comments are used to give a description about the
purpose of the code below it or to give some notes for the one who reads the code.
• The Terraform language supports three different syntaxes for comments:
Ø # begins a single-line comment, ending at the end of the line.
Ø // also begins a single-line comment, as an alternative to #.
Ø /* and */ are start and end delimiters for a comment that might span over multiple
lines.
/*
# 5- Create EIP for NAT Gateway
resource "aws_eip" "nat_gateway_eip" {
  vpc        = true
  depends_on = [aws_internet_gateway.internet_gateway]
  tags = {
    Name = "project_ngw_eip"
  }
}
*/
• Modules are containers for multiple resources that are used together.
• A module consists of a collection of .tf and/or .tf.json files kept together in a directory.
• Modules are the main way to package and reuse resource configurations in Terraform.
• Modules can be referenced in your code and can be reused in several code parts.
• The original module (the one in the main working directory) is called the Root module.
• The module that is called or referenced inside the Root module is called the Child
module.
module "ec2module" {
source = "../../modules/ec2"
# type = "t2.large"
}
Local Modules
• Locals are used to avoid repetitive static values inside our configuration.
• The main use case for locals inside a Child Module is to prevent users assigning
their own values in their Root Module and make them stick to the values
assigned inside the Child modules.
Ø This is to prevent users from overriding the values assigned by Child modules.
Referencing Child
Module Outputs
• Verified modules are always maintained by HashiCorp in order to have them up-to-date
and compatible with both Terraform and their respective providers.
• By default, only verified modules are shown in search results.
Ø Using filters, we can view unverified modules.
• The syntax for specifying a registry module is <NAMESPACE>/<NAME>/<PROVIDER>.
Ø For example: hashicorp/consul/aws.
Terraform Workspaces
1. Using Terraform, create three local workspaces named “dev”, “staging”, and “prod”.
2. Create an EC2 instance configuration file which will change its instance type
according to the chosen workspace as follows:
“dev” workspace will set the instance type to t2.micro.
“staging” workspace will set the instance type to t2.medium.
“prod” workspace will set the instance type to t2.large.
3. After finishing, please remove all your created resources.
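One way to sketch this lab is to key a map variable off the built-in terraform.workspace value; the AMI id is illustrative:

```hcl
variable "instance_types" {
  type = map(string)
  default = {
    dev     = "t2.micro"
    staging = "t2.medium"
    prod    = "t2.large"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-082b5a644766e0e6f" # illustrative AMI id
  # Fall back to t2.micro if run in an unlisted workspace (e.g. "default").
  instance_type = lookup(var.instance_types, terraform.workspace, "t2.micro")
}
```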
https://github.com/github/gitignore/blob/main/Terraform.gitignore
To import a module from a GitHub repo, we need to use the source Keyword
followed by either the HTTPS or SSH path.
module "consul" {
  source = "github.com/hashicorp/example"         # HTTPS
}

module "consul" {
  source = "git@github.com:hashicorp/example.git" # SSH
}
• By default, Terraform will clone and use the default branch (referenced by HEAD) in
the selected repository.
• We can override this behavior using the ref argument.
module "myvcs_repo" {
source = "github.com/enggalal/dolfined_repo.git?ref=development"
}
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}
Implementing S3 Backend
• State locking prevents two users from updating the terraform state at the
same time.
Ø This is very important in team collaboration scenarios to avoid write errors
and conflicts in the .tfstate file.
• If state locking is supported by your backend, Terraform automatically locks
the state file for all operations that could write state.
terraform {
  backend "s3" {
    bucket         = "dolfined123456789"
    key            = "dev/terraform.tfstate"
    region         = "us-east-2"
    profile        = "dev_admin"
    dynamodb_table = "state_lock_table"
  }
}
https://cloud.hashicorp.com/products/terraform/pricing
• Terraform Cloud and Terraform Enterprise are different distributions of the same
application.
• Terraform Enterprise is a self-hosted distribution of Terraform Cloud.
Ø It offers enterprises a private instance of the Terraform Cloud application
Ø It has no resource limits
Ø Offers additional enterprise-grade architectural features like audit logging and
SAML single sign-on.
Creating A Terraform
Cloud Account
• Remote plan and apply use variable values from the associated Terraform
Cloud workspace.
• Authentication and credentials are configured on Terraform cloud and not
on the local Machine.
Implementing Cli-driven
Workspaces with a
Terraform Cloud Backend
• Hard mandatory is the default enforcement level; it should be used in situations
where an override is not possible.
• Never put secret values, like passwords or access tokens, in .tf files or
other files that are checked into source control, whether local or remote
(especially remote, like a VCS).
• Do not store secrets in plain text.
• Mark variables as sensitive so Terraform won’t output the value in the Terraform CLI.
• Remember that this value will still show in the Terraform state file.
variable "phone_number" {
type = string
sensitive = true
default = "1234-5678"
}
output "phone_number" {
value = var.phone_number
sensitive = true
}
• Another way to protect secrets is to simply keep plain text secrets out of your
code by taking advantage of Terraform’s environment variables.
• By setting the TF_VAR_<name> environment variable, Terraform will use that
value rather than having to embed that directly in the code.
export TF_VAR_phone_number="1234-5678"
unset TF_VAR_phone_number
• Another way to protect secrets is to store them in a secrets management solution,
like HashiCorp Vault.
• By storing them in Vault, you can use the Terraform Vault provider to quickly retrieve
values from Vault and use them in your Terraform code.
• We can download HashiCorp Vault for our operating system at vaultproject.io.
https://www.vaultproject.io/docs/install
• Launch two EBS-backed EC2 instances, one in each of the two private
subnets above (10.0.100.0/24 and 10.0.200.0/24).
Ø The instances will serve as the web and application tiers.
Ø Ensure that the EBS volumes of these instances are encrypted at rest.
Ø The instances will have the user data script (shown in the last slide) run
at launch time.
Ø The security group assigned to the instances should use the name
webSG and must allow ports ssh (22), http (80) and https (443) in the
inbound direction.
# The bash script (user data) to use for this hands-on lab

# Web/app instance 1:
#!/bin/bash
yum update -y
yum install httpd -y   # installs the apache (httpd) service
systemctl start httpd  # starts the httpd service
systemctl enable httpd # enables httpd to auto-start at system boot
echo " This is server *1* in AWS Region US-EAST-1 in AZ US-EAST-1B " > /var/www/html/index.html

# Web/app instance 2:
#!/bin/bash
yum update -y
yum install httpd -y
systemctl start httpd
systemctl enable httpd
echo " This is server *2* in AWS Region US-EAST-1 in AZ US-EAST-1A " > /var/www/html/index.html
• Launch a NAT gateway in each of the two availability zones above to allow the
two instances to access the internet for updates.
• Adjust the private subnets’ route tables to route the update traffic through the
NAT Gateway.
• Create a target group with the name webTG and add the two application
instances to it.
• The target group will use the port 80 (HTTP) for traffic forwarding and health
checks.
• Launch an application load balancer that will load balance to these two
instances using HTTP.
Ø The application load balancer must be enabled in the two public subnets
you have configured above.
• Adjust the security group of the web/app instances to allow inbound traffic
only from the application load balancer security group as a source.
• The ALB security group (ALBSG) must allow outbound http to the web/app
security group (webSG)
• The ALBSG must allow inbound traffic from the internet on port http.
• Configure a target tracking Auto Scaling group that will ensure elasticity and
cost effectiveness. The Auto Scaling group should monitor the two
instances and be able to add instances on demand and replace failed
instances.
• Launch a Multi AZ RDS database and ensure that its security group will only
allow access from the web/app tier security group above.
• Test to ensure that you can get to the index.html message on the instances through
the load balancer. If it works, congratulations on finishing this amazing project on
AWS.
• Once completed successfully, please remember to destroy your deployed resources
to avoid any surprise charges.
Main Requirements :
• The Jenkins server must be deployed on an EC2 instance.
• The EC2 instance should be accessible via the internet on port 80.
• The EC2 instance should be accessible using SSH.
• Terraform is used to implement this installation.
Terraform Graph
Terraform Get
# Implicit dependency
resource "aws_eip" "myeip" {
  vpc      = true
  instance = aws_instance.myec2.id
}

resource "aws_instance" "myec2" {
  instance_type = "t2.micro"
  ami           = "ami-082b5a644766e0e6f"
}

# Explicit dependency
resource "aws_s3_bucket" "example" {
  bucket     = "dolfined123456789"
  depends_on = [aws_instance.myec2]
}
https://developer.hashicorp.com/terraform/registry/modules/publish
module "iam" {
source = "terraform-aws-modules/iam/aws"
version = "5.24.0"
}
module "s3-webapp" {
source = "app.terraform.io/hashicorp-learn/s3-webapp/aws"
name = var.name
region = var.region
prefix = var.prefix
version = "1.0.0"
}
• The null_resource resource implements the standard resource lifecycle but takes
no further action; no resources are created on the cloud.
• The triggers argument (optional) allows specifying an arbitrary set of values that,
when changed, will cause the null resource to be replaced or executed again.
• As long as the trigger value is the same, the trigger will not cause the provisioner
to be executed.
• It can be used with local-exec, remote-exec, or a data block.
• We can run shell commands, Python scripts, execute commands, and run Ansible
playbooks inside it.
Null Resources
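A hedged sketch of a null_resource whose provisioner re-runs only when a watched file changes; the file name app.conf is an assumption:

```hcl
resource "null_resource" "config_watcher" {
  # Re-run the provisioner whenever the hash of this (assumed) file changes.
  triggers = {
    config_hash = filemd5("${path.module}/app.conf")
  }

  provisioner "local-exec" {
    command = "echo 'configuration changed'"
  }
}
```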
https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle
variable "server_ami" {
  type        = string
  description = "The id of the machine image (AMI) to use for the EC2 instance."

  validation {
    # substr(..., 0, 4) extracts "ami-" (four characters) for the prefix check.
    condition     = length(var.server_ami) > 4 && substr(var.server_ami, 0, 4) == "ami-"
    error_message = "The server_ami value must be a valid AMI id, starting with \"ami-\"."
  }
}
Variable Validation
https://www.hashicorp.com/certification/terraform-associate
• The knowledge, examples, quizzes, and projects in this course are enough to pass
the exam, when mastered.
• Additional tutorials are available:
https://developer.hashicorp.com/terraform/tutorials/certification/associate-review
• As needed, you can also read more on some topics that you need to know more about
here:
https://developer.hashicorp.com/terraform/tutorials/certification/associate-study
• Recommended – Use additional practice questions.