Terraform

The document outlines a Terraform course led by Ahmed Galal, covering Infrastructure as Code (IaC) concepts, Terraform fundamentals, and practical applications. It includes detailed sections on Terraform installation, configuration files, workflows, providers, resources, and hands-on labs. The course aims to equip participants with the skills to manage infrastructure efficiently using Terraform's capabilities.

Uploaded by Mahmoud Rashad

Terraform Course

Ahmed Galal
MCSE, CCIE, CEH, CISSP
Network Solution Architect.
Course Content Flow

© DolfinED All rights reserved


Content Flow – A Logical Progression – Scaffolding Approach

(The original slide presents the course topics as a flow diagram; the recoverable topic labels, roughly from fundamentals to advanced, are:)

• Terraform Config File, Resources & Providers, Setting Block, Terraform Workflow, Local State, Terraform Provisioners
• Terraform format & Validation, Debug, Terraform Import, Terraform Taint, Data Sources, Multiple Provider Configuration
• Terraform Basics, Terraform HCL Functions, Workspaces, Modules, Terraform Graph, Built-in Functions, Team Collaboration with Terraform Cloud
• Terraform Air-Gapped Systems, HashiCorp Vault, Cloud Backend, Sentinel, Capstone Project-1, Capstone Project-2

Terraform Fundamentals

What is Terraform and its Advantages

What is IaC



Introduction to
Terraform



Section Outline

In this section, we will learn:


• What is Infrastructure as Code (IaC).
• Benefits of IaC.
• Terraform Overview.
• Why Terraform.
• Terraform Types.
• Terraform Installation.
• Terraform Components.



What is Infrastructure
as Code (IaC)



What is Infrastructure as Code (IaC)?

• Infrastructure as Code (IaC) means managing your IT
infrastructure using configuration files.
• Infrastructure as Code (IaC) is the management of infrastructure
(networks, virtual machines, load balancers, etc.) in a descriptive
model, with versioning of its configuration files.
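As a minimal sketch of the idea (the resource and AMI ID below are illustrative, not from the course), a few lines of Terraform HCL can describe a server that would otherwise be created by hand, and the file itself can be version-controlled:

```hcl
# A hypothetical server defined as code: this file, not manual console
# clicks, is the source of truth for the infrastructure.
resource "aws_instance" "web" {
  ami           = "ami-082b5a644766e0e6f" # illustrative AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "iac-demo"
  }
}
```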



Benefits of IaC

• Speed: automation increases the provisioning speed of the infrastructure's
development, testing, and production environments.
• Consistency: since it is based on code, it generates the same result every
time.
• Cost: it lowers the cost of infrastructure management, as everything is
automated and organized, freeing time for other manual tasks and
higher-value work.
• Security: because every deployment is based on configuration files or
templates, these files can be checked for security leaks using tools such as
Sentinel or other policy-check tools.
• Version control: since the infrastructure configurations are codified, we can
check them into a version control system like GitHub and start versioning them.



What is Terraform?



Terraform Overview

Terraform:
• Is a tool for building, changing, and versioning infrastructure safely and
efficiently.
• Is a declarative provisioning tool based on the Infrastructure as Code
paradigm.
• Customers define a desired state, and Terraform works to ensure that state is
maintained.
• Allows customers to define infrastructure through repeatable templates.
• Is open source, built and maintained by HashiCorp.
• Uses its own syntax, HCL (HashiCorp Configuration Language).
• Is written in Go (Golang).



Why Terraform?



Using Terraform – The Benefits

• Terraform lets customers define infrastructure in config/code and enables them
to rebuild, change, and track infrastructure changes with ease.
• It is completely platform agnostic.
• Enables customers to apply coding practices such as keeping the code in a
version control system and writing automated tests/tasks.
• Has a big support community and is open source.
• Speed and operations are exceptional.
• Customers can validate changes before applying them (a dry run).



Installing Terraform



Terraform Tool Download

• The Terraform tool supports many platforms, such as macOS, Windows,
and Linux.
• It can be downloaded from this URL:
https://developer.hashicorp.com/terraform/downloads



Hands-on Labs (HoLs)

Installing Terraform on Windows



Hands-on Labs (HoLs)

Install VS Code & Terraform


Environment (Plugin) Setup



Terraform Components



How Terraform works with Plugins

• Terraform is logically split into two main parts: Terraform Core and
Terraform Plugins.
• Terraform Core uses remote procedure calls (RPC) to communicate with
Terraform Plugins and offers multiple ways to discover and load the
required plugins.
• Terraform Plugins expose an implementation for a specific service, such as
AWS, or provisioner, such as bash.



Terraform Core

• Terraform Core is a statically compiled binary written in the Go
programming language. The compiled binary is the command-line tool
(CLI) terraform, the entry point for anyone using Terraform.
• The primary responsibilities of Terraform Core are:
Ø Infrastructure as code: reading and interpolating configuration files
and modules.
Ø Resource state management.
Ø Construction of the Resource Graph.
Ø Plan execution.
Ø Communication with plugins over RPC.



Terraform Plugins

• Terraform Plugins are written in Go and are executable binaries invoked
by Terraform Core over RPC.
• Each plugin exposes an implementation for a specific service, such as
AWS, or provisioner, such as bash.
• All providers and provisioners used in Terraform configurations are
plugins. They are executed as separate processes and communicate with
the main Terraform binary over an RPC interface.
• Terraform Plugins are responsible for the domain-specific implementation
of their type.



Terraform Plugins – Provider & Provisioner

• The primary responsibilities of Provider plugins are:
Ø Initialization of any included libraries used to make API calls.
Ø Authentication with the infrastructure provider.
Ø Defining resources that map to specific services.

• The primary responsibilities of Provisioner plugins are:
Ø Executing commands or scripts on the designated resource after
creation, or on destruction.



Quiz

Section Assessment



Terraform
Fundamentals



Section Outline

In this section, we will learn:


• Terraform Configuration File/Files.
• Resources & Providers.
• Terraform Setting Block.
• Terraform Workflow.
• Terraform Local State.
• Dealing with Large Infra.
• Terraform Provisioners.
• Terraform Data Sources.
• Terraform Aliases.
• Import Existing Resources.



Section Outline (cont.)

In this section, we will continue to learn:


• Terraform taint.
• Terraform Debug.
• Terraform format & Validation.
• Terraform Refresh.
• Terraform Show.
• Terraform Output.



Terraform
Configuration File(s)



Terraform Files - .tf Files

• Terraform uses text files with a .tf extension.
• The entire configuration can be a single .tf file.
• Resources are declared through a series of text files written in HCL
(the Terraform configuration language).
• You might have multiple Terraform files, including variable files
(.tfvars), in the same working directory or folder.
• These files would typically be stored in some kind of source control.



Terraform Files/ .tf Files (cont.)

• Terraform generally loads all the configuration files within the specified
directory in alphabetical order.
Ø The files loaded must end in either .tf or .tf.json.
• For large projects, we can have multiple configuration files in the same
directory.
Ø Each can represent a specific type of resource (e.g., ec2.tf, s3.tf,
variables.tf).
Ø This can help write cleaner code and make troubleshooting easier.
• Each resource has a local name that differentiates it from other resources.

resource "aws_instance" "web" {
  ami           = "ami-082b5a644766e0e6f"
  instance_type = "t2.micro"
}

resource "aws_instance" "app" {
  ami           = "ami-082b5a644766e0e6f"
  instance_type = "t2.micro"
}



Terraform Workflow
Introduction



Terraform Workflow

• A single workflow to plan, provision, and teardown resources.


• We have four main phases: init, plan, apply & destroy.



Terraform Workflow (cont.)

• terraform init: initializes the Terraform working directory and downloads
the required plugins and providers to the local machine.
• terraform plan: produces a speculative plan of the resources that will be
implemented.
• terraform apply: initiates the actual deployment of these resources.
• terraform destroy: destroys and removes any deployed resources.



Terraform Providers &
Resources



Providers

• Terraform relies on plugins called providers to interact with cloud
providers, SaaS providers, and other APIs.
• A provider adds a set of resource types and/or data sources that
Terraform can manage.
• You can find a list of publicly available providers in the Terraform
Registry, along with documentation on how to use them.
• Providers fall into the categories: Official, Verified, Community, and
Archived.
• Providers allow you to abstract away from the individual processes and
technology.



Providers (cont.)

We can define the provider either in the terraform block or inside a
provider block.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}



Resources

• Resources represent the individual services the provider can offer.
• E.g., resources in the AWS provider can be:
§ An EC2 instance, an IAM user, a VPC, or a VPC subnet.
• A resource block declares a resource of a given type ("aws_instance")
with a given local name ("web").
• The name is used to refer to this resource from within the same Terraform
module but has no significance outside that module's scope.

The resource syntax looks like:

resource "TF_given_type" "local_name" {
  argument1 = ""
  argument2 = ""
}

resource "aws_instance" "web" {
  ami           = "ami-082b5a644766e0e6f"
  instance_type = "t2.micro"
}

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_user



Resources (cont.)

• For each resource, there is a set of arguments and a set of attributes.
• To define a resource, some arguments must be declared when creating
the resource, while others are optional.
• Attributes represent the values that we get after the resource is created.

resource "aws_instance" "this" {
  ami = data.aws_ami.this.id
  instance_market_options {
    spot_options {
      max_price = 0.0031
    }
  }
  instance_type = "t4g.nano"
  tags = {
    Name = "test-spot"
  }
}

resource "aws_iam_user" "lb" {
  name = "loadbalancer"
  path = "/system/"

  tags = {
    tag-key = "tag-value"
  }
}



Resources (cont.)

• The resource type and name together serve as an identifier for a given
resource and must be unique within a module.
• To reference or recall any attribute of this resource inside its module, we
use the syntax "type.local_name.attribute_name".

resource "aws_s3_bucket" "mys3" {
  bucket = "dolfined123456789"
}

output "mys3bucket" {
  value = aws_s3_bucket.mys3.arn
}


Provider Authentication
- AWS



Provider Authentication Methods

• We can set up authentication using several methods, which are applied in
the following order:
Ø Parameters in the provider configuration.
Ø Environment variables.
Ø Profile credentials.
• We can also use AssumeRole, config files, and credentials files.

provider "aws" {
  profile = "customprofile"
}

provider "aws" {
  region     = "us-west-2"
  access_key = "my-access-key"
  secret_key = "my-secret-key"
}

provider "aws" {
  shared_config_files      = ["/Users/tf_user/.aws/conf"]
  shared_credentials_files = ["/Users/tf_user/.aws/creds"]
  profile                  = "customprofile"
}

$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
$ export AWS_REGION="us-west-2"



Hands-on Labs (HoLs)

Provider Authentication Methods-


Profile Method



Hands-on Labs (HoLs)

Terraform Resources Examples



Assignment

Creating AWS Resources-1



Assignment Tasks

1. Create an IAM user that has programmatic access with the appropriate
permissions to create EC2 instances, VPCs, and subnets. Use this user to do the
tasks below.
2. Create a new VPC and a new public subnet; use any supported CIDR blocks.
3. Create an EC2 instance of t2.micro type and assign a tag to it as follows:
Name = "dolfined_instance". Ensure the instance will be assigned a public IP.
4. Ensure the EC2 instance is created inside the newly created VPC public subnet
above.
5. Finally, destroy the Terraform-deployed infrastructure.



Output Block



Output Block

To output the attributes of the created resource, we can use the output
block to display the desired resource attributes.

resource "aws_instance" "myec2" {
  ami           = "ami-026b57f3c383c2eec"
  instance_type = "t2.micro"
  tags = {
    Name = "dolfined_demo"
  }
}

output "myec2_instance" {
  value = aws_instance.myec2.id
}


Output Block - Optional Arguments

• Output blocks can optionally include description, sensitive, and
depends_on arguments.
• We can use description to briefly describe the purpose of each value.
• An output can be marked as containing sensitive material using the
optional sensitive argument, which will hide it from the CLI screen.
• However, the value can still be shown from the state file.

output "instance_ip_addr" {
  value       = aws_instance.server.private_ip
  description = "The private IP address of the main server instance."
}

output "db_password" {
  value       = aws_db_instance.db.password
  description = "The password for logging in to the database."
  sensitive   = true
}



Output Block - Optional Arguments (cont.)

• When using the depends_on argument, it means the output value depends
on some other values.
• In this example, the security group rule should be created first, before
displaying the output of the private IP address.
• The depends_on argument should be used only as a last resort.
Ø When using it, always include a comment explaining why it is being
used. This will help future maintainers understand the purpose of the
additional dependency.

output "instance_ip_addr" {
  value       = aws_instance.server.private_ip
  description = "The private IP address of the main server instance."

  depends_on = [
    # Security group rule must be created before this IP address could
    # actually be used, otherwise the services will be unreachable.
    aws_security_group_rule.local_access,
  ]
}



Terraform Output

• The terraform output command is used to extract the value of an output
variable from the state file.
• With no additional arguments, output will display all the outputs for the
root module. If an output NAME is specified, only the value of that output
is printed.

output "instance_ips" {
  value = aws_instance.web.*.public_ip
}

output "lb_address" {
  value = aws_alb.web.public_dns
}

output "password" {
  sensitive = true
  value     = var.secret_password
}



Hands-on Labs (HoLs)

Terraform Output Example



Reference Resources



Referencing a Resource Into another Resource

• Attributes can also act as an input to other resources being created by
Terraform.
• In this example, we created a new security group resource and a new EIP
resource.
• We then referenced the EIP resource when configuring a source IP in a
permit rule inbound to the created security group.

resource "aws_eip" "lb" {
  vpc = true
}

output "eip" {
  value = aws_eip.lb.public_ip
}

resource "aws_security_group" "allow_tls" {
  name = "dolfined_tls"
  ingress {
    description = "TLS from VPC"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["${aws_eip.lb.public_ip}/32"]
  }
}



Referencing a Resource Into another Resource (cont.)

Another example of referencing a resource from another resource:
Ø Create a new EIP resource.
Ø Create a new NAT gateway.
Ø Associate the EIP with the NAT gateway.

resource "aws_eip" "nat_gateway_eip" {
  vpc = true
}

# Create NAT Gateway and associate an EIP to it
resource "aws_nat_gateway" "nat_gateway" {
  allocation_id = aws_eip.nat_gateway_eip.id
  subnet_id     = "subnet-08459ee0345277557"
}

output "mynat_gateway" {
  value = aws_nat_gateway.nat_gateway.id
}

output "elastic_ip" {
  value = aws_eip.nat_gateway_eip.public_ip
}



Hands-on Labs (HoLs)

Terraform Resource Reference Example



Assignment

Creating AWS Resources-2



Assignment Tasks

1. Create an IAM user account and assign the name "dolfined_user" to it.
2. Create a new NAT gateway and a new Elastic IP address, and assign it to the
newly created NAT gateway.
3. Display the EIP public IP on your terminal; also display the dolfined_user ARN
and NAT gateway ID on your terminal screen.
4. Finally, destroy all of your Terraform-deployed infrastructure.



Terraform Setting Block



Terraform Setting

• The special terraform configuration block type is used to configure some
behaviors of Terraform itself.
• Terraform settings are written inside a terraform block.
• The required_version setting accepts a Terraform version constraint string,
which specifies which versions of Terraform can be used with your
configuration.
• The required_providers block specifies all of the providers required by the
current module, mapping each local provider name to a source address
and a version constraint.

terraform {
  # ...
}

terraform {
  required_version = "<0.11"
}

terraform {
  required_providers {
    aws = {
      version = ">= 2.7.0"
      source  = "hashicorp/aws"
    }
  }
}



Dependency Lock File



Dependency Lock File

• The dependency lock file belongs to the configuration in the working
directory of the root module.
• The lock file is always named .terraform.lock.hcl.
• When terraform init is working on installing all of the providers needed for a
configuration, Terraform considers both the version constraints in the
configuration and the version selections recorded in the lock file.
• If a particular provider has no existing recorded selection, Terraform will select
the newest available version.
• If a particular provider already has a selection recorded in the lock file, Terraform
will always re-select that version for installation, even if a newer version has
become available.



Dependency Lock File - (cont.)

• We can override that behavior by adding the -upgrade option when
running terraform init.
• Terraform will also verify that each package it installs matches at least one
of the checksums it previously recorded in the lock file, if any, returning an
error if none of the checksums match.
• Each new lock file entry records several pieces of information:
ü Version.
ü Constraints.
ü Hashes.
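A representative .terraform.lock.hcl entry has the shape below (the version number and hash values are illustrative placeholders, not real checksums):

```hcl
# .terraform.lock.hcl — generated by "terraform init"; do not edit by hand.
provider "registry.terraform.io/hashicorp/aws" {
  version     = "4.67.0"   # the version terraform init selected
  constraints = ">= 2.7.0" # constraints taken from the configuration
  hashes = [
    # checksums used to verify future installs (values illustrative)
    "h1:EXAMPLEhash0000000000000000000000000000000=",
    "zh:0000000000000000000000000000000000000000000000000000000000000000",
  ]
}
```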



Hands-on Labs (HoLs)

Dependency Lock File



Terraform Workflow
Detailed



Init Phase

• The terraform init command initializes a working directory containing
Terraform configuration files.
• This is the first command that should be run after writing a new Terraform
configuration or cloning an existing one from a version control system.
• terraform init will download the plugins associated with the provider.
• terraform init -upgrade is used to upgrade to the latest acceptable version
of each provider.



Plan Phase

• The terraform plan command creates an execution plan, which lets you preview
the changes that Terraform plans to make to your infrastructure.
• By default, when Terraform creates a plan it:
Ø Reads the current state of any already-existing remote objects to make sure
that the Terraform state is up-to-date.
Ø Compares the current configuration to the prior state and notes any
differences.
Ø Proposes a set of change actions that should, if applied, make the remote
objects match the configuration.



Plan Phase (cont.)

• The function of terraform plan is speculative: its proposed changes are not
applied unless you save the plan and pass it to a terraform apply command.
• In an automated Terraform pipeline, applying a saved plan file ensures the
changes are the ones expected and scoped by the execution plan.
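A sketch of the saved-plan workflow (the plan file name tfplan is an arbitrary choice):

```shell
terraform plan -out=tfplan   # save the execution plan to a file
terraform apply tfplan       # apply exactly the saved plan, with no re-planning
```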



Apply Phase

• The terraform apply command is used to apply the changes required to reach the
desired state of the configuration, or the pre-determined set of actions generated
by a terraform plan execution plan.
• terraform apply -auto-approve - Skips interactive approval of plan before applying.



Destroy Phase

• The terraform destroy command is a convenient way to destroy all remote
objects managed by a particular Terraform configuration.

• terraform apply -destroy is an alias for the main command.

• terraform destroy with the -target flag allows us to destroy a specific
resource rather than all resources, as the main command does.

• terraform plan -destroy is used to preview the behavior of a terraform
destroy command as a destroy plan.



Terraform Local State



Terraform State

• Terraform stores the state of the infrastructure being created from the
.tf files.
• The TF configuration files represent the desired state, and the actual
infrastructure deployed through Terraform is called the current state.
• Terraform tries to ensure that the deployed infrastructure matches the
desired state.
• If there is a difference between the two, terraform plan presents a
description of the changes necessary to reach the desired state.



Terraform State (cont.)

• Terraform stores state about the managed infrastructure and configuration.


• This state is used by Terraform to:
Ø Map deployed resources to your configuration,
Ø Keep track of metadata, and
Ø Improve performance for large infrastructures
• Terraform uses this local state to create plans and make changes to your
infrastructure.



Terraform State (cont.)

• Prior to any operation, Terraform does a refresh to update the state with the
real infrastructure.
• This state is stored by default in a local file named "terraform.tfstate", but it
can also be stored remotely, which works better in a team-based work
environment.



Terraform State File Example
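The original slide shows a screenshot of a state file. A trimmed, illustrative terraform.tfstate fragment (all IDs and values are made up) has this shape:

```json
{
  "version": 4,
  "terraform_version": "1.5.0",
  "serial": 1,
  "resources": [
    {
      "mode": "managed",
      "type": "aws_instance",
      "name": "web",
      "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
      "instances": [
        {
          "attributes": {
            "ami": "ami-082b5a644766e0e6f",
            "id": "i-0123456789abcdef0",
            "instance_type": "t2.micro"
          }
        }
      ]
    }
  ]
}
```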



Hands-on Labs (HoLs)

Terraform Workflow example


with Local State overview



Dealing with Large
Infrastructures



Terraform State – A Large Infrastructure

• When you have a larger infrastructure, you will encounter issues related to
the API limits of a provider.
• To overcome this API limitation, we can divide our single-file configuration
into several smaller configurations where:
Ø Each one can be applied independently, and
Ø Each can have its own directory and its own state file.
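One possible layout under this approach (directory and file names are illustrative):

```text
infra/
├── network/            # VPC, subnets — applied independently
│   ├── main.tf
│   └── terraform.tfstate
├── compute/            # EC2 instances
│   ├── main.tf
│   └── terraform.tfstate
└── storage/            # S3 buckets
    ├── main.tf
    └── terraform.tfstate
```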



Setting Refresh To False

• If we have a single large state file, we can prevent Terraform from querying
the current state during operations like terraform plan.
• This can be achieved with the -refresh=false flag.



Choosing A Specific Target

• The -target=resource flag can be used to target a specific resource, and it
can be used with the refresh flag as well.
• It is generally used to operate on isolated portions of a very large
configuration or a large state file.



Hands-on Labs (HoLs)

Dealing With A Large Infrastructure



Assignment

Creating AWS Resources-3



Assignment Tasks

1. Using Terraform, create an S3 bucket with a unique name of your choice.
2. The access to this bucket is private.
3. Tag the bucket with the tag "terraform_bucket".
4. Create a security group which has an inbound permit rule for the
192.168.120.0/24 subnet.
5. Ensure that your settings will restrict Terraform to only the version range v1.5.x.
6. After creating the above resources, update the tag of your bucket to be
"terraform_testbd"; ensure Terraform refreshes only the S3 bucket resource from
the resources in the state file.
7. Destroy the security group resource only and keep the S3 bucket resource in
your deployed infrastructure. This should be reflected in your state file.
8. Finally, destroy all your deployed Terraform infrastructure.



Terraform Provisioners



Provisioners Overview

• Provisioners in Terraform are used to perform specific actions on the local
machine or on a remote machine in order to prepare servers or other
infrastructure objects for service.
• As an example, we can install an Apache server after creating an AWS EC2
instance resource, using a remote provisioner.
Ø It is the same concept as user data in AWS.



Local Provisioners (Local-Exec)

• local-exec provisioners allow us to invoke a local executable after the
resource is created.
• As an example, we can get the private IP address of a created EC2 instance
and display it on the local machine, or save some attributes locally in a file
after the resource is created.
• The provisioner block is located inside the resource block.

resource "aws_instance" "myec2" {
  ami           = "ami-0dfcb1ef8550277af"
  instance_type = "t2.micro"
  provisioner "local-exec" {
    command = "echo ${aws_instance.myec2.private_ip} >> private_ips.txt"
  }
}



Remote Provisioner (Remote-Exec)
• The remote-exec provisioner invokes a script on a remote resource after it
is created.
• For example, this can be used to run a configuration management tool or
to bootstrap into a cluster.
• Most provisioners require access to the remote resource via SSH or
WinRM (Windows Remote Management).
• A self object represents the connection's parent resource and has all of
that resource's attributes.
Ø For example, use self.public_ip to reference an aws_instance's
public_ip attribute.
• In the example below, we install NGINX after creating the EC2 instance.

resource "aws_instance" "myec2" {
  ami           = "ami-0dfcb1ef8550277af"
  instance_type = "t2.micro"
  key_name      = "ec2_key"

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/ec2_key")
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo amazon-linux-extras install -y nginx1",
      "sudo systemctl start nginx"
    ]
  }
}



Provisioner Types: Creation-Time Provisioner

• By default, provisioners run when the resource they are defined within is
created.
• Creation-time provisioners are only run during creation, not during
updating or any other lifecycle stage.
• If a creation-time provisioner fails, the resource is marked as tainted.
Ø A tainted resource will be planned for destruction and recreation upon
the next terraform apply.
Ø You can change this behavior by setting the on_failure attribute.
• The on_failure attribute has two settings:
Ø continue - ignore the error and continue with creation or destruction.
Ø fail (the default) - raise an error, stop, and taint the resource.



Creation-Time Provisioner Example

resource "aws_instance" "myec2" {
  ami           = "ami-0dfcb1ef8550277af"
  instance_type = "t2.micro"
  provisioner "local-exec" {
    command    = "echo ${aws_instance.myec2.private_ip} >> private_ips.txt"
    on_failure = continue
  }
}



Provisioner Types: Destroy-Time Provisioner

• If when = destroy is specified, the provisioner will run when the resource it is
defined within is destroyed.
• Destroy provisioners are run before the resource is destroyed.
Ø If they fail, Terraform will error and re-run the provisioners again on the
next terraform apply.
Ø Due to this behavior, care should be taken for destroy provisioners to be
safe to run multiple times.

resource "aws_instance" "myec2" {
  ami           = "ami-0dfcb1ef8550277af"
  instance_type = "t2.micro"
  provisioner "local-exec" {
    command = "echo 'my instance will be destroyed'"
    when    = destroy
  }
}



Hands-on Labs (HoLs)

Terraform Provisioners Examples



Assignment

Creating AWS Resources-4



Assignment Tasks

1) Use Terraform to create an EC2 instance and install an Apache server on it
using a Terraform remote provisioner.
2) Use a Terraform local provisioner to save the private IP address of the created
instance on your local machine.
3) After finishing the above tasks, remove any deployed infrastructure without
any interactive prompt.



Terraform Data Sources



Data Sources Overview

• Data sources allow data to be fetched or collected for use elsewhere in Terraform
configuration.
• The fetched data can come from outside Terraform or from another,
separate Terraform configuration.
• A data source is accessed via a special kind of resource known as a data
resource, which is declared using a data block.

#Retrieve the list of AZs in the current AWS region


data "aws_availability_zones" "available" {}
data "aws_region" "current" {}

output "available_zones" {
value = data.aws_availability_zones.available.names[*]
}
output "current_region" {
value = data.aws_region.current.id
}
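Beyond feeding outputs, a data source result can feed directly into a resource. A sketch (the AMI filter values are illustrative, not from the course):

```hcl
# Look up the most recent Amazon Linux 2 AMI instead of hard-coding an ID.
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# The instance consumes the data source via data.<type>.<name>.<attribute>.
resource "aws_instance" "web" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t2.micro"
}
```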



Hands-on Labs (HoLs)

Terraform Data Sources Example



Terraform Alias



Multiple Provider Configurations

Sometimes we need to deploy resources in multiple or different regions
within the same cloud provider.



Multiple Provider Configurations (cont.)

• We may need to create multiple provider blocks with the same provider's name.
• For each additional non-default configuration, use the alias meta-argument to
provide an extra name segment.
• A provider block without an alias argument is the default configuration for that
provider

provider "aws" {
region = "us-east-1"
profile = "dev_admin"
}
# Additional provider configuration for west coast region; resources can reference this as `aws.west`.

provider "aws" {
alias = "west"
region = "us-west-2"
profile = "dev_admin"
}
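
A resource then opts into the aliased configuration with the provider meta-argument; otherwise the default provider block is used. A minimal sketch (the resource name and AMI ID below are placeholders):

```hcl
# Hypothetical resource that selects the aliased "west" provider configuration.
resource "aws_instance" "west_server" {
  provider      = aws.west                 # deploys via the us-west-2 configuration
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t2.micro"
}
```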



Multiple AWS Profiles in
Terraform



Multiple AWS Profile Configurations

What if we need to deploy resources using different AWS accounts or different AWS
users in the same configuration project?



Multiple AWS Profile Configurations

• We can add multiple configurations for a given provider.


• To do this, we include multiple provider blocks with the same provider's name
Ø Set the alias meta-argument to an alias name to use for each additional
configuration.
Ø Use the profile argument mapping each user or AWS account.

# The default provider configuration; Terraform uses it as the default, and it can be referenced as `aws`.


provider "aws" {
region = "us-east-1"
profile = "dev_admin"
}
# Additional provider configuration for west coast region; resources can
# reference this as `aws.west`.
provider "aws" {
alias = "west"
region = "us-west-2"
profile = "user123"
}



Hands-on Labs (HoLs)

Terraform Alias Examples



Import Existing
Resources



Import Existing Resource

• Terraform is able to import existing infrastructure.
• This allows you to take resources
you have created by some other
means and bring them under
Terraform management.
• The imported resources will be
managed by Terraform and will be
listed in the Terraform state file.
• Syntax: terraform import [options] ADDRESS ID
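
For example, to adopt an existing EC2 instance, a matching resource block must already exist in the configuration, because terraform import only writes to state, not to config files. A minimal sketch (resource name hypothetical):

```hcl
# Placeholder resource block for an instance created outside Terraform;
# its arguments are filled in after inspecting the imported state.
resource "aws_instance" "legacy" {
  # (arguments added after import)
}
```

Then a command of the form `terraform import aws_instance.legacy <instance-id>` records the instance in the state file under that address.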



Hands-on Labs (HoLs)

Terraform Import Example



Import new (v1.5 and
later)



Import Blocks (New after Terraform v1.5)

• Terraform v1.5.0 and later supports import blocks.


• Unlike the terraform import command, you can use import blocks to import more than one
resource at a time, and you can review imports as part of your normal plan and apply
workflow.
• Import blocks are only available in Terraform v1.5.0 and later.
• Use the import block to import existing infrastructure resources into Terraform, bringing
them under Terraform's management.
• Once imported, Terraform tracks the resource in your state file. You can then manage the
imported resource like any other, updating its attributes and destroying it as part of a
standard resource lifecycle



Import Blocks (Cont.)

You can add an import block to any Terraform configuration file. A common
pattern is to create an imports.tf file, or to place each import block beside
the resource block it imports into.

import {
  to = aws_instance.example
  id = "i-abcd1234"
}

resource "aws_instance" "example" {
  name = "hashi"
  # (other resource arguments...)
}

This import block defines an import of the AWS instance with the ID "i-abcd1234" into the aws_instance.example resource in the root module.



Import Blocks (Cont.)

The -generate-config-out flag lets Terraform generate configuration for imported
resources during the plan stage.
ü terraform plan -generate-config-out=myconfig.tf >>>> generates the
configuration of manually created resources in the file "myconfig.tf"
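
Putting the two features together, a minimal sketch (the bucket name below is hypothetical): declare the import block, then let the plan generate the resource configuration.

```hcl
# Declare what to import; then running
#   terraform plan -generate-config-out=generated.tf
# writes a matching aws_s3_bucket.legacy block into generated.tf.
import {
  to = aws_s3_bucket.legacy
  id = "my-legacy-bucket"   # hypothetical bucket name
}
```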



Hands-on Labs (HoLs)

Terraform Import [New]



Terraform Taint



Taint Command

• The terraform taint command informs Terraform that a particular object has
become degraded or damaged.
• Terraform represents this by marking the object as "tainted" in the Terraform state
and Terraform will propose to replace it in the next plan you create.
• This command will not modify the infrastructure but does modify the state file in
order to mark the resource as tainted.



Taint Command (cont.)

• Once a resource is marked as tainted, the next plan will show that the resource will
be destroyed and recreated.
Ø The next apply will implement this change.
• A use case: when manual changes occur outside Terraform's management and you
want to control all changes within Terraform only.
• This command is now replaced by the terraform apply -replace option.

terraform taint "aws_instance.example[0]"

terraform apply -replace="aws_instance.example[0]"



Hands-on Labs (HoLs)

Terraform Taint Example



Terraform Commands



Terraform fmt

The terraform fmt command rewrites Terraform configuration files into a consistent,
canonical format, making them easy to read and understand.



Terraform validate

The terraform validate command checks whether a configuration is syntactically
valid and internally consistent.
• It can catch various issues including unsupported arguments, undeclared
variables, and others.



Terraform Show

• The terraform show command is used to provide
human-readable output from a state or plan file.
• This can be used to inspect a plan to ensure that
the planned operations are as expected, or to
inspect the current state as Terraform sees it.
• Machine-readable output is generated by adding
the -json command-line flag.



Terraform Refresh

• The terraform refresh command is used to reconcile the state Terraform knows
about (via its state file) with the real-world infrastructure.
• This does not modify the implemented infrastructure but does modify the state file.
• Example use case: when some resources have been changed manually outside
terraform management, this command can reconcile the state file to match with the
current implemented infrastructure.
• The command "terraform apply -refresh-only" achieves the same behavior.



Terraform Debug (Troubleshooting)

• Terraform has many levels of built-in logs that can be enabled by setting the TF_LOG
environment variable to one of the below key words.
• You can set TF_LOG to one of the log levels TRACE, DEBUG, INFO, WARN or ERROR to
change the verbosity of the logs.
• The most detailed verbose log level is TRACE.
• You can extract the log and save it to a local file using TF_LOG_PATH environment
variable when log is enabled.

export TF_LOG=TRACE
export TF_LOG_PATH=./logs.txt



Hands-on Labs (HoLs)

Terraform Commands - Examples



Assignment

Creating AWS Resources-5



Assignment Tasks

1. Create a user with admin access privileges and use this user account to create
resources.
2. Using Terraform, create two EC2 instances, one in the us-east-1 and the other in
the us-west-1 region, respectively.
3. Using the AWS Console, create a new EC2 instance in the us-east-1 region and tag
it with "manually_created".
4. You are required to control all your resources from Terraform on your local
machine. Perform the needed configuration actions to achieve this.
5. At the end, your state file should include all the created EC2 instances; use your
terminal to list all your resources without accessing your state file.
6. Your colleague manually installed unwanted applications on one of your EC2
instances using the AWS Console, and you want to revert it to its earlier state.
What could you do to maintain the desired state of that instance? Please take the
needed action.
7. Finally, destroy your deployed infrastructure.
Terraform – HCL
Basics with AWS



Section Outline

In this section, we will learn:


• Variables.
• Variable Assignment Approaches.
• Variable Definition Precedence.
• Variables Data types.
• String Interpolation.
• Variables Names Constraints.
• Count Parameters.
• For-Each Meta Argument.



Section Outline (cont.)

In this section, we will learn:


• Splat Expression.
• Conditionals.
• Local Values.
• Terraform Built in Functions.
• Dynamic Blocks.
• Comments in Terraform HCL.



Terraform Variables



Terraform Variables

• A variable is a value that can change,


depending on conditions or on
information passed to the program.
• A Variable in any coding language can
be reused in many code parts without
having to write its static value many
times in our code.
• By changing the variable value, this
maps automatically in many code
parts where the variable was used.



Terraform Variables - Example
resource "aws_security_group" "var_demo" {
  name = "dolfined-variables"
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [var.vpn_ip]
  }
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = [var.vpn_ip]
  }
  ingress {
    from_port   = 53
    to_port     = 53
    protocol    = "tcp"
    cidr_blocks = [var.vpn_ip]
  }
}

variable "vpn_ip" {
  default = "118.10.10.118/32"
}

Input variables are created by a variable block, and you reference them as var.variable_name inside your config file.



Terraform Variable
Assignment
Approaches



Variable Assignment Approaches

We can assign variables in many ways inside Terraform, such as:

• Environment variables:
  export TF_VAR_image_id=ami-abc123

• Command-line flags:
  terraform apply -var="image_id=ami-abc123"

• From a customized file (*.tfvars):
  terraform apply -var-file="testing.tfvars"

  .tfvars file:
  image_id = "ami-abc123"
  availability_zone_names = [
    "us-east-1a",
    "us-west-1c",
  ]

• Variable defaults (input variable):
  variable "vpn_ip" {
    default = "118.10.10.118/32"
  }


Variable Definition
Precedence



Variable Assignment Approaches

Terraform loads variables in the following order, with later sources taking precedence over
earlier ones:
Ø Environment variables
Ø The terraform.tfvars file, if present.
Ø The terraform.tfvars.json file, if present.
Ø Any *.auto.tfvars or *.auto.tfvars.json files, processed in lexical order of their
filenames.
Ø Any -var and -var-file options on the command line, in the order they are provided.
• If the same variable is assigned multiple values, Terraform uses the last value it finds,
overriding any previous values.
Ø Note that the same variable cannot be assigned multiple values within a single
source.



Hands-on Labs (HoLs)

Variables



Variable Data Types



Variable Data Types

• string: a sequence of Unicode characters representing some text, like "hello".

• number: a numeric value. The number type can represent both whole numbers like
15 and fractional values like 6.283185.

• bool: a Boolean value, either true or false.


Ø bool values can be used in conditional logic.



Variable Data Types (cont.)

• list (or tuple): a sequence of values, like ["us-west-1a", "us-west-1c"].

Ø Elements in a list or tuple start at index zero and are ordered.

• map (or object): a group of values identified by named labels, like {name = "Mabel",
age = 52}.

• set: the same as a list, but its elements are unordered and duplicates are not
allowed, e.g., {"us-east-1", "us-east-2"}.
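
As an illustration, a set-typed variable can be declared as follows (the variable name is hypothetical); a list literal supplied as its value is converted to a set, which drops ordering and collapses duplicates:

```hcl
# Hypothetical set-typed variable; order is not preserved and
# duplicate elements collapse when the value is converted to a set.
variable "regions" {
  type    = set(string)
  default = ["us-east-1", "us-east-2"]
}
```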



Variable Data Types (cont.)

• The type argument in a variable block allows you to restrict the type of value that
will be accepted as the value for a variable.
• To extract a value from a list data type, we need to reference it by index number.
• To extract a value from a map data type, we need to reference it by key name.

variable "bucket_var" {
  default = "dolfined98765412345"
  type    = string
}

variable "az_var" {
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
  type    = list(any)
}

variable "instance_types" {
  type = map(any)
  default = {
    us-east-1  = "t2.micro"
    us-west-2  = "t2.nano"
    ap-south-1 = "t2.small"
  }
}



Variable Data Types (cont.)

Config file:

resource "aws_elb" "bar" {
  name               = var.lb_name_var
  availability_zones = var.az_var
  listener {
    instance_port     = 8000
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}

resource "aws_instance" "web_server" {
  ami           = "ami-082b5a644766e0e6f"
  instance_type = var.instance_types["us-east-1"]
}

Variable file:

variable "az_var" {
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
  type    = list(any)
}

variable "instance_types" {
  type = map(any)
  default = {
    us-east-1  = "t2.micro"
    us-west-2  = "t2.nano"
    ap-south-1 = "t2.small"
  }
}



Hands-on Labs (HoLs)

Variables Data Types



String Interpolation



String Interpolation

• String interpolation is an integral part of HCL; variables can be used via
${var.name} inside strings.
• It evaluates the expression given between the markers, converts the result to a string
if necessary, and then inserts it into the final string.

"Hello, ${var.name}!"

The named object var.name is accessed and its value inserted into the string,
producing a result like "Hello, Dolfined!".
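
A minimal, self-contained sketch (the variable name and value are assumptions for illustration):

```hcl
variable "name" {
  default = "Dolfined"
}

output "greeting" {
  # Interpolation inserts the variable's value into the string,
  # producing "Hello, Dolfined!" with the default above.
  value = "Hello, ${var.name}!"
}
```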



Variables Names
Constraints



Terraform Variable Name Constraints

We cannot use any word we like within variable names.


• Terraform reserves some additional names that can no longer be used as input
variable names for modules.
• These reserved names are:
Ø count
Ø depends_on
Ø for_each
Ø lifecycle
Ø providers
Ø source



Variable Count
Parameter



Variable Count Parameter

• We mainly use the count parameter with resources to avoid repetition inside our
code and make it cleaner.
• We can also scale the number of resources by just increasing the count number.
• In this example, we create two EC2 instances in the same resource block instead of
creating two different resources.

Without count:

resource "aws_instance" "web1" {
  ami           = "ami-026b57f3c383c2eec"
  instance_type = "t2.micro"
}

resource "aws_instance" "web2" {
  ami           = "ami-026b57f3c383c2eec"
  instance_type = "t2.micro"
}

With count:

resource "aws_instance" "web" {
  ami           = "ami-026b57f3c383c2eec"
  instance_type = "t2.micro"
  count         = 2
}



Variable Count.index

• Sometimes we need to differentiate between the created resources to avoid having
the same name or description.
• We can use count.index to overcome this challenge.
• Index values always start at zero.

Example - 1:

resource "aws_iam_user" "app" {
  name = "app_user1"
  path = "/system/"
}

resource "aws_iam_user" "app" {
  name = "app_user2"
  path = "/system/"
}

resource "aws_iam_user" "app" {
  name  = "app_user${count.index}"
  path  = "/system/"
  count = 2
}

Example - 2:

variable "instance_name" {
  type    = list(any)
  default = ["dev", "prod", "lab"]
}

resource "aws_instance" "myec2" {
  ami           = "ami-026b57f3c383c2eec"
  instance_type = "t2.micro"
  tags = {
    Name = var.instance_name[count.index]
  }
  count = 3
}

Here we create three instances, each with its own tag.
Hands-on Labs (HoLs)

Variables Count



For_Each Meta
Argument



For_each meta-argument

• Sometimes we have challenges with the count parameter, especially if we need
distinct values for resources, like AWS IAM usernames.
• The best way to overcome this challenge is to use the for_each meta-argument.
• for_each iterates over each item or element in a list, set, or map and chooses a
different item in each iteration.
Ø In the case of a map, it chooses a key-value pair each time.

resource "aws_iam_user" "the-accounts" {
  for_each = toset(["Todd", "James", "Alice", "Dottie"])
  name     = each.key
}
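
With a map instead of a set, each.key and each.value give access to both sides of each pair. A minimal sketch (the user names and paths below are hypothetical):

```hcl
# Hypothetical map-driven for_each: one IAM user per key-value pair.
resource "aws_iam_user" "team" {
  for_each = {
    dev_user = "/dev/"
    ops_user = "/ops/"
  }
  name = each.key     # the map key, e.g. "dev_user"
  path = each.value   # the map value, e.g. "/dev/"
}
```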



Hands-on Labs (HoLs)

For_each meta argument



Splat Expression



Splat Expression

• A splat expression is used to get a list of all the attributes.
• In this example, we use the splat [*] to list the ARNs of all users.
• Count starts with zero, so we can instead use an index number to get the ARN of a
specific user.

resource "aws_iam_user" "sys_adm" {
  name  = "sys_adm${count.index}"
  path  = "/system/"
  count = 6
}

output "ARNs" {
  value = aws_iam_user.sys_adm[*].arn
}

output "ARNs" {
  value = aws_iam_user.sys_adm[1].arn   # ARN of "sys_adm1"
}



Hands-on Labs (HoLs)

Splat Expression



Conditionals



Conditionals

• Conditionals are used to choose between two values based on a Boolean
expression, either the first one or the second one according to the condition itself.
• The syntax is: condition ? true_value : false_value
• If the condition is true, the true value will be chosen; if not, the false one will be.

resource "aws_instance" "myec2_1" {
  ami           = "ami-02f97949d306b597a"
  instance_type = "t2.micro"
  count         = var.conditional == "true" ? 1 : 0
}

resource "aws_instance" "myec2_2" {
  ami           = "ami-02f97949d306b597a"
  instance_type = "t2.large"
  count         = var.conditional == "false" ? 1 : 0
}

variable "conditional" {}

In this example, the value of the conditional variable determines which
instance resource will be created.


Hands-on Labs (HoLs)

Conditionals



Local Values



Local Values

• A local value assigns a name to an expression, so you can use the name multiple
times within a module or your configuration instead of repeating the expression.
• Local values can be declared together in a single locals block.

locals {
  info = {
    owner   = "dolfined_dev"
    service = "database"
  }
}

resource "aws_instance" "myec2_1" {
  ami           = "ami-026b57f3c383c2eec"
  instance_type = "t2.micro"
  tags          = local.info
}

resource "aws_instance" "myec2_2" {
  ami           = "ami-026b57f3c383c2eec"
  instance_type = "t2.large"
  tags          = local.info
}



Hands-on Labs (HoLs)

Local Values



Terraform Functions



Functions

• The Terraform language includes a number of built-in functions that you can call
from within expressions to transform and combine values.
• The general syntax for function calls is a function name followed by comma-
separated arguments in parentheses, function(argument1, argument2),
e.g., max(1, 2, 3).
• The Terraform language does not support user-defined functions; only the
functions built into the language are available for use.
Functions (cont.)

• Functions are divided according to their types into many categories like:
Ø Numeric.
Ø String.
Ø Collection.
Ø Encoding.
Ø File system.
Ø Date and Time.
Ø Hash and Crypto, IP Network, Type Conversion.
Terraform Function - Examples

lookup({a="ay", b="bee"}, "a", "what?") >>>> Lookup deals with map data type.
ay

lookup({a="ay", b="bee"}, "c", "what?")


what?

element(["a", "b", "c"], 1) >>>> element deals with List data type.
b



Terraform Function - Examples (cont.)

file("${path.module}/hello.txt")
Hello World

timestamp()
2018-05-13T07:44:12Z

formatdate("DD MMM YYYY hh:mm ZZZ", "2018-01-02T23:12:01Z")


02 Jan 2018 23:12 UTC



Hands-on Labs (HoLs)

Terraform Functions



Dynamic Blocks



Dynamic Blocks

• Dynamic blocks mean that we have a repeated configuration and we want to
dynamically construct repeatable nested blocks instead of writing many
repeated blocks.
• Dynamic blocks are supported inside resource, data, provider, and
provisioner blocks.

Without a dynamic block:

egress {
  from_port   = 80
  to_port     = 80
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}
egress {
  from_port   = 8080
  to_port     = 8080
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}
egress {
  from_port   = 443
  to_port     = 443
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}

With a dynamic block:

variable "external_ports" {
  type    = list(any)
  default = ["80", "8080", "443"]
}

dynamic "egress" {
  for_each = var.external_ports
  content {
    from_port   = egress.value
    to_port     = egress.value
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}



Dynamic Blocks (cont.)

• The for_each argument provides what to iterate over.
• The iterator argument (optional) sets the name of a temporary variable that
represents the current element of the complex value.
Ø If omitted, the name of the variable defaults to the label of the dynamic block
("egress" in the previous example).

variable "external_ports" {
  type    = list(any)
  default = ["80", "8080", "443"]
}

dynamic "egress" {
  for_each = var.external_ports
  iterator = port
  content {
    from_port   = port.value
    to_port     = port.value
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}



Hands-on Labs (HoLs)

Dynamic Blocks



Comments in
Terraform



Comments in Terraform

• In any programming language, comments are used to give a description about the
purpose of the code below it or to give some notes for the one who reads the code.
• The Terraform language supports three different syntaxes for comments:
Ø # begins a single-line comment, ending at the end of the line.
Ø // also begins a single-line comment, as an alternative to #.
Ø /* and */ are start and end delimiters for a comment that might span over multiple
lines.
/*
# 5- Create EIP for NAT Gateway
resource "aws_eip" "nat_gateway_eip" {
vpc = true
depends_on = [aws_internet_gateway.internet_gateway]
tags = {
Name = "project_ngw_eip"
}
}
*/



Terraform
Modules &
Workspaces



Section Outlines

In this section, we will learn:


• Terraform Local Modules.
• Variable Behavior with Local Modules.
• Locals with Modules.
• Accessing Child Module outputs.
• Terraform Registry.
• Terraform Workspaces.



Local Modules



Terraform Local Modules

• Modules are containers for multiple resources that are used together.
• A module consists of a collection of .tf and/or .tf.json files kept together in a directory.
• Modules are the main way to package and reuse resource configurations in Terraform.
• Modules can be referenced in your code and can be reused in several code parts.
• The original module (the one in the main working directory) is called the Root module.
• The module that is called or referenced inside the Root module is called the Child
module.

module "ec2module" {
  source = "../../modules/ec2"
  # type = "t2.large"
}

Here, ec2 is the Child Module, Project1 is the Root/Main Module, and the ec2
module is called inside the Main module.
Hands-on Labs (HoLs)

Local Modules



Using Variables with
Modules



Variable Assignment With Modules

• As a Child Module is called inside the Main module, it is not recommended to
assign static values within the Child module.
Ø The best way is to use variables in order to get the benefit of the Child Module
in many environments (dev, prod, and staging environments).
• Inside Child Modules, we can assign default values for our variables in case a user
doesn't define values.
• The values assigned inside the Main module override the default values assigned
inside the Child Module.

Static values (not recommended):

resource "aws_instance" "myec2" {
  ami           = "ami-089a545a9ed9893b6"
  instance_type = "t2.micro"
}

With a variable and a default (recommended):

resource "aws_instance" "myec2" {
  ami           = "ami-089a545a9ed9893b6"
  instance_type = var.type
}

variable "type" {
  default = "t2.micro"
}



Using Locals with
Modules



Locals with Modules

• Locals are used to avoid repetitive static values inside our configuration.
• The main use case for locals inside a Child Module is to prevent users assigning
their own values in their Root Module and make them stick to the values
assigned inside the Child modules.
Ø This is to prevent users from overriding the values assigned by Child modules.

Child Module:

resource "aws_instance" "web" {
  ami           = "ami-089a545a9ed9893b6"
  instance_type = local.instance_type
}

locals {
  instance_type = "t2.micro"
}

The instance type is restricted to t2.micro by the local value inside the Child Module.

Root Module:

module "ec2module" {
  source = "../../modules/ec2"
  type   = "t2.large"
}

The user can't change the instance type to t2.large inside their own Root Module.
Locals vs. Variables

• Unlike variables found in programming languages, Terraform's locals do not change


values during or between Terraform runs (plan, apply, or destroy).
• Unlike input variables, locals are not set directly by users of your configuration.



Hands-on Labs (HoLs)

Locals vs. Variables


with Modules



Accessing Child
Module Output



Referencing Child Module Outputs

In a parent Root module, outputs of child modules are available in expressions as


module.<MODULE_NAME>.<OUTPUT NAME>
• To reference any value inside a child module, it must be expressed in the output block.

Parent "Root" Module:

module "sgmodule" {
  source = "../../modules/sg"
}

resource "aws_instance" "myapp" {
  ami                    = "ami-0ca285d4c2cda3300"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [module.sgmodule.sg_id]
}

Child Module:

resource "aws_security_group" "appSG" {
  name = "myapp-sg"
  ingress {
    description = "Allow Inbound from My Database Application"
    from_port   = local.app_port
    to_port     = local.app_port
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

output "sg_id" {
  value = aws_security_group.appSG.id
}



Hands-on Labs (HoLs)

Referencing Child
Module Outputs



Terraform Registry



Terraform Registry

• Terraform Registry is a repository of modules written by the Terraform community.


• Terraform Registry also contains all providers, and their plugins which we can use
to work with many cloud and other providers.
https://registry.terraform.io/



Terraform Registry Modules

• Within Terraform Registry, you can find verified modules that are maintained by
various third-party vendors.
• To use a module from Terraform Registry, you need to provide its path and version
(optional) in order to use it inside your Terraform configuration.
• After you execute terraform init, a copy of the module gets stored inside the
.terraform folder located inside your working directory.
• Module files are stored in a GitHub repository.

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.18.1"
}


Terraform Registry Modules (cont.)

• Verified modules are always maintained by HashiCorp in order to have them up-to-date
and compatible with both Terraform and their respective providers.
• By default, only verified modules are shown in search results.
Ø Using filters, we can view unverified modules.
• The syntax for specifying a registry module is <NAMESPACE>/<NAME>/<PROVIDER>.
Ø For example: hashicorp/consul/aws.



Hands-on Labs (HoLs)

Terraform Registry Public Modules



Terraform Workspaces



Without Terraform Workspaces

• Working with Terraform involves managing collections of infrastructure resources.


• Most organizations manage different environments.
• By default, when run locally, Terraform manages the entire infrastructure within a
single persistent working directory, which contains the configuration, state data, and
variables.



With Terraform Workspaces

• Using Terraform workspaces, we can efficiently manage the different


environments and their resources by keeping their configurations in separate
directories.
• All created workspaces directories will be under terraform.tfstate.d directory.



Creating Local Workspaces

We can create local workspaces using this command

terraform workspace new <name of workspace>
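
Inside a configuration, the current workspace name is available as terraform.workspace, which is handy for per-environment sizing. A minimal sketch (the resource name and AMI ID are placeholders):

```hcl
# Hypothetical per-workspace instance sizing; switch workspaces with
# `terraform workspace select <name>` before running plan/apply.
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = terraform.workspace == "prod" ? "t2.large" : "t2.micro"
}
```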



Hands-on Labs (HoLs)

Terraform Workspaces



Assignment

Creating AWS Resources-6



Assignment Tasks

1. Using Terraform, create three local workspaces named "dev", "staging", and "prod".
2. Create an EC2 instance configuration file which changes its instance type
according to the chosen workspace as follows:
The "dev" workspace sets the instance type to t2.micro.
The "staging" workspace sets the instance type to t2.medium.
The "prod" workspace sets the instance type to t2.large.
3. After finishing, please remove all your created resources.



Team
Collaboration In
Terraform



Section Outlines

In this section, we will learn:


• Using VCS Repository for Team Collaboration.
• Challenges with Git VCS repo.
• Terraform with .gitignore file.
• Module sources in Terraform.
• Terraform Backends.
• Implementing S3 Backend.
• Terraform State Management.
• Terraform Remote State Data Source.



VCS Cloud Repo



Why Have A VCS Repo With Terraform?

In many scenarios, we need to have our Terraform code hosted in a centralized
VCS repo because:
• In a team collaboration mode, team members need to access and
contribute to the same code project.
• It provides better protection for the code against loss or corruption than
storing it only on local user machine(s).


Hands-on Labs (HoLs)

Setting Up A GitHub Repo



Challenges with VCS
Repos



Challenges With Git VCS Repos

• While committing data to the Git repository, we need to avoid pushing
access/secret keys with the code, so they are not stored in public VCS repos.
• We can overcome this challenge using several approaches:
Ø Keep secret keys locally and reference them in the code that will be
uploaded to the repo (note: the .tfstate file still shows the secrets).
Ø Add secret keys and credentials files to a .gitignore file so they are not
uploaded when committing the code to the VCS repo.
Ø Store the .tfstate file in a secure storage location (backend) like S3.



.gitignore With
Terraform



VCS Repos & Using .gitignore

• The .gitignore file is a text file that tells


Git which files or folders to ignore in a
project.
• Files that contain sensitive information
like credentials, logs, .tfstate, .tfvars files
or big directories like .terraform directory
can be added to the .gitignore file to
avoid committing them to the Git repo.

https://github.com/github/gitignore/blob/main/Terraform.gitignore
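
A typical Terraform .gitignore starts with entries like the following — an abbreviated sketch in the spirit of the linked template, not a copy of it:

```gitignore
# Local .terraform directories (provider plugins, cached module copies)
**/.terraform/*

# State files, which may contain secrets in plain text
*.tfstate
*.tfstate.*

# Variable files that often hold credentials
*.tfvars
*.tfvars.json
```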



Hands-on Labs (HoLs)

Secrets & .gitignore with VCS Repo



Terraform Module
Sources



Terraform Module Sources

• Terraform Modules have many different sources like:


Ø Local Paths
Ø Terraform Registry
Ø GitHub
Ø Bitbucket
Ø Any Git repository
Ø HTTPS URLs
Ø S3 Buckets



Terraform Module - GitHub Source

To import a module from a GitHub repo, we need to use the source argument set to either the HTTPS or SSH path.

module "consul" {
  source = "github.com/hashicorp/example"   # HTTPS
}

module "consul" {
  source = "git@github.com:hashicorp/example.git"   # SSH
}



Terraform Module - GitHub Source (cont.)

• By default, Terraform will clone and use the default branch (referenced by HEAD) in
the selected repository.
• We can override this behavior using the ref argument.

module "myvcs_repo" {
source = "github.com/enggalal/dolfined_repo.git?ref=development"
}



Hands-on Labs (HoLs)

Terraform GitHub Module Source



Terraform Backends



Terraform Backends

• Terraform uses persisted state data to keep track of its managed resources.
• A backend defines where Terraform stores its state data files.
Ø By default, Terraform uses a local backend, and the state file is stored locally on disk.
• In many cases, we need to store the state file remotely,
Ø e.g., to allow multiple people to access the state data and collaborate on that collection of infrastructure resources.
• The right approach in team collaboration is to store the Terraform config files in a VCS repository and the state file in a remote backend.



S3 Backend

• Terraform supports multiple backends that allow remote state operations.
• Popular backends include S3, Consul, azurerm, Kubernetes, HTTP, and etcd.
• An S3 bucket can be used to store state files using a backend block inside the terraform block.
• Accessing state in a remote service generally requires access credentials.
Ø In the case of an S3 backend, list bucket plus get, put, and delete object permissions are required in the IAM policy.

terraform {
backend "s3" {
bucket = "mybucket"
key = "path/to/my/key"
region = "us-east-1"
}
}
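A minimal sketch of an IAM policy granting the S3 permissions mentioned above; the bucket name and key path mirror the illustrative values in the backend block and are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::mybucket/path/to/my/key"
    }
  ]
}
```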



Hands-on Labs (HoLs)

Implementing S3 Backend



Terraform State Lock



Terraform State Locking

• State locking prevents two users from updating the terraform state at the
same time.
Ø This is very important in team collaboration scenarios to avoid write errors
and conflicts in the .tfstate file.
• If state locking is supported by your backend, Terraform automatically locks
the state file for all operations that could write state.



Terraform State Locking (cont.)

• If state locking fails, Terraform will not continue.
• Not all backends support state locking; check the backend documentation.



Terraform State Locking (cont.)

• Terraform has a force-unlock command to manually unlock the state if automatic unlocking fails.
• Be very careful with this command.
Ø It may allow multiple writers.
• Force unlock should only be used to unlock your own lock in cases where automatic unlocking fails.
• Usage: terraform force-unlock [options] LOCK_ID



Hands-on Labs (HoLs)

Implement S3 Backend State


Locking using DynamoDB



Implementing State Lock Using S3 Backend

In an S3 backend configuration, state locking is supported via DynamoDB:
• We define dynamodb_table (optional) - the name of the DynamoDB table to use for state locking and consistency.
• The table must have a partition key named LockID with a type of String.
• If not configured, state locking is disabled.

terraform {
  backend "s3" {
    bucket         = "dolfined123456789"
    key            = "dev/terraform.tfstate"
    region         = "us-east-2"
    profile        = "dev_admin"
    dynamodb_table = "state_lock_table"
  }
}
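The lock table itself can also be managed with Terraform. A minimal sketch (the table name matches the illustrative backend configuration above; the billing mode is a choice, not a requirement) — note the partition key must be LockID of type String:

```hcl
resource "aws_dynamodb_table" "state_lock" {
  name         = "state_lock_table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  # The S3 backend requires exactly this partition key name and type
  attribute {
    name = "LockID"
    type = "S"
  }
}
```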



Terraform State
Management



Terraform State Management Command – Sub-commands

• terraform state list
lists all the resources Terraform manages, as stored in the terraform .tfstate file.
• terraform state rm <name of the resource>
removes that resource from Terraform management, so it is no longer managed by Terraform; however, Terraform will not destroy it.
• terraform state mv
is used to move items in a Terraform state.
This command is useful in many cases where you want to rename an existing resource without destroying and recreating it.



Terraform State Management Command – Sub-commands (cont.)

• terraform state show <name of the resource>
shows all the attributes of that resource.
• terraform state pull
downloads and outputs the state from a remote backend locally, to see the resources and their attributes stored in the remote state.
• terraform state push
is used to manually upload a local state file to a remote backend.
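An illustrative session using these sub-commands (the resource names are hypothetical):

```shell
terraform state list                                        # e.g. aws_instance.web
terraform state show aws_instance.web                       # print all attributes
terraform state mv aws_instance.web aws_instance.frontend   # rename without recreating
terraform state rm aws_instance.frontend                    # stop managing, but don't destroy
terraform state pull > backup.tfstate                       # download remote state locally
```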



Hands-on Labs (HoLs)

Terraform State Management



Terraform Remote
State Data Source



Terraform Remote State Data Source

• The terraform_remote_state data source uses the latest state snapshot from a specified state backend to retrieve the root module output values from another Terraform configuration.
• It retrieves state data from a Terraform local or remote backend.
Ø This allows you to use the root-level outputs of one or more Terraform configurations as input data for another configuration.

data "terraform_remote_state" "vpc" {
  backend = "remote"

  config = {
    organization = "hashicorp"
    workspaces = {
      name = "vpc-prod"
    }
  }
}



Terraform Remote State Data Source - Example
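A sketch of consuming such a data source: only the root-level outputs that the producing configuration explicitly declared are available, here via a hypothetical vpc_id output:

```hcl
# Reference an output exported by the "vpc-prod" configuration
resource "aws_subnet" "app" {
  vpc_id     = data.terraform_remote_state.vpc.outputs.vpc_id
  cidr_block = "10.0.1.0/24"
}
```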



Hands-on Labs (HoLs)

Terraform Remote State


Data Source



Terraform Cloud &
Enterprise
Overview



Section Outline

In this section, we will learn:


• Overview of Terraform Cloud.
• Terraform Cloud Types.
• Terraform Cloud Sentinel.
• Terraform Cloud Backend.
• Terraform Cloud Supported VCS Providers.



Terraform Cloud
Overview



Terraform Cloud

Terraform Cloud is HashiCorp’s managed SaaS service offering:
• It is an application that helps teams use Terraform together.
• It runs in a consistent and reliable environment and includes easy access to shared state and secret data, plus access controls for approving changes to infrastructure.
• It has a private registry for sharing Terraform modules, detailed policy controls for governing the contents of Terraform configurations, and more.
• Terraform Cloud is available as a hosted service at https://app.terraform.io



Terraform Cloud Types



Terraform Cloud Types

https://cloud.hashicorp.com/products/terraform/pricing



Terraform Enterprise

• Terraform Cloud and Terraform Enterprise are different distributions of the same application.
• Terraform Enterprise is a self-hosted distribution of Terraform Cloud.
Ø It offers enterprises a private instance of the Terraform Cloud application.
Ø It has no resource limits.
Ø It offers additional enterprise-grade architectural features like audit logging and SAML single sign-on.



Hands-on Labs (HoLs)

Creating A Terraform
Cloud Account



Hands-on Labs (HoLs)

Integrating Terraform Cloud


with GitHub



Hands-on Labs (HoLs)

Creating a New Workspace


and Integration with a VCS Repo



Terraform Cloud
Backend



Terraform Cloud Backend

• The remote backend is unique among all other Terraform backends because it can both store state snapshots and execute operations for Terraform Cloud's CLI-driven run workflow.
Ø It used to be called an "enhanced" backend.
• When using full remote operations, operations like terraform plan or terraform apply can be executed in Terraform Cloud's run environment, with log output streaming to the local terminal.

terraform {
  cloud {
    organization = "dolfinednew_terraform"

    workspaces {
      name = "dolfined_cli_driven"
    }
  }
}



Terraform Cloud Backend (cont.)

• Remote plan and apply use variable values from the associated Terraform Cloud workspace.
• Authentication and credentials are configured in Terraform Cloud, not on the local machine.



Hands-on Labs (HoLs)

Implementing Cli-driven
Workspaces with a
Terraform Cloud Backend



Terraform Cloud
Sentinel



Cloud Sentinel

• Sentinel is an embedded policy-as-code framework integrated with the HashiCorp


Enterprise products.
Ø It enables fine-grained, logic-based policy decisions.
• If enabled, Sentinel is run between the terraform plan and apply stages of the
workflow.



Cloud Sentinel (cont.)

Sentinel has three enforcement levels:
• Advisory:
The policy is allowed to fail; however, a warning is shown to the user or logged.
• Soft Mandatory:
The policy must pass unless an override is specified. The purpose of this level is to provide a degree of privilege separation for a behavior, so it can support overriding.
• Hard Mandatory:
The policy must pass no matter what. The only way to override a hard mandatory policy is to explicitly remove the policy.

Hard mandatory is the default enforcement level; it should be used in situations where an override is not possible.
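A minimal sketch of what a Sentinel policy looks like; the tfplan/v2 import is standard, but the specific rule shown (restricting EC2 instance types) is an illustrative example, not from the course:

```sentinel
# Restrict EC2 instances in the plan to an approved list of instance types
import "tfplan/v2" as tfplan

allowed_types = ["t2.micro", "t3.micro"]

ec2_instances = filter tfplan.resource_changes as _, rc {
    rc.type is "aws_instance" and rc.mode is "managed"
}

main = rule {
    all ec2_instances as _, instance {
        instance.change.after.instance_type in allowed_types
    }
}
```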



Hands-on Labs (HoLs)

Cloud Sentinel Policy



Terraform Cloud VCS -
Supported Providers



Supported VCS Providers with Terraform Cloud

Terraform Cloud supports the following VCS providers:


• GitHub.com
• GitHub.com (OAuth)
• GitHub Enterprise
• GitLab.com
• GitLab EE and CE
• Bitbucket Cloud
• Bitbucket Server
• Azure DevOps Server
• Azure DevOps Services



Terraform Security



Section Outline

In this section, we will learn:


• Best Security Practices for Terraform
• Terraform Vault.



Terraform Security Best
Practices



Storing Secrets – Best Practices

• Never put secret values, like passwords or access tokens, in .tf files or other files that are checked into source control, whether local or remote (especially remote repos like a VCS).
• Do not store secrets in plain text.
• Ramifications of placing secrets in plain text include:
Ø Anyone who has access to the version control system has access to that secret.
Ø Every computer that has access to the version control system keeps a copy of that secret.
Ø Every piece of software you run has access to that secret.
Ø There is no way to audit or revoke access to that secret.



Mark Variables As Sensitive

Mark variables as sensitive so Terraform won’t output the value in the Terraform CLI.
• Remember that this value will still show in the Terraform state file.

variable "phone_number" {
  type      = string
  sensitive = true
  default   = "1234-5678"
}

output "phone_number" {
  value     = var.phone_number
  sensitive = true
}



Environment Variables

• Another way to protect secrets is to simply keep plain text secrets out of your
code by taking advantage of Terraform’s environment variables.
• By setting the TF_VAR_<name> environment variable, Terraform will use that
value rather than having to embed that directly in the code.

export TF_VAR_phone_number="1234-5678"

unset TF_VAR_phone_number

To remove the value assigned in the Environment variable



Hands-on Labs (HoLs)

Terraform Sensitive &


Environment Variables



Terraform Vault



Terraform Vault

• Another way to protect secrets is to store them in a secrets management solution, like HashiCorp Vault.
• By storing them in Vault, you can use the Terraform Vault provider to quickly retrieve values from Vault and use them in your Terraform code.
• You can download HashiCorp Vault for your operating system at vaultproject.io.
https://www.vaultproject.io/docs/install
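A sketch of reading a secret with the Vault provider. The mount path, secret name, and key names are hypothetical, and the Vault address and token are assumed to be supplied via the VAULT_ADDR and VAULT_TOKEN environment variables:

```hcl
provider "vault" {}

# Read a KV v2 secret stored at secret/database (illustrative path)
data "vault_kv_secret_v2" "db" {
  mount = "secret"
  name  = "database"
}

resource "aws_db_instance" "example" {
  # ... other arguments omitted ...
  username = data.vault_kv_secret_v2.db.data["username"]
  password = data.vault_kv_secret_v2.db.data["password"]
}
```

Note that values read this way still end up in the Terraform state file, which is another reason to keep the state in a secured remote backend.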



Hands-on Labs (HoLs)

Terraform Vault setup


and Examples



Capstone Projects



Capstone Project 1

Building A Highly Available


Infrastructure in AWS
Using Terraform



Project 1 - Tasks

Design a solution for a multi-tier web application that will be deployed in a custom AWS VPC.
Create a custom VPC with CIDR block 10.0.0.0/16 with:
• Two public subnets in two different Availability Zones, us-east-1a and us-east-1b, in the us-east-1 region.
Ø Use the 10.0.10.0/24 and 10.0.20.0/24 ranges for these two subnets.
• Two private subnets in the same AZs as above.
Ø Use 10.0.100.0/24 and 10.0.200.0/24. Create a separate route table for the private subnets.



Project 1 – Tasks (cont.)

• Launch two EBS-backed EC2 instances, one in each of the two private
subnets above (10.0.100.0/24 and 10.0.200.0/24).
Ø The instances will serve as the web and application tiers.
Ø Ensure that the EBS volumes of these instances are encrypted at rest.
Ø The instances will have the user data script (shown in the last slide) run
at launch time.
Ø The security group assigned to the instances should use the name
webSG and must allow ports ssh (22), http (80) and https (443) in the
inbound direction.



Project 1 – User Data Script

# The bash script (user data) to use for this hands-on lab

#Web/app instance 1:
#!/bin/bash
yum update -y
yum install httpd -y   # installs apache (httpd) service
systemctl start httpd  # starts httpd service
systemctl enable httpd # enable httpd to auto-start at system boot
echo " This is server *1* in AWS Region US-EAST-1 in AZ US-EAST-1B " > /var/www/html/index.html

#Web/app instance 2:
#!/bin/bash
yum update -y
yum install httpd -y
systemctl start httpd
systemctl enable httpd
echo " This is server *2* in AWS Region US-EAST-1 in AZ US-EAST-1A " > /var/www/html/index.html



Project 1 Tasks (cont.)

• Launch a NAT gateway in each of the two availability zones above to allow the
two instances to access the internet for updates.
• Adjust the private subnets’ route tables to route the update traffic through the
NAT Gateway.
• Create a target group with the name webTG and add the two application
instances to it.
• The target group will use the port 80 (HTTP) for traffic forwarding and health
checks.
• Launch an application load balancer that will load balance to these two
instances using HTTP.
Ø The application load balancer must be enabled in the two public subnets
you have configured above.



Project 1 – Tasks (cont.)

• Adjust the security group of the web/app instances to allow inbound traffic only from the application load balancer security group as a source.
• The ALB security group (ALBSG) must allow outbound HTTP to the web/app security group (webSG).
• The ALBSG must allow inbound HTTP traffic from the internet.
• Configure a target tracking Auto Scaling group that will ensure elasticity and cost effectiveness. The Auto Scaling group should monitor the two instances and be able to add instances on demand and replace failed instances.
• Launch a Multi-AZ RDS database and ensure that its security group will only allow access from the web/app tier security group above.



Project 1 – Tasks (cont.)

• Test to ensure that you can get to the index.html message on the instances through
the load balancer. If it works, congratulations on finishing this amazing project on
AWS.
• Once completed successfully, please remember to destroy your deployed resources
to avoid any surprise charges.



Capstone Project 2

Implementing A Jenkins Server


on An AWS EC2 Instance
using Terraform



Project 2 - Tasks

Main Requirements :
• The Jenkins server must be deployed on an EC2 instance.
• The EC2 instance should be accessible via the internet on port 80.
• The EC2 instance should be accessible using SSH.
• Terraform is used to implement this installation.

Steps to Implement the Project


• Create the VPC
• Create the Internet Gateway, attach it to the VPC, and route to it via a Route Table
• Create a Public Subnet and associate it with the Route Table
• Create a Security Group for the EC2 Instance
• Create a script to automate the installation of Jenkins on the EC2 Instance
• Create the EC2 Instance and attach an Elastic IP and Key Pair to it
• Verify that everything works



Advanced Topics
- Exam Scope



Terraform Graph



Terraform Graph

• The terraform graph command is used to generate a visual representation of either a configuration or an execution plan.
• The output is in the DOT format, which GraphViz can use to generate charts or convert into an image.
• Using this graph representation, we can see which resources depend on each other.
• We can paste the DOT output into http://www.webgraphviz.com to get a visual representation of the dependencies that Terraform creates for your configuration.
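Typical usage on the command line; the local image conversion assumes GraphViz's dot tool is installed:

```shell
terraform graph > graph.dot               # write the DOT-format dependency graph
terraform graph | dot -Tpng > graph.png   # convert to an image with GraphViz
```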



Hands-on Labs (HoLs)

Terraform Graph



Terraform Get



Terraform Get

• The terraform get command is used to download and update the modules mentioned in the root module.
• The modules are downloaded into a .terraform subdirectory of the current working directory.



Hands-on Labs (HoLs)

Terraform Get



Resource Implicit and
Explicit Dependency



Resource Implicit Dependency

With an implicit dependency, Terraform can automatically find references to an object and create an implicit ordering requirement between the two resources.

# implicit dependency
resource "aws_eip" "myeip" {
  vpc      = "true"
  instance = aws_instance.myec2.id
}

resource "aws_instance" "myec2" {
  instance_type = "t2.micro"
  ami           = "ami-082b5a644766e0e6f"
}



Resource Explicit Dependency

An explicit dependency is only necessary when a resource relies on some other resource's behavior but doesn't access any of that resource's data in its arguments.

# explicit dependency
resource "aws_s3_bucket" "example" {
  bucket = "dolfined123456789"
}

resource "aws_instance" "example_c" {
  ami           = "ami-082b5a644766e0e6f"
  instance_type = "t2.micro"

  depends_on = [aws_s3_bucket.example]
}



Hands-on Labs (HoLs)

Resources Implicit / Explicit


Dependency



Publishing Modules on
Terraform Registry



Publishing Modules

• Anyone can publish and share modules on the Terraform Registry.
• Published modules support versioning, automatically generate documentation, allow browsing version histories, show examples and READMEs, and more.

https://developer.hashicorp.com/terraform/registry/modules/publish



Standard Module Structure

• The standard module structure is a file and directory layout recommended for reusable modules distributed in separate repositories.
• There is a minimal structure and a complete structure.



Module Versions



Module Versions

• It is recommended to explicitly constrain the acceptable version numbers for each external module to avoid unexpected or unwanted changes.
• Version constraints are supported only for modules installed from a module registry, such as the Terraform Registry or Terraform Cloud's private module registry.
• Specifying a version for modules is not mandatory, but it is preferred.

module "iam" {
source = "terraform-aws-modules/iam/aws"
version = "5.24.0"
}
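Besides an exact version as above, constraint operators can widen the acceptable range. A sketch using the same module source:

```hcl
module "iam" {
  source  = "terraform-aws-modules/iam/aws"
  version = "~> 5.24"   # any 5.x release at or above 5.24, but not 6.0
}
```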



Terraform Private
Registry



Private Registry

• Terraform Cloud allows users to create and confidentially share infrastructure modules within an organization using the private registry.
• With Terraform Enterprise, the private registry allows you to share modules within or across organizations.
• Private registry modules have source strings of the following form:
<HOSTNAME>/<NAMESPACE>/<NAME>/<PROVIDER>
This is the same format as the public registry, but with an added hostname prefix.

module "s3-webapp" {
source = "app.terraform.io/hashicorp-learn/s3-webapp/aws"
name = var.name
region = var.region
prefix = var.prefix
version = "1.0.0"
}



Terraform Air Gapped
Systems



Terraform Air gapped Environments

• Air gapped environments are networks that are isolated from other networks, usually both physically and logically. That means no internet and no outside connectivity (an isolated environment).
• A great number of systems exist in such air gapped environments.
• The industries that utilize them span the public sector (government and military), finance, energy, and more.
• Examples of air gapped systems include:
Ø Military/governmental computer networks/systems.
Ø Financial computer systems, such as stock exchanges.
Ø Industrial control systems, such as SCADA in oil & gas fields.



Terraform Air gapped Environments (cont.)

• Due to the lack of internet connectivity, air gapped environments present some unique challenges to the installation and maintenance of applications.
• Terraform Enterprise can be installed either online or in an air gapped system; the former requires internet access, the latter doesn’t.
https://www.hashicorp.com/blog/deploying-terraform-enterprise-in-airgapped-environments



Advanced Topics-
Not In The Exam
Scope



Terraform Null
Resources



Null Resources

• The null_resource resource implements the standard resource lifecycle but takes no further action; no resources are created on the cloud.
• The optional triggers argument allows specifying an arbitrary set of values that, when changed, will cause the null resource to be replaced and its provisioners executed again.
• As long as the trigger values are unchanged, the provisioners will not be executed again.
• It can be used with local-exec, remote-exec, or a data block.
• We can run shell commands, Python scripts, and Ansible playbooks inside it.
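A minimal sketch combining triggers with a local-exec provisioner; the referenced instance and the command are illustrative:

```hcl
resource "null_resource" "configure" {
  # Re-run the provisioner whenever the instance ID changes
  triggers = {
    instance_id = aws_instance.myec2.id
  }

  provisioner "local-exec" {
    command = "echo instance ${aws_instance.myec2.id} is ready"
  }
}
```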



Advantages Of Null Resources

• Doesn't create any infrastructure resources.
• Automates tasks and integrates with external systems.
• Keeps infrastructure code modular and reusable.
• Works with a variety of provisioners.
• Powerful and flexible.



Hands-on Labs (HoLs)

Null Resources



Resource Lifecycle
Meta Argument



Terraform Lifecycle Meta Argument

• The lifecycle meta-argument is defined inside a resource block to change the default behavior of the resource.
• The default behavior is to create, update, and delete the resource as stated in the configuration file and to match it with the state file.
• That default behavior can be customized using the special nested lifecycle block within a resource block body.

https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle



Terraform Lifecycle Meta Argument (cont.)

resource "aws_instance" "example" {
  # ...
  lifecycle {
    # don't delete the resource first; create the new one first, then delete the old one
    create_before_destroy = true
  }
}

resource "aws_instance" "example" {
  # ...
  lifecycle {
    # ignore any changes made outside Terraform management that relate to tags, as an example
    ignore_changes = [
      tags,
    ]
  }
}



Terraform Lifecycle Meta Argument (cont.)

resource "aws_instance" "example" {
  # ...
  lifecycle {
    # don't delete the resource for any reason: this causes Terraform to reject with an
    # error any plan that would destroy the infrastructure object associated with the resource
    prevent_destroy = true
  }
}



Hands-on Labs (HoLs)

Life Cycle Meta Argument



Variables Validation



Validation Inside Variable Block

In addition to type constraints in a variable block, a module author can specify arbitrary custom validation rules for a particular variable using a validation block nested within the corresponding variable block.

resource "aws_instance" "myec2" {
  ami           = var.server_ami
  instance_type = "t2.micro"
  tags = {
    Name = "the AMI of this Server is : ${var.server_ami}"
  }
}

variable "server_ami" {
  type        = string
  description = "The id of the machine image (AMI) to use for the EC2 Instance."
  validation {
    condition     = length(var.server_ami) > 4 && substr(var.server_ami, 0, 4) == "ami-"
    error_message = "The image_id value must be a valid AMI id, starting with \"ami-\"."
  }
}



Hands-on Labs (HoLs)

Variable Validation



Terraform Best Practices



Terraform Best Practices

• Host Terraform code in a Git repository.
• Use .gitignore to exclude Terraform state files, logs, and provider executables.
• For clean code, use an IDE like VS Code to format your code, and use Terraform commands like fmt and validate to ensure your code is well formatted and has valid syntax.
• Avoid hard-coding resources - always use variables.
• Follow a naming convention for your resources.
• Don’t Repeat Yourself (DRY) - reuse code by using modules and calling them in your root module.
• Generate a README for each module, with input and output variables.
• Manage your state file on secure remote storage like S3.
• Lock your remote state file.


Terraform Best Practices - (cont.)

• Take advantage of Terraform functions.
• Take advantage of Terraform workspaces.
• Avoid storing credentials in your Terraform code.
• Automate your deployment with a CI/CD pipeline.
• Constrain your Terraform and provider versions.
• Use terraform import.
• Run Terraform commands with a var-file.
• Avoid changing your state file manually; only use Terraform commands to do so.
• Back up your state files.


Exam Information



Terraform Associate Certification (003) Notes

• The Terraform Associate certification (003) is for Cloud Engineers specializing in operations, IT, or development who know the basic concepts and skills associated with open source HashiCorp Terraform.
• The Terraform Associate exam includes multiple-choice, multi-select, true-or-false, and fill-in-the-blank questions (not many of the latter).
• Online proctored exam - you’ll be expected to take this proctored exam in a quiet space with a webcam enabled, to ensure you are following the exam guidelines and not receiving additional assistance.
• 57 questions
• One hour
• Costs $70.50 plus taxes and is available to take through PSI.

https://www.hashicorp.com/certification/terraform-associate



Preparing for the Exam

• The knowledge, examples, quizzes, and projects in this course are enough to pass the exam, when mastered.
• Additional tutorials are available:
https://developer.hashicorp.com/terraform/tutorials/certification/associate-review
• As needed, you can also read more on some topics that you need to know more about
here:
https://developer.hashicorp.com/terraform/tutorials/certification/associate-study
• Recommended – Use additional practice questions.



How To Book The Exam



End of Course

