Terraform Projects _ DevOps Shack
DevOps Shack
Top 5 Terraform Projects to Master Cloud
Infrastructure Automation
Table of Contents
Introduction
Project 1: Deploy a High Availability (HA) Web Application on AWS
1. Overview of the Project
2. Implementation Steps
o Set Up Terraform
o Create a VPC
o Create Public and Private Subnets
o Set Up Internet Gateway and Route Table
o Launch EC2 Instances
o Configure Application Load Balancer
o Set Up Auto-Scaling Group
o Apply Terraform Configuration
3. Outcome
Project 2: Deploy a Secure Multi-Tier Web Application on AWS
1. Overview of the Project
2. Implementation Steps
o Set Up Terraform and the AWS Provider
o Create the VPC and Subnets
o Configure Security Groups for Tiers
o Launch EC2 Instances for Web, Application, and Database Tiers
o Set Up a Bastion Host for Secure Access
o Apply Terraform Configuration
3. Outcome
Project 3: Automate Kubernetes Cluster Deployment on AWS Using Terraform
1. Overview of the Project
2. Implementation Steps
o Set Up Terraform and the AWS Provider
o Create the VPC and Networking
o Create the EKS Cluster and IAM Roles
o Configure kubectl and Deploy a Sample Application
3. Outcome
Project 4: Automate the Deployment of a Complete CI/CD Pipeline on AWS Using Terraform
1. Overview of the Project
2. Implementation Steps
o Set Up Terraform and Create an S3 Bucket
o Create a CodeCommit Repository and a CodeBuild Project
o Configure IAM Roles for CodeBuild and CodePipeline
o Create a Fully Automated CodePipeline
o Apply Terraform Configuration
o Test the CI/CD Pipeline
3. Outcome
Project 5: Automate the Deployment of a Serverless Application Using AWS Lambda and Terraform
1. Overview of the Project
2. Implementation Steps
o Set Up Terraform and Supporting Resources
o Create a DynamoDB Table and Lambda Function
o Configure API Gateway
3. Outcome
Conclusion
Introduction
Terraform, developed by HashiCorp, is one of the most powerful and widely
used tools in the world of Infrastructure as Code (IaC). It allows engineers,
developers, and cloud architects to define and provision infrastructure
resources in a consistent, repeatable, and automated manner. With Terraform,
infrastructure management becomes simpler, more scalable, and free from the
pitfalls of manual configuration, making it a cornerstone of cloud automation
and DevOps practices.
In today’s dynamic cloud-driven environment, mastering Terraform has become
a must-have skill for professionals. The ability to write declarative configuration
files and manage infrastructure across major cloud providers like AWS, Azure,
and Google Cloud is invaluable for anyone aiming to enhance their expertise in
cloud computing and DevOps.
This guide introduces five practical and real-world projects that showcase
Terraform's capabilities and highlight how it can be used to automate different
aspects of cloud infrastructure. Each project has been carefully designed to
help you gain hands-on experience, from deploying high-availability web
applications to setting up CI/CD pipelines and building serverless applications.
By working through these projects, you'll learn how to:
• Deploy secure, scalable, and fault-tolerant web applications.
• Automate Kubernetes cluster provisioning and management.
• Build and automate a complete CI/CD pipeline.
• Leverage serverless technologies like AWS Lambda and DynamoDB.
• Integrate infrastructure automation seamlessly into your workflow.
Whether you are a beginner looking to kickstart your journey in cloud
automation or an experienced professional wanting to deepen your expertise,
this document serves as a practical, hands-on resource. Each project comes
with detailed implementation steps, helping you understand the core concepts
while applying them to real-world scenarios. So, let’s dive into the world of
Terraform and explore how it can transform the way you manage and automate
your infrastructure!
Project 1: Deploy a High Availability (HA) Web Application
on AWS
This project demonstrates how to deploy a highly available web application on
AWS using Terraform. The infrastructure includes a Virtual Private Cloud (VPC),
subnets, an internet gateway, a route table, EC2 instances, an application load
balancer (ALB), and an auto-scaling group. The goal is to ensure fault tolerance
and scalability for the web application.
Implementation Steps
Step 1: Install and Configure Terraform
1. Install Terraform: Download Terraform from the official website and
install it on your local machine.
2. Set up AWS CLI: Configure the AWS CLI with your credentials using the
following command:
aws configure
Provide your AWS Access Key, Secret Key, default region (e.g., us-east-1), and
default output format.
3. Create a Working Directory: Create a folder for your project, e.g.,
terraform-ha-web-app.
Step 2: Initialize Terraform Project
1. Inside the project folder, create a file named main.tf and add the AWS
provider configuration:
provider "aws" {
region = "us-east-1"
}
2. Run the following command to initialize Terraform and download the
AWS provider plugin:
terraform init
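By default, Terraform stores state in a local terraform.tfstate file. If you plan to share this project with teammates, you can optionally add a remote backend before running terraform init; the following is a sketch, and the bucket name is a placeholder for an S3 bucket that must already exist in your account:

```hcl
terraform {
  backend "s3" {
    # Placeholder bucket name; create this bucket first and choose a unique name
    bucket = "my-terraform-state-bucket"
    key    = "terraform-ha-web-app/terraform.tfstate"
    region = "us-east-1"
  }
}
```

This keeps the state file in S3 so that multiple operators see a consistent view of the infrastructure.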
Step 3: Create a VPC
A Virtual Private Cloud (VPC) isolates your resources and provides networking
infrastructure.
1. Define a VPC in a new file called vpc.tf:
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "terraform-vpc"
}
}
2. This configuration creates a VPC with the CIDR block 10.0.0.0/16.
Step 4: Create Public and Private Subnets
Subnets divide your VPC into smaller networks. Public subnets allow access to
the internet, while private subnets do not.
1. Add the subnet configurations to vpc.tf:
resource "aws_subnet" "public" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = true
availability_zone = "us-east-1a"
tags = {
Name = "public-subnet"
}
}
resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
  tags = {
    Name = "private-subnet"
  }
}
2. The public subnet is configured to assign public IPs to instances
automatically.
Step 5: Add an Internet Gateway and Route Table
An internet gateway allows internet traffic to flow to resources in the public
subnet.
1. In vpc.tf, add the following resources:
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
  tags = {
    Name = "terraform-igw"
  }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
  tags = {
    Name = "public-route-table"
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}
2. This configuration sets up internet access for resources in the public
subnet.
Step 6: Launch EC2 Instances
Create web server instances to host your application.
1. Create a new file ec2.tf and define an EC2 instance:
resource "aws_instance" "web" {
ami = "ami-0c55b159cbfafe1f0" # Amazon Linux 2 AMI
instance_type = "t2.micro"
subnet_id = aws_subnet.public.id
key_name = "your-key-pair"
tags = {
Name = "web-server"
}
}
2. Ensure you have an existing key pair in your AWS account for SSH access.
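To avoid hunting for the instance's address in the AWS console after deployment, you can optionally add an output for it; this is a small sketch referencing the aws_instance.web resource defined above:

```hcl
output "web_public_ip" {
  description = "Public IP of the web server, for SSH access"
  value       = aws_instance.web.public_ip
}
```

Terraform prints this value at the end of terraform apply, and you can retrieve it again at any time with terraform output web_public_ip.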
Step 7: Set Up an Application Load Balancer
An ALB distributes incoming traffic across multiple instances for high
availability.
1. Create a new file alb.tf and define the ALB:
resource "aws_lb" "app" {
  name               = "terraform-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_sg.id]
  # An internet-facing ALB needs public subnets in at least two Availability
  # Zones; add a second public subnet for production use.
  subnets = [aws_subnet.public.id]
  tags = {
    Name = "terraform-alb"
  }
}

2. Add a security group that allows HTTP traffic to reach the ALB:
resource "aws_security_group" "alb_sg" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
Step 8: Set Up an Auto-Scaling Group
An auto-scaling group launches and replaces instances automatically to match demand.
1. Create a new file asg.tf and define a launch configuration and an auto-scaling group:
resource "aws_launch_configuration" "web" {
  name_prefix   = "web-lc-"
  image_id      = "ami-0c55b159cbfafe1f0" # Amazon Linux 2 AMI
  instance_type = "t2.micro"
  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web" {
  launch_configuration = aws_launch_configuration.web.id
  vpc_zone_identifier  = [aws_subnet.public.id]
  min_size             = 1
  max_size             = 3
  desired_capacity     = 2
  tag {
    key                 = "Name"
    value               = "web-instance"
    propagate_at_launch = true
  }
}
2. This configuration ensures the application can handle varying traffic
loads.
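A fixed desired capacity only goes so far; to scale in response to load you can attach a scaling policy to the group. The sketch below assumes the auto-scaling group resource is named web, and uses target tracking to keep average CPU around 50%:

```hcl
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.web.name
  policy_type            = "TargetTrackingScaling"
  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50
  }
}
```

With target tracking, AWS computes the scaling adjustments for you, which is usually simpler than managing separate scale-out and scale-in alarms.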
Step 9: Apply the Terraform Configuration
1. Initialize Terraform again:
terraform init
2. Validate the configuration to ensure there are no syntax errors:
terraform validate
3. Preview the infrastructure changes:
terraform plan
4. Deploy the infrastructure:
terraform apply
5. Confirm the deployment when prompted.
Outcome
• Infrastructure Components:
o A VPC with public and private subnets.
o An internet gateway for public subnet access.
o EC2 instances running in an auto-scaling group.
o An application load balancer routing traffic to the instances.
• Access:
o The web application is accessible via the ALB's DNS name.
o Traffic is distributed across instances to ensure high availability.
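To surface the ALB's DNS name directly from Terraform rather than the console, an output referencing the aws_lb.app resource can be added:

```hcl
output "alb_dns_name" {
  description = "Public DNS name of the application load balancer"
  value       = aws_lb.app.dns_name
}
```

After terraform apply, visiting http://<alb_dns_name> in a browser should reach the web application.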
Project 2: Deploy a Secure Multi-Tier Web Application
on AWS
This project focuses on deploying a secure multi-tier web application
architecture on AWS. The architecture consists of a public-facing web tier, an
internal application tier, and a database tier hosted in private subnets. A
bastion host is used for secure access to private resources.
Implementation Steps
Step 1: Set Up Terraform
1. Install Terraform on your local machine.
2. Create a directory for your project, e.g., terraform-multi-tier-app.
Step 2: Define the AWS Provider
1. Create a main.tf file and configure the AWS provider:
provider "aws" {
region = "us-east-1"
}
2. Run the initialization command:
terraform init
Step 3: Create the VPC and Subnets
1. Create a vpc.tf file and define a VPC (for example with CIDR 10.0.0.0/16) containing a public subnet for the web tier and private subnets for the application and database tiers, following the same pattern as Project 1. For example, the database subnet:
resource "aws_subnet" "db" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "us-east-1c"
  tags = {
    Name = "db-subnet"
  }
}
2. Add an internet gateway and route table for the public subnet, and associate them:
resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}
Step 4: Configure Security Groups for Tiers
Security groups restrict traffic so that each tier accepts connections only from the tier above it.
1. Create a security-groups.tf file. The web tier accepts HTTP traffic from the internet:
resource "aws_security_group" "web_sg" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "web-sg"
  }
}
2. The application tier accepts traffic only from the web tier:
resource "aws_security_group" "app_sg" {
vpc_id = aws_vpc.main.id
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
security_groups = [aws_security_group.web_sg.id]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "app-sg"
}
}
3. The database tier accepts MySQL traffic (port 3306) only from the application tier:
resource "aws_security_group" "db_sg" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.app_sg.id]
  }
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "db-sg"
}
}
Step 7: Launch EC2 Instances for the Web, Application, and Database Tiers
1. Create an instances.tf file and define an instance in the appropriate subnet for each tier, attaching the matching security group. For example, the web tier instance:
resource "aws_instance" "web" {
  ami                    = "ami-0c55b159cbfafe1f0" # Amazon Linux 2 AMI
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public.id
  vpc_security_group_ids = [aws_security_group.web_sg.id]
  tags = {
    Name = "web-instance"
  }
}
Step 8: Configure Bastion Host
Secure access to private instances using a bastion host.
1. Add a bastion instance to instances.tf:
resource "aws_instance" "bastion" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
subnet_id = aws_subnet.public.id
key_name = "your-key-pair"
tags = {
Name = "bastion-host"
}
}
Outcome
• Infrastructure Components:
o A VPC with public and private subnets.
o A secure web, app, and database tier.
o A bastion host for secure access to private instances.
• Access:
o Web tier accessible via the public subnet.
o Secure communication between tiers using security groups.
o SSH access to private instances through the bastion host.
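To find the bastion's address after terraform apply, an output like the following can help; it references the aws_instance.bastion resource defined above. From there, private instances can be reached in one hop with SSH's ProxyJump option (ssh -J):

```hcl
output "bastion_public_ip" {
  description = "Public IP of the bastion host"
  value       = aws_instance.bastion.public_ip
}
```

Using ProxyJump avoids copying your private key onto the bastion itself, which is the safer pattern for bastion-based access.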
Project 3: Automate Kubernetes Cluster Deployment
on AWS Using Terraform
This project focuses on automating the deployment of a Kubernetes cluster on
AWS using Amazon Elastic Kubernetes Service (EKS) with Terraform. It sets up a
managed Kubernetes control plane, worker nodes, and networking
infrastructure to host containerized applications.
Implementation Steps
Step 1: Set Up Terraform
1. Install Terraform: Ensure Terraform is installed on your system.
2. Create a Project Directory: Create a directory, e.g., terraform-eks-cluster,
and navigate into it.
Step 2: Configure the AWS Provider
1. Create a file named main.tf and configure the AWS provider:
provider "aws" {
region = "us-east-1"
}
2. Initialize the project:
terraform init
Step 3: Create the VPC
1. Create a vpc.tf file and define a VPC with DNS support enabled, which EKS requires:
resource "aws_vpc" "eks_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "eks-vpc"
  }
}
2. Add subnets for the VPC:
resource "aws_subnet" "public_subnet" {
vpc_id = aws_vpc.eks_vpc.id
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = true
availability_zone = "us-east-1a"
tags = {
Name = "eks-public-subnet"
}
}
resource "aws_subnet" "private_subnet" {
  vpc_id            = aws_vpc.eks_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
  tags = {
    Name = "eks-private-subnet"
  }
}
3. Add an internet gateway for the public subnet:
resource "aws_internet_gateway" "eks_igw" {
  vpc_id = aws_vpc.eks_vpc.id
  tags = {
    Name = "eks-igw"
  }
}
Step 4: Create the EKS Cluster
1. Create an eks.tf file and define the cluster using the community EKS module:
module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "eks-cluster"
subnets = [aws_subnet.public_subnet.id, aws_subnet.private_subnet.id]
vpc_id = aws_vpc.eks_vpc.id
node_groups = {
eks_nodes = {
desired_capacity = 2
max_capacity = 3
min_capacity = 1
instance_type = "t3.medium"
}
}
tags = {
Name = "eks-cluster"
}
}
2. This uses a Terraform EKS module to simplify the deployment.
Step 5: Create an IAM Role for EKS
1. Create an iam.tf file and define the role that the EKS control plane assumes:
resource "aws_iam_role" "eks_role" {
  name = "eks-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "eks.amazonaws.com"
}
}]
})
tags = {
Name = "eks-role"
}
}
Step 7: Configure kubectl
1. Update your kubeconfig file to connect to the EKS cluster:
aws eks --region us-east-1 update-kubeconfig --name eks-cluster
2. Verify the cluster connection:
kubectl get nodes
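To see the API server endpoint without opening the console, you can optionally expose it as a Terraform output. This sketch assumes the module block is named eks; output names vary between versions of the EKS module, so check the version you pinned:

```hcl
output "cluster_endpoint" {
  description = "Endpoint of the EKS control plane"
  value       = module.eks.cluster_endpoint
}
```

This is the same endpoint that update-kubeconfig writes into your kubeconfig file.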
Step 8: Deploy a Sample Application
1. Create a manifest app-deployment.yaml for a simple NGINX deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
2. Apply the deployment:
kubectl apply -f app-deployment.yaml
3. Expose the application using a service:
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
4. Apply the service:
kubectl apply -f service.yaml
5. Access the application via the Load Balancer URL.
Outcome
• Infrastructure Components:
o A VPC with public and private subnets.
o An EKS cluster with a managed control plane and worker nodes.
o Networking and security configurations.
o A sample application deployed on Kubernetes.
• Access:
o The sample application is accessible through the Load Balancer's
DNS name.
Project 4: Automate the Deployment of a Complete
CI/CD Pipeline on AWS Using Terraform
This project focuses on building a fully automated CI/CD pipeline on AWS using
Terraform. The pipeline integrates AWS CodePipeline, CodeBuild, CodeCommit,
and S3 for hosting and deploying a static website.
Implementation Steps
Step 1: Set Up Terraform
1. Install Terraform and create a new project directory, e.g., terraform-cicd-pipeline.
2. Configure the AWS provider in main.tf:
provider "aws" {
region = "us-east-1"
}
Step 2: Create an S3 Bucket for the Website
1. Create an s3.tf file and define a bucket configured for static website hosting. The bucket name below is an example; S3 bucket names must be globally unique:
resource "aws_s3_bucket" "website_bucket" {
  bucket = "cicd-demo-website-bucket"
  website {
    index_document = "index.html"
  }
  tags = {
    Name = "CICD Website Bucket"
  }
}
2. Add a policy to make the bucket content publicly accessible:
resource "aws_s3_bucket_policy" "website_bucket_policy" {
bucket = aws_s3_bucket.website_bucket.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Principal = "*"
Action = "s3:GetObject"
Resource = "${aws_s3_bucket.website_bucket.arn}/*"
}]
})
}
Step 3: Create a CodeCommit Repository
1. Create a codecommit.tf file and define the source repository:
resource "aws_codecommit_repository" "source_repo" {
  repository_name = "cicd-demo-repo"
  tags = {
    Name = "CI/CD Demo Repo"
  }
}
2. Initialize a local Git repository and push code to the CodeCommit
repository:
git init
git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/cicd-demo-repo
git add .
git commit -m "Initial commit"
git push -u origin main
Step 4: Create a CodeBuild Project
1. Create a codebuild.tf file and define the build project. The image shown is one of the standard CodeBuild images:
resource "aws_codebuild_project" "build_project" {
  name         = "cicd-build-project"
  service_role = aws_iam_role.codebuild_role.arn
  artifacts {
    type = "CODEPIPELINE"
  }
  source {
    type = "CODEPIPELINE"
  }
  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image        = "aws/codebuild/standard:5.0"
    type         = "LINUX_CONTAINER"
    environment_variable {
      name  = "S3_BUCKET"
      value = aws_s3_bucket.website_bucket.bucket
    }
  }
tags = {
Name = "Build Project"
}
}
Step 5: Configure IAM Roles for CodeBuild and CodePipeline
1. In an iam.tf file, define the role that CodeBuild assumes:
resource "aws_iam_role" "codebuild_role" {
  name = "codebuild-role"
  assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Principal = {
Service = "codebuild.amazonaws.com"
}
Action = "sts:AssumeRole"
}]
})
tags = {
Name = "CodeBuild Role"
}
}
2. Define the role that CodePipeline assumes:
resource "aws_iam_role" "codepipeline_role" {
  name = "codepipeline-role"
  assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Principal = {
Service = "codepipeline.amazonaws.com"
}
Action = "sts:AssumeRole"
}]
})
tags = {
Name = "CodePipeline Role"
}
}
Step 6: Create a Fully Automated CodePipeline
1. Create a codepipeline.tf file and define a pipeline with Source, Build, and Deploy stages:
resource "aws_codepipeline" "pipeline" {
  name     = "cicd-pipeline"
  role_arn = aws_iam_role.codepipeline_role.arn
  artifact_store {
location = aws_s3_bucket.website_bucket.id
type = "S3"
}
stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "AWS"
provider = "CodeCommit"
version = "1"
output_artifacts = ["source_output"]
configuration = {
RepositoryName = aws_codecommit_repository.source_repo.repository_name
BranchName = "main"
}
}
}
stage {
name = "Build"
action {
name = "Build"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
version = "1"
input_artifacts = ["source_output"]
output_artifacts = ["build_output"]
configuration = {
ProjectName = aws_codebuild_project.build_project.name
}
}
}
stage {
name = "Deploy"
action {
name = "Deploy"
category = "Deploy"
owner = "AWS"
provider = "S3"
version = "1"
input_artifacts = ["build_output"]
configuration = {
BucketName = aws_s3_bucket.website_bucket.bucket
Extract = "true"
}
}
}
tags = {
Name = "CI/CD Pipeline"
}
}
Step 7: Apply the Terraform Configuration
1. Validate the configuration:
terraform validate
2. Preview the infrastructure changes:
terraform plan
3. Apply the configuration:
terraform apply
4. Confirm the deployment when prompted.
Outcome
• Infrastructure Components:
o An S3 bucket for hosting a static website.
o A CodeCommit repository for source code.
o A CodeBuild project for building the application.
o A CodePipeline to automate CI/CD.
• Access:
o The static website is deployed to the S3 bucket and publicly
accessible.
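The website URL can be exposed as a Terraform output rather than looked up in the console; this sketch references the aws_s3_bucket.website_bucket resource defined earlier:

```hcl
output "website_url" {
  description = "Public endpoint of the static website"
  value       = aws_s3_bucket.website_bucket.website_endpoint
}
```

After a successful pipeline run, opening this endpoint in a browser should show the deployed index.html.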
Project 5: Automate the Deployment of a Serverless
Application Using AWS Lambda and Terraform
This project focuses on deploying a serverless application using AWS Lambda,
API Gateway, and DynamoDB. Terraform automates the setup, including
creating a Lambda function, configuring API Gateway to expose the function,
and integrating DynamoDB as the database layer.
Implementation Steps
Step 1: Set Up Terraform
1. Install Terraform on your system.
2. Create a directory, e.g., terraform-serverless-app, for the project.
3. Initialize Terraform by creating a main.tf file and adding the AWS
provider:
provider "aws" {
region = "us-east-1"
}
4. Run:
terraform init
Step 2: Create an S3 Bucket for the Deployment Package
1. Create an s3.tf file and define a bucket to hold the packaged Lambda code. The bucket name below is an example and must be globally unique:
resource "aws_s3_bucket" "lambda_bucket" {
  bucket = "serverless-app-lambda-bucket"
  tags = {
    Name = "Lambda Deployment Bucket"
  }
}
Step 3: Create a DynamoDB Table
1. Create a dynamodb.tf file and define the table, here with a simple string partition key named id:
resource "aws_dynamodb_table" "app_table" {
  name         = "serverless-app-table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"
  attribute {
    name = "id"
    type = "S"
  }
  tags = {
    Name = "Serverless App Table"
  }
}
Step 4: Write the Lambda Function
1. Create a file app.py that reads the table name from the environment and queries DynamoDB:
import os
import boto3

dynamodb = boto3.resource("dynamodb")
table_name = os.environ["DYNAMODB_TABLE"]
table = dynamodb.Table(table_name)

def lambda_handler(event, context):
    # Return all items from the table
    response = table.scan()
    return {"statusCode": 200, "body": str(response.get("Items", []))}
2. Package the function as app.zip and upload it to the deployment bucket.
Step 5: Define the Lambda Function in Terraform
1. Create a lambda.tf file; the s3_key must match the name of the uploaded package:
resource "aws_lambda_function" "app_lambda" {
  function_name = "serverless-app-function"
  runtime       = "python3.9"
  s3_bucket     = aws_s3_bucket.lambda_bucket.id
  s3_key        = "app.zip"
  handler       = "app.lambda_handler"
role = aws_iam_role.lambda_execution_role.arn
environment {
variables = {
DYNAMODB_TABLE = aws_dynamodb_table.app_table.name
}
}
tags = {
Name = "Serverless Lambda Function"
}
}
2. Define the Lambda execution role:
resource "aws_iam_role" "lambda_execution_role" {
  name = "lambda-execution-role"
  assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Principal = {
Service = "lambda.amazonaws.com"
}
Action = "sts:AssumeRole"
}]
})
tags = {
Name = "Lambda Execution Role"
}
}
Step 6: Configure API Gateway
1. Create an api.tf file and define the REST API:
resource "aws_api_gateway_rest_api" "app_api" {
  name = "serverless-app-api"
}
resource "aws_api_gateway_resource" "app_resource" {
rest_api_id = aws_api_gateway_rest_api.app_api.id
parent_id = aws_api_gateway_rest_api.app_api.root_resource_id
path_part = "items"
}
2. Grant API Gateway permission to invoke the Lambda function:
resource "aws_lambda_permission" "api_gateway" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.app_lambda.function_name
principal = "apigateway.amazonaws.com"
source_arn = "${aws_api_gateway_rest_api.app_api.execution_arn}/*/*"
}
Outcome
• Infrastructure Components:
o A DynamoDB table for data storage.
o A Lambda function to handle HTTP requests and interact with
DynamoDB.
o An API Gateway to expose the Lambda function as a RESTful API.
o An S3 bucket to store the Lambda deployment package.
• Access:
o The application is accessible via the API Gateway URL.
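The invoke URL comes from an API Gateway deployment resource. The sketch below is hypothetical: it assumes you have also defined an aws_api_gateway_deployment named app_deployment (with a stage such as prod), which is not shown in the configuration above:

```hcl
# Assumes a deployment resource like:
#   resource "aws_api_gateway_deployment" "app_deployment" { ... }
output "api_invoke_url" {
  description = "Base URL for invoking the REST API"
  value       = aws_api_gateway_deployment.app_deployment.invoke_url
}
```

Appending the resource path (for example /items) to this base URL gives the endpoint your clients call.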
Conclusion
Terraform has redefined how we approach infrastructure management by
enabling a declarative and automated way of provisioning cloud resources. Its
ability to support multi-cloud environments, simplify complex setups, and
maintain consistency across deployments makes it an indispensable tool for
developers, DevOps engineers, and cloud architects.
In this guide, we explored five practical and impactful projects to master
Terraform:
1. Deploying a high-availability web application.
2. Building a secure multi-tier architecture.
3. Automating Kubernetes cluster provisioning.
4. Creating a complete CI/CD pipeline.
5. Implementing a serverless application with AWS Lambda and
DynamoDB.
These projects covered a wide range of use cases, showcasing how Terraform
can be applied to automate infrastructure, enhance scalability, and simplify
maintenance. Each project was designed to help you gain hands-on experience
with real-world scenarios, providing you with the knowledge to confidently
work on cloud-based infrastructures.
By following the step-by-step implementation of these projects, you’ve not
only learned how to build different components but also gained insights into
Terraform best practices, such as modular design, role-based access, and
secure resource management. These skills are essential for scaling applications,
reducing downtime, and ensuring that your infrastructure can adapt to
changing business needs.
Whether you’re starting your journey with Terraform or looking to refine your
existing skills, these projects offer a solid foundation for mastering
infrastructure as code. Terraform’s flexibility and powerful capabilities make it a
key player in the DevOps ecosystem, empowering organizations to move faster
and more efficiently in today’s cloud-first world.
As you continue to explore Terraform, remember that the possibilities are
endless. You can build on these projects, customize them for your unique
requirements, and expand your expertise to include more advanced topics like
state management, CI/CD pipelines for Terraform itself, and integrations with
third-party tools.
With Terraform in your toolkit, you’re well-equipped to tackle the challenges of
modern infrastructure management. Keep experimenting, learning, and
building, and you’ll soon become a pro at automating infrastructure with
Terraform!