🚀 Terraform Mastery Series - Part 2: From Local to EC2 - Automate Your First Instance with Terraform
Hey Guys 👋
Following up on Part 1 of my Terraform Mastery Series, where we explored the fundamentals and deployed our first S3 bucket, I’m back with Part 2 — and this one’s exciting! 🔥
In this part, we’ll move from concept to real-world application — provisioning your very first EC2 instance using Terraform, step-by-step. Whether you’re a DevOps beginner or just new to IaC, this guide is built to make you confident and deployment-ready.
🔨 What You’ll Learn in This Part:
✅ Structuring a Terraform project for EC2
✅ Configuring and securing EC2 with Key Pairs
✅ Managing AMIs, Instance Types & Regions
✅ Terraform Variables & Outputs blocks
✅ Full init → plan → apply → destroy cycle
✅ A clean, modular workflow you can reuse anytime
⚙️ Let’s Get to Work
Step 1: Set Up Your Project Directory
Start by organizing your Terraform project:
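A layout like this keeps each concern in its own file (the exact names are a convention, not a requirement):

```
terraform-ec2/
├── terraform.tf    # required providers and versions
├── provider.tf     # AWS provider configuration
├── ec2.tf          # key pair, networking, and the EC2 instance
├── variables.tf    # input variables (added later in this guide)
└── output.tf       # output values (added later in this guide)
```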
🔧 terraform.tf: Define Required Providers
This file ensures consistent provider versions across environments. Create a terraform.tf file with the following content:
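A minimal sketch; pin whatever provider version is current when you read this (`~> 5.0` here is just an example):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # example pin; copy the current one from the registry
    }
  }
}
```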
This tells Terraform to use the AWS provider from HashiCorp and pins the version for stability and compatibility.
You can copy this directly from the Terraform AWS Provider Documentation
Step 2: Provider Configuration
Before we begin provisioning resources, we need to define the provider. Terraform supports many providers like AWS, Azure, GCP, and local systems. In our case, we are working with AWS.
Create a file named provider.tf and define the AWS provider block. It looks like this:
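For example (the region is your choice; us-east-1 is just a placeholder):

```hcl
provider "aws" {
  region = "us-east-1" # deploy into the region you prefer
}
```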
Ensure the AWS CLI is installed and configured. You'll need AWS access keys now and an SSH key pair in the next step.
How to configure AWS access keys: Read Here
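If the CLI isn't configured yet, it's a single interactive command:

```bash
aws configure
# Prompts for: AWS Access Key ID, AWS Secret Access Key,
# default region name, and default output format
```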
Step 3: Create and Configure SSH Key Pair
Terraform needs an SSH key pair to allow secure access to the EC2 instance. Run the following command in your terminal after configuring AWS:
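For example, using OpenSSH's ssh-keygen:

```bash
ssh-keygen
# When prompted for the file name, enter: terra-key
```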
Provide a name when prompted, e.g. terra-key. This produces two files:
terra-key.pub → the public key (to be used in Terraform)
terra-key → the private key (used for SSH login)
You can reference the public key in your Terraform script using the file() function.
Now we'll create ec2.tf and reference the public key we just generated:
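A sketch of the key pair resource, assuming terra-key.pub sits in the project directory:

```hcl
resource "aws_key_pair" "terra_key" {
  key_name   = "terra-key"
  public_key = file("terra-key.pub") # reads the public key from disk
}
```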
Step 4: Create Networking Components (VPC & Security Group)
EC2 requires a network setup. We’ll create:
VPC
Security Group with inbound and outbound rules
Inside ec2.tf, create the VPC:
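One minimal approach, assumed in this sketch, is to bring your account's default VPC under Terraform management with the aws_default_vpc resource (creating a brand-new VPC would also require subnets and routing, which this part skips):

```hcl
resource "aws_default_vpc" "default" {
  # adopts the existing default VPC so we can reference its ID
  tags = {
    Name = "default-vpc"
  }
}
```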
Let's pause and understand what resources are: in Terraform, a resource block describes a single piece of infrastructure in AWS (a VPC, a security group, an EC2 instance) that Terraform will create and manage for you.
Example of Security Group block:
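A sketch, with illustrative names and rules; opening SSH to 0.0.0.0/0 is fine for a demo, but narrow it in real projects:

```hcl
resource "aws_security_group" "terra_sg" {
  name        = "terra-sg"
  description = "Allow SSH and HTTP"
  vpc_id      = aws_default_vpc.default.id # the VPC adopted above

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # demo only; restrict in production
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "All outbound"
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```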
We have to provide a name, a description, and, most importantly, the VPC ID.
Just like we fill values in the AWS Console UI, we define them here in the security group resource block.
Note: the syntax for every resource is provided in the official Terraform documentation under each provider (for example: Security Group).
Step 5: Create EC2 Instance
Now let's provision an EC2 instance using the key pair and security group, in the same file, ec2.tf:
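A sketch; the AMI ID below is a placeholder, so substitute a valid one for your region (you can copy it from the AWS Console's launch wizard):

```hcl
resource "aws_instance" "terra_ec2" {
  ami                    = "ami-0abcdef1234567890" # placeholder: use a real AMI ID for your region
  instance_type          = "t2.micro"
  key_name               = aws_key_pair.terra_key.key_name
  vpc_security_group_ids = [aws_security_group.terra_sg.id]

  root_block_device {
    volume_size = 8     # GiB
    volume_type = "gp3"
  }

  tags = {
    Name = "terra-ec2"
  }
}
```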
📄 The full ec2.tf file will look like this:
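Putting the pieces together, a consolidated sketch under the same assumptions (placeholder AMI, default VPC):

```hcl
# ec2.tf -- key pair, networking, and the instance in one file

resource "aws_key_pair" "terra_key" {
  key_name   = "terra-key"
  public_key = file("terra-key.pub")
}

resource "aws_default_vpc" "default" {
  tags = {
    Name = "default-vpc"
  }
}

resource "aws_security_group" "terra_sg" {
  name        = "terra-sg"
  description = "Allow SSH and HTTP"
  vpc_id      = aws_default_vpc.default.id

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "All outbound"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "terra_ec2" {
  ami                    = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type          = "t2.micro"
  key_name               = aws_key_pair.terra_key.key_name
  vpc_security_group_ids = [aws_security_group.terra_sg.id]

  root_block_device {
    volume_size = 8
    volume_type = "gp3"
  }

  tags = {
    Name = "terra-ec2"
  }
}
```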
Now, finally, create the EC2 instance by running the following Terraform commands:
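```bash
terraform init    # download the AWS provider
terraform plan    # preview what will be created
terraform apply   # create the resources (type "yes" to confirm)
```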
✅ Once applied, the EC2 instance will be created and visible on your AWS dashboard.
Step 6: Connect to EC2
After launching an EC2 instance, click “Connect” in the AWS console to get the SSH command, then run it in your terminal with your private key (terra-key in our case) to access the server securely.
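For example (the user name depends on the AMI; ubuntu is an assumption for an Ubuntu image):

```bash
ssh -i terra-key ubuntu@<public-ip>
# replace <public-ip> with your instance's address;
# use ec2-user instead of ubuntu for Amazon Linux AMIs
```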
Congratulations! You have now automated EC2 provisioning from your local machine using Terraform.
🚨 Future Issue: What if I have to make changes in EC2 configurations (type, storage size, AMI ID, etc)?
Let’s say we have written the ec2.tf file and everything is working perfectly. But what happens if we’re asked to make changes like:
A different instance type (t2.small instead of t2.micro)
More storage
A new AMI
Another key pair
We’d be stuck editing every single value manually. Not only is it time-consuming, but it’s also very error-prone and hard to manage in big projects.
🔧 Solution: Use variables instead of hardcoding values
Instead of hardcoding values like this:
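Continuing the earlier ec2.tf sketch:

```hcl
resource "aws_instance" "terra_ec2" {
  ami           = "ami-0abcdef1234567890" # hardcoded placeholder
  instance_type = "t2.micro"              # hardcoded
  # ...
}
```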
We replace them with variables, like this:
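The variable names here (ec2_ami_id, ec2_instance_type) are our own convention:

```hcl
resource "aws_instance" "terra_ec2" {
  ami           = var.ec2_ami_id        # comes from variables.tf
  instance_type = var.ec2_instance_type # change it in one place
  # ...
}
```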
📁 Create a variables.tf file:
In this file, we define all the variables we’ll use. Example:
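A sketch of variables.tf under those assumptions; adjust the defaults to your needs:

```hcl
variable "ec2_ami_id" {
  description = "AMI ID for the EC2 instance"
  type        = string
  default     = "ami-0abcdef1234567890" # placeholder: replace with a valid AMI for your region
}

variable "ec2_instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}

variable "ec2_root_volume_size" {
  description = "Root volume size in GiB"
  type        = number
  default     = 8
}
```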
Now whenever we want changes, we just edit this one file instead of touching the main code. This keeps the code clean and makes automation easier.
📌 Result
✅ Clean code
✅ Easy to scale
✅ Fewer chances of mistakes
✅ Real-world best practice
📡 Realization: After Launching the EC2, We Still Need Info Like the IP Address
When we launched our EC2 instance successfully using Terraform, we celebrated! 🎉 But soon after, we realized…
“Wait… how do I even connect to this EC2? What’s its IP address?”
The only way to check was by going back to the AWS Console, searching for the instance, and copying the public IP manually.
That’s not efficient — especially in a DevOps workflow where we aim for automation and full CLI control.
💡 Solution: Use Terraform output Block to Fetch Key Info
Terraform provides a clean way to extract and display useful data after apply using an output block.
Here’s how we do it 👇
📁 Add to output.tf:
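A minimal example, referencing the instance from our ec2.tf sketch:

```hcl
output "ec2_public_ip" {
  description = "Public IP of the EC2 instance"
  value       = aws_instance.terra_ec2.public_ip
}
```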
✅ Now, when we run terraform apply, Terraform will automatically show the public IP of our EC2 instance in the terminal!
✅ Bonus Tip: You Can Output Other Useful Details Too
Example: extract the public DNS, private IP, security groups, availability zone, and more:
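A sketch using standard aws_instance attributes:

```hcl
output "ec2_public_dns" {
  value = aws_instance.terra_ec2.public_dns
}

output "ec2_private_ip" {
  value = aws_instance.terra_ec2.private_ip
}

output "ec2_security_groups" {
  value = aws_instance.terra_ec2.security_groups
}

output "ec2_availability_zone" {
  value = aws_instance.terra_ec2.availability_zone
}
```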
This turns your Terraform output into a quick dashboard for critical data, without needing to open the AWS Console.
The output will look something like this:
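The values below are illustrative; yours will differ:

```
Outputs:

ec2_availability_zone = "us-east-1a"
ec2_private_ip = "172.31.xx.xx"
ec2_public_dns = "ec2-54-xx-xx-xx.compute-1.amazonaws.com"
ec2_public_ip = "54.xx.xx.xx"
```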
🎯 Result
No more AWS Console dependency 🔒
Instant access to EC2 data from terminal ✅
Perfect for scripting and automation 💻
Cleaner, modern DevOps flow 🌐
🧹 Cleaning Up with terraform destroy
Once you’re done testing or showcasing your EC2 setup, it’s important to tear it down to avoid unnecessary AWS charges. Terraform makes cleanup just as easy as deployment:
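```bash
terraform destroy   # type "yes" to confirm teardown
```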
This will safely remove all resources defined in your configuration — key pairs, security groups, EC2 instance, and more.
Always remember: provision responsibly, destroy confidently. ✅
✨ Wrapping It Up
Congratulations again 🎉
You’ve just taken a huge leap in your Terraform journey by launching your very first EC2 instance — end-to-end — all from your local machine. From setting up the provider to securely connecting via SSH, and even making your code cleaner with variables and outputs, you’ve now mastered the core building blocks of infrastructure as code on AWS.
This exact setup, with modular code, reusable variables, and an automation-first mindset, is used by real-world DevOps teams to deploy cloud infrastructure at scale.
But we’re just getting started.
🔮 What’s Coming Next?
“Provisioning is power. Automation is freedom.”
In the next part of this series, we’ll go even deeper and learn:
How to automate EC2 configuration and install packages like nginx using user_data
How to launch multiple EC2 instances at once
How to create dynamic outputs and avoid name conflicts
And how to scale confidently with advanced Terraform techniques
So make sure to bookmark, follow, and share this with your DevOps circle. 🙌
Till then, keep building, keep automating. 💻⚙️