Overview
Shared storage is a cornerstone for building scalable and efficient applications in cloud computing. Amazon S3 is a durable and scalable object storage service often paired with Amazon EC2 instances to separate compute from storage, enabling cloud-native applications. In this blog, we will explore how to provision an Amazon S3 bucket and mount it across three Amazon EC2 instances using Terraform as the Infrastructure as Code (IaC) tool.
Introduction
Amazon Web Services (AWS) provides powerful cloud services like Amazon S3 and Amazon EC2 that help build scalable and efficient applications. Amazon S3 is a highly durable and scalable object storage service designed to store and retrieve any amount of data. Amazon EC2 offers resizable compute capacity, allowing users to run virtual servers on demand. Combining Amazon S3 with Amazon EC2 enables separation of storage and compute, supporting flexible, cloud-native architectures.
Why use Amazon S3 with Amazon EC2 instances?
Integrating Amazon S3 with Amazon EC2 offers several benefits:
- Scalability: Amazon S3 provides unlimited storage, allowing applications to scale seamlessly.
- Durability: Data stored in Amazon S3 is replicated across multiple facilities, ensuring high availability.
- Cost Efficiency: With pay-as-you-go pricing, Amazon S3 minimizes costs for storing large datasets.
- Flexibility: Amazon EC2 instances can access Amazon S3 buckets to store application data, make backups, or host static assets.
Prerequisites
- AWS Account: Ensure you have an AWS account with the necessary permissions to create resources like Amazon VPCs, Amazon EC2 instances, Amazon S3 buckets, and AWS IAM roles.
- Terraform: Install Terraform on your machine to manage infrastructure as code.
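For reference, a minimal provider configuration for this walkthrough might look like the sketch below. The filename (provider.tf) and version constraint are assumptions, since the original scripts do not show them; the us-east-1 region matches the availability zones used throughout.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Region assumed from the us-east-1a/b/c availability zones used below
provider "aws" {
  region = "us-east-1"
}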
Solution Overview
The goal is to create a shared storage system where three Amazon EC2 instances in separate availability zones can access a single Amazon S3 bucket. Each instance will mount the bucket using s3fs, an open-source FUSE file system that enables Amazon S3 buckets to appear as local file systems.
Infrastructure Setup
Terraform Scripts
- VPC.tf: Creates Amazon VPC with three subnets in different availability zones.
- EC2.tf: Provisions three Amazon EC2 instances and attaches AWS IAM roles for Amazon S3 access.
- S3.tf: Creates an Amazon S3 bucket for shared storage.
- Variable.tf: Centralizes configuration variables for easy management.
- Userdata.sh: Configures each Amazon EC2 instance to mount the Amazon S3 bucket during boot.
VPC.tf
variable "availability_zones" { default = ["us-east-1a", "us-east-1b", "us-east-1c"] } resource "aws_vpc" "hts-l2-vpc" { cidr_block = "10.0.0.0/16" tags = { Name = "hts-l2-vpc" } } resource "aws_internet_gateway" "igw" { vpc_id = aws_vpc.hts-l2-vpc.id tags = { Name = "hts-l2 Internet Gateway" } } resource "aws_subnet" "my_subnet" { count = length(var.availability_zones) vpc_id = aws_vpc.hts-l2-vpc.id cidr_block = cidrsubnet("10.0.0.0/16", 8, count.index) availability_zone = var.availability_zones[count.index] tags = { Name = "hts-l2-subnet-${count.index}" } } resource "aws_route_table" "ig_rt" { vpc_id = aws_vpc.hts-l2-vpc.id route { cidr_block = "0.0.0.0/0" gateway_id = aws_internet_gateway.igw.id } tags = { Name = "hts-l2 ig rt" } } resource "aws_route_table_association" "ig-assoc" { count = length(var.availability_zones) subnet_id = aws_subnet.my_subnet[count.index].id route_table_id = aws_route_table.ig_rt.id } |
EC2.tf
resource "aws_iam_role_policy_attachment" "s3_access_policy_attachment" { role = aws_iam_role.ec2-role.name policy_arn = aws_iam_policy.s3_access_policy.arn } resource "aws_security_group" "ec2-sg" { vpc_id = aws_vpc.hts-l2-vpc.id dynamic "ingress" { for_each = var.inbound_ports content { from_port = ingress.value to_port = ingress.value protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } } resource "aws_instance" "hts-l2-instance" { count = length(var.availability_zones) ami = "ami-04b70fa74e45c3917" instance_type = "t2.micro" key_name = "hts-l2-key-pair" subnet_id = aws_subnet.my_subnet[count.index].id iam_instance_profile = "ec2-s3-access" vpc_security_group_ids = ["${aws_security_group.ec2-sg.id}"] associate_public_ip_address = true user_data = file("./userdata.sh") tags = { Name = "hts-l2-instance-${count.index}" } depends_on = [aws_iam_role.ec2-role, aws_s3_bucket.hts-l2-bucket1111] } |
S3.tf
resource "aws_s3_bucket" "hts-l2-bucket1111" { bucket = "hts-l2-bucket111" } |
Variable.tf
variable "inbound_ports" { default = [80, 443, 22] } |
Userdata.sh
#!/bin/bash
# Install Docker, the AWS CLI, and s3fs, then mount the shared S3 bucket at boot
sudo apt-get update -y
sudo apt-get install -y docker.io
sudo systemctl start docker
sudo systemctl enable docker

# Install the AWS CLI v2
sudo apt-get install -y unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Install s3fs and prepare the mount point with two test files
sudo apt-get install -y s3fs
sudo mkdir -p /mnt/s3-bucket
cd /mnt/s3-bucket && sudo touch test1.txt test2.txt

# Seed the bucket with the test files, then mount it via s3fs using the instance's IAM role
aws s3 sync /mnt/s3-bucket s3://hts-l2-bucket111
sudo s3fs hts-l2-bucket111 /mnt/s3-bucket -o iam_role=ec2-s3-access -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 -o use_path_request_style -o url=https://s3.amazonaws.com

# For troubleshooting only: run the mount in the foreground with debug logging instead of the
# command above (the -f flag blocks, so do not leave this uncommented in user data)
# sudo s3fs hts-l2-bucket111 /mnt/s3-bucket -o iam_role=ec2-s3-access -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 -o use_path_request_style -o url=https://s3.amazonaws.com -o dbglevel=info -f -o curldbg
AWS IAM role
An AWS IAM role is attached to each Amazon EC2 instance to grant permissions for accessing the Amazon S3 bucket. This ensures secure and seamless integration without hardcoding credentials.
The following permissions policy is attached to the AWS IAM role used by the Amazon EC2 instances:
{ "Version": "2012-10-17", "Statement": [ { "Sid": "AllowS3Access", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::${BUCKET_NAME}", "arn:aws:s3:::${BUCKET_NAME}/*" ] } ] } |
Solution verification
To confirm that each Amazon EC2 instance has mounted the Amazon S3 bucket:
Create a new file named test3.txt, add content, and verify its presence in the Amazon S3 bucket.
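To SSH into the instances for this check, a small output block (a hypothetical outputs.tf, not part of the original scripts) can surface their public IPs after terraform apply:

# Public IPs of the three instances, for SSH verification of the s3fs mount
output "instance_public_ips" {
  description = "Public IPs of the hts-l2 EC2 instances"
  value       = aws_instance.hts-l2-instance[*].public_ip
}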
Key Benefits
- High Availability: Deploying instances in separate availability zones ensures resilience against failures.
- Centralized Storage: The Amazon S3 bucket is a single source of truth for all instances.
- Automation: Terraform simplifies resource provisioning and ensures repeatability.
Common use cases
- Hosting static website assets like HTML, CSS, and images directly from Amazon S3
- Backing up application data from Amazon EC2 instances into durable Amazon S3 storage.
- Storing large datasets for processing by containerized applications running on Amazon EC2.
Conclusion
This walkthrough showed how to provision a shared Amazon S3 bucket with Terraform and mount it across three Amazon EC2 instances in separate availability zones using s3fs and AWS IAM roles, giving all instances a common, durable file system. Drop a query if you have any questions regarding Amazon EC2, Amazon S3, or Terraform, and we will get back to you quickly.
About CloudThat
CloudThat is a leading provider of Cloud Training and Consulting services with a global presence in India, the USA, Asia, Europe, and Africa. Specializing in AWS, Microsoft Azure, GCP, VMware, Databricks, and more, the company serves mid-market and enterprise clients, offering comprehensive expertise in Cloud Migration, Data Platforms, DevOps, IoT, AI/ML, and more.
CloudThat is the first Indian Company to win the prestigious Microsoft Partner 2024 Award and is recognized as a top-tier partner with AWS and Microsoft, including the prestigious ‘Think Big’ partner award from AWS and the Microsoft Superstars FY 2023 award in Asia & India. Having trained 650k+ professionals in 500+ cloud certifications and completed 300+ consulting projects globally, CloudThat is an official AWS Advanced Consulting Partner, Microsoft Gold Partner, AWS Training Partner, AWS Migration Partner, AWS Data and Analytics Partner, AWS DevOps Competency Partner, AWS GenAI Competency Partner, Amazon QuickSight Service Delivery Partner, Amazon EKS Service Delivery Partner, AWS Microsoft Workload Partners, Amazon EC2 Service Delivery Partner, Amazon ECS Service Delivery Partner, AWS Glue Service Delivery Partner, Amazon Redshift Service Delivery Partner, AWS Control Tower Service Delivery Partner, AWS WAF Service Delivery Partner, Amazon CloudFront Service Delivery Partner, Amazon OpenSearch Service Delivery Partner, AWS DMS Service Delivery Partner, AWS Systems Manager Service Delivery Partner, Amazon RDS Service Delivery Partner, AWS CloudFormation Service Delivery Partner and many more.
FAQs
1. Can I mount an Amazon S3 bucket on Windows-based EC2 instances?
ANS: – Yes, although mounting is more straightforward on Linux-based instances using s3fs, Windows users can interact with Amazon S3 using AWS CLI or third-party tools like TntDrive.
2. What are the security considerations when integrating Amazon EC2 with Amazon S3?
ANS: – Use AWS IAM roles instead of hardcoding credentials on your instances. Configure bucket policies and access control lists (ACLs) to restrict access.
WRITTEN BY Abhishek Dubey