Overview
In today’s cloud-native world, where agility, modularity, and scalability are paramount, event-driven architecture (EDA) transforms how applications are designed and operated. Developers can create highly decoupled and reactive systems by using events to trigger specific actions. In this blog, we will build a real-world event-driven workflow using AWS Lambda, Amazon S3, Amazon DynamoDB, and Amazon EventBridge to automate data processing from file upload to downstream event publishing.
Event-Driven Architecture
Event-driven architecture is a design model in which decoupled services communicate through events. Each service listens for events it cares about and reacts when those events occur.
Benefits of EDA:
- Loose coupling of services
- High scalability and flexibility
- Asynchronous communication
- Real-time responsiveness
In the AWS ecosystem, AWS Lambda functions are often at the core of EDA, executing code in response to events from sources like Amazon S3, Amazon DynamoDB, or Amazon EventBridge.
What Will We Build?
We will build a fully automated event-driven workflow that performs the following actions:
- A file is uploaded to Amazon S3.
- The upload triggers an AWS Lambda function via an Amazon S3 event notification.
- The AWS Lambda function processes the file metadata and stores it in Amazon DynamoDB.
- The function then publishes a custom event to Amazon EventBridge.
- (Optional) Amazon EventBridge routes the event to additional targets.
Architecture Overview
```
[User Uploads File] → [S3 Bucket]
                          ↓
                      [S3 Event]
                          ↓
                [AWS Lambda Function]
               /          |          \
 [Extract Metadata] [Save to DynamoDB] [Send Event to EventBridge]
```
Step-by-Step Implementation
Step 1: Create an Amazon S3 Bucket
Create a new Amazon S3 bucket that will act as the event source:
```bash
aws s3 mb s3://my-event-driven-bucket
```
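Note that Amazon S3 bucket names are globally unique, so the name above may already be taken; treat it as a placeholder. If the rest of your stack lives in a specific region, you can pin the bucket there as well, for example:

```bash
# Bucket names are globally unique, so append your own suffix.
# us-east-1 is an assumption; use the region that hosts the rest of the stack.
aws s3 mb s3://my-event-driven-bucket-example-suffix --region us-east-1
```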
Step 2: Create an Amazon DynamoDB Table
Create an Amazon DynamoDB table with filename as the partition key:
```bash
aws dynamodb create-table \
    --table-name FileMetadata \
    --attribute-definitions AttributeName=filename,AttributeType=S \
    --key-schema AttributeName=filename,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
```
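Before wiring anything to the table, you can confirm it has reached the ACTIVE state. For a low-traffic demo like this one, you could also replace the provisioned throughput with `--billing-mode PAY_PER_REQUEST`:

```bash
# Block until the table exists, then check its status.
aws dynamodb wait table-exists --table-name FileMetadata
aws dynamodb describe-table --table-name FileMetadata --query 'Table.TableStatus'
```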
Step 3: Create an AWS IAM Role for AWS Lambda
Create a role with permissions for:
- Amazon S3 Read
- Amazon DynamoDB Write
- Amazon EventBridge PutEvents
- Amazon CloudWatch Logs
Use AWS Console or CLI to attach the necessary policies.
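If you prefer the CLI, a minimal sketch follows. The role name is an assumption, and the broad managed policies are attached only for brevity; in practice, scope permissions down to the specific bucket, table, and event bus:

```bash
# Trust policy letting the Lambda service assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "lambda.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role \
    --role-name file-metadata-lambda-role \
    --assume-role-policy-document file://trust-policy.json

# Broad managed policies for brevity; prefer least-privilege policies in production.
aws iam attach-role-policy --role-name file-metadata-lambda-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam attach-role-policy --role-name file-metadata-lambda-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
aws iam attach-role-policy --role-name file-metadata-lambda-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEventBridgeFullAccess
aws iam attach-role-policy --role-name file-metadata-lambda-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
```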
Step 4: Create the AWS Lambda Function
Here’s the core logic that extracts metadata, stores it in Amazon DynamoDB, and emits an Amazon EventBridge event:
```python
import json
from urllib.parse import unquote_plus

import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    bucket = event['Records'][0]['s3']['bucket']['name']
    # S3 event keys are URL-encoded (e.g., spaces arrive as '+'), so decode them
    key = unquote_plus(event['Records'][0]['s3']['object']['key'])

    # Get object metadata
    response = s3.head_object(Bucket=bucket, Key=key)
    metadata = {
        'filename': key,
        'size': response['ContentLength'],
        'type': response['ContentType'],
        'timestamp': str(response['LastModified'])
    }

    # Store metadata in DynamoDB
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('FileMetadata')
    table.put_item(Item=metadata)

    # Publish a custom event to EventBridge
    eventbridge = boto3.client('events')
    eventbridge.put_events(
        Entries=[{
            'Source': 'custom.file.upload',
            'DetailType': 'File Metadata Processed',
            'Detail': json.dumps(metadata),
            'EventBusName': 'default'
        }]
    )

    return {'statusCode': 200, 'body': 'Metadata processed'}
```
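To package and deploy the function from the CLI, assuming the code above is saved as lambda_function.py and the role from Step 3 exists (the function name, runtime, and account ID below are assumptions):

```bash
zip function.zip lambda_function.py

# Substitute your own account ID and role name.
aws lambda create-function \
    --function-name file-metadata-processor \
    --runtime python3.12 \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::123456789012:role/file-metadata-lambda-role
```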
Step 5: Set Up Amazon S3 Trigger for AWS Lambda
Go to the Amazon S3 bucket in the AWS Console:
- Open Properties → Event Notifications.
- Add an event notification that triggers the AWS Lambda function on ObjectCreated events.
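The same wiring can be sketched from the CLI. Amazon S3 must first be granted permission to invoke the function; the function name matches the assumed deployment above, and the ARNs are placeholders:

```bash
# Allow S3 to invoke the Lambda function.
aws lambda add-permission \
    --function-name file-metadata-processor \
    --statement-id s3-invoke \
    --action lambda:InvokeFunction \
    --principal s3.amazonaws.com \
    --source-arn arn:aws:s3:::my-event-driven-bucket

# Route all ObjectCreated events from the bucket to the function.
aws s3api put-bucket-notification-configuration \
    --bucket my-event-driven-bucket \
    --notification-configuration '{
      "LambdaFunctionConfigurations": [{
        "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:file-metadata-processor",
        "Events": ["s3:ObjectCreated:*"]
      }]
    }'
```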
Step 6: Configure Amazon EventBridge Integration
This is optional but recommended. You can create a rule in Amazon EventBridge to:
- Log processed events to Amazon CloudWatch.
- Trigger another AWS Lambda for post-processing.
- Send a notification via Amazon SNS.
Example rule pattern:
```json
{
  "source": ["custom.file.upload"],
  "detail-type": ["File Metadata Processed"]
}
```
Testing the Workflow
- Upload a file to your Amazon S3 bucket:

```bash
aws s3 cp testfile.txt s3://my-event-driven-bucket/
```

- Watch your AWS Lambda logs in Amazon CloudWatch.
- Confirm metadata is saved in Amazon DynamoDB.
- Verify the Amazon EventBridge event was triggered (CLI equivalents for these checks follow below).
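From the CLI, those checks might look like the following; the function name matches the assumed deployment above:

```bash
# Tail the function's CloudWatch logs (requires AWS CLI v2).
aws logs tail /aws/lambda/file-metadata-processor --follow

# Fetch the metadata item the function should have written.
aws dynamodb get-item \
    --table-name FileMetadata \
    --key '{"filename": {"S": "testfile.txt"}}'
```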
Benefits of This Architecture
- Loose Coupling: Components communicate asynchronously.
- Scalability: AWS Lambda and Amazon S3 are natively scalable.
- Real-time Processing: Immediate reaction to events.
- Extensibility: Add more targets to Amazon EventBridge without changing existing logic.
Common Pitfalls and Best Practices
- Enable DLQs (Dead Letter Queues) for AWS Lambda to catch failed invocations (a CLI sketch follows this list).
- Tune retry policies for asynchronous AWS Lambda invocations and Amazon EventBridge targets.
- Use Amazon CloudWatch Alarms and Logs for observability.
- Secure access using least privilege AWS IAM policies.
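As a sketch of the first point, a dead-letter queue can be attached to the function's asynchronous invocations. The queue name and ARN here are assumptions, and the execution role also needs sqs:SendMessage on the queue:

```bash
# Create an SQS queue to capture events from failed invocations.
aws sqs create-queue --queue-name file-metadata-dlq

# Point the function's dead-letter config at the queue (ARN is a placeholder).
aws lambda update-function-configuration \
    --function-name file-metadata-processor \
    --dead-letter-config TargetArn=arn:aws:sqs:us-east-1:123456789012:file-metadata-dlq
```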
Real-World Use Cases
- Image or video processing pipelines
- Real-time analytics and monitoring systems
- IoT device telemetry ingestion
- Compliance and audit logging
Conclusion
This workflow shows how a handful of managed services can be composed into a fully automated, event-driven pipeline. As next steps, consider enhancing the solution by:
- Adding Amazon SNS or Amazon SQS for decoupled messaging
- Using AWS Step Functions for orchestrating complex logic
- Integrating Amazon API Gateway for synchronous triggers
By embracing event-driven patterns, you unlock new possibilities for automation, responsiveness, and cloud-native design.
FAQs
1. Can I use AWS Lambda with multiple event sources in one function?
ANS: – Yes, AWS Lambda supports multiple event sources, including Amazon S3, Amazon DynamoDB Streams, Amazon API Gateway, Amazon EventBridge, and more. However, the event structure varies by source, so your AWS Lambda function must handle different event formats appropriately.
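As an illustration, a single handler can branch on the shape of the incoming payload. This sketch is not part of the workflow above; it simply shows one way to distinguish Amazon S3 records from Amazon EventBridge events:

```python
import json

def lambda_handler(event, context):
    # S3 (and several other sources) wrap their payloads in a 'Records' list.
    if 'Records' in event:
        for record in event['Records']:
            if record.get('eventSource') == 'aws:s3':
                print('S3 object created:', record['s3']['object']['key'])
    # EventBridge events carry 'source' and 'detail-type' at the top level.
    elif 'detail-type' in event:
        print('EventBridge event:', event['source'], event['detail'])
    else:
        print('Unrecognized event shape:', json.dumps(event))
```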
2. What happens if the AWS Lambda function fails while processing an event?
ANS: – If the function fails, AWS retries the invocation based on the trigger source’s retry policy (asynchronous invocations, such as those from Amazon S3, are retried up to two more times by default). For more robust error handling, it is recommended to configure Dead Letter Queues (DLQs) and enable Amazon CloudWatch Logs to monitor failures.
WRITTEN BY Runal Paliwal