AWS Lambda Deployment with Terraform (In-Depth Guide)
Introduction
Deploying and managing AWS Lambda functions can get complicated, especially when you need to orchestrate several components like IAM roles, event triggers, monitoring, and deployment pipelines. That's where Terraform shines—it allows you to manage all of this through code, providing automation, consistency, and scalability.
In this detailed guide, we’ll explore how to create and manage AWS Lambda functions using Terraform. We’ll go beyond the basics and discuss advanced topics like packaging code, using remote state, integrating with services like API Gateway and S3, and best practices for building reusable Terraform modules.
Why Terraform for AWS Lambda?
AWS Lambda is a great service for running code without provisioning or managing servers, but managing multiple Lambda functions manually can become overwhelming. Terraform lets you:
Automate Deployments: Define infrastructure as code to automate deployments and updates.
Ensure Consistency: By codifying your Lambda configurations, you can be sure they’re consistent across environments (dev, staging, production).
Version Control Everything: With Terraform, you can put your infrastructure in version control, just like your codebase, making it easy to roll back or collaborate with teams.
Simplify Management: You can manage your entire AWS infrastructure, not just Lambda, through a single tool like Terraform.
Now, let's go deeper into how to build, deploy, and manage AWS Lambda functions using Terraform.
Log in to StackGen (https://cloud.stackgen.com/) for generative infrastructure from code.
Let's create a blank appStack using StackGen; we'll use it to generate Terraform.
Click on New appStack.
Let's create an appStack from scratch and experiment with AWS Lambda and Terraform.
Click on Proceed.
StackGen applies security best-practice policies by default, so the generated Terraform will be more secure.
Create the appStack.
Setting Up AWS Lambda in Terraform
AWS Provider Configuration
Terraform requires an AWS provider block that specifies which AWS account and region you're deploying to.
provider "aws" {
  region = "us-west-2" # Adjust region as needed
}
Drag and drop cloud services onto the canvas and fill in the required fields.
To go a step further, you can configure credentials securely using the AWS CLI, or IAM roles attached to the instance you're running Terraform from (if using EC2 or other AWS services):
aws configure
This sets up the ~/.aws/credentials file with your access and secret keys.
If you're running Terraform from an EC2 instance, you can configure an instance role with proper permissions, so there’s no need to use static credentials.
Using Profiles
If you manage multiple AWS accounts or regions, you can configure the provider to use specific profiles:
provider "aws" {
  profile = "dev" # This profile should match the one in your AWS credentials
  region  = "us-west-2"
}
Defining AWS Lambda Function in Terraform
The main resource for creating a Lambda function in Terraform is aws_lambda_function. At a minimum, you need to define the runtime, handler, role, and deployment package (either a local ZIP or an S3 object).
Lambda Function Definition
resource "aws_lambda_function" "my_lambda" {
  function_name = "my_lambda"
  runtime       = "python3.8"
  handler       = "lambda_function.lambda_handler"
  role          = aws_iam_role.lambda_role.arn
  filename      = "lambda_function.zip"
  memory_size   = 128
  timeout       = 10

  environment {
    variables = {
      ENV_VAR_1 = "value1"
    }
  }
}
The above code explained:
function_name: The name of your Lambda function. Note that Lambda function names must be unique within an AWS region.
runtime: Specifies the runtime environment for the function. AWS supports several runtimes like Python, Node.js, Java, Go, and custom runtimes via containers.
handler: The entry point of your code. For Python, it's typically <filename>.<function_name> (e.g., lambda_function.lambda_handler, where lambda_function.py contains the function lambda_handler).
filename: The ZIP file containing the Lambda deployment package.
memory_size: Memory allocated to the function, ranging from 128 MB to 10,240 MB. Lambda allocates CPU power linearly in proportion to the memory.
timeout: The maximum amount of time (in seconds) a function is allowed to run. If your function exceeds this, it is terminated.
environment: Environment variables passed to the Lambda function.
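For reference, a minimal handler matching the handler setting above might look like this (an illustrative sketch; the returned shape assumes an API-style invocation):

```python
# lambda_function.py -- a minimal handler matching the
# "lambda_function.lambda_handler" setting above (illustrative sketch)
import json
import os


def lambda_handler(event, context):
    # Read the environment variable defined in the Terraform config
    env_value = os.environ.get("ENV_VAR_1", "unset")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello from Lambda", "env": env_value}),
    }
```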
Let's do the same using the StackGen topology: add a new resource, AWS Lambda.
You will see that it creates an IAM role and a CloudWatch log group by default, so the generated Terraform is secure.
You can see that the IaC is generated.
Advanced Configurations:
Tags: You can tag Lambda functions to make it easier to track costs, permissions, and resources across AWS services.
tags = {
  Environment = "dev"
  Project     = "MyProject"
}
Dead Letter Queues (DLQ): If Lambda functions fail, you can use DLQs to capture and analyze failures by connecting to an SQS queue or SNS topic.
dead_letter_config {
  target_arn = aws_sqs_queue.lambda_dlq.arn
}
Packaging and Deploying Code
AWS Lambda requires the function code to be packaged as a ZIP file. This can either be done manually or automatically using deployment pipelines.
Option 1: Local ZIP Package
When deploying code locally, you can package your Lambda function and pass the ZIP file to Terraform:
filename = "lambda_function.zip"
You can create this ZIP package manually or automate it with a script. Here's an example for Python:
zip lambda_function.zip lambda_function.py
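Alternatively, Terraform's archive_file data source (from the hashicorp/archive provider) can build the ZIP during plan/apply, so you don't need a separate packaging step. The file and resource names below are illustrative:

```hcl
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_file = "lambda_function.py"
  output_path = "lambda_function.zip"
}

resource "aws_lambda_function" "my_lambda" {
  function_name    = "my_lambda"
  runtime          = "python3.8"
  handler          = "lambda_function.lambda_handler"
  role             = aws_iam_role.lambda_role.arn
  filename         = data.archive_file.lambda_zip.output_path
  # Redeploy only when the packaged code actually changes
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
}
```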
Option 2: S3 Deployment
For larger functions or when working in a CI/CD pipeline, it's more efficient to store the Lambda package in an S3 bucket:
resource "aws_lambda_function" "my_lambda" {
  function_name = "my_lambda"
  runtime       = "python3.8"
  handler       = "lambda_function.lambda_handler"
  role          = aws_iam_role.lambda_role.arn
  s3_bucket     = "lambda-deployment-bucket"
  s3_key        = "lambda_function.zip"
}
This way, your CI/CD system can upload new versions of your code to S3, and Terraform will use the new version when applying changes.
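Note that if the S3 key stays the same between uploads, Terraform won't detect new code on its own. One hedged approach is to upload the package with Terraform and pin the function to the package's hash (bucket and file names below are illustrative):

```hcl
resource "aws_s3_object" "lambda_package" {
  bucket = "lambda-deployment-bucket"
  key    = "lambda_function.zip"
  source = "lambda_function.zip"
  etag   = filemd5("lambda_function.zip")
}

resource "aws_lambda_function" "my_lambda" {
  function_name    = "my_lambda"
  runtime          = "python3.8"
  handler          = "lambda_function.lambda_handler"
  role             = aws_iam_role.lambda_role.arn
  s3_bucket        = aws_s3_object.lambda_package.bucket
  s3_key           = aws_s3_object.lambda_package.key
  # Forces an update whenever the local package changes
  source_code_hash = filebase64sha256("lambda_function.zip")
}
```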
Adding Dependencies with Lambda Layers
Lambda Layers allow you to package libraries and dependencies separately from your main Lambda code. This helps reduce code size, speeds up deployments, and makes managing shared code easier.
Create a Lambda Layer
resource "aws_lambda_layer_version" "common_dependencies" {
  layer_name          = "common_dependencies"
  filename            = "layer.zip"
  compatible_runtimes = ["python3.8"]
}
You can attach the layer to your Lambda function by referencing its ARN:
resource "aws_lambda_function" "my_lambda" {
  function_name = "my_lambda"
  runtime       = "python3.8"
  handler       = "lambda_function.lambda_handler"
  role          = aws_iam_role.lambda_role.arn
  filename      = "lambda_function.zip"
  layers        = [aws_lambda_layer_version.common_dependencies.arn]
}
This example assumes you’ve packaged your dependencies (e.g., Python libraries) into layer.zip.
IAM Roles and Permissions
Lambda requires an IAM role that grants it the necessary permissions to execute. The minimal requirement is permission to write logs to CloudWatch.
Basic IAM Role for Lambda
resource "aws_iam_role" "lambda_role" {
  name = "lambda_execution_role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
If you click on the IAM role, you will see the role policy that was added.
Attaching Policies
We’ll need to give the role basic execution permissions, like logging to CloudWatch:
resource "aws_iam_role_policy_attachment" "lambda_policy_attachment" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
Note that aws_iam_role_policy_attachment is preferred here over aws_iam_policy_attachment, which manages a policy's attachments exclusively across the entire account and can detach the policy from other roles.
Advanced Role Configurations
For more advanced use cases, like accessing other AWS services (S3, DynamoDB, etc.), you’ll need to attach additional permissions. For example, if your Lambda function needs to read from an S3 bucket:
data "aws_iam_policy_document" "lambda_s3_policy" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::mybucket/*"]
    effect    = "Allow"
  }
}

resource "aws_iam_role_policy" "lambda_s3_access" {
  name   = "lambda_s3_access"
  role   = aws_iam_role.lambda_role.id
  policy = data.aws_iam_policy_document.lambda_s3_policy.json
}
This allows your Lambda to access objects in the specified S3 bucket.
Let's do the same in StackGen: add an S3 cloud resource by drag and drop.
Connect the Lambda function to S3, and it will ask for the IAM role and trigger configuration.
I will select IAM here. If you click on the configuration, you will see the role type and policy that StackGen added. It is smart enough to understand the resource mapping and its dependent cloud resources, and it adds policies based on that; it also allows you to edit them or add customized ones.
VPC Configuration
Lambda functions can run inside a Virtual Private Cloud (VPC) to access private resources like databases or services that are not publicly available. To run your Lambda in a VPC, you need to specify the subnet IDs and security group IDs.
resource "aws_lambda_function" "my_lambda" {
  function_name = "my_vpc_lambda"
  runtime       = "python3.8"
  handler       = "lambda_function.lambda_handler"
  role          = aws_iam_role.lambda_role.arn
  filename      = "lambda_function.zip"

  vpc_config {
    subnet_ids         = ["subnet-12345", "subnet-67890"]
    security_group_ids = ["sg-123456"]
  }
}
Make sure that the subnet you choose has access to your required resources and allows outgoing traffic if necessary.
NAT Gateway
If your Lambda function needs to access the internet from inside a VPC, you’ll need to configure a NAT Gateway. This allows outbound internet access for your function without exposing it publicly.
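A sketch of that setup is below; the subnet and route table names are illustrative and assume you already have a public subnet and a private route table used by the Lambda's subnets:

```hcl
# Elastic IP for the NAT Gateway
resource "aws_eip" "nat" {
  domain = "vpc"
}

# The NAT Gateway must live in a public subnet
resource "aws_nat_gateway" "lambda_nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}

# Route outbound traffic from the private subnets through the NAT Gateway
resource "aws_route" "private_internet" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.lambda_nat.id
}
```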
Event Sources and Triggers
One of the key benefits of AWS Lambda is its ability to integrate with various AWS services to automatically trigger the function. Common event sources include S3 (when an object is created), DynamoDB Streams, API Gateway, and CloudWatch Events.
S3 Event Trigger
You can trigger a Lambda function when an object is created in an S3 bucket:
resource "aws_s3_bucket_notification" "example" {
  bucket = aws_s3_bucket.example.bucket

  lambda_function {
    lambda_function_arn = aws_lambda_function.example.arn
    events              = ["s3:ObjectCreated:*"]
  }
}
This will trigger the Lambda function whenever a new object is uploaded to the S3 bucket.
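Note that S3 also needs permission to invoke the function; without it, the notification cannot be delivered. A minimal sketch:

```hcl
# Allow the S3 bucket to invoke the Lambda function
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.example.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.example.arn
}
```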
API Gateway Integration
To expose your Lambda function as a REST API, you can integrate it with API Gateway:
resource "aws_api_gateway_rest_api" "example" {
  name = "example_api"
}

resource "aws_lambda_permission" "api_gateway_invoke" {
  statement_id  = "AllowExecutionFromApiGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.example.function_name
  principal     = "apigateway.amazonaws.com"
  # Restrict the permission to this API
  source_arn    = "${aws_api_gateway_rest_api.example.execution_arn}/*"
}
This grants API Gateway the permission to invoke the Lambda function.
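A full integration also needs a resource, method, and integration on the API. A hedged sketch of a minimal proxy setup (resource names are illustrative):

```hcl
# Catch-all proxy resource under the API root
resource "aws_api_gateway_resource" "proxy" {
  rest_api_id = aws_api_gateway_rest_api.example.id
  parent_id   = aws_api_gateway_rest_api.example.root_resource_id
  path_part   = "{proxy+}"
}

resource "aws_api_gateway_method" "proxy" {
  rest_api_id   = aws_api_gateway_rest_api.example.id
  resource_id   = aws_api_gateway_resource.proxy.id
  http_method   = "ANY"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "lambda" {
  rest_api_id             = aws_api_gateway_rest_api.example.id
  resource_id             = aws_api_gateway_resource.proxy.id
  http_method             = aws_api_gateway_method.proxy.http_method
  integration_http_method = "POST" # API Gateway always invokes Lambda with POST
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.example.invoke_arn
}
```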
Monitoring and Logging with CloudWatch
By default, AWS Lambda logs output to CloudWatch. You can customize log retention, alarms, and even set up custom metrics.
CloudWatch Logs
resource "aws_cloudwatch_log_group" "lambda_logs" {
  name              = "/aws/lambda/my_lambda"
  retention_in_days = 14
}
This will store the Lambda logs for 14 days in CloudWatch.
CloudWatch Alarms
You can also set up alarms to monitor key metrics like error rates, invocation durations, or throttling:
resource "aws_cloudwatch_metric_alarm" "lambda_error_alarm" {
  alarm_name          = "LambdaErrorAlarm"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "Errors"
  namespace           = "AWS/Lambda"
  period              = 300
  statistic           = "Sum"
  threshold           = 1
  actions_enabled     = true

  # Scope the alarm to a specific function
  dimensions = {
    FunctionName = aws_lambda_function.my_lambda.function_name
  }
}
Reserved Concurrency
Lambda functions can scale automatically, but if you want to control the number of concurrent executions (e.g., to avoid hitting downstream service limits), you can use reserved concurrency.
resource "aws_lambda_function" "my_lambda" {
  # ... other required arguments (runtime, handler, role, filename) ...
  reserved_concurrent_executions = 5
}
This limits your function to a maximum of 5 concurrent invocations at any given time.
Best Practices for Using AWS Lambda with Terraform
1. Version Control Your Deployment Packages
Always version your Lambda deployment packages. This way, you can easily roll back to a previous version if necessary. Use S3 versioning or CI/CD pipelines to manage different versions of your code.
2. Use Terraform Modules for Reusability
Modularize your Terraform code to increase reusability and maintainability. For example, create separate modules for your Lambda function, IAM roles, API Gateway, and S3 configurations. This makes it easier to manage and reuse across different projects or environments.
3. Remote State
In multi-team or multi-environment setups, use Terraform’s remote state to ensure that the same infrastructure code is applied consistently across different environments. This helps avoid state conflicts and maintains consistency.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "prod/terraform.tfstate"
    region = "us-west-2"
  }
}
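State locking is worth enabling alongside the S3 backend so two applies can't run concurrently. A common pattern uses a DynamoDB table, referenced from the backend block with dynamodb_table = "terraform-locks"; the table name here is illustrative:

```hcl
# Lock table for the S3 backend; pair it with
# `dynamodb_table = "terraform-locks"` in the backend "s3" block.
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```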
4. Environment Separation
Using workspaces or separate Terraform state files for different environments (dev, staging, prod) can help prevent resource conflicts and ensure isolation between your environments. Here's an example of using Terraform workspaces to manage multiple environments:
# Create a workspace for staging
terraform workspace new staging
# Create a workspace for production
terraform workspace new production
# Switch to the desired workspace
terraform workspace select production
You can also adjust your resource configurations based on the workspace:
resource "aws_lambda_function" "my_lambda" {
  function_name = "my_lambda_${terraform.workspace}"
  runtime       = "python3.8"
  handler       = "lambda_function.lambda_handler"
  role          = aws_iam_role.lambda_role.arn
  filename      = "lambda_function.zip"
  memory_size   = 128
  timeout       = 10
}
This creates separate Lambda functions in each environment, keeping them isolated and configurable independently.
Advanced AWS Lambda Features with Terraform
1. Lambda Destinations
Lambda destinations allow you to route the result of your Lambda execution to different AWS services based on success or failure. For example, you could send successful executions to an SNS topic and failed ones to an SQS queue for further processing or debugging.
Note that destinations are not an argument of aws_lambda_function itself; they are configured with the separate aws_lambda_function_event_invoke_config resource:
resource "aws_lambda_function" "my_lambda" {
  function_name = "my_lambda"
  runtime       = "python3.8"
  handler       = "lambda_function.lambda_handler"
  role          = aws_iam_role.lambda_role.arn
  filename      = "lambda_function.zip"
  memory_size   = 128
  timeout       = 10

  environment {
    variables = {
      ENV_VAR_1 = "value1"
    }
  }

  # DLQ for failed asynchronous invocations
  dead_letter_config {
    target_arn = aws_sqs_queue.lambda_dlq.arn
  }
}

# Destinations for asynchronous invocation results
resource "aws_lambda_function_event_invoke_config" "my_lambda_destinations" {
  function_name = aws_lambda_function.my_lambda.function_name

  destination_config {
    on_success {
      destination = aws_sns_topic.success_topic.arn
    }
    on_failure {
      destination = aws_sqs_queue.failure_queue.arn
    }
  }
}
In this configuration, successful Lambda invocations are sent to an SNS topic, while failed executions are sent to an SQS queue.
2. Provisioned Concurrency
Lambda’s Provisioned Concurrency allows you to pre-allocate a certain number of concurrent executions for low-latency, high-throughput applications. This feature ensures that your function is "warm" and ready to respond to requests immediately, without the initial cold start.
resource "aws_lambda_provisioned_concurrency_config" "example" {
  function_name                     = aws_lambda_function.example.function_name
  # Provisioned concurrency cannot target $LATEST; it requires a
  # published version or an alias
  qualifier                         = aws_lambda_alias.example.name
  provisioned_concurrent_executions = 10
}
By configuring provisioned concurrency, you can reduce the cold-start time of your function, ensuring a fast and consistent response time.
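Since provisioned concurrency must point at a published version or an alias rather than $LATEST, an alias along these lines is typically defined as well (a sketch; it assumes the function is created with publish = true so that numbered versions exist):

```hcl
resource "aws_lambda_alias" "example" {
  name             = "live"
  function_name    = aws_lambda_function.example.function_name
  # Tracks the most recently published version; requires publish = true
  # on the function
  function_version = aws_lambda_function.example.version
}
```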
3. Lambda with Docker Containers
AWS Lambda now supports running containerized applications. Instead of zipping your code, you can package your Lambda as a Docker container image. This provides more flexibility in how you build and package your Lambda code, including running custom runtimes.
To deploy a containerized Lambda function with Terraform:
resource "aws_lambda_function" "my_lambda" {
  function_name = "my_container_lambda"
  package_type  = "Image"
  image_uri     = "123456789012.dkr.ecr.us-west-2.amazonaws.com/my-lambda-image:latest"
  role          = aws_iam_role.lambda_role.arn
  memory_size   = 512
  timeout       = 30
}
You’ll need to build your Docker image locally or in your CI/CD pipeline and push it to Amazon Elastic Container Registry (ECR) before referencing it in Terraform.
4. CI/CD with Terraform and AWS Lambda
To securely add AWS credentials for Terraform to use in a GitHub Actions workflow, you should use GitHub Secrets. This ensures that your AWS credentials are not exposed directly in your workflow YAML file.
Here’s how to do it:
Step 1: Add AWS Credentials to GitHub Secrets
Go to your GitHub repository.
Click on Settings.
In the left sidebar, select Secrets and Variables > Actions.
Click the New repository secret button.
Add the following secrets:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Make sure these values are from an IAM user with the necessary permissions for Terraform to create resources (e.g., Lambda, S3, IAM).
Step 2: Update GitHub Actions Workflow to Use AWS Credentials
Now, you need to modify your GitHub Actions workflow to use these secrets for AWS authentication.
Here’s an example workflow YAML:
name: Deploy Lambda with Terraform

on:
  push:
    branches:
      - main

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2 # You can specify the region here

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          terraform_version: 1.0.0

      - name: Initialize Terraform
        run: terraform init

      - name: Plan Terraform changes
        run: terraform plan

      - name: Apply Terraform changes
        run: terraform apply -auto-approve
Breakdown of Updates:
AWS credentials configuration: The "Configure AWS credentials" step uses the GitHub action aws-actions/configure-aws-credentials@v2 to set up the environment for Terraform to authenticate with AWS. The ${{ secrets.AWS_ACCESS_KEY_ID }} and ${{ secrets.AWS_SECRET_ACCESS_KEY }} expressions pull the secrets from the repository settings.
AWS region: The region is defined with the aws-region key (set to us-west-2 in this example); you can modify it to any region you need.
By following this process, you securely add AWS credentials to your GitHub Actions workflow, enabling Terraform to authenticate and interact with AWS.
5. Cost Optimization and Monitoring
Cost optimization is an essential part of managing Lambda functions, especially as usage scales up. Here are a few ways to optimize Lambda costs:
Monitor Duration: Reduce function execution time by optimizing the code to minimize processing delays.
Optimize Memory: Use the minimum memory allocation that still allows the function to execute efficiently.
Enable CloudWatch Alarms: Set up CloudWatch Alarms for function invocation counts, error rates, and duration metrics to keep an eye on function usage and identify cost anomalies.
Terraform can also help automate the setup of these cost optimization tools:
resource "aws_cloudwatch_metric_alarm" "lambda_duration_alarm" {
  alarm_name          = "LambdaDurationExceeded"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "Duration"
  namespace           = "AWS/Lambda"
  period              = 60
  statistic           = "Average"
  threshold           = 3000 # 3 seconds (Duration is reported in milliseconds)
  actions_enabled     = true

  # Scope the alarm to a specific function
  dimensions = {
    FunctionName = aws_lambda_function.my_lambda.function_name
  }
}
Conclusion
AWS Lambda, combined with Terraform, offers an incredibly powerful platform for building serverless applications with infrastructure as code. By following the practices laid out in this guide, you can:
Automate the deployment and management of Lambda functions.
Integrate seamlessly with other AWS services like S3, API Gateway, and CloudWatch.
Optimize your infrastructure for cost and performance.
Maintain flexibility and scalability as your applications grow.
With Terraform, you can also scale your AWS Lambda deployment strategy to handle more complex setups involving multiple environments, reusable modules, and even containerized functions.
The possibilities are vast, but the key takeaway is that automation, consistency, and best practices with tools like Terraform will save time, reduce errors, and ensure that your AWS Lambda infrastructure is future-proof and scalable.