Efficiency Unleashed: Automating EBS Volume Transformation with AWS Lambda for Seamless GP3 Migration


💡 Introduction

Welcome to the world of cloud optimization and efficiency! In this blog post, we delve into AWS (Amazon Web Services) to explore a practical project aimed at enhancing storage performance: a Lambda function designed to seamlessly convert Elastic Block Store (EBS) volumes from any type to the high-performance GP3 type. As organizations increasingly seek to maximize their resources and streamline operations, this project offers a hands-on approach to automating EBS volume transformations for scalable and cost-effective storage.

💡 Pre-Requisites

Before diving into the implementation of the AWS Lambda function to convert EBS volumes to GP3, ensure that you have the following prerequisites in place:

  1. AWS Account:

    • Ensure you have an active AWS account with the necessary permissions to create and manage Lambda functions, IAM roles, and EC2 instances.
  2. Lambda Function Role:

    • Create an IAM role with the required permissions for your Lambda function. Ensure the role has permissions to interact with EBS volumes, EC2 instances, and the necessary AWS services.
  3. AWS Lambda Function Basics:

    • Basic understanding of AWS Lambda functions, including their setup, deployment, and configuration. If you're new to Lambda, consider reviewing the AWS Lambda documentation to get started.
  4. Knowledge of EBS Volumes:

    • Understand the characteristics of different EBS volume types, especially the transition from other types to GP3. Familiarize yourself with the performance attributes, costs, and use cases associated with GP3 volumes.

💡 Create a Lambda function

Here's a simplified walkthrough for creating a Lambda function named ebs-volume-checker with the Python 3.12 runtime, selecting "Author from scratch," and keeping the default settings:

Step 1: AWS Lambda Console

  1. Sign into AWS Console:

    • Log in to the AWS Management Console using your AWS account credentials.
  2. Navigate to Lambda:

    • Open the Lambda service from the AWS Management Console.

Step 2: Create a Lambda Function

  1. Create Function:

    • Click on the "Create function" button.
  2. Author from Scratch:

    • Choose the "Author from scratch" option.
  3. Basic Information:

    • Function Name: Enter ebs-volume-checker as the function name.

    • Runtime: Select "Python 3.12" from the runtime dropdown.

  4. Execution Role:

    • Leave the default option to create a new role with basic Lambda permissions.
  5. Function Code:

    • In the "Function code" section, either write or upload your Python code. This code will include logic to identify EBS volume types and initiate the conversion process to GP3 if needed.

Step 3: Save and Deploy

  1. Save and Deploy:

    • Click on the "Save" button, and then click "Deploy" to deploy your Lambda function.

Step 4: Test Your Lambda Function

  1. Test Function:

    • Use the Lambda console to test your function manually. Ensure that it correctly identifies EBS volume types and triggers the conversion process.

In the Code Source section, add the following code in the Python file:

import boto3


def extractVolumeId(volume_arn):
    # An EBS volume ARN looks like:
    #   arn:aws:ec2:us-east-1:123456789012:volume/vol-0abcd1234efgh5678
    # Take the last ":"-separated segment, then the part after the "/".
    arn_parts = volume_arn.split(":")
    volume_id = arn_parts[-1].split("/")[-1]
    return volume_id


def lambda_handler(event, context):
    # Log the incoming event for debugging in CloudWatch Logs.
    print(event)

    # EventBridge places the volume ARN in the "resources" list.
    volume_arn = event['resources'][0]
    volume_id = extractVolumeId(volume_arn)

    client = boto3.client('ec2')

    # Convert the volume to the gp3 type.
    response = client.modify_volume(
        VolumeId=volume_id,
        VolumeType='gp3'
    )
    return response

Breakdown of the Code:

  1. Importing the boto3 Library:

    This line imports the boto3 library, which is the Amazon Web Services (AWS) Software Development Kit (SDK) for Python. It provides convenient methods for interacting with various AWS services, including EC2 (Elastic Compute Cloud).

  2. Defining the extractVolumeId Function:

    This function, extractVolumeId, takes an Amazon Resource Name (ARN) for an EBS volume (volume_arn) as input and extracts the volume ID. It does so by splitting the ARN string on colons (:), taking the last segment, and then taking the part after the final slash (/).

  3. Defining the lambda_handler Function:

    The lambda_handler function is the main function that AWS Lambda invokes. It takes two parameters: event and context. In this case, the important part is the event parameter, which is a dictionary containing information about the triggering event.

    • print(event): This line prints the event information to the Lambda function's logs. It's helpful for debugging and understanding what triggered the Lambda function.

    • volume_arn = event['resources'][0]: Extracts the ARN of the EBS volume from the event dictionary. The assumption here is that the ARN of the volume is present in the resources field of the event, and the first item is selected ([0]).

    • volume_id = extractVolumeId(volume_arn): Calls the extractVolumeId function to obtain the volume ID from the extracted ARN.

    • client = boto3.client('ec2'): Creates an EC2 client using the boto3 library. This client will be used to interact with the EC2 service.

    • response = client.modify_volume(...): Calls the modify_volume method of the EC2 client to modify the specified EBS volume. In this case, it sets the volume type to 'gp3'.

The Lambda function, when triggered, will modify the specified EBS volume to use the 'gp3' volume type. The assumption is that the Lambda function is triggered with an event containing information about the EBS volume ARN.
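Before wiring up the trigger, you can sanity-check the ARN parsing locally with a sample event shaped like the one EventBridge sends. This is a sketch: the account ID and volume ID below are made-up placeholders, and only the fields relevant to the parsing logic are shown.

```python
# A minimal sketch of an EventBridge "EBS Volume Notification" event;
# the account ID and volume ID are made-up examples.
sample_event = {
    "source": "aws.ec2",
    "detail-type": "EBS Volume Notification",
    "resources": [
        "arn:aws:ec2:us-east-1:123456789012:volume/vol-0abcd1234efgh5678"
    ],
}

# Same parsing logic as extractVolumeId in the Lambda function above.
arn = sample_event["resources"][0]
volume_id = arn.split(":")[-1].split("/")[-1]
print(volume_id)  # vol-0abcd1234efgh5678
```

You can also paste a JSON event like this into the Lambda console's "Test" tab to exercise the handler end to end.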

💡 Adding CloudWatch Rule for Monitoring

To invoke the function automatically, we need to create a CloudWatch (EventBridge) rule that fires whenever a volume is created and triggers the Lambda function to change the storage type to GP3.

  1. Go to CloudWatch, navigate to Rules under the Events group, and click Create Rule.

  2. Give the rule a name. In the Build Event Pattern section, select EC2 as the service, select EBS Volume Notification as the event type, and add the specific event createVolume as follows:

  3. This will trigger the event whenever an EBS volume is created.

  4. In the target section, select Lambda function and, under function name, select ebs-volume-checker.

  5. Click Next, review the EventBridge rule, and create it.
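For reference, the event pattern the console builds for these selections looks roughly like the following. This is a sketch; compare it against the JSON preview the console shows before creating the rule.

```python
import json

# Approximate event pattern for the "EBS Volume Notification" event type
# with the specific event "createVolume", as built by the console.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EBS Volume Notification"],
    "detail": {
        "event": ["createVolume"],
    },
}

print(json.dumps(event_pattern, indent=2))
```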

💡 Adding Permissions to the Role

Before testing the function, we need to add the necessary permissions to the role created by the Lambda function so that it can change the volume type.

First, navigate to IAM -> Roles and open the role created by the Lambda function (ebs-volume-checker-XXXXXX, where X is some random string).

Then we need to add the necessary EC2 permissions, such as ec2:ModifyVolume and ec2:DescribeVolumes, to the role as follows:
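As a sketch, the inline policy you attach to the role would look something like this. Note that "Resource": "*" is used here only for simplicity; in production you would restrict it to specific volume ARNs.

```python
import json

# A minimal IAM inline policy granting the role the EC2 actions the
# function needs. "Resource": "*" is for simplicity; scope it down
# to specific volume ARNs in production.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:ModifyVolume",
                "ec2:DescribeVolumes",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```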

After adding the permissions, you can go ahead and create an EBS volume.

💡 Create an EBS Volume

Go to the EC2 Dashboard and click Volumes under Elastic Block Store.

  1. Click Create Volume and choose an availability zone and size, but remember to select a volume type other than GP3 (for example, GP2).

  2. After a few moments, you can see that the volume type has been changed from your selected type to GP3.

    In case the volume type doesn't change, you can go to CloudWatch -> Log groups, select your Lambda function (ebs-volume-checker), and check the logs for errors.
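You can also verify the conversion programmatically. The helper below is a sketch: in practice you would pass it `boto3.client("ec2")` and a real volume ID; taking the client as a parameter keeps the function easy to test without touching AWS.

```python
def volume_is_gp3(ec2_client, volume_id):
    """Return True if the given EBS volume's type is gp3.

    ec2_client is expected to behave like a boto3 EC2 client,
    e.g. boto3.client("ec2"); it is passed in as a parameter so
    the helper can be exercised without real AWS credentials.
    """
    response = ec2_client.describe_volumes(VolumeIds=[volume_id])
    return response["Volumes"][0]["VolumeType"] == "gp3"
```

Keep in mind the modification may take a short while to apply after the Lambda function runs, so an immediate check right after volume creation can still report the old type.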

💡 Conclusion

In conclusion, our journey through the creation and deployment of the ebs-volume-checker Lambda function underscores the power of automation in managing AWS resources. By leveraging the simplicity of Python and the robust capabilities of the boto3 library, we've crafted a solution that seamlessly converts EBS volume types to 'gp3,' optimizing storage performance with ease.

As organizations increasingly embrace cloud-native architectures, the need for efficient resource management becomes paramount. The ability to dynamically adapt EBS volumes to meet evolving performance requirements is a crucial aspect of this endeavor. Our Lambda function not only streamlines this process but also serves as a foundation for building more sophisticated automation workflows within AWS.

Remember, this journey doesn't end here. AWS offers a vast array of services, and the ebs-volume-checker Lambda function is just one example of what's achievable. Whether you're fine-tuning storage configurations, automating deployment pipelines, or orchestrating complex cloud workflows, the AWS ecosystem provides the tools for innovation and optimization.

As you embark on your own AWS automation adventures, may this blog serve as a valuable guide and inspiration. Feel free to explore further, customize the function to suit your specific needs, and, most importantly, continue pushing the boundaries of what's possible in the ever-evolving landscape of cloud computing.

Happy coding!
