Python AWS Lambda Monorepo — Part 3: Test, Build, and Deploy
In Part 2 of this series, we learned how to create a custom Python package and use make to share it across our Lambda functions. Now we will test our code locally, build it, deploy it to AWS, and run it on the cloud. In this part of the series, we will use CircleCI along with the following tools:
- AWS CLI 1.16
- AWS SAM CLI 0.18
The AWS CLI is used to access and change AWS resources from our terminal. Follow the steps in the AWS CLI docs to set up the developer user profile on your machine. The AWS SAM (Serverless Application Model) CLI is a development framework that allows us to test our serverless application locally. With everything installed, we can start programming.
All the code can be downloaded on GitHub: https://github.com/bombillazo/python-lambda-monorepo
Serverless Application Model
With all our functions ready, we can test them locally to see if our program is working correctly. AWS SAM uses template files to define the AWS resources of the application. These templates can be used with CloudFormation to deploy the application. They can also be used to spin up a local Lambda environment to test our code. This will be our main use of SAM.
We start by adding a template.yml at the root of our project.
These templates have a predefined structure. Let’s look at one of our function definitions:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Python Lambda Monorepo - Example
Resources:
  createdinosaur:
    Type: AWS::Serverless::Function
    Properties:
      Description: Create Dinosaur Lambda
This describes the Lambda function running the code for create_dinosaur. The resource is called createdinosaur since SAM does not allow symbols in resource names. We also specify the Lambda function properties:
- Runtime refers to what programming language environment the Lambda function will be set up for.
- Handler specifies the code entry point for the Lambda function. The format is file.function, where file is the script file containing our main function and function is the entry function where our code starts executing. In our case, handler is the entry function.
- CodeUri specifies the directory path of the function relative to the template file.
- Timeout specifies the maximum time the Lambda can run before timing out.
- Environment.Variables allows us to define variables for our Python environment. In our case, we add the ./packages path to the PYTHONPATH variable, since this will be the installation location of our packages.
- Events.Api.Type defines the event source that is attached to the function to execute it. In our case, we are attaching an API Gateway endpoint to be used for testing.
- Events.Api.Properties.Path defines the path endpoint for our function.
- Events.Api.Properties.Method states what HTTP method to use.
Note: The PYTHONPATH variable is defined relative to the root directory of the Lambda function
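Putting these properties together, the createdinosaur resource might look something like this (a sketch only; the CodeUri path, handler file name, timeout value, and API path are illustrative assumptions, not necessarily the project’s exact values):

```yaml
createdinosaur:
  Type: AWS::Serverless::Function
  Properties:
    Description: Create Dinosaur Lambda
    Runtime: python3.7            # Python environment for the function
    Handler: main.handler         # file.function entry point (assumed file name)
    CodeUri: ./services/create_dinosaur   # path relative to template.yml (assumed)
    Timeout: 30                   # max run time in seconds (assumed value)
    Environment:
      Variables:
        PYTHONPATH: ./packages    # where our shared packages are installed
    Events:
      Api:
        Type: Api                 # attach an API Gateway endpoint for testing
        Properties:
          Path: /create_dinosaur  # assumed endpoint path
          Method: post
```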
With our template.yml file ready, we can use the AWS SAM CLI to run the Lambdas locally. Make sure your AWS CLI is set up to access your AWS account with an access key ID and secret access key. In our terminal, we’ll run the invoke command from the project root directory:
sam local invoke createdinosaur --no-event
Note: We will run all our terminal commands from the project root directory unless specified otherwise
We see various things have happened. SAM invoked our Lambda locally by spinning up a local Docker container with a Python image in which our code ran. Then we see the actual code execution status: the START information, any console log messages printed by our code, the END information, and the REPORT information containing metadata about our code execution.
In this case, we got a No dinosaur provided message. This is because we are not passing any event data to our Lambda function (note we used --no-event in our command). Let’s add a new directory in the root directory named requests:
/requests
| └─ tyrannosaurus.json
In here, we will add .json files which we will pass as event data to our Lambda function. Let’s add a request .json file to create a Tyrannosaurus:
{
  "name": "Tyrannosaurus rex"
}
Let’s run the command again, but this time adding the event data with the -e flag:
sam local invoke createdinosaur -e requests/tyrannosaurus.json
The function received the dinosaur data from the event parameter in the handler function. Now we see the function printed our dinosaur data and the successful response from the DynamoDB insert. If we go to DynamoDB, we will see our new dinosaur.
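For reference, a minimal handler shaped like the one described here might look as follows. This is only a sketch: the real create_dinosaur code, its response fields, and its DynamoDB call may differ.

```python
import json

# Minimal sketch of a create_dinosaur handler (illustrative only; the
# actual project's handler and DynamoDB logic may differ).
def handler(event, context):
    # `event` carries the request JSON passed with `sam local invoke -e`
    name = event.get("name") if event else None
    if not name:
        # No event data was provided (e.g. invoked with --no-event)
        print("No dinosaur provided")
        return {"statusCode": 400, "body": json.dumps("No dinosaur provided")}
    print(f"Creating dinosaur: {name}")
    # ... the DynamoDB put_item call would go here (omitted in this sketch) ...
    return {"statusCode": 200, "body": json.dumps(f"Created {name}")}
```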
With this knowledge, we can add the rest of the Lambda functions to our template file to test them locally.
Build and Deploy
We have our code, we have our packages. It’s time to start pushing all these files to our AWS environment. To do so, we’ll use the CircleCI setup we did in Part 1. To define the CircleCI jobs we want to run, we need to add our CircleCI config script to our project.
/.circleci
| └─ config.yml
We will go over our config file part by part. For more in-depth information on the concepts that make up the CircleCI config file and process, check out their docs. We’ll start with the environment:
- image: circleci/python:3.7.0
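Pieced together, the top of the config file might look something like this (a sketch; the orb version and working_directory value are assumptions):

```yaml
version: 2.1
orbs:
  aws-cli: circleci/aws-cli@0.1.13   # installs the AWS CLI for us (version assumed)
jobs:
  build:
    docker:
      - image: circleci/python:3.7.0  # Python 3.7 executor image
    working_directory: ~/project      # assumed path
```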
Orbs in CircleCI are pre-made packages of resources and jobs that can be reused in your deploy process. Orbs are stored in a registry and CircleCI provides various orbs for common steps. In this case we are using the circleci/aws-cli orb. This orb installs the AWS CLI in our deployment environment for us.
In the build job, we are using a Docker executor, which defines the environment in which we will run our script and process. Similar to orbs, there are many pre-made images stored in a registry for common processes; in our case, we use a Python 3.7 image. The working_directory specifies where in this image environment our script runs its steps. This takes us to the first steps:
steps:
  - checkout
  - aws-cli/install
  - run:
      name: Configure ENV files based on environment
      command: |
        echo "Setting AWS environment"
        aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
        aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
        aws configure set region us-east-1
First, we check out our GitHub code into the CircleCI environment and install the AWS CLI using the orb. Then we run the aws configure commands to use the AWS access key and secret from the CircleCI environment variables we set up in the beginning.
- run:
    name: Zip and deploy Lambda Functions
    command: |
      ROOT_DIR=$(pwd)
      # iterate through lambdas
      for LAMBDA_PATH in services/*/
      do
        FUNCTION_NAME=$(basename $LAMBDA_PATH)
        if [[ -f services/$FUNCTION_NAME/requirements.txt ]]; then
          echo "Installing packages..."
          pip install -r services/$FUNCTION_NAME/requirements.txt --target services/$FUNCTION_NAME/packages/ --find-links ./packages
        fi
        echo "Building $FUNCTION_NAME..."
        cd services/$FUNCTION_NAME
        zip main.zip *.py */ -r
        echo "Deploying $FUNCTION_NAME..."
        aws lambda update-function-code --function-name $FUNCTION_NAME --region us-east-1 --zip-file fileb://main.zip
        cd $ROOT_DIR
      done
This is our whole build and deploy script. First, we store our current starting directory. We go through each Lambda function directory in the services directory, storing the directory name. We check if the function has a requirements.txt file present. If so, we run the install command we defined in our Makefile to install our package into the Lambda function directory. Then we go into the function directory and run the zip command. This command creates a .zip file that contains every file ending with the Python extension, plus any directory containing files (i.e., our packages).
Finally, we run the aws lambda update-function-code command to push this new zip file to the specified Lambda function. The process is repeated for each function. Now we define the workflow:
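A minimal workflow matching this setup might look like the following (the workflow name is an assumption; the build job and master-branch filter come from this project):

```yaml
workflows:
  version: 2
  build-deploy:        # assumed workflow name
    jobs:
      - build:
          filters:
            branches:
              only: master   # run the build job on master only
```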
This is a simple workflow which runs the build job on our master branch only. Now we are ready to use this! Once we push this config file to the repo and start making changes to the master branch, we can see the build run in the CircleCI console:
Inside our build, we see all the steps that executed as defined in our config.yml. Open each section and view the console outputs for the details on each step.
The process completed successfully and we can confirm everything worked by going into the AWS console and looking at the Lambda functions.
Inside our Lambda function we see all our files are uploaded. They are now ready to run requests on AWS. Success!
We can run these Lambda functions manually by using the Test option in the console and passing a JSON object with request data to the function. We’ll test out the fight_dinosaurs function. After importing a couple of herbivore and carnivore dinosaurs into our app, start a test with an empty JSON object to run the function.
Once the function runs, it returns metadata, response data, and the console log output. If all is set up correctly, we’ll see the function ran correctly and the console output shows the hunt process. In this case, Diplodocus survived the attack (phew).
This marks the end of our 3 part series. This was the overall structure of our monorepo:
/.circleci
| └─ config.yml
/packages
| └─ /package1
|    ├─ /package1
|    |  ├─ __init__.py
|    |  └─ core.py
|    └─ setup.py
/services
| ├─ /function1
| |  ├─ main.py
| |  └─ requirements.txt
Makefile
template.yml
From here on you can create serverless services that leverage your libraries in a manageable and scalable way. You can improve on the deployment process and version your releases by enhancing the deployment scripts.
This is my first foray into writing technical articles and I’ve learned a lot in the process. Hopefully, this was clear and helpful in how to set up a Lambda monorepo, create Python packages to reuse across your services and use CircleCI to deploy your code. Now, build on!