Deploying a simple Pokemon REST API to AWS ECS

Shan Pandya
5 min read · Oct 18, 2020

Dockerizing and deploying a simple API, where you can GET and POST Pokemon, seems like a fun way to learn the basics of ECS and other AWS services!

All code can be found here.

Prerequisites: an AWS account, the AWS CLI installed and configured (via “aws configure”), and Docker installed locally.

Let’s first create a simple DynamoDB database.

First, create a new file called “create-table.json” and paste in the JSON below. We’re setting up a table with just two attributes: the Pokemon’s name (the hash key) and its type (the range key).

{
    "TableName": "Pokemon",
    "KeySchema": [
        { "AttributeName": "name", "KeyType": "HASH" },
        { "AttributeName": "type", "KeyType": "RANGE" }
    ],
    "AttributeDefinitions": [
        { "AttributeName": "name", "AttributeType": "S" },
        { "AttributeName": "type", "AttributeType": "S" }
    ],
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 5,
        "WriteCapacityUnits": 5
    }
}

We can then create this table in AWS by running:

aws dynamodb create-table --cli-input-json file://create-table.json
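
Table creation takes a few seconds, so before moving on you can check that it has finished (these are standard AWS CLI commands, with no assumptions beyond the table name above):

# Block until the table reaches ACTIVE status
aws dynamodb wait table-exists --table-name Pokemon

# Inspect the table definition and status
aws dynamodb describe-table --table-name Pokemon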

Optionally, we can also pre-fill our table with some data. Create a new file called “batch-write.json” and paste in:

{
    "Pokemon": [
        {
            "PutRequest": {
                "Item": {
                    "name": { "S": "pikachu" },
                    "type": { "S": "electric" }
                }
            }
        },
        {
            "PutRequest": {
                "Item": {
                    "name": { "S": "charmander" },
                    "type": { "S": "fire" }
                }
            }
        },
        {
            "PutRequest": {
                "Item": {
                    "name": { "S": "blastoise" },
                    "type": { "S": "water" }
                }
            }
        }
    ]
}

Then run:

aws dynamodb batch-write-item --request-items file://batch-write.json
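
To double-check that the items landed, we can scan the table from the CLI:

aws dynamodb scan --table-name Pokemon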

We now have a DynamoDB table, so it’s on to writing some code for our endpoints!

mkdir flaskdynamodb 
cd flaskdynamodb
touch app.py aws_controller.py Dockerfile requirements.txt

We’ll put our application code in app.py, and the controller code in aws_controller.py.

Paste this into app.py:

from flask import Flask, jsonify, request
import aws_controller

app = Flask(__name__)


@app.route('/')
def index():
    return "This is the main page."


@app.route('/pokemon', methods=['GET'])
def get_items():
    return jsonify(aws_controller.get_items())


@app.route('/pokemon', methods=['POST'])
def post_items():
    name = request.json.get('name')
    type = request.json.get('type')

    if not name or not type:
        return jsonify({'error': 'Please provide name and type'}), 400

    return aws_controller.post_items(name, type)


if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)

Now let’s write aws_controller.py (change “us-west-1” for the region you’re deploying to):

import boto3

# Change "us-west-1" to the region you're deploying to
dynamo_client = boto3.client('dynamodb', "us-west-1")
TABLE_NAME = 'Pokemon'


def get_items():
    return dynamo_client.scan(
        TableName=TABLE_NAME
    )


def post_items(name, type):
    dynamo_client.put_item(TableName=TABLE_NAME, Item={'name': {'S': name}, 'type': {'S': type}})
    return "Successfully added " + name + " to the database"

One thing to note is that we’re using boto3. This works when we run locally, because boto3 reads credentials from the file that gets created when we run “aws configure”. But if we build this into a Docker image and then run the image locally, it won’t work, as the container won’t have access to those credentials. It will work once we deploy the image to AWS and set up our roles correctly, since boto3 will then automatically pick up temporary credentials for the task role.

We can also write our Dockerfile:

FROM python:3.8-slim

COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt

ENTRYPOINT [ "python" ]
CMD ["app.py"]

And our requirements.txt file:

boto3==1.15.16
botocore==1.18.16
click==7.1.2
Flask==1.1.2
itsdangerous==1.1.0
Jinja2==2.11.2
jmespath==0.10.0
MarkupSafe==1.1.1
python-dateutil==2.8.1
s3transfer==0.3.3
six==1.15.0
urllib3==1.25.10
Werkzeug==1.0.1
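
Before pushing anything to AWS, you can optionally build and run the image locally. As noted above, the container won’t see your credentials by default; one common workaround (a sketch — the image tag “pokemon-api” is an arbitrary example) is to mount your local ~/.aws directory into the container:

# Build the image from the Dockerfile above
docker build -t pokemon-api .

# Run it, mounting local AWS credentials read-only and publishing port 5000
docker run -p 5000:5000 -v ~/.aws:/root/.aws:ro pokemon-api

The same curl requests from earlier should then work against http://localhost:5000.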

Next up, we need to containerise our app and upload it to ECR:

  1. Log into your AWS console and create a new ECR repo. To do this, go to “Elastic Container Registry”, click “Create repository” and choose a name for your repo.
  2. Once your repository is created, you can click on it and you’ll see an option to “View push commands”. Following those instructions, you’ll be able to build your Docker image and upload it to ECR; the sequence typically looks like the sketch below.
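
For reference, the push commands usually look something like this (a sketch only — the account ID “123456789012”, region “us-west-1” and repo name “pokemon-api” are placeholders; the console shows the exact commands for your repo):

# Authenticate Docker with your ECR registry
aws ecr get-login-password --region us-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-1.amazonaws.com

# Build, tag and push the image
docker build -t pokemon-api .
docker tag pokemon-api:latest 123456789012.dkr.ecr.us-west-1.amazonaws.com/pokemon-api:latest
docker push 123456789012.dkr.ecr.us-west-1.amazonaws.com/pokemon-api:latest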

Now we’re ready to get started with some AWS config!

  1. Set up a new security group in AWS, called ‘my-ecs-sg’. Allow all outbound connections, and allow inbound SSH.
  2. Create an ECS cluster. You can do this by following the creation wizard (let’s just use “EC2 Linux + Networking”), adding it to the default VPC and to all availability zones. For the security group, use the ‘my-ecs-sg’ group we just set up. Leave the rest of the values as default, and let Amazon create an “ecsInstanceRole” if you don’t have one already.
  3. Set up a new task definition. The launch type should be “EC2”. We’ll also have to create a new “task role”. We can use the IAM wizard for that, selecting “Elastic Container Service” as the service and “Elastic Container Service Task” as the use case, and attaching the AmazonDynamoDBFullAccess policy. In the container definition, expose container port 5000 but don’t map it to a specific host port: leaving the host port unset keeps things flexible if we later want to run several copies of the same task on one instance (by default Docker will pick a free host port).
  4. Create an Elastic Load Balancer (ELB) and set up a new target group on port 80. At this stage it’s worth creating a separate security group for the load balancer, then modifying ‘my-ecs-sg’ to allow all inbound traffic originating from that new security group. Don’t worry about adding any instances to the target group yourself (this happens automatically once we set up our service).
  5. Set up a new ECS service. Attach the task definition we created, and hook it up to the load balancer and target group we just created. During this setup, choose to run 1 task in “replica” mode.
  6. Now we can test things. If everything is working OK, we should be able to SSH into our EC2 instance and see a Docker container running when we run “docker ps”. If we go to our target group we should also see registered targets, and we should be able to send GET and POST requests to our API using the public DNS name of our load balancer (example requests are below this list).
  7. To scale up, we can update our service to run more than one task.
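
For example, replacing <your-elb-dns-name> with the public DNS name of your load balancer (shown in the EC2 console under Load Balancers), the requests from earlier should now work over port 80:

curl http://<your-elb-dns-name>/pokemon

curl -X POST -H "Content-Type: application/json" -d '{"name": "bulbasaur", "type": "grass"}' http://<your-elb-dns-name>/pokemon

Scaling up (step 7) can also be done from the CLI if you prefer, for example:

aws ecs update-service --cluster <your-cluster-name> --service <your-service-name> --desired-count 2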

Yay! We have an API running. If there are issues, the first things to check are the security groups and whether connectivity has been set up correctly between the different components.

Finally, if we were running this in production we’d need to think about a few more things. In particular, whether we should split things out further, for example putting the GET and POST endpoints into separate images with their own Dockerfiles; we could then create separate task definitions for each and scale each part independently. We’d also obviously need to add more error handling to the code.

Thanks for reading!
