Train and deploy Keras models with TensorFlow on Amazon SageMaker (Facial Emotion Detection)

Let's get started learning how to detect human emotions, also known as facial expression recognition, using Keras models.

Beginners Tutorial To Perform Facial Emotion Detection Using Keras Models on Amazon SageMaker

Keras is a popular and well-documented open-source library for deep learning, while Amazon SageMaker provides you with easy tools to train and optimize machine learning models. Until now, you had to build a custom container to use both, but Keras is now part of the built-in TensorFlow environment on Amazon SageMaker. Not only does this simplify the development process, it also allows you to use standard Amazon SageMaker features such as script mode and automatic model tuning.
Keras's excellent documentation, numerous examples, and active community make it a great choice for beginners and experienced practitioners alike. The library provides a high-level API that makes it easy to build all kinds of deep learning architectures, using TensorFlow as the backend for training and prediction.
In this blog, we show you how to train and deploy Keras 2.x models on Amazon SageMaker using the built-in TensorFlow environment. In the process, you also learn how to do the following:
  • Use script mode to run the same Keras code on Amazon SageMaker that you run on your local machine.
  • Launch automatic model tuning to optimize hyperparameters.
  • Deploy your models with Amazon Elastic Inference.

This example demonstrates training a simple convolutional neural network on the Facial Expression Recognition dataset. The data consists of 48×48 pixel grayscale images of faces. The task is to categorize each face based on the emotion shown in the facial expression into one of seven categories (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral).

First, set up TensorFlow as your Keras backend. For more information, see the script.
The process is straightforward:
  • Grab optional parameters from the command line, or use default values if they’re missing.
  • Download the dataset and save it to the /data directory.
  • Normalize the pixel values, and one-hot encode the labels.
  • Build the convolutional neural network.
  • Train the model.
  • Save the model to TensorFlow Serving format for deployment.
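The normalization and one-hot encoding steps above can be sketched with plain NumPy; the function name and the fabricated two-image batch below are illustrative, not taken from the original script:

```python
import numpy as np

NUM_CLASSES = 7  # 0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral

def preprocess(pixels, labels):
    """Normalize flat 48x48 grayscale pixels to [0, 1] and one-hot encode labels."""
    x = pixels.astype("float32") / 255.0
    x = x.reshape(-1, 48, 48, 1)  # add a channel dimension for the CNN
    y = np.eye(NUM_CLASSES, dtype="float32")[labels]  # one-hot encode
    return x, y

# Tiny fabricated batch: two fake images labeled 3 (Happy) and 6 (Neutral)
x, y = preprocess(np.random.randint(0, 256, size=(2, 48 * 48)), np.array([3, 6]))
```

In the real script, `pixels` comes from the FER CSV file, where each row stores a space-separated string of 2,304 pixel values.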

Now check that this code works by running it on your local machine, without using Amazon SageMaker.

Training and deploying the Keras model on Amazon SageMaker

You must make a few minimal changes, but script mode does most of the work for you. Before invoking your code inside the TensorFlow environment, Amazon SageMaker sets four environment variables:
  • SM_NUM_GPUS—The number of GPUs present in the instance.
  • SM_MODEL_DIR—The output location for the model.
  • SM_CHANNEL_TRAINING—The location of the training dataset.
  • SM_CHANNEL_VALIDATION—The location of the validation dataset.
You can use these values in your training code with just a simple modification:

import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('--gpu-count', type=int, default=os.environ['SM_NUM_GPUS'])
parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
parser.add_argument('--training', type=str, default=os.environ['SM_CHANNEL_TRAINING'])
parser.add_argument('--validation', type=str, default=os.environ['SM_CHANNEL_VALIDATION'])

What about hyperparameters? No work needed there. Amazon SageMaker passes them as command-line arguments to your code.
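For example, a training script could declare its hyperparameters as ordinary command-line flags and still run unchanged locally; the flag names and default values below are assumptions for illustration:

```python
import argparse

# SageMaker passes the hyperparameters you set on the estimator as
# command-line arguments, so plain argparse picks them up directly.
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--batch-size', type=int, default=128)
parser.add_argument('--learning-rate', type=float, default=0.01)

# Locally you can pass the flags yourself (or rely on the defaults);
# on SageMaker they come from the estimator's hyperparameters dict.
args, _ = parser.parse_known_args(['--epochs', '20', '--batch-size', '64'])
```

Using `parse_known_args` lets the script ignore any extra arguments SageMaker may pass.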
For more information, see the updated script.

Now let's implement it step by step.

Create an S3 bucket on AWS

1. Sign in to the AWS Management Console and open the Amazon S3 console.
2. Choose Create bucket.
3. In the Bucket name field, type a unique DNS-compliant name for your new bucket. (The example screenshot uses the bucket name admin-created. You cannot use this name, because S3 bucket names must be unique.)
4. For Region, choose US West (Oregon) as the region where you want the bucket to reside.
5. Choose Create, then download all the files from GitHub and upload them to the bucket.

Create a notebook instance on AWS

1. Open the Amazon SageMaker console.
2. Choose Notebook instances, then choose Create notebook instance.
3. Configure the notebook instance settings as shown below.
4. Create a new IAM role and enter your S3 bucket name.
5. Choose Create notebook instance and wait until the notebook instance has been created.

Training on Amazon SageMaker

With your training script ready for script mode, you can begin training on Amazon SageMaker. For more information, see the fer-amazon-sagemakers.ipynb notebook.

The process is straightforward:
  • Load the dataset.
  • Define the training and validation channels.
  • Configure the TensorFlow estimator, enabling script mode and passing some hyperparameters.
  • Train, deploy, and predict.
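The notebook's flow could be sketched roughly as follows, assuming the SageMaker Python SDK v2 (where script mode is the default); the script name, S3 paths, instance types, and hyperparameter values are all illustrative, and running this requires AWS credentials:

```python
# Configuration sketch only: fit() and deploy() launch real AWS resources.
import sagemaker
from sagemaker.tensorflow import TensorFlow

role = sagemaker.get_execution_role()  # IAM role of the notebook instance

estimator = TensorFlow(
    entry_point='fer_training.py',     # the script-mode training script (assumed name)
    role=role,
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    framework_version='1.15',
    py_version='py3',
    hyperparameters={'epochs': 20, 'batch-size': 128, 'learning-rate': 0.01},
)

# The channel names map to SM_CHANNEL_TRAINING / SM_CHANNEL_VALIDATION
# inside the training script.
estimator.fit({
    'training': 's3://my-fer-bucket/training',        # assumed bucket paths
    'validation': 's3://my-fer-bucket/validation',
})

# Deploy behind a real-time endpoint; the optional accelerator_type attaches
# an Amazon Elastic Inference accelerator to cut inference cost.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type='ml.m5.xlarge',
    accelerator_type='ml.eia1.medium',
)
```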
After running all the cells of fer-amazon-sagemakers.ipynb, you can serve model predictions through an AWS Lambda function and create an API Gateway endpoint for production.
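As a rough sketch, such a Lambda function might look like the following; the endpoint name, the event shape, and the build_response helper are assumptions, not from the notebook:

```python
import json

# Class index -> emotion label, matching the dataset's label order.
EMOTIONS = ['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']

def build_response(predictions):
    """Turn TensorFlow Serving output into an API Gateway proxy response."""
    scores = predictions['predictions'][0]
    best = max(range(len(scores)), key=lambda i: scores[i])  # argmax
    return {
        'statusCode': 200,
        'body': json.dumps({'emotion': EMOTIONS[best], 'score': scores[best]}),
    }

def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime
    runtime = boto3.client('sagemaker-runtime')
    resp = runtime.invoke_endpoint(
        EndpointName='fer-endpoint',   # assumed endpoint name
        ContentType='application/json',
        Body=event['body'],            # JSON like {"instances": [...]}
    )
    return build_response(json.loads(resp['Body'].read()))
```

API Gateway then proxies HTTPS requests to this handler, keeping the SageMaker endpoint private.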


With Amazon SageMaker, we don't maintain any servers, and SageMaker automatically hosts the trained model behind an HTTPS endpoint, so we can easily move our model into production. In this blog, we implemented emotion detection, which identifies facial expressions in real time, and deployed it to production.


The main goal of this project is to implement facial emotion detection using the Keras framework and deploy it to production with Amazon SageMaker, so that everyone can access it in real time without any setup.