CANDY DISPENSER WITH EMOTION DETECTION

Artificial intelligence is making mood-sensing an intrinsic element of the product interface. Read all about this candy dispenser that detects emotions to create a more intuitive experience.


Give a facial expression to get your candy!!! Wondering how? The solution lies in this tutorial, which will give you a brief idea of how to implement such a CANDY DISPENSER WITH EMOTION DETECTION in real time. So let's just get started with this exciting and fun learning…..

1. Overview:

We will design a system whose hardware part consists of a Raspberry Pi with a compatible camera module (either a Pi camera or a webcam) to take a picture of your expression, plus an RFID reader and tags in place of a coin acceptor. The software part includes a bunch of AWS services: an AWS Lambda function to fetch or retrieve data from a topic using MQTT, AWS IoT Core to connect your device to AWS, and AWS Rekognition for the emotion detection.

2. The system demo is shown below:

3. Requirements:

  • AWS Rekognition, AWS Lambda, and AWS IoT Core services
  • Installing OpenCV and Flask
  • Python coding with OpenCV and Flask, and Python on AWS Lambda
  • Raspberry Pi with a camera module or webcam
  • RFID reader and tags, and LEDs

4. Prerequisite basic knowledge of concepts like:

  • An AWS account to get access to the AWS services
  • Installing Python and basic knowledge of Python coding
  • Flask in Python
  • OpenCV in Python
  • RFID reader and tags, and LEDs

5. Hardware set-up for the system:

  • Raspberry Pi (the heart of the system)
  • The devices to connect to the Raspberry Pi, with helpful tutorial links, are listed below:
  • Webcam or Raspberry Pi camera
    (Here we have used a USB webcam connected to the Raspberry Pi)
  • RFID reader and tags
    (https://medium.com/coinmonks/for-beginners-how-to-set-up-a-raspberry-pi-rfid-rc522-reader-and-record-data-on-iota-865f67843a2d)
  • Push button and LEDs
    (https://www.hackster.io/hardikrathod/push-button-with-raspberry-pi-6b6928)
  • MG90S or SG90S servo motor
    (https://www.instructables.com/id/Servo-Motor-Control-With-Raspberry-Pi/)
[NOTE: You can use the Pi camera module instead of a webcam, and other NFC/RFID reader modules are available to design your system.]

Before moving to the implementation of the system, you need to learn about AWS services like AWS Lambda, AWS Dynamo-DB, AWS Rekognition, and AWS IoT Core, so that you get an idea of how these services are used to set up the wonderful software part of this system. Refer to the additional links below and get ready:

Now that we have an idea of the concept of the system, the devices used in it, and the AWS services it relies on, what are we waiting for!!! Let's get into the most interesting part of the system: implementing it in real time…

6. Software set-up with Implementation of system:

How it works:-

  1. First, the system waits for the user to scan an RFID card. When a card is detected, it is verified against the cards registered in Dynamo-DB.
  2. It then takes a picture of the person's expression and sends it to a bunch of AWS services to detect the emotion.
  3. Once an emotion is detected, the corresponding LED blinks and the servo motor rotates to dispense the candy.

6.1 Registering the RFID card

Only a registered card can be used to dispense your candy, so let's get started with the registration of the RFID card. A Flask application will help the user sign up.
6.1.1 The data flow diagram is as follows:
  • The Raspberry Pi keeps listening for an RFID card.
  • The serial number of the detected RFID card is passed to the registration page in Flask using AJAX.
  • When the user clicks the sign-up button, the Lambda function is invoked through API Gateway with AJAX.
  • The Lambda function receives the registration details from the webpage through API Gateway.
  • The Lambda function stores these details in AWS Dynamo-DB.
6.1.2 Building the registration architecture:
1. Setting up Dynamo-DB

First, a table is created in Dynamo-DB on AWS.
Use the Amazon Dynamo-DB console to create a new Dynamo-DB table. Give your table a name and a partition key called RFID of type String. The table name and partition key are case sensitive. The table has two other fields, name and mail.
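If you prefer to create the table from code rather than from the console, a minimal boto3 sketch could look like the one below (the table name Candy_Users is only an illustrative placeholder; the non-key fields name and mail do not need to be declared up front because Dynamo-DB is schemaless for non-key attributes):

import boto3

dynamodb = boto3.client('dynamodb')
dynamodb.create_table(
    TableName='Candy_Users',                                                  # placeholder name
    KeySchema=[{'AttributeName': 'RFID', 'KeyType': 'HASH'}],                 # partition key
    AttributeDefinitions=[{'AttributeName': 'RFID', 'AttributeType': 'S'}],   # String type
    BillingMode='PAY_PER_REQUEST'                                             # no capacity planning needed
)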

2. Lambda function to store data in Dynamo-DB
A Lambda function is written to receive the data from the webpage and store it in the Dynamo-DB table created above.
The following code stores the data in Dynamo-DB:
tableName = "Caandy_Your tablename"
table = dynamodb.Table(tableName)
table.put_item(
  Item={
    'RFID':event['RFID'],
    'Name':event['Name'],
    'Email':event['Email']
  }
)
(Note: Do not forget to attach a policy granting the Lambda function access to Dynamo-DB via the Lambda execution role.)
3. API Gateway for the above Lambda function

An API Gateway must be created to invoke the Lambda function.
(Note: a link to creating an API Gateway for an AWS Lambda function:
https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-with-lambda-integration.html )

When sign-up is clicked, an Ajax POST request is made to the API Gateway to invoke the Lambda function. The Lambda function receives the sign-up details through API Gateway and stores them in Dynamo-DB.
4. Creating the Flask app on the Raspberry Pi

Create a route to a function (in main.py) which keeps scanning for an RFID card. This function returns the serial ID of the last detected RFID card.
On the registration page, Ajax calls are made to the above route.
A Flask app serves the registration page that registers the card ID to the user, and the Ajax calls hit the RFID fetch function.
(This Flask app can be accessed in a browser from any computer connected to the same network as the Pi.)
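Here is a minimal sketch of such a Flask app, assuming the RC522 reader from the tutorial linked above and the mfrc522 library's SimpleMFRC522 class; the route names and the register.html template are illustrative, not the exact code:

from flask import Flask, render_template
from mfrc522 import SimpleMFRC522

app = Flask(__name__)
reader = SimpleMFRC522()

@app.route('/scan')
def scan_card():
    # Blocks until a card is presented, then returns its serial id as text
    card_id, _text = reader.read()
    return str(card_id)

@app.route('/')
def registration_page():
    # The registration page polls /scan with Ajax to fill in the card id,
    # then posts the sign-up details to the API Gateway URL
    return render_template('register.html')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)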

The registration page will look like this after scanning the RFID card:
When sign-up is clicked, a request is made to the API Gateway URL with an Ajax call. The Lambda function receives the data from the webpage through API Gateway and stores it in Dynamo-DB.
With this, we have completed the first part of the software implementation: the registration of your RFID card with the help of a Flask application on the Raspberry Pi.
Once an RFID card is registered it can be used with the system, and multiple users can each register their own card with a different ID.
So now let's move on to the system's core functionality, which covers the main idea of the software set-up: connecting your devices together and making them work with the AWS services in real time.
The basic idea of the core functionality is shown in the architecture below, and the 10 steps that follow explain it so that you can understand it better before building the system.
6.2 System
6.2.1 Architecture as below:-
The basic 10-step explanation of the above system:
1. First, when an RFID card is scanned, its serial number is published to an MQTT topic.
2. A pre-declared rule triggers the lambda function and passes the MQTT message as an argument.
3. This lambda function fetches the details related to the received RFID from Dynamo-DB.
4. The fetched details are published on an MQTT topic; if no details are found for the scanned RFID card, then "No data found" is published.
5. The Pi has subscribed to the above topic and receives the message.
6. If details are received, it waits until the push button is pressed.
7. When the push button is pressed, an image is captured and first converted to a base64 string. This string is published to an MQTT topic (see the capture sketch after this list).
8. Another pre-declared rule triggers the second lambda function and passes the message to it.
9. This lambda function first decodes the received base64 string into an image file. It contains the API call for AWS Rekognition, the emotion recognition service; the decoded image is sent to it, the expression is fetched from the response, and the expression is published to an MQTT topic.
10. The Pi has subscribed to this topic. It blinks the LED corresponding to the received expression and dispenses the candy by rotating the servo motor.
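As a reference for step 7, here is a minimal sketch of capturing a picture with OpenCV and publishing it as a base64 string; the mqtt_client is the AWS IoT MQTT client set up later in this tutorial, and the topic name Candy/image is an assumption:

import base64
import cv2

def capture_and_publish(mqtt_client, topic='Candy/image'):
    cam = cv2.VideoCapture(0)                    # USB webcam
    ok, frame = cam.read()
    cam.release()
    if not ok:
        return                                   # no frame captured
    ok, jpeg = cv2.imencode('.jpg', frame)       # compress the frame to JPEG
    img_b64 = base64.b64encode(jpeg.tobytes()).decode('utf-8')
    mqtt_client.publish(topic, img_b64, 1)       # QoS 1 publish of the base64 string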
6.2.2 Implementation:
It involves the 5 steps below:
1. Writing a lambda function which receives the RFID and returns the relevant details. Let’s call it “RFID_fetcher”.
2. A rule to invoke the “RFID_fetcher” lambda function on MQTT publish.
3. Writing a lambda function which receives the image as a base64 string and detects the expression. Let’s call it “Emotion_Detector”.
4. A rule to invoke the “Emotion_Detector” lambda function on MQTT publish.
5. Writing a python program to interact with hardware and AWS.
Let's discuss each step in detail to give you a clearer idea:
1. Writing a lambda function which receives the RFID and returns the relevant details. Let’s call it “RFID_fetcher”.
  • This function checks the validity of the RFID card, i.e. whether the scanned card has been registered in the database successfully or not.
  • If details exist for the received RFID in Dynamo-DB, it publishes them on the MQTT topic; otherwise it publishes "No data found".
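A minimal sketch of such an "RFID_fetcher" function is shown below; the table and topic names are placeholders following this tutorial, so adjust them to your own set-up:

import json
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Candy_Your_table_name')   # your Dynamo-DB table
iot = boto3.client('iot-data')

def lambda_handler(event, context):
    rfid = event['RFID']                          # serial number from the MQTT message
    resp = table.get_item(Key={'RFID': rfid})
    if 'Item' in resp:
        payload = json.dumps(resp['Item'])        # registered user details
    else:
        payload = 'No data found'
    # Publish the result so the subscribed Pi can continue
    iot.publish(topic='Candy/RFID_details', qos=1, payload=payload)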
2. A rule to invoke the “RFID_fetcher” lambda function on MQTT publish.
  • The Raspberry Pi is only going to interact with AWS through MQTT. AWS provides a feature called a rule to perform an action when a message is received on an MQTT topic. We can create a rule to call the "RFID_fetcher" lambda function when the Raspberry Pi publishes a message to the topic "RFID_fetch".
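For example, when creating this rule in the IoT Core console, a query statement along the lines of SELECT * FROM 'RFID_fetch' (assuming that topic name) selects every message published to the topic, and the rule's action is set to invoke the "RFID_fetcher" Lambda function.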
3. Writing a lambda function which receives the image as a base64 string and detects the expression. Let’s call it “Emotion_Detector”.
  • This function receives a base64-encoded image. First, it is decoded and then it is sent to AWS Rekognition to generate the required output for the system.
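Below is a minimal sketch of such an "Emotion_Detector" function; the payload field and topic names are assumptions that follow the steps above rather than the author's exact code:

import base64
import boto3

rekognition = boto3.client('rekognition')
iot = boto3.client('iot-data')

def lambda_handler(event, context):
    image_bytes = base64.b64decode(event['image'])    # decode the base64 string
    resp = rekognition.detect_faces(
        Image={'Bytes': image_bytes},
        Attributes=['ALL']                            # 'ALL' includes the emotions
    )
    expression = 'CALM'                               # fallback if no face is found
    if resp['FaceDetails']:
        emotions = resp['FaceDetails'][0]['Emotions']
        # Pick the emotion Rekognition is most confident about
        expression = max(emotions, key=lambda e: e['Confidence'])['Type']
    iot.publish(topic='Candy/expression', qos=1, payload=expression)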
4. A rule to invoke the “Emotion_Detector” lambda function on MQTT publish.
  • Similar to step 2, create a rule that invokes the "Emotion_Detector" lambda function when a message is published to the image topic.
5. Writing a python program to interact with hardware and AWS.
  • This is the core program which allows the Pi to communicate with the hardware as well as with AWS over MQTT. A GPIO sketch is shown after the connection list below.
  • Connections:
    • Push button: between VCC and GPIO10
    • LEDs:
      • Happy: GPIO29
      • Sad: GPIO31
      • Calm: GPIO33
      • Angry: GPIO35
      • Disgusted: GPIO37
    • Servo motor: GPIO3
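Here is a minimal GPIO sketch for these connections, using RPi.GPIO with the pin numbers from the list above (treated as physical BOARD pins, an assumption; adjust the map to your own wiring and servo):

import time
import RPi.GPIO as GPIO

BUTTON_PIN = 10
LED_PINS = {'HAPPY': 29, 'SAD': 31, 'CALM': 33, 'ANGRY': 35, 'DISGUSTED': 37}
SERVO_PIN = 3

GPIO.setmode(GPIO.BOARD)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)   # button pulls the pin high when pressed
for pin in LED_PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(SERVO_PIN, GPIO.OUT)
servo = GPIO.PWM(SERVO_PIN, 50)              # 50 Hz PWM signal for the servo
servo.start(0)

def dispense(expression):
    # Blink the LED for the detected expression and rotate the servo to drop a candy
    led = LED_PINS.get(expression.upper())
    if led:
        GPIO.output(led, GPIO.HIGH)
    servo.ChangeDutyCycle(7.5)               # roughly 90 degrees on an SG90-class servo
    time.sleep(1)
    servo.ChangeDutyCycle(2.5)               # back to the rest position
    time.sleep(1)
    if led:
        GPIO.output(led, GPIO.LOW)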
System working in real time:-
First, the Raspberry Pi is connected to the AWS Thing over the MQTT protocol, using the certificate and private key of the Thing created on AWS IoT Core.
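A minimal connection sketch using the AWSIoTPythonSDK package is shown below; the endpoint, client ID, certificate file paths, and topic name are placeholders for the ones belonging to your own Thing:

from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

client = AWSIoTMQTTClient('CandyDispenser')                        # client id (placeholder)
client.configureEndpoint('xxxxxxxxxx-ats.iot.us-east-1.amazonaws.com', 8883)
client.configureCredentials('root-CA.crt', 'private.pem.key', 'certificate.pem.crt')
client.connect()

def on_expression(mqtt_client, userdata, message):
    # Called when the Emotion_Detector Lambda publishes the detected expression
    dispense(message.payload.decode('utf-8'))                      # dispense() from the GPIO sketch above

client.subscribe('Candy/expression', 1, on_expression)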

Next, the program keeps listening for an RFID card. When a card is detected, its serial number is published to an MQTT topic, and the program then waits for a reply on the subscribed topic. If the reply is "No data found", the program goes back to scanning for a card. If valid data is received, the process continues.

After receiving valid data for the scanned RFID card, the program waits until the button is pressed. As soon as the button is pressed, an image is captured and converted to a base64-encoded string, which is published to another MQTT topic. Afterwards, the program waits for the detected emotion. When the emotion is received, the corresponding LED blinks and the servo motor rotates to dispense the candy.
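Tying the earlier sketches together, here is a condensed version of that main loop; the wait_for_message helper is hypothetical (in practice the reply arrives through a subscription callback as shown above), and the topic names and message format are assumptions:

import json
import time

while True:
    card_id = reader.read()[0]                                  # wait for an RFID card
    client.publish('Candy/RFID_fetch', json.dumps({'RFID': str(card_id)}), 1)
    details = wait_for_message('Candy/RFID_details')            # hypothetical helper
    if details == 'No data found':
        continue                                                # unregistered card, scan again
    while GPIO.input(BUTTON_PIN) == GPIO.LOW:                   # wait for the push button
        time.sleep(0.05)
    capture_and_publish(client)                                 # the expression then arrives on
    time.sleep(5)                                               # 'Candy/expression' and dispense() runs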
The flowchart for the program is as follows:

7. Some snapshots of the Candy Dispenser box:

8. Download the code zip file from the link below:

[Note: Don't forget to make the necessary changes in the code as per your requirements, your AWS Lambda functions, and your MQTT topics.]

9. Conclusion and Parting words:

The purpose of this tutorial was to give you a brief idea of the various AWS services, of combining them with various IoT devices, and of implementing a real-time system. So it is very much possible to add the logic of this system to the existing mechanism of candy or other dispenser machines to make them emotion-sensitive.
Suggestions and corrections are always welcome!!! We are waiting for your even more innovative ideas on the same… Hope it was fun and exciting for all the readers to learn about, and finally implement, the Candy Dispenser with Emotion Detection to an even greater extent.