Amazon Rekognition is a pre-trained machine learning service that identifies objects, people, and other elements within images and video. Rekognition is part of the AWS Artificial Intelligence Services. Interaction with these services happens through their APIs, and they are ready-made for integration into applications without requiring expertise in the underlying machine learning concepts.
Amazon Rekognition allows you to classify the elements present in your images and videos, including objects and scenes, faces, celebrities, text, and unsafe content.
Rekognition contains an API endpoint specifically for recognizing celebrity faces. This API call performs facial recognition on the provided image or video, and the underlying Rekognition model scans for the presence of any face from its pre-trained celebrity database.
Live URL: https://celeb.gregorrichardson.com
This demo project was constructed to explore the RecognizeCelebrities API. A diagram of the application's architecture can be seen here.
The RecognizeCelebrities API call accepts either base64-encoded image bytes or a reference to an image stored in an S3 bucket. Below is an example call using the JavaScript SDK:
// Assumes the AWS SDK for JavaScript (v2) is installed and credentials are configured.
var AWS = require('aws-sdk');
var rekognition = new AWS.Rekognition();

// Reference an image stored in S3; the bucket name comes from an environment variable
// and `input` is the object key of the uploaded image.
var params = {
  Image: {
    S3Object: {
      Bucket: process.env.BUCKET,
      Name: input
    }
  }
};

rekognition.recognizeCelebrities(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
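For reference, the same call can also be made with raw image bytes instead of an S3 object. The sketch below is not part of the demo itself; it assumes a hypothetical local file path (./photo.jpg) and passes the file contents in the Bytes field:

// Minimal sketch, not from the demo: pass image bytes directly instead of an S3 object.
// Assumes the AWS SDK for JavaScript (v2) and a local file "./photo.jpg" (hypothetical path).
var AWS = require('aws-sdk');
var fs = require('fs');

var rekognition = new AWS.Rekognition();

var params = {
  Image: {
    Bytes: fs.readFileSync('./photo.jpg') // the SDK accepts a Buffer of image bytes
  }
};

rekognition.recognizeCelebrities(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log(data.CelebrityFaces);
});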
In its response, Rekognition returns an array named CelebrityFaces containing up to 100 identified celebrity faces. Each entry in CelebrityFaces includes the celebrity's name, a unique ID, reference URLs, facial landmarks, a confidence score, and additional face metadata. Faces that are detected but not recognized as celebrities are returned in a separate array named UnrecognizedFaces.
Below is an example response object from the JavaScript SDK:
{
  "image": "celeb.gregorrichardson.com/uploads/fe1f77ca-8564-4d03-a5bc-eba29299880f.jpg",
  "celeb": {
    "CelebrityFaces": [
      {
        "Urls": [
          "www.imdb.com/name/nm0252961"
        ],
        "Name": "Idris Elba",
        "Id": "HK6N6s",
        "Face": {
          "BoundingBox": {
            "Width": 0.6577777862548828,
            "Height": 0.6577777862548828,
            "Left": 0.19555555284023285,
            "Top": 0.2133333384990692
          },
          "Confidence": 99.98933410644531,
          "Landmarks": [
            {
              "Type": "eyeLeft",
              "X": 0.4131435751914978,
              "Y": 0.47297367453575134
            },
            {
              "Type": "eyeRight",
              "X": 0.637545108795166,
              "Y": 0.48074987530708313
            },
            {
              "Type": "nose",
              "X": 0.5375990271568298,
              "Y": 0.6096621751785278
            },
            {
              "Type": "mouthLeft",
              "X": 0.41185352206230164,
              "Y": 0.7022572755813599
            },
            {
              "Type": "mouthRight",
              "X": 0.6160147190093994,
              "Y": 0.7049451470375061
            }
          ],
          "Pose": {
            "Roll": 1.047629475593567,
            "Yaw": 9.665703773498535,
            "Pitch": 1.2859808206558228
          },
          "Quality": {
            "Brightness": 43.41075134277344,
            "Sharpness": 98.09814453125
          }
        },
        "MatchConfidence": 99
      }
    ],
    "UnrecognizedFaces": [],
    "OrientationCorrection": "ROTATE_0"
  }
}
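As a rough illustration of consuming this response, the sketch below (not part of the demo code) pulls the name and match confidence out of each entry in CelebrityFaces and counts any unrecognized faces. It assumes `data` is the object passed to the recognizeCelebrities callback; the function name summarizeCelebrities is hypothetical:

// Minimal sketch, assuming `data` is the object passed to the recognizeCelebrities callback.
function summarizeCelebrities(data) {
  // Each recognized face carries a Name, an Id, reference Urls, and a MatchConfidence score.
  data.CelebrityFaces.forEach(function (celeb) {
    console.log(celeb.Name + ' (' + celeb.MatchConfidence + '% match)');
  });

  // Faces that were detected but not matched to a celebrity end up here.
  console.log(data.UnrecognizedFaces.length + ' unrecognized face(s)');
}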
Rekognition Developer Guide: https://docs.aws.amazon.com/rekognition/latest/dg/what-is.html
Rekognition Tutorials:
Detect, Analyze, and Compare Faces
Analyze Video & Extract Rich Metadata