
In this Azure tutorial, we will walk through an Azure Cognitive Services Face API JavaScript example. Along the way, we will also cover the topics below.
- Create an Azure Cognitive Services Face API in the Azure Portal
- Develop JavaScript to detect faces in an image using the Azure Face API
Azure Cognitive Services Face API JavaScript Example
Well, in this article, we will discuss an example of how to detect faces in an image using the Azure Cognitive Services Face REST API and JavaScript. As part of the example, we will work on the functionalities below:
- Create an Azure Cognitive Services Face API in the Azure Portal
- Develop JavaScript to detect faces in an image using the Azure Face API key and endpoint
Before starting on the actual functionality, we need to know what the Azure Cognitive Services Face API is and which prerequisites are needed to begin the development.
What is the Azure Cognitive Services Face API?
The Azure Cognitive Services Face API provides advanced algorithms that detect and analyze human faces in digital images. That includes detecting emotions and facial expressions like happiness, fear, etc. It also supports person identification, matching an individual against a repository of up to 1 million people.
You can also integrate the Azure Cognitive Services Face API directly with your application, where it can solve a number of business problems. Most importantly, when you want to implement authentication for your application, the Azure Cognitive Services Face API is one of the best options. Check out the official documentation for more information on the Azure Cognitive Services Face API.
Prerequisites
- You must have a valid Azure subscription or a valid Azure account. If you don't have one yet, create an Azure free account now.
- You must have a code editor such as Visual Studio Code, or a plain text editor such as Notepad, installed.
Now, assuming that you have all the prerequisites in place, we will start the actual development. The first step is to create an Azure Cognitive Services Face API in the Azure Portal.
Create an Azure Cognitive Services Face API in the Azure Portal
Follow the steps below to create an Azure Cognitive Services Face API in the Azure Portal.
Log in to the Azure Portal (https://portal.azure.com/).
Once you have logged in to the Azure Portal, click the + Create a resource button in the left-side menu, as highlighted below.

For the next steps, follow my article on creating the Azure Face API in the Azure Portal.
Assuming that you have created the Azure Cognitive Services Face API in the Azure Portal by following the above article, the resource is now ready. You can see mine below.

Now that you have created the Azure Face API, the next step is to copy its key value. Navigate to the Face API page and click Keys and Endpoint in the left navigation; you can now see Key1 and Key2. If you want to generate new keys, click the Regenerate Key1 or Regenerate Key2 button. Copy the value of Key1 and keep it in a notepad.
To reveal the key values, click the Show Keys button. Then click the Copy button, as highlighted, to copy the value of Key1. We will use this key value while developing the JavaScript in the next section.

Our first step is complete: the Azure Face API is ready, and we have copied its key value into a notepad. Now we will move on to the next step, i.e., developing JavaScript that calls the REST API to detect faces in an image using the Azure Face API key and endpoint.
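Before writing the full page, it helps to see how the detect request URL is put together from the endpoint and the query parameters. The sketch below is plain JavaScript (the page later in this article uses jQuery's $.param for the same job); the helper name buildDetectUrl is my own, and the eastus region is just an example, so use the endpoint shown on your resource's Keys and Endpoint page.

```javascript
// Sketch: compose the Face API detect URL from an endpoint and query parameters.
// The region (eastus) is an example - use the endpoint from your own resource.
function buildDetectUrl(endpoint, params) {
  // URLSearchParams handles the encoding of each key=value pair.
  const query = new URLSearchParams(params).toString();
  return endpoint + "/face/v1.0/detect?" + query;
}

const url = buildDetectUrl("https://eastus.api.cognitive.microsoft.com", {
  detectionModel: "detection_02",
  returnFaceId: "true"
});
console.log(url);
// https://eastus.api.cognitive.microsoft.com/face/v1.0/detect?detectionModel=detection_02&returnFaceId=true
```

The key itself never goes into the URL; it travels in the Ocp-Apim-Subscription-Key request header, as you will see in the script below.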
Develop JavaScript to detect faces in an image using the Azure Face API
Before developing the script, make sure to upload the image to Blob Storage and make sure it has public access, i.e., anonymous public read access. You can also upload the image to any public server and use that URL.
Below is the image of my family that I uploaded to Blob Storage. A few things to note here:
- The format of the image should be JPEG, PNG, GIF (the first frame), or BMP.
- The size of the image file must be between 1 KB and 6 MB.
- The size of a detectable face ranges from 36×36 pixels up to 1920×1080 pixels.
- Up to 100 faces can be returned for an image.
- Most importantly, the faces in the image should be clear and frontal.
These are the key points you need to consider while choosing the image. Remember one important thing here: if the faces in the image are not clear, you might not get the expected output. Below is the complete URL of the image I uploaded to Azure Blob Storage.
https://her33fffff.blob.core.windows.net/new123/nwphoto.png

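Based on the constraints listed above, you could sketch a quick client-side pre-check before sending an image to the API. The function name and return messages below are my own; only the format and size limits come from the constraints themselves.

```javascript
// Sketch: pre-check an image against the documented detect limits.
// The limits (format list, 1 KB - 6 MB) come from the constraints above;
// the function name and messages are illustrative.
function checkImageForDetect(fileName, sizeInBytes) {
  const allowed = [".jpg", ".jpeg", ".png", ".gif", ".bmp"];
  const extension = fileName.slice(fileName.lastIndexOf(".")).toLowerCase();
  if (!allowed.includes(extension)) {
    return "Unsupported format: " + extension;
  }
  if (sizeInBytes < 1024 || sizeInBytes > 6 * 1024 * 1024) {
    return "File size must be between 1 KB and 6 MB";
  }
  return "OK";
}

console.log(checkImageForDetect("nwphoto.png", 500000)); // OK
```

A check like this cannot verify face size or clarity, of course; those are only evaluated by the service itself.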
Now the next step is to start developing the script.
Create an HTML file named FaceAPI.html and add the below code. Alternatively, you can open Notepad, add the complete code below, and save it as a .html file.
<!DOCTYPE html>
<html>
<head>
  <title>Face Detection APP</title>
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
</head>
<body></body>
</html>
Then add the below code inside the body element.
<script type="text/javascript">
  function ImageProcessing() {
    var Key = "bd3ad7ff0d0e467ebf24fd3abd69b39a";
    var uri = "https://eastus.api.cognitive.microsoft.com/face/v1.0/detect";

    // Request parameters.
    var params = {
      "detectionModel": "detection_02",
      "returnFaceId": "true"
    };

    // Display the image.
    var ImageUrl = document.getElementById("inputImage").value;
    document.querySelector("#sourceImage").src = ImageUrl;

    // Perform the REST API call.
    $.ajax({
      url: uri + "?" + $.param(params),
      // Request headers.
      beforeSend: function (xhrObj) {
        xhrObj.setRequestHeader("Content-Type", "application/json");
        xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key", Key);
      },
      type: "POST",
      // Request body.
      data: '{"url": ' + '"' + ImageUrl + '"}',
    })
      .done(function (data) {
        // Show formatted JSON on the webpage.
        $("#responsetxt").val(JSON.stringify(data, null, 2));
      })
      .fail(function (jqXHR, textStatus, errorThrown) {
        // Display the error message.
        var errorString = (errorThrown === "") ?
          "Error. " : errorThrown + " (" + jqXHR.status + "): ";
        errorString += (jqXHR.responseText === "") ?
          "" : (jQuery.parseJSON(jqXHR.responseText).message) ?
            jQuery.parseJSON(jqXHR.responseText).message :
            jQuery.parseJSON(jqXHR.responseText).error.message;
        alert(errorString);
      });
  };
</script>
<h1>Faces Analysis</h1>
Provide the URL to an image, then click
the <strong>Detect</strong> button.<br><br>
Image: <input type="text" name="inputImage" id="inputImage"
  value="https://her33fffff.blob.core.windows.net/new123/nwphoto.png" />
<button onclick="ImageProcessing()">Detect</button><br><br>
<div id="wrapper" style="width:1020px; display:table;">
  <div id="jsonOutput" style="width:600px; display:table-cell;">
    Response:<br><br>
    <textarea id="responsetxt" class="UIInput"
      style="width:590px; height:410px;"></textarea>
  </div>
  <div id="imageDiv" style="width:410px; display:table-cell;">
    Source image:<br><br>
    <img id="sourceImage" width="400" />
  </div>
</div>
Now the complete code is as below. Copy it into Notepad, and make sure to change the key, endpoint URL, and image path to your own values.
<!DOCTYPE html>
<html>
<head>
  <title>Face Detection APP</title>
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
</head>
<body>
  <script type="text/javascript">
    function ImageProcessing() {
      var Key = "bd3ad7ff0d0e467ebf24fd3abd69b39a";
      var uri = "https://eastus.api.cognitive.microsoft.com/face/v1.0/detect";

      // Request parameters.
      var params = {
        "detectionModel": "detection_02",
        "returnFaceId": "true"
      };

      // Display the image.
      var ImageUrl = document.getElementById("inputImage").value;
      document.querySelector("#sourceImage").src = ImageUrl;

      // Perform the REST API call.
      $.ajax({
        url: uri + "?" + $.param(params),
        // Request headers.
        beforeSend: function (xhrObj) {
          xhrObj.setRequestHeader("Content-Type", "application/json");
          xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key", Key);
        },
        type: "POST",
        // Request body.
        data: '{"url": ' + '"' + ImageUrl + '"}',
      })
        .done(function (data) {
          // Show formatted JSON on the webpage.
          $("#responsetxt").val(JSON.stringify(data, null, 2));
        })
        .fail(function (jqXHR, textStatus, errorThrown) {
          // Display the error message.
          var errorString = (errorThrown === "") ?
            "Error. " : errorThrown + " (" + jqXHR.status + "): ";
          errorString += (jqXHR.responseText === "") ?
            "" : (jQuery.parseJSON(jqXHR.responseText).message) ?
              jQuery.parseJSON(jqXHR.responseText).message :
              jQuery.parseJSON(jqXHR.responseText).error.message;
          alert(errorString);
        });
    };
  </script>
  <h1>Faces Analysis</h1>
  Provide the URL to an image, then click
  the <strong>Detect</strong> button.<br><br>
  Image: <input type="text" name="inputImage" id="inputImage"
    value="https://her33fffff.blob.core.windows.net/new123/nwphoto.png" />
  <button onclick="ImageProcessing()">Detect</button><br><br>
  <div id="wrapper" style="width:1020px; display:table;">
    <div id="jsonOutput" style="width:600px; display:table-cell;">
      Response:<br><br>
      <textarea id="responsetxt" class="UIInput"
        style="width:590px; height:410px;"></textarea>
    </div>
    <div id="imageDiv" style="width:410px; display:table-cell;">
      Source image:<br><br>
      <img id="sourceImage" width="400" />
    </div>
  </div>
</body>
</html>
Now save the file with a .html extension, open it in a browser, and click the Detect button. You can see that we got the expected output below.

Below is the complete response that we got for this image:
[
  {
    "faceId": "8c7b0c21-ea59-4294-a111-cc4d69d549ba",
    "faceRectangle": {
      "top": 524,
      "left": 586,
      "width": 115,
      "height": 153
    }
  },
  {
    "faceId": "aed6aabb-06e6-4cb1-9d16-ca643ef97d92",
    "faceRectangle": {
      "top": 73,
      "left": 216,
      "width": 96,
      "height": 126
    }
  },
  {
    "faceId": "01b07c9e-8942-4872-b83a-22ff95a4aca7",
    "faceRectangle": {
      "top": 292,
      "left": 119,
      "width": 76,
      "height": 100
    }
  },
  {
    "faceId": "bdce33af-0cad-487c-9fa5-0bd205322087",
    "faceRectangle": {
      "top": 70,
      "left": 660,
      "width": 66,
      "height": 86
    }
  }
]
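Each entry in the response carries a faceId and a faceRectangle giving the position and size of a detected face. You could walk the array like this in plain JavaScript; the helper name summarizeFaces is my own, and the sample data is the first rectangle from the response above.

```javascript
// Sketch: summarize the detect response by computing each rectangle's area.
// "faces" mirrors the JSON array returned by the detect call above.
function summarizeFaces(faces) {
  return faces.map(function (face) {
    var r = face.faceRectangle;
    return {
      faceId: face.faceId,
      area: r.width * r.height // rectangle area in pixels
    };
  });
}

var faces = [
  {
    faceId: "8c7b0c21-ea59-4294-a111-cc4d69d549ba",
    faceRectangle: { top: 524, left: 586, width: 115, height: 153 }
  }
];
console.log(summarizeFaces(faces)[0].area); // 17595
```

A summary like this is handy when, for example, you want to pick the largest (closest) face in a group photo before calling further Face API operations.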
This is how you can develop JavaScript to detect faces in an image using the Azure Face API, and it is one of the simpler Azure Cognitive Services Face API JavaScript examples.
You may also like the below articles:
- CS1061 C# ‘HttpRequest’ does not contain a definition for ‘Content’ and no accessible extension method ‘Content’ accepting a first argument of type ‘HttpRequest’ could be found
- Azure Cognitive Services Modules For Python
- Build Intelligent C# Apps With Azure Cognitive Services
- Failed To Validate The Notification URL SharePoint Webhook Azure Function Endpoint
- How To Convert m4a File To Text Using Azure Cognitive Services
- How to Create And Consume Azure Function From ASP.NET Core
- Web deployment task failed – cannot modify the file on the destination because it is locked by an external process
Wrapping Up
Well, in this article, we discussed an Azure Cognitive Services Face API JavaScript example: we created an Azure Cognitive Services Face API in the Azure Portal, and we developed JavaScript to detect faces in an image using the Azure Face API key and endpoint. Hope this article helps you!