
In this Azure tutorial, we will discuss how to implement the Azure Face API using Visual Studio 2019. Along the way, we will also cover a few related topics: creating the Microsoft Azure Face API using the Azure Portal, creating a WPF application using Visual Studio 2019 and C# to implement the Azure Face API, and detection of the face in an image using the Azure Face API and C#.
How To Implement Azure Face API Using Visual Studio 2019?
Follow the below steps to implement the Face API using Visual Studio 2019:
- Create an Azure Face API on Azure Portal
- Create a WPF application using C# in Visual Studio 2019
We will discuss the implementation in detail below.
Azure Face API Example
Here we will walk through an implementation of the Face API using C# in Visual Studio 2019. As part of the implementation, we will do the following:
- Creating Microsoft Azure Face API using the Azure Portal.
- Creating a WPF application using Visual Studio 2019 and C# to implement the Face API. The application will detect the faces in an image and draw a red frame around each face; if you mouse over a detected face in the image, it will show you all the attributes like Gender, Age, Emotion, Glasses, etc.
It is an interesting topic. Before starting the actual implementation, we should know the prerequisites for the development.
Prerequisites
Below are the prerequisites for Implementing Azure Face API Using Visual Studio 2019.
- You must have a valid Azure subscription or a valid Azure account. If you don't have an Azure account yet, create an Azure free account now.
- You must have Visual Studio 2019 installed on your dev machine. If you don't have it installed yet, install Visual Studio 2019 on your dev machine now.
Creating Microsoft Azure Face API using the Azure Portal
Follow the below steps.
Assuming that you have a valid Azure account or Azure subscription, let's start creating the Azure Cognitive Services Face API on the Azure Portal.
Log in to the Azure Portal (https://portal.azure.com/)
Once you have logged in to the Azure Portal, click on the + Create a resource button in the left-side menu, as highlighted below.

Now for the next steps, follow my article to Create the Azure Face API on the Azure Portal.
Assuming that you have created the Azure Cognitive Services Face API on the Azure Portal by following the above article, the Azure Face API is now ready. You can see it below.

Once you have created the Azure Face API, the next step is to copy its key value. To do that, navigate to the Face API page, click on Keys and Endpoint in the left navigation, and you will see Key1 and Key2. Copy the value of Key1 and keep it in a notepad.
You can click on the Copy button, as highlighted, to copy the value of Key1. We will need this key value while creating the WPF application using Visual Studio 2019 to implement the Azure Face API in the next section.

Now our first step is complete: the Azure Face API is ready, and we have copied its key value and kept it in a notepad. Now we will move to the next step, i.e. creating a WPF application using Visual Studio 2019 and C#.
Creating a WPF application using Visual Studio 2019 and C#
Follow the below steps to create a WPF application.
Open Visual Studio 2019 on your dev machine.
Click on the Create a new Project button on the Getting Started window.
Choose the WPF App (.NET Framework) project template and then click on the Next button.

On the Configure your new project window, provide the below details:
- Project Name: Provide a name for your WPF application
- Location: Choose a location where you want to save your WPF application.
- Framework: Select the latest .NET Framework version. At the time of writing, the latest version is .NET Framework 4.7.2.
Finally, click on the Create button to create the new project.

Now you can see that the project was created successfully without any issues.

Detection of the Face in an Image Using Azure Face API and C#
Once the project has been created successfully, the next step is to add two NuGet packages to your project. To add the NuGet packages:
Right-click on the project and then click on the Manage NuGet Packages option as shown below.

Now click on the Browse tab, search for the Newtonsoft.Json NuGet package, select it, and then click on the Install button to install the Newtonsoft.Json NuGet package.

In the same way, we need to add one more NuGet package, i.e. the Torutek.Microsoft.ProjectOxford.Face NuGet package. To install it, search for Torutek.Microsoft.ProjectOxford.Face, select the NuGet package, and then click on the Install button.

Now we have installed the required NuGet packages, so it is time to add the code to implement the main functionality.
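As a side note, if you prefer the Package Manager Console (Tools > NuGet Package Manager > Package Manager Console) over the Manage NuGet Packages window, the commands below should install the same two packages; the exact package versions may differ.
Install-Package Newtonsoft.Json
Install-Package Torutek.Microsoft.ProjectOxford.Face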
Open the MainWindow.xaml and add the below code
Note: Make sure to change the project name and class name on the first line to match yours.
<Window x:Class="WpfAppFaceAPI.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:WpfAppFaceAPI"
        mc:Ignorable="d"
        Title="MainWindow" Height="700" Width="960">
    <Grid x:Name="BackPanel">
        <Image x:Name="MyFace" Stretch="Uniform" Margin="0,0,0,55" MouseMove="MousePointer" />
        <DockPanel DockPanel.Dock="Bottom">
            <Button x:Name="BrowseButton" Width="79" Height="25" VerticalAlignment="Bottom" HorizontalAlignment="Left"
                    Content="Upload Image"
                    Click="BrowseButton_Click" />
            <StatusBar VerticalAlignment="Bottom">
                <StatusBarItem>
                    <TextBlock Name="statusBar" />
                </StatusBarItem>
            </StatusBar>
        </DockPanel>
    </Grid>
</Window>
The next change is in the MainWindow.xaml.cs file. Add the below code to your MainWindow.xaml.cs file. Make sure to change the namespace and class name to match yours.
using Microsoft.ProjectOxford.Common.Contract;
using Microsoft.ProjectOxford.Face;
using Microsoft.ProjectOxford.Face.Contract;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;

namespace WpfAppFaceAPI
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        private readonly IFaceServiceClient faceServiceClient =
            new FaceServiceClient("191d05e8f6xxxxxxxxcb4fc9373", "https://eastus.api.cognitive.microsoft.com/face/v1.0/");

        Face[] facesDetected;
        String[] faceDesc;
        double factresize;

        public MainWindow()
        {
            InitializeComponent();
        }

        private async void BrowseButton_Click(object sender, RoutedEventArgs e)
        {
            // Let the user pick a JPEG image from the local machine.
            var openDialog = new Microsoft.Win32.OpenFileDialog();
            openDialog.Filter = "JPEG Image(*.jpg)|*.jpg";
            bool? rslt = openDialog.ShowDialog(this);
            if (rslt != true)
            {
                return;
            }

            string imagePath = openDialog.FileName;
            Uri imageUri = new Uri(imagePath);

            // Load and display the selected image.
            BitmapImage src = new BitmapImage();
            src.BeginInit();
            src.CacheOption = BitmapCacheOption.None;
            src.UriSource = imageUri;
            src.EndInit();
            MyFace.Source = src;

            // Call the Face API to detect the faces in the image.
            Title = "Detecting the Faces...";
            facesDetected = await UploadImageFaces(imagePath);
            Title = String.Format("Finished Detecting the Faces. {0} face(s) detected", facesDetected.Length);

            if (facesDetected.Length > 0)
            {
                DrawingVisual v = new DrawingVisual();
                DrawingContext dc = v.RenderOpen();
                dc.DrawImage(src,
                    new Rect(0, 0, src.Width, src.Height));
                double dpi = src.DpiX;
                factresize = 96 / dpi;
                faceDesc = new String[facesDetected.Length];

                for (int i = 0; i < facesDetected.Length; ++i)
                {
                    Face face = facesDetected[i];

                    // Logic to draw a red rectangle around each detected face
                    dc.DrawRectangle(
                        Brushes.Transparent,
                        new Pen(Brushes.Red, 2),
                        new Rect(
                            face.FaceRectangle.Left * factresize,
                            face.FaceRectangle.Top * factresize,
                            face.FaceRectangle.Width * factresize,
                            face.FaceRectangle.Height * factresize
                        )
                    );
                    faceDesc[i] = Description(face);
                }
                dc.Close();

                // Render the image plus the rectangles into a new bitmap and show it.
                RenderTargetBitmap frb = new RenderTargetBitmap(
                    (int)(src.PixelWidth * factresize),
                    (int)(src.PixelHeight * factresize),
                    96,
                    96,
                    PixelFormats.Pbgra32);
                frb.Render(v);
                MyFace.Source = frb;

                statusBar.Text = "You can place the mouse pointer over any face to see that face's description in detail.";
            }
        }
        private void MousePointer(object sender, MouseEventArgs e)
        {
            if (facesDetected == null)
                return;

            Point ms = e.GetPosition(MyFace);
            ImageSource imageSource = MyFace.Source;
            BitmapSource bitmapSource = (BitmapSource)imageSource;

            // Scale factor between the rendered image and the original face coordinates.
            var scale = MyFace.ActualWidth / (bitmapSource.PixelWidth / factresize);

            bool mouseOnFace = false;
            for (int i = 0; i < facesDetected.Length; ++i)
            {
                FaceRectangle fr = facesDetected[i].FaceRectangle;
                double left = fr.Left * scale;
                double top = fr.Top * scale;
                double width = fr.Width * scale;
                double height = fr.Height * scale;

                // If the pointer is inside this face rectangle, show its description in the status bar.
                if (ms.X >= left && ms.X <= left + width && ms.Y >= top && ms.Y <= top + height)
                {
                    statusBar.Text = faceDesc[i];
                    mouseOnFace = true;
                    break;
                }
            }

            if (!mouseOnFace)
                statusBar.Text = "Place the mouse pointer over a face to see the face description.";
        }
        private async Task<Face[]> UploadImageFaces(string imageFilePath)
        {
            // All the face attributes to request from the Face API
            IEnumerable<FaceAttributeType> attributes =
                new FaceAttributeType[] { FaceAttributeType.Gender, FaceAttributeType.Age, FaceAttributeType.Smile, FaceAttributeType.Emotion, FaceAttributeType.Glasses, FaceAttributeType.Hair };
            try
            {
                using (Stream imageFileStream = File.OpenRead(imageFilePath))
                {
                    Face[] myfaces = await faceServiceClient.DetectAsync(imageFileStream, returnFaceId: true, returnFaceLandmarks: false, returnFaceAttributes: attributes);
                    return myfaces;
                }
            }
            catch (FaceAPIException f)
            {
                MessageBox.Show(f.ErrorMessage, f.ErrorCode);
                return new Face[0];
            }
            catch (Exception e)
            {
                MessageBox.Show(e.Message, "Error");
                return new Face[0];
            }
        }
        private string Description(Face face)
        {
            // Build a readable description from the detected face attributes.
            StringBuilder sb = new StringBuilder();
            sb.Append("Face: ");
            sb.Append(face.FaceAttributes.Gender);
            sb.Append(", ");
            sb.Append(face.FaceAttributes.Age);
            sb.Append(", ");
            sb.Append(String.Format("smile {0:F1}%, ", face.FaceAttributes.Smile * 100));

            // Display each emotion if its score is over 10%.
            sb.Append("Emotion Level: ");
            EmotionScores emotionScores = face.FaceAttributes.Emotion;
            if (emotionScores.Anger >= 0.1f) sb.Append(String.Format("anger level {0:F1}%, ", emotionScores.Anger * 100));
            if (emotionScores.Contempt >= 0.1f) sb.Append(String.Format("contempt {0:F1}%, ", emotionScores.Contempt * 100));
            if (emotionScores.Disgust >= 0.1f) sb.Append(String.Format("disgust {0:F1}%, ", emotionScores.Disgust * 100));
            if (emotionScores.Fear >= 0.1f) sb.Append(String.Format("fear level {0:F1}%, ", emotionScores.Fear * 100));
            if (emotionScores.Happiness >= 0.1f) sb.Append(String.Format("happiness level {0:F1}%, ", emotionScores.Happiness * 100));
            if (emotionScores.Neutral >= 0.1f) sb.Append(String.Format("neutral {0:F1}%, ", emotionScores.Neutral * 100));
            if (emotionScores.Sadness >= 0.1f) sb.Append(String.Format("sadness level {0:F1}%, ", emotionScores.Sadness * 100));
            if (emotionScores.Surprise >= 0.1f) sb.Append(String.Format("surprise {0:F1}%, ", emotionScores.Surprise * 100));

            sb.Append(face.FaceAttributes.Glasses);
            sb.Append(", ");

            sb.Append("Hair: ");
            if (face.FaceAttributes.Hair.Bald >= 0.01f)
                sb.Append(String.Format("bald {0:F1}% ", face.FaceAttributes.Hair.Bald * 100));

            HairColor[] hcolors = face.FaceAttributes.Hair.HairColor;
            foreach (HairColor hairColor in hcolors)
            {
                if (hairColor.Confidence >= 0.1f)
                {
                    sb.Append(hairColor.Color.ToString());
                    sb.Append(String.Format(" {0:F1}% ", hairColor.Confidence * 100));
                }
            }
            return sb.ToString();
        }
    }
}
Note: Make sure to change the key to the Key1 value of the Azure Cognitive Services Face API that you created above and copied to a notepad. Also make sure to change the endpoint URL to match your Azure Face API and the region you selected.
private readonly IFaceServiceClient faceServiceClient =
    new FaceServiceClient("Your Azure FaceAPI key value", "Your Azure FaceAPI EndPoint URL");
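Hard-coding the key works for a quick demo, but you may prefer to keep it out of source code. Below is a minimal sketch of reading the key and endpoint from App.config instead; the setting names FaceApiKey and FaceApiEndpoint are only illustrative, and it assumes you add a reference to the System.Configuration assembly along with a using System.Configuration; directive.
// In App.config, inside <configuration> (setting names are illustrative):
// <appSettings>
//   <add key="FaceApiKey" value="Your Azure FaceAPI key value" />
//   <add key="FaceApiEndpoint" value="Your Azure FaceAPI EndPoint URL" />
// </appSettings>

// In MainWindow.xaml.cs, the hard-coded field can then be replaced with:
private readonly IFaceServiceClient faceServiceClient =
    new FaceServiceClient(
        ConfigurationManager.AppSettings["FaceApiKey"],
        ConfigurationManager.AppSettings["FaceApiEndpoint"]);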
Now you can see the code changes below.

Now we are done with the code changes, so it is time to run the application and see whether the functionality works as expected. Press F5 to run the WPF application.
Once you run the application, you will see the below window. Click on the Upload Image button to browse for an image from your local machine (assuming you have a valid image on your local machine).

It will show your local machine's file system; navigate to the path where you stored the image and then click on the Open button to open the image.
Now you can see that it reads the image successfully, identifies that there are 3 faces in the given image, and draws a red rectangle around each face.
Next to the Upload Image button, you can see the message "Place the mouse pointer over a face to see the face description."

Now, once you place the mouse pointer over any of the detected faces, you can see all the attribute details like Gender, Age, Emotion, Hair, etc. for that specific face next to the Upload Image button, as highlighted below.

Wrapping Up
In this article, we discussed how to implement the Azure Face API using Visual Studio 2019: creating the Microsoft Azure Face API using the Azure Portal, creating a WPF application using Visual Studio 2019 and C# to implement the Azure Face API, and detection of the face in an image using the Azure Face API and C#. Hope you have enjoyed this article!