Creating a Facial Recognition-enabled Angular Web App on ASP.NET Core

Introduction

Despite a busy workload, I have finally managed to set aside some time to prepare this article and share what I learnt while developing a Facial Recognition-enabled web app with Angular on ASP.NET Core.

ASP.NET Core 2.1 Updates

The latest update to ASP.NET Core – version 2.1 – brings several important improvements for web developers, especially Angular developers:

  • Updated Single Page Application – Angular, React, and React + Redux templates
  • ARM Support – .NET Core is now supported on Linux ARM32 distros, like Raspbian and Ubuntu.
  • SignalR – Allows bi-directional communication between server and client.
  • Razor Class Libraries
  • GDPR Template
HMHENG – ASP.NET Core Feature List

However, today’s coverage isn’t so much about the ASP.NET Core update itself, but rather about the Face API of Microsoft Cognitive Services!

Microsoft Cognitive Services

Let’s recap some of the key features of Microsoft Cognitive Services. People often ask: why Microsoft Cognitive Services?

My answer, in my opinion, is that Cognitive Services offers a comprehensive, developer-friendly set of AI APIs that you can consume directly, letting you focus on building smart, AI-driven applications.

It covers almost all of the cool AI functions you would expect: video analysis, image analysis, speech-to-text conversion and vice versa, search, chat bots, and more.

You can read more at “Experience Intelligence of Technology with Microsoft Cognitive Service”.

I have previously covered several topics on how to use these services.

Integrating Facial Analysis into an Angular Web App

I won’t be providing a complete walkthrough of my entire source code, but I will run through the concepts and provide the source code for reference.

Brief Intro to Face API

Face API provides great off-the-shelf AI functions: face detection, emotion analysis, verification, and identification.

HMHENG – FACE API – Detection

Solution Design

In order to develop the app, we must understand what a web app can and cannot do. The difference from a mobile application is that a web app doesn’t access hardware directly. It runs in a browser, say Microsoft Edge, and the browser will prompt for your permission before turning on the webcam to take a photo.

Therefore, to allow the web app to access hardware, we make use of JavaScript/TypeScript’s navigator.mediaDevices.getUserMedia function, which lets the web app capture photo, video, or even audio.
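A minimal sketch of this step, assuming a plain `<video>` element and illustrative constraint values (640×480 is my own choice, not taken from the original app):

```typescript
// Build the constraints object passed to getUserMedia.
// Video only: we capture still photos, so no audio track is requested.
function buildConstraints(width: number, height: number): MediaStreamConstraints {
  return { video: { width, height }, audio: false };
}

// Request webcam access; the browser prompts the user for permission here.
async function startCamera(video: HTMLVideoElement): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia(
    buildConstraints(640, 480)
  );
  video.srcObject = stream; // Feed the live stream into the HTML5 <video> element.
}
```

If the user denies permission, getUserMedia rejects its promise, so in a real app you would wrap the call in a try/catch and show a friendly message.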

HMHENG – Face API – Navigator.mediaDevices.getUserMedia

Then, we stream the video to the front end using the HTML5 video element.

HMHENG – Face API – HTML5 video tag

After that, by using Angular’s ViewChild, we can access the #video element from the component class, capture a frame, and display it temporarily on an HTML5 canvas as a picture.
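In the Angular app, @ViewChild('video') and @ViewChild('canvas') give the component references to the native elements; the capture itself is plain canvas drawing. A framework-agnostic sketch of that capture step (element references passed in directly to keep it self-contained):

```typescript
// Draw the current video frame onto a canvas, producing a still photo.
// In the Angular component, video and canvas would come from @ViewChild queries.
function captureFrame(video: HTMLVideoElement, canvas: HTMLCanvasElement): void {
  // Match the canvas size to the video's intrinsic resolution.
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext('2d');
  if (ctx) {
    // Copy the current frame; the canvas now shows the snapped photo.
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  }
}
```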

HMHENG – Face API – ViewChild

Within my source code, you will also notice that I retrieve the snapped photo from the canvas element and turn it into blob data. Once converted into a blob, the web app uses the HttpClient service to send this piece of information to the Face API for analysis and identification.
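To illustrate that request, here is a sketch of posting the blob to the Face API detect endpoint. The region in the endpoint URL and the subscription key are placeholders for your own Azure resource, and I use fetch rather than Angular’s HttpClient to keep the sketch self-contained:

```typescript
// Placeholder endpoint: substitute the region of your own Face API resource.
const FACE_ENDPOINT = 'https://westus.api.cognitive.microsoft.com/face/v1.0/detect';

// Ask the service to return emotion scores along with each detected face.
function buildDetectUrl(base: string): string {
  return `${base}?returnFaceAttributes=emotion`;
}

function buildHeaders(subscriptionKey: string): Record<string, string> {
  return {
    // Raw binary image payload, as required when posting bytes directly.
    'Content-Type': 'application/octet-stream',
    // Your Cognitive Services key authenticates the request.
    'Ocp-Apim-Subscription-Key': subscriptionKey,
  };
}

// Send the snapped photo (as a blob) and return the detection result.
async function detectFaces(photo: Blob, subscriptionKey: string): Promise<unknown> {
  const response = await fetch(buildDetectUrl(FACE_ENDPOINT), {
    method: 'POST',
    headers: buildHeaders(subscriptionKey),
    body: photo,
  });
  return response.json(); // Array of detected faces with emotion attributes.
}
```

The same header pattern applies to the verify endpoint used for identification; only the URL and the JSON body differ.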

Face Emotion Analysis and Identification

My source code demonstrates how the Face API can be used to analyze human emotion and also to verify whether two different snapped facial photos belong to the same person. Find the source code below and start playing around with your own creative ideas.


Source Code: Download from GitHub

Offline Slide: SlideShare
