Creating a Facial Recognition-enabled Angular Web App on ASP.NET Core
Despite a busy workload, I have finally managed to set aside some time to prepare this article and share what I learnt while developing a facial-recognition-enabled web app with Angular on ASP.NET Core.
ASP.NET Core 2.1 Updates
The latest update to ASP.NET Core – version 2.1 – brings several important improvements for web developers, especially Angular developers:
- Updated Single Page Application – Angular, React, and React + Redux templates
- ARM Support – .NET Core is now supported on Linux ARM32 distros, like Raspbian and Ubuntu.
- SignalR – Allows bi-directional communication between server and client.
- Razor Class Libraries
- GDPR Template
However, today's coverage isn't so much about the ASP.NET Core update itself, but rather about the Face API from Microsoft Cognitive Services!
Microsoft Cognitive Services
Let’s recap some of the key features of Microsoft Cognitive Services. People often ask, why Microsoft Cognitive Services?
My own answer is that Cognitive Services offers a comprehensive, developer-friendly set of consume-and-develop AI APIs, letting you focus on building smart, artificial-intelligence-powered applications.
It covers almost all of the AI functions you would expect: video analysis, image analysis, speech-to-text conversion and vice versa, search, chat bots, and more.
You can read more @ "Experience Intelligence of Technology with Microsoft Cognitive Service".
I have previously covered several topics that use these services:
- Mobile App with Computer Vision API
- Search Application with Bing Search API
- Light Control with Speech API
Facial Analysis Integrates in Angular Web App
While I won't provide a complete walkthrough of my entire source code, I will run through the concepts and provide the source code for reference.
Brief Intro to Face API
Face API provides great off-the-shelf AI functions: face detection, emotion analysis, verification, and identification.
To develop the app, we must understand what a web app can and cannot do. Unlike a mobile application, a web app doesn't access hardware directly. It runs in a browser, say Microsoft Edge, and the browser will prompt for your permission before turning on the webcam to take a photo.
Then we stream the video to the front end using the HTML5 video element.
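The permission prompt and streaming step can be sketched in plain TypeScript (the element id "video" matching the #video reference, and the 640px width, are my own assumptions):

```typescript
// Sketch: turn the camera on and attach the live stream to the HTML5 <video> tag.
// The browser (e.g. Microsoft Edge) shows the permission prompt when
// getUserMedia is called.

/** Build getUserMedia constraints for a video-only capture (pure helper). */
function cameraConstraints(idealWidth: number) {
  return { audio: false, video: { width: { ideal: idealWidth } } };
}

async function startCamera(videoId: string): Promise<MediaStream> {
  // Triggers the browser's camera-permission prompt.
  const stream = await navigator.mediaDevices.getUserMedia(cameraConstraints(640));
  const video = document.getElementById(videoId) as HTMLVideoElement;
  video.srcObject = stream; // stream the camera feed into the <video> element
  await video.play();
  return stream;
}
```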
[Image: Face API – HTML5 video tag]
After that, by using Angular's ViewChild, we can access the #video element from the component code, capture a frame, and display it temporarily on an HTML5 canvas as a picture.
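The capture step might look like the sketch below. The article uses Angular's ViewChild to obtain the element references; for a self-contained example, plain DOM handles are passed in instead, and the 640px cap is an assumption:

```typescript
// Sketch: draw the current video frame onto a canvas so it shows as a still photo.

/** Compute canvas dimensions that preserve the video's aspect ratio (pure helper). */
function fitDimensions(
  srcW: number, srcH: number, maxW: number
): { width: number; height: number } {
  const scale = Math.min(1, maxW / srcW); // never upscale, only shrink to fit
  return { width: Math.round(srcW * scale), height: Math.round(srcH * scale) };
}

/** Copy the current frame of the playing video onto the canvas. */
function captureFrame(video: HTMLVideoElement, canvas: HTMLCanvasElement): void {
  const { width, height } = fitDimensions(video.videoWidth, video.videoHeight, 640);
  canvas.width = width;
  canvas.height = height;
  canvas.getContext("2d")!.drawImage(video, 0, 0, width, height);
}
```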
Within my source code, you will also notice that I retrieve the snapped photo from the canvas element and turn it into blob data. Once converted into a blob, the web app uses the HttpClient service to send it to Face API for analysis and identification.
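A minimal sketch of that step is below. The article's app uses Angular's HttpClient; here the browser's fetch API is used instead to keep the sample dependency-free. The endpoint and key are placeholders you supply from your own Cognitive Services resource; the detect route and its returnFaceAttributes parameter follow the public Face API v1.0 shape:

```typescript
// Sketch: convert the snapped canvas image to a Blob and POST it to Face API.

/** Build the Face API detect URL asking for a faceId and emotion scores (pure helper). */
function buildDetectUrl(endpoint: string): string {
  return `${endpoint}/face/v1.0/detect?returnFaceId=true&returnFaceAttributes=emotion`;
}

/** Send the canvas contents to Face API as binary (octet-stream) data. */
function detectFace(canvas: HTMLCanvasElement, endpoint: string, key: string): Promise<any> {
  return new Promise((resolve, reject) => {
    canvas.toBlob(async blob => {
      if (!blob) { return reject(new Error("canvas is empty")); }
      const res = await fetch(buildDetectUrl(endpoint), {
        method: "POST",
        headers: {
          "Content-Type": "application/octet-stream",
          "Ocp-Apim-Subscription-Key": key, // your Cognitive Services key
        },
        body: blob,
      });
      resolve(await res.json());
    }, "image/jpeg");
  });
}
```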
Face Emotion Analysis and Identification
My source code demonstrates how Face API can be used to analyze a human emotion and also to identify whether two different snapped facial photos belong to the same person. Find the source code here and start playing around with your own creative ideas.
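Interpreting the two responses can be sketched with a couple of pure helpers. The field names (emotion scores on detect, isIdentical/confidence on verify) follow the public Face API v1.0 response shape; the 0.5 confidence floor is my own assumption, not something prescribed by the article:

```typescript
// Sketch: pick the dominant emotion from a detect response, and decide whether
// a verify response means the two faces are the same person.

/** Return the emotion with the highest score, e.g. "happiness". */
function topEmotion(scores: Record<string, number>): string {
  return Object.entries(scores).reduce((a, b) => (b[1] > a[1] ? b : a))[0];
}

/** Treat a verify result as a match only above a minimum confidence. */
function isSamePerson(
  result: { isIdentical: boolean; confidence: number },
  minConfidence = 0.5 // assumed threshold, tune for your use case
): boolean {
  return result.isIdentical && result.confidence >= minConfidence;
}
```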
Source Code: Download from GitHub
Offline Slide: SlideShare