How to use face-api in React
A live demo by the developer of face-api can be accessed here.
Face-api is an incredible JavaScript API for face detection, face recognition, and face landmark detection. It is built on top of TensorFlow.js and implements several CNNs (Convolutional Neural Networks) to detect a face, recognize it, and draw landmarks on it, and it has been optimized for web browsers and mobile applications.
The most amazing part of this API is that it does not require any external dependencies, and it is GPU accelerated, running all of its operations on a WebGL backend. Vincent Mühler is the genius behind this wonderful yet simple API and is well liked in the JavaScript community for his open-source contributions.
So, let’s see how we can leverage this in a React application.
Go to a comfortable location on your computer and run this on the command line to spin up a simple React application:
npx create-react-app my-app
The first thing we need to do inside the new project is install the face-api.js package:
npm install face-api.js
Or, if you are using yarn:
yarn add face-api.js
At this point, we have a basic React app like this:
In order to fully utilize the capabilities of face-api, we need to download certain .json weight manifests (and their accompanying shard files) for the underlying ‘tfjs’ models. TensorFlow.js is a library for machine learning in JavaScript that lets us develop and run machine learning models directly in the Node environment or in the browser.
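At runtime, face-api fetches those manifests from a URL you point it to. As a minimal sketch, assuming the files end up in the public/models folder (as described next), loading a single model looks roughly like this:

```jsx
import * as faceapi from 'face-api.js';

// Load the tiny face detector weights from the public/models folder,
// which create-react-app serves at /models.
faceapi.nets.tinyFaceDetector
  .loadFromUri('/models')
  .then(() => console.log('tiny face detector loaded'));
```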
Go to this link (the weights folder of the face-api.js GitHub repository), download all of the files in that folder, and save them to the public/models folder of your React application.
You can use this website to download a specific folder from GitHub.
Here is a screenshot of what the file structure looks like at this point:
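The exact file names depend on which models you download, but if you grab the models used in the sketch below, public/models will contain weight manifests and shard files along these lines:

```
public/models
├── face_expression_model-shard1
├── face_expression_model-weights_manifest.json
├── face_landmark_68_model-shard1
├── face_landmark_68_model-weights_manifest.json
├── face_recognition_model-shard1
├── face_recognition_model-shard2
├── face_recognition_model-weights_manifest.json
├── tiny_face_detector_model-shard1
└── tiny_face_detector_model-weights_manifest.json
```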
Now, update the code in the App.js file as follows:
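The exact listing will vary, but a minimal sketch of such an App.js might look like the following (the ref names, the 640×480 video size, the 100 ms detection interval, and the particular set of models loaded are illustrative assumptions, not fixed requirements):

```jsx
import React, { useEffect, useRef, useState } from 'react';
import * as faceapi from 'face-api.js';
import './App.css';

function App() {
  const [modelsLoaded, setModelsLoaded] = useState(false);
  const [captureVideo, setCaptureVideo] = useState(false);

  const videoRef = useRef();
  const canvasRef = useRef();
  const videoWidth = 640;
  const videoHeight = 480;

  // Load the tfjs model manifests from public/models once, on first render
  useEffect(() => {
    const loadModels = async () => {
      const MODEL_URL = process.env.PUBLIC_URL + '/models';
      await Promise.all([
        faceapi.nets.tinyFaceDetector.loadFromUri(MODEL_URL),
        faceapi.nets.faceLandmark68Net.loadFromUri(MODEL_URL),
        faceapi.nets.faceRecognitionNet.loadFromUri(MODEL_URL),
        faceapi.nets.faceExpressionNet.loadFromUri(MODEL_URL),
      ]);
      setModelsLoaded(true);
    };
    loadModels();
  }, []);

  // Ask the browser for webcam access and pipe the stream into the <video> element
  const startVideo = () => {
    setCaptureVideo(true);
    navigator.mediaDevices
      .getUserMedia({ video: { width: videoWidth, height: videoHeight } })
      .then((stream) => {
        videoRef.current.srcObject = stream;
        videoRef.current.play();
      })
      .catch((err) => console.error('Could not access the webcam:', err));
  };

  // Run detections repeatedly (here, every 100 ms) and draw them on the canvas
  const handleVideoOnPlay = () => {
    setInterval(async () => {
      if (!canvasRef.current || !videoRef.current) return;

      const displaySize = { width: videoWidth, height: videoHeight };
      faceapi.matchDimensions(canvasRef.current, displaySize);

      const detections = await faceapi
        .detectAllFaces(videoRef.current, new faceapi.TinyFaceDetectorOptions())
        .withFaceLandmarks()
        .withFaceExpressions();

      const resized = faceapi.resizeResults(detections, displaySize);
      const ctx = canvasRef.current.getContext('2d');
      ctx.clearRect(0, 0, videoWidth, videoHeight);
      faceapi.draw.drawDetections(canvasRef.current, resized);
      faceapi.draw.drawFaceLandmarks(canvasRef.current, resized);
      faceapi.draw.drawFaceExpressions(canvasRef.current, resized);
    }, 100);
  };

  return (
    <div className="App">
      <button onClick={startVideo} disabled={!modelsLoaded}>
        Open Webcam
      </button>
      {captureVideo && (
        <div style={{ position: 'relative' }}>
          <video
            ref={videoRef}
            height={videoHeight}
            width={videoWidth}
            onPlay={handleVideoOnPlay}
            style={{ position: 'absolute' }}
          />
          <canvas ref={canvasRef} style={{ position: 'absolute' }} />
        </div>
      )}
    </div>
  );
}

export default App;
```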
In order to use face-api, you will need to allow webcam access so that your face can be captured in real time.
Here, we have added a trigger button to open the webcam. When it is clicked, the live feed from the webcam is rendered and a canvas is overlaid on top of it. When the React app renders for the first time (or after a refresh), useEffect() is called, and in it we load all the TensorFlow models downloaded earlier that are required to detect the face and its features. Once all of the models have loaded, we use the functions provided by face-api to draw the detections on the canvas on top of the video stream. To learn more about the different functions provided by face-api, please refer to its official documentation here.
Now, reload the web application and there you have it. Press the “Open Webcam” button and watch the magic happen; keep in mind that every frame of the live video stream is being processed, with its detections and masks drawn onto the canvas.
I hope this was informative and benefited you in some way. Keep coding :)