When I first read Paul Kinlan’s blog post about native face detection, the movie Ghost in the Shell was about to be released. I’m a big GitS fan, and was willing to simulate the Laughing Man L337 H4x0r powers as an excuse to play with the API.
I ran to Chrome Canary on macOS to test it and…it didn’t work. At the time it only worked on Android. But now, with FaceDetector working reliably in Chrome for macOS, I can finally pretend to have some Laughing Man skills.
Face detection was already possible on the web through third-party libraries like Tracking.js. But sharing the same thread between user interaction and object detection, plus the lack of hardware acceleration, makes the experience a bit janky.
Enabling the API
Since this is still an experimental API, make sure you have the latest Chrome browser and enable it via the chrome://flags/#enable-experimental-web-platform-features URL.
Below is an example of face detection on a static image.
If you want to see it in action before we go deeper with the code, try the below links:
The API is fairly simple. The first step is to create a FaceDetector instance. It exposes only one method, detect, which accepts an <img> element as argument. The method does not block the main thread and reports the end of the computation via a Promise.
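The steps above can be sketched as follows. This is a minimal example, assuming the Shape Detection API flag is enabled; the `#photo` selector is an illustrative placeholder, not from the post.

```javascript
// Create a detector and run it on an <img> element.
// FaceDetector optionally accepts { maxDetectedFaces, fastMode }.
async function detectOnImage() {
  const faceDetector = new FaceDetector();
  const image = document.querySelector('#photo'); // hypothetical element id
  // detect() runs off the main thread and resolves with the detected faces.
  const faces = await faceDetector.detect(image);
  console.log(`Found ${faces.length} face(s)`);
  return faces;
}
```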
Detecting faces on a video frame
For video we need a couple more steps. Because FaceDetector doesn’t work with <video> tags, we need to use a <canvas> to draw a frame.
The first step is to create a <canvas> element with the same dimensions as the <video>. Then we make the <canvas> invisible, since it will be used only as an off-screen buffer. With the <canvas> element created, we can draw a video frame on it and call detect on the drawn image.
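As a sketch, those steps might look like this. The function names and the decision to append the canvas to the body are assumptions for illustration, not the post’s exact code.

```javascript
// Create a hidden canvas matching the video's intrinsic dimensions,
// to be used only as an off-screen frame buffer.
function createFrameBuffer(video) {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.style.display = 'none'; // invisible: only used as a buffer
  document.body.appendChild(canvas);
  return canvas;
}

// Draw the current video frame onto the canvas and run detection on it.
async function detectOnFrame(faceDetector, video, canvas) {
  const ctx = canvas.getContext('2d');
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  return faceDetector.detect(canvas);
}
```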
The detect method returns an array of DetectedFace objects. DetectedFace is an object with two properties: boundingBox describes a rectangle around the detected face, and landmarks is an array of places of interest on the detected face, like the eyes and mouth.
With that information, we can now put an image on top of the detected face.
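A minimal sketch of that positioning, assuming an absolutely-positioned overlay element; the helper names are illustrative:

```javascript
// Pure helper: map a DetectedFace boundingBox to CSS pixel values.
function overlayStyleFor(boundingBox) {
  return {
    left: `${boundingBox.x}px`,
    top: `${boundingBox.y}px`,
    width: `${boundingBox.width}px`,
    height: `${boundingBox.height}px`,
  };
}

// Move the overlay image so it covers the detected face.
function placeOverlay(overlayEl, face) {
  overlayEl.style.position = 'absolute';
  Object.assign(overlayEl.style, overlayStyleFor(face.boundingBox));
}
```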
Detecting faces on every video frame
With the above code, we are able to detect faces in a video frame. In order to get face detection on every frame, we have to run the same code above in a loop, since a video is just a succession of frames.
With requestAnimationFrame on the first line we, well, request that the loop function run again on the next frame. And so on. Endlessly. But faceDetector.detect takes time to finish the face detection on a given frame, and we don’t want to start another call before the previous one has finished. For that we use a flag to prevent a new call.
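Put together, the loop might look like this sketch. The `busy` flag name is an assumption (the post doesn’t name its flag), as are the parameter names.

```javascript
let busy = false; // illustrative flag name: true while a detect() is in flight

async function loop(faceDetector, video, canvas) {
  // Schedule the next iteration first, so the loop runs on every frame.
  requestAnimationFrame(() => loop(faceDetector, video, canvas));
  if (busy) return; // previous detect() hasn't finished yet; skip this frame
  busy = true;
  try {
    const ctx = canvas.getContext('2d');
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const faces = await faceDetector.detect(canvas);
    // ...move the overlay here using faces[0].boundingBox...
  } finally {
    busy = false;
  }
}
```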
And now we can reuse the same code above to move the overlay on top of the detected face.
For a complete working solution, check the GitHub repo; all the source code is available there. For any questions or feedback, please create an issue.
An npm package called laughing-man is also available. It’s just a wrapper that puts an overlay image on top of a detected face using the FaceDetector API.