Build an AR Drawing App with React

Patricia Arnedo
JavaScript in Plain English
5 min read · Mar 31, 2021


If you’ve just learned a new framework or language, the leap from building simple, straightforward apps to building something with lots of moving parts can seem insurmountable. At least, that’s how I felt when implementing one of my first projects in React. I had an idea I was passionate about, but no roadmap, no examples, and no certainty that it was even possible.

My goal was to build a web app that lets users draw AR face filters, but I was new to React and JavaScript and wasn’t sure this was something I could do at my skill level. My first step was finding helpful third-party libraries. Thankfully, I found an awesome open-source JavaScript library called Jeeliz. It even had a face filter with a built-in drawing component and facial tracking! You can try it here.

The stock library face filter

This was a wonderful starting point for what I wanted to do. Even though the drawing component was inadequate for my purposes, it gave me the confidence to continue with the project. That said, I still didn’t know how I would implement a usable drawing interface with different stroke colors and sizes, erasing, and saving when finished (how do you even save an image that’s mapped to a video feed?! It was all a mystery to me).

I used create-react-app for my front end, with Redux for state management and a Rails API for my backend. The first hurdle was getting the Jeeliz AI facial recognition library to cooperate with my React components. The part of the library I used was one large file containing all of the behavior you see in the face filter sample above. I spent a lot of time reading through that file, trying to understand how I could make it work with React. Eventually I figured out that the library works with two HTML canvases. One canvas needed to be created in my React camera view component in order to display the camera feed. The other canvas is mapped to the user’s face and stays centered on it even when the user tilts or rotates their head. This inner canvas is where drawing takes place.

This simple component brought my React app and the facial recognition library together!
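In essence, it boiled down to something like this: a component that renders a canvas and hands it to Jeeliz on mount. This is a minimal sketch using the JEELIZFACEFILTER.init options from the library’s docs; the canvas size, the neural network path, and the error handling here are my own placeholders:

```jsx
import React, { useEffect } from 'react';

// Jeeliz is loaded as a global (window.JEELIZFACEFILTER) by its script tag.
function CameraView() {
  useEffect(() => {
    // Hand the canvas over to Jeeliz once the component mounts.
    window.JEELIZFACEFILTER.init({
      canvasId: 'jeeFaceFilterCanvas',
      NNCPath: '/neuralNets/', // placeholder path to the neural network JSON
      callbackReady: (errCode) => {
        if (errCode) console.error('Jeeliz failed to start:', errCode);
      },
    });
  }, []);

  // Jeeliz renders the camera feed (and the face-pinned inner canvas) here.
  return <canvas id="jeeFaceFilterCanvas" width={600} height={600} />;
}

export default CameraView;
```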

This component looks simple, but since I had no experience with HTML canvases or displaying a video feed, it took me a long time to figure out. When I finally got the video feed on my site, I was ecstatic. I could now address the issue of creating a proper drawing component. After researching HTML canvases, I realized just how complicated it is to build a “good” drawing app. I wanted the app to have an intuitive interface and smooth strokes, and I soon realized I would need an external library for my drawing component as well. I was lucky enough to find another great open-source library, Atrament.js.

The strokes are smooth!
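On its own, Atrament takes very little code to set up. Roughly, per its documentation (the selector and the values here are just placeholders):

```js
import Atrament from 'atrament';

// Attach Atrament to the canvas you want to draw on; it handles
// mouse/touch input and stroke smoothing from there.
const canvas = document.querySelector('#drawing-canvas');
const atrament = new Atrament(canvas, { width: 500, height: 500 });

// The interface options I wanted map onto simple properties:
atrament.color = '#ff485e'; // stroke color
atrament.weight = 8;        // stroke size
atrament.mode = 'erase';    // erasing (set back to 'draw' when done)
```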

At this point I had two unrelated libraries, and I essentially wanted them to become one. My plan was to replace the drawing component already present in the Jeeliz face filter with the new Atrament drawing library.

Scientific diagram of how I planned to mesh the two libraries.

Of course, that turned out to be easier said than done. Each library on its own is simple enough to use. If all I wanted was a canvas to draw on with Atrament, I could import the library, create a canvas per its documentation, and the library would take care of the rest. The problem was that the Jeeliz face filter has layers of complexity, and I couldn’t simply plug Atrament’s drawing components in. The Jeeliz library had its own drawing component, as well as calls to a built-in API that performed essential actions like starting and stopping the camera, refreshing the canvas after each change, and checking whether a face was currently being detected. It was doing so much on its own that I had to find the right place to feed in drawing input from Atrament so that Jeeliz would place it on the 2D HTML canvas pinned to the detected face.
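For a sense of that built-in API: Jeeliz reports tracking through a per-frame callback, where detectState.detected is a confidence score. Roughly (the threshold is an arbitrary example value):

```js
// Passed to JEELIZFACEFILTER.init() alongside the options shown earlier.
const callbackTrack = (detectState) => {
  // Only treat the face as present above a confidence threshold.
  if (detectState.detected > 0.6) {
    // A face is tracked: detectState.x/.y/.s/.rx/.ry/.rz describe its
    // position, scale, and rotation, which is what keeps the inner
    // drawing canvas pinned to the face.
  }
};
```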

This was my first time having to read someone else’s complex, unfamiliar code, understand it, and transform it. In the end, I was able to make the two libraries work together because Atrament has a nifty programmatic drawing option. This let me connect the libraries by feeding the coordinates of each drawn stroke into the Jeeliz face filter library’s event listeners.

Mouse-down and mouse-move event listeners from the Jeeliz library.
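The pattern looked roughly like this, using Atrament’s documented programmatic drawing calls (beginStroke / draw / endStroke); the coordinate helper is a hypothetical stand-in for the Jeeliz code that converts mouse events into canvas space:

```js
// Hypothetical helper: convert a mouse event to canvas-space coordinates.
const getCanvasCoords = (e) => {
  const rect = canvas.getBoundingClientRect();
  return { x: e.clientX - rect.left, y: e.clientY - rect.top };
};

let prevPoint = null; // last drawn point; null while the mouse is up

canvas.addEventListener('mousedown', (e) => {
  const { x, y } = getCanvasCoords(e);
  atrament.beginStroke(x, y);
  prevPoint = { x, y };
});

canvas.addEventListener('mousemove', (e) => {
  if (!prevPoint) return; // only draw while a stroke is in progress
  const { x, y } = getCanvasCoords(e);
  // draw() paints a smoothed segment and returns the point it drew to.
  prevPoint = atrament.draw(x, y, prevPoint.x, prevPoint.y);
});

canvas.addEventListener('mouseup', (e) => {
  const { x, y } = getCanvasCoords(e);
  atrament.endStroke(x, y);
  prevPoint = null;
});
```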

In the code above, you can see the event listeners for mouse down and mouse move. Previously they drew with the Canvas API context (ctx); now they call atrament.draw with the mouse coordinates to display the strokes made by the user. I also removed the ornate frame originally in the Jeeliz face filter, but the library’s ability to map an image onto the face made it easy to let users try on filters after they’ve been drawn.
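As for the earlier mystery of saving an image mapped to a video feed, one tidy answer is that you don’t need to save the feed at all, only the flat 2D drawing canvas, which the browser can serialize with toDataURL. A sketch, with a made-up /filters endpoint standing in for my Rails API:

```js
// Export only the inner drawing canvas (not the video) as a base64 PNG
// and send it to the backend, so the filter can be tried on later.
const saveFilter = async (drawingCanvas, name) => {
  const image = drawingCanvas.toDataURL('image/png');
  await fetch('/filters', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filter: { name, image } }),
  });
};
```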

The result was an app where you can draw face filters, save them, try them on later, and browse filters drawn by other users. Here is what the drawing interface ended up looking like:

Conclusion

Overall, I learned a lot from building this app. There were so many moments when it felt impossible and I thought I wouldn’t be able to make it work, but the finished product makes me so happy, and I love sharing it with friends and seeing what they create. You can try the app here.

