AR SpongeBob Selfie App

First of all, I had the idea of making an AR selfie app for my UX Design class at New York University, which asked us to make something for real people in Dumbo, Brooklyn, that addresses climate change.

My plan was to make an AR selfie app that encourages Dumbo visitors to explore the area by collecting AR selfie stickers at different spots; the stickers can only be unlocked on site. When the user takes a selfie, the app automatically generates a climate-change-themed poster saying something like "This is what Dumbo will look like in 80 years due to sea level rise caused by climate change." The app would be part of a "Dumbo Selfie Challenge" social campaign, like the Ice Bucket Challenge, meant to go viral on the internet and lead to a fundraiser to help protect our planet.

(Here is a very early prototype of the app)

After a series of user tests, I figured that in order to make the stickers worth collecting, I needed to partner with a popular franchise, the way Pokémon GO does. So I chose SpongeBob: 2019 happens to be SpongeBob SquarePants' 20th anniversary, and the undersea cartoon is a perfect match for showing Dumbo underwater.


To make a prototype for the selfie app, I found Daniel Shiffman's workshop materials on Face Detection in Processing.

There was a lot to read and watch, since everything was brand new to me. But after a week of going back and forth with the preset examples, I figured out how to overlay an image on my face and scale it according to the distance between me and the camera.
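
The core of it boils down to something like this (the real sketch had more going on; sticker.png here is just a placeholder image, and the code assumes the OpenCV for Processing and Video libraries from the workshop materials):

```
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;
PImage sticker;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);   // must match the video size
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  sticker = loadImage("sticker.png");    // placeholder sticker image
  video.start();
}

void draw() {
  if (video.available()) video.read();
  opencv.loadImage(video);
  image(video, 0, 0);
  // Each detected face comes back as a bounding box. The box grows
  // as the face gets closer to the camera, so sizing the sticker to
  // the box makes it scale with distance automatically.
  for (Rectangle face : opencv.detect()) {
    image(sticker, face.x, face.y, face.width, face.height);
  }
}
```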

I tried out the traditional square to frame the faces, the frog emoji (of course), and even Kim Jong-un's face.

Square to test out face detection

Frog emoji to test out face detection.

Kim's face to test out face detection.

Trying out the SpongeBob sticker.

To make the app more interactive, I made a group of bubbles with an array and PVector. The bubbles respond to mouse clicks and feel like real bubbles underwater; a rough sketch of the idea is below.
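
The idea boils down to something like this (the class name and physics numbers here are simplified stand-ins, not the exact code):

```
Bubble[] bubbles = new Bubble[20];

void setup() {
  size(640, 480);
  for (int i = 0; i < bubbles.length; i++) {
    bubbles[i] = new Bubble(random(width), random(height));
  }
}

void draw() {
  background(0, 60, 120);
  for (Bubble b : bubbles) {
    b.update();
    b.display();
  }
}

void mousePressed() {
  // Push every nearby bubble away from the click point.
  for (Bubble b : bubbles) {
    b.push(new PVector(mouseX, mouseY));
  }
}

class Bubble {
  PVector pos, vel;
  float r = random(10, 30);

  Bubble(float x, float y) {
    pos = new PVector(x, y);
    vel = new PVector(0, random(-1, -0.3));  // bubbles drift upward
  }

  void push(PVector from) {
    PVector away = PVector.sub(pos, from);
    if (away.mag() < 150) {       // only bubbles near the click react
      away.setMag(5);
      vel.add(away);
    }
  }

  void update() {
    pos.add(vel);
    vel.mult(0.95);               // friction so pushes fade out
    vel.y -= 0.01;                // gentle buoyancy
    if (pos.y < -r) pos.y = height + r;  // wrap back to the bottom
    pos.x = constrain(pos.x, 0, width);
  }

  void display() {
    noFill();
    stroke(255, 180);
    ellipse(pos.x, pos.y, r * 2, r * 2);
  }
}
```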

Testing out the interactive bubbles at the NYU library.

Everything looked great until I tried to put it all together. First of all, Processing crashed every time I imported two libraries at the same time. I presented my work in class and got help with enlarging the memory available to the sketch (the maximum-memory setting in Processing's Preferences).

Secondly, when I was finally able to put everything together, the face detection results wouldn't show up on the screen. I tried multiple ways of rearranging the code, but none of them worked, so I had to get some help from the teaching assistant.

The TA was super helpful. He told me the problem was a mismatch between the video size and the OpenCV size; they must remain exactly the same. He also showed me how to use pushMatrix() and popMatrix() to properly frame the images.

The TA also showed me how to make simple animations by using a PVector to rotate the stickers, as in the sketch below.
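
Both tips combined look something like this (frog.png is a placeholder and the actual code was more involved): pushMatrix()/popMatrix() keep each sticker's transforms contained, and the PVector's heading drives the spin.

```
PImage frog;
PVector dir = new PVector(1, 0);   // its heading supplies the angle

void setup() {
  size(640, 480);
  imageMode(CENTER);               // draw images from their center
  frog = loadImage("frog.png");    // placeholder sticker image
}

void draw() {
  background(0);
  dir.rotate(0.05);                // advance the spin each frame
  pushMatrix();
  translate(width / 2, height / 2);  // move origin to the sticker spot
  rotate(dir.heading());             // spin around that origin
  image(frog, 0, 0, 120, 120);
  popMatrix();                       // later drawing is unaffected
}
```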

The spinning frog emoji

In the final product, I didn't use the stickers to cover the user's face; instead, I let the characters from SpongeBob stand on the user's shoulder.

During the coding process, I also had some problems with the math, since the video needs to be scaled up by a factor of two (e.g. scale(2);) and the pixels need to be translated horizontally to compensate (e.g. translate(-200, 0);). When I was making the buttons, I had to plug in numbers by hand to see whether they lined up properly.
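
The math looks roughly like this (the stand-in graphics and the button position are made up for illustration; the 2x scale and the -200 shift are the real numbers). Because translate() happens after scale(), the shift is measured in video pixels, so a point at video coordinate (vx, vy) lands on screen at ((vx - 200) * 2, vy * 2).

```
PGraphics fakeVideo;   // stand-in for the 640x480 live capture

void setup() {
  size(1280, 960);
  fakeVideo = createGraphics(640, 480);
  fakeVideo.beginDraw();
  fakeVideo.background(0, 60, 120);
  fakeVideo.ellipse(320, 240, 200, 200);
  fakeVideo.endDraw();
}

void draw() {
  background(0);
  pushMatrix();
  scale(2);                // the 640x480 frame now covers 1280x960
  translate(-200, 0);      // shift horizontally, in video pixels
  image(fakeVideo, 0, 0);
  popMatrix();

  // UI drawn after popMatrix() uses plain screen coordinates, so
  // lining buttons up with the video takes the conversion above.
  rect(20, 20, 64, 64);    // a button placed by trial and error
}
```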

Screenshot of the code

I also confused myself a little bit with mouse input. I imagined this prototype running on a smartphone, so I coded all the interaction around mouse clicks. When I tried to reset the screen with mousePressed(), I forgot it was also used for pushing away the bubbles. So I realized I needed to make a separate button for each function.
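
The fix boils down to something like this (the button positions and the println() stand-ins are made up for illustration): one mousePressed() checks where the click landed and routes it, so resetting no longer collides with the bubble push.

```
void setup() {
  size(640, 480);
}

void draw() {
  background(40);
  rect(20, 20, 64, 64);     // reset button
  rect(100, 20, 64, 64);    // sticker-switch button
}

void mousePressed() {
  if (over(20, 20, 64, 64)) {
    println("reset screen");       // stand-in for the reset code
  } else if (over(100, 20, 64, 64)) {
    println("switch sticker");     // stand-in for the sticker code
  } else {
    println("push bubbles");       // everywhere else moves bubbles
  }
}

boolean over(float x, float y, float w, float h) {
  return mouseX > x && mouseX < x + w && mouseY > y && mouseY < y + h;
}
```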

Bubbles

Fly away with Patrick Star

AR selfie at Jane's Carousel in Dumbo.

In the end, I am very satisfied with the final product. I fulfilled my goal of making an AR selfie prototype that loads vector images based on face detection; the stickers can be switched with buttons; the bubbles respond to the user's touch; and photos can be saved with the camera icon. Now I am confident enough to use variables, arrays, classes, and conditional statements, to code interactions, and even to teach myself things I didn't learn in class. I still can't believe I reached this point only halfway through the semester. Of course, there is still a lot to improve in the code's structure and readability. For example, I could have created a class for the buttons instead of coding them one by one (a sketch of that idea is below), and I could have left one more slot for resetting the screen to show no stickers. Still, I am super satisfied with what I am capable of doing right now.
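
The Button class might have looked something like this (everything here, names, positions, and actions, is hypothetical):

```
Button resetBtn, stickerBtn;

void setup() {
  size(640, 480);
  resetBtn   = new Button(20, 20, 64, 64, "reset");
  stickerBtn = new Button(100, 20, 64, 64, "sticker");
}

void draw() {
  background(40);
  resetBtn.display();
  stickerBtn.display();
}

void mousePressed() {
  if (resetBtn.over())   println("reset screen");    // stand-in
  if (stickerBtn.over()) println("switch sticker");  // stand-in
}

class Button {
  float x, y, w, h;
  String label;

  Button(float x, float y, float w, float h, String label) {
    this.x = x; this.y = y; this.w = w; this.h = h;
    this.label = label;
  }

  // Each button owns its own hit test instead of scattering
  // hard-coded coordinates around the sketch.
  boolean over() {
    return mouseX > x && mouseX < x + w && mouseY > y && mouseY < y + h;
  }

  void display() {
    fill(200);
    rect(x, y, w, h);
    fill(0);
    text(label, x + 5, y + h / 2);
  }
}
```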