Which Character Are You in LuminoCity

In March 2020, I started my position as a Multimedia Content Creator for LuminoCity, a multimedia entertainment company that transforms captivating stories into multidimensional experiences. The company hosts LuminoCity Festival, a month-long holiday event featuring spectacular light-art displays, live performances, and a celebration of cultures.

To bring the company's IP storytelling to social media, I created a “Which character are you in LuminoCity” Instagram filter.

The first step was to create a PNG file for each character. I photoshopped 18 characters in total.

Screen Shot 2020-04-02 at 12.11.39 AM.png

Then I used Spark AR to build the interaction flow. Originally, I designed the filter to be triggered by tapping the screen.

Screen Shot 2020-04-02 at 1.30.58 AM.png

I also added face distortion to retouch the face. (Yes, of course.) It turned out great!

no tap.gif

Then I realized that users would need two fingers at once: one to trigger the effect and another to record it. So I changed the patch:

Screen Shot 2020-04-09 at 1.31.54 AM.png

I also added instruction text, “press and hold the record button to launch,” in case the user doesn’t know how to play. Here is the final version:

final version.gif

Here is the final UI design for the filter:

Screen Shot 2020-04-08 at 10.48.30 AM.png

AR Pong Game

I have been exploring video and computer vision in Processing and find them a very interesting way to create human-computer interactions.

I work and live a few blocks away from Coolture Impact, an interactive public art platform at the Port Authority Terminal. One of the interactive artworks featured recently is Stardust Wishes. It offers visitors a unique experience of this emerging art form. By moving, dancing, waving, or pointing, visitors create their own spectacular light show. Whether shooting holiday fireworks across the massive screen, effortlessly creating swirls of kaleidoscopic colors with a wave of the hand, or swaying an abstract deco cityscape of light, they are essential participants in a unique artistic experience.

People interact with the Coolture Impact at the Port Authority in New York City. VIDEO CREDIT: JOHN FRATASSI

Every time I walked by the installation, I slowed my pace and interacted with the virtual elements on the screen. Even a small movement of a simple image can trigger a lot of fun, so I really wanted to make something just as simple and just as fun.

Jude’s doodle on the project

So my idea is very simple: I want to make an interactive program that turns the user into a virtual object that interacts with elements on the screen.

First of all, I tried motion tracking.

Screenshot of the code for motion tracking.

The core idea of motion tracking in Processing is to loop over all the pixels and look for whatever I want to track, whether that is the brightness of a color or the change between the previous frame's pixels and the current frame's pixels.
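
For anyone following along, that frame-differencing idea boils down to something like this rough sketch (the threshold, sizes, and variable names are placeholders, not my exact code):

```
// Rough sketch of frame differencing: compare each pixel with the previous
// frame, mark the pixels that changed, and move a ball to their average position.
// (Threshold and sizes are placeholder values.)
import processing.video.*;

Capture video;
PImage prevFrame;           // copy of the previous frame
float threshold = 50;       // how much a pixel must change to count as "motion"

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  prevFrame = createImage(width, height, RGB);
}

void draw() {
  if (video.available()) {
    // Save the last frame before reading the new one
    prevFrame.copy(video, 0, 0, video.width, video.height,
                   0, 0, video.width, video.height);
    video.read();
  }
  image(video, 0, 0);
  video.loadPixels();
  prevFrame.loadPixels();

  float sumX = 0, sumY = 0, count = 0;
  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      int i = x + y * video.width;
      // Distance between the current pixel's color and the previous frame's color
      float diff = dist(red(video.pixels[i]), green(video.pixels[i]), blue(video.pixels[i]),
                        red(prevFrame.pixels[i]), green(prevFrame.pixels[i]), blue(prevFrame.pixels[i]));
      if (diff > threshold) {
        stroke(255);
        point(x, y);                     // mark moving pixels in white
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  if (count > 0) {
    noStroke();
    fill(255, 0, 200);
    ellipse(sumX / count, sumY / count, 30, 30);   // ball follows the average motion point
  }
}
```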

It turns out like this:

MOTION TRACKING 1.gif

My movement is mapped out in white points, and the colorful ball moves to wherever the pixels are changing.

Then I tried out simple color tracking.

Colors can only be compared in terms of their red, green, and blue components, so it's necessary to separate out those values first. Comparing two colors then means calculating the distance between two points with the Pythagorean theorem: think of a color as a point in three-dimensional space, where instead of (x, y, z) we have (r, g, b). If two colors are near each other in this color space, they are similar; if they are far apart, they are different.
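
In code, that boils down to something like this rough sketch (the starting target color is a placeholder; clicking on an object samples its color, so the same sketch can follow a red pen, skin, or anything else):

```
// Rough sketch of color tracking: find the pixel whose (r, g, b) point is
// closest to the target color. (Sizes and the starting color are placeholders.)
import processing.video.*;

Capture video;
color trackColor;

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  trackColor = color(255, 0, 0);   // start by looking for pure red
}

void draw() {
  if (video.available()) video.read();
  image(video, 0, 0);
  video.loadPixels();

  float closest = 500;             // larger than any possible RGB distance (~442)
  int closestX = 0, closestY = 0;
  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      color c = video.pixels[x + y * video.width];
      // Treat (r, g, b) as a point in 3D space and measure the distance to the target
      float d = dist(red(c), green(c), blue(c),
                     red(trackColor), green(trackColor), blue(trackColor));
      if (d < closest) {
        closest = d;
        closestX = x;
        closestY = y;
      }
    }
  }
  noStroke();
  fill(0, 0, 255);
  ellipse(closestX, closestY, 16, 16);   // mark the best match with a blue dot
}

void mousePressed() {
  // Click on the object you want to track to sample its color
  video.loadPixels();
  trackColor = video.pixels[mouseX + mouseY * video.width];
}
```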

Tracking my skin color and mapping it out with blue points.

OK, now it's time to add a virtual element to interact with. The first thing that came to my mind was a ball: I could make an AR Pong game as a tribute to my first try at programming. (Pong is one of the earliest video games in history.)

An automatically generated virtual ball

And now it’s time to turn my red pen into a paddle to hit the virtual ball:

The color of the red pen was tracked and turned into a virtual paddle.

Then I used ArrayList to create multiple virtual paddles:

Screenshot of the ArrayList code to create multiple rectangular paddles.

Now my two red pens are turned into multiple paddles. I also created a function to merge the squares into one when they collide with each other.
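
The merging logic looks roughly like this sketch (the Paddle class, the numbers, and the mouse-click stand-in for the color-tracked spots are all placeholders, not my exact code):

```
// Rough sketch: an ArrayList of square paddles, with overlapping squares
// merged into one. (Class name, sizes, and the click stand-in are placeholders.)
ArrayList<Paddle> paddles = new ArrayList<Paddle>();

void setup() {
  size(640, 480);
}

void draw() {
  background(0);
  mergePaddles();
  for (Paddle p : paddles) {
    p.display();
  }
}

void mousePressed() {
  // In the real sketch each color-tracked spot becomes a paddle;
  // here a mouse click stands in for a tracked spot.
  paddles.add(new Paddle(mouseX, mouseY, 60));
}

void mergePaddles() {
  // Walk the list backwards so removing an element doesn't skip the next one
  for (int i = paddles.size() - 1; i > 0; i--) {
    Paddle a = paddles.get(i);
    for (int j = i - 1; j >= 0; j--) {
      Paddle b = paddles.get(j);
      if (a.overlaps(b)) {
        // Replace the two colliding squares with one square between them
        b.x = (a.x + b.x) / 2;
        b.y = (a.y + b.y) / 2;
        b.size = max(a.size, b.size) + 10;
        paddles.remove(i);
        break;
      }
    }
  }
}

class Paddle {
  float x, y, size;
  Paddle(float x, float y, float size) {
    this.x = x;
    this.y = y;
    this.size = size;
  }
  boolean overlaps(Paddle other) {
    // Two squares touch when their centers are closer than their combined half-sizes
    return abs(x - other.x) < (size + other.size) / 2 &&
           abs(y - other.y) < (size + other.size) / 2;
  }
  void display() {
    rectMode(CENTER);
    noStroke();
    fill(255, 0, 0, 150);
    rect(x, y, size, size);
  }
}
```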

To make the Pong game work, I need the ball to bounce in four directions depending on which of the four edges of the square paddle it hits, so I divided the paddle into four parts:

Ball detection.jpg

The code looks like this:

Screenshot of the ball detection code.
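
In rough sketch form, the idea is to find which edge of the square the ball is closest to, then flip the matching direction of travel (the mouse stands in for the tracked pen, and all numbers are placeholders):

```
// Rough sketch of the four-edge bounce test. (Speeds, sizes, and the
// mouse-controlled paddle are placeholders for the color-tracked version.)
float ballX, ballY, ballSize = 20;
float speedX = 4, speedY = 3;
float padX, padY, padSize = 80;      // the square paddle

void setup() {
  size(640, 480);
  ballX = width / 2;
  ballY = height / 2;
}

void draw() {
  background(0);
  ballX += speedX;
  ballY += speedY;

  // Bounce off the window edges
  if (ballX < 0 || ballX > width)  speedX *= -1;
  if (ballY < 0 || ballY > height) speedY *= -1;

  padX = mouseX;                     // stand-in for the color-tracked pen
  padY = mouseY;

  // If the ball is inside the square, find the nearest of its four edges
  // and reverse the matching direction of travel.
  float half = padSize / 2;
  if (ballX > padX - half && ballX < padX + half &&
      ballY > padY - half && ballY < padY + half) {
    float fromLeft   = ballX - (padX - half);
    float fromRight  = (padX + half) - ballX;
    float fromTop    = ballY - (padY - half);
    float fromBottom = (padY + half) - ballY;
    float nearest = min(min(fromLeft, fromRight), min(fromTop, fromBottom));
    if (nearest == fromLeft || nearest == fromRight) {
      speedX *= -1;                  // hit the left or right edge of the paddle
    } else {
      speedY *= -1;                  // hit the top or bottom edge of the paddle
    }
  }

  rectMode(CENTER);
  fill(255, 0, 0);
  rect(padX, padY, padSize, padSize);
  fill(255, 255, 0);
  ellipse(ballX, ballY, ballSize, ballSize);
}
```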

I also added a virtual explosion to exaggerate the ball-paddle collision.

Boom! It works!

Check out the AR Pong Game that can turn anything you have into a virtual paddle to hit the ball:

Single-player mode using a red pen

Of course, there is also a multiplayer mode:

Multiplayer mode with two red pens

It's too pathetic to play only by myself, so I invited my colleague to do a test run with me:

And yes, I was doing Star Wars moves…

OK, I am done with the pen. Now it's time to try turning a face into a paddle.

My colleagues must think I am crazy doing this all day…

And of course I tried playing in multiplayer mode:

I feel so sorry for the ball. It seems to have nowhere to run.

To add levels of difficulty, I coded a function that speeds up the ball when the score reaches 100, 300, 500, and 1000.
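
The rule itself is tiny, roughly like this sketch (the 1.3x multiplier and the auto-incrementing score are placeholders for the real game's values):

```
// Rough sketch of the difficulty rule: each time the score crosses a threshold,
// bump the ball speed once. (Multiplier and the fake score counter are placeholders.)
int score = 0;
int[] thresholds = {100, 300, 500, 1000};
boolean[] applied = {false, false, false, false};
float speedX = 4, speedY = 3;

void setup() {
  size(640, 480);
}

void draw() {
  background(0);
  score++;                        // stand-in: in the game the score rises on each paddle hit
  speedUpBall();
  fill(255);
  text("score: " + score + "   ball speed: " + nf(speedX, 1, 1), 20, 30);
}

void speedUpBall() {
  for (int i = 0; i < thresholds.length; i++) {
    if (score >= thresholds[i] && !applied[i]) {
      speedX *= 1.3;              // one speed bump per threshold
      speedY *= 1.3;
      applied[i] = true;
    }
  }
}
```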

I think I am getting a headache.

I am very satisfied with the result. I was able to use ArrayList to create the multiple squares and the explosion, core computer vision ideas to track motion and color, and functions, conditionals, and loops to make a simple game. Yeah!

AR Spongebob Selfie App

First of all, the idea of making an AR selfie app came from my UX Design class at New York University, where the assignment was to make something for real people in Dumbo, Brooklyn, that addresses climate change.

My plan was to make an AR selfie app that engages Dumbo visitors to explore the area by collecting AR selfie stickers at different spots; the stickers can only be unlocked on site. When a visitor takes a selfie, the app automatically generates a climate-change-themed poster saying something like "This is how Dumbo will look in 80 years due to sea level rise caused by climate change." The app would be part of a "Dumbo Selfie Challenge" social campaign, like the ice bucket challenge, meant to go viral on the internet and lead to fundraising to help protect our planet.

(Here is a very early prototype of the app)

After a series of user tests, I figured that in order to make the stickers collectible, I would need to partner with a popular franchise, the way Pokémon Go does. So I chose SpongeBob: 2019 happened to be SpongeBob SquarePants' 20th anniversary, and the undersea animation is a perfect match for showing Dumbo underwater.

la-et-hc-spongebob-20th-best-year-ever-2019021-001.jpeg

To make a prototype for the selfie app, I found Daniel Shiffman's workshop materials on Face Detection in Processing.

There was a lot to read and watch, since everything was brand new to me. But after a week of playing around with the preset examples, I figured out how to add images over my face and change the size of the image according to the distance between me and the camera.
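
The basic setup, using the OpenCV for Processing library from the workshop, looks roughly like this (the sticker file name is a placeholder): since a detected face rectangle grows as you get closer to the camera, drawing the image at the rectangle's size makes the sticker scale with depth.

```
// Rough sketch: detect faces with OpenCV for Processing and draw an image over
// each face, sized to the detected rectangle. ("sticker.png" is a placeholder.)
import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;
PImage sticker;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);            // must match the video size
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  sticker = loadImage("sticker.png");             // placeholder file name
  video.start();
}

void draw() {
  if (video.available()) video.read();
  opencv.loadImage(video);
  image(video, 0, 0);

  Rectangle[] faces = opencv.detect();
  for (Rectangle face : faces) {
    // The closer the face, the larger the rectangle, so the sticker scales with depth
    image(sticker, face.x, face.y, face.width, face.height);
  }
}
```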

I tried out the traditional square to frame the faces, an emoji (of course), and even Kim Jong-un's face.

Square to test out face detection

Frog emoji to test out face detection.

Kim’s face to test out face detection.

Try out Spongebob sticker

To make the app more interactive, I made a group of bubbles with an array and PVector. The bubbles respond to mouse clicks and feel like real bubbles underwater.
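
Roughly, the bubbles work like this sketch (the counts, speeds, and push force are placeholders, not my exact values):

```
// Rough sketch of the bubbles: positions and velocities stored as PVectors,
// drifting upward and pushed away when you click near them. (Numbers are placeholders.)
int numBubbles = 30;
PVector[] positions = new PVector[numBubbles];
PVector[] velocities = new PVector[numBubbles];

void setup() {
  size(640, 480);
  for (int i = 0; i < numBubbles; i++) {
    positions[i] = new PVector(random(width), random(height));
    velocities[i] = new PVector(0, random(-2, -0.5));   // bubbles drift upward
  }
}

void draw() {
  background(0, 60, 120);
  for (int i = 0; i < numBubbles; i++) {
    positions[i].add(velocities[i]);
    if (positions[i].y < 0) positions[i].y = height;    // wrap back to the bottom
    noFill();
    stroke(255, 180);
    ellipse(positions[i].x, positions[i].y, 30, 30);
  }
}

void mousePressed() {
  // Push nearby bubbles away from the click, like poking them underwater
  for (int i = 0; i < numBubbles; i++) {
    PVector away = PVector.sub(positions[i], new PVector(mouseX, mouseY));
    if (away.mag() < 100) {
      away.setMag(5);
      velocities[i].add(away);
    }
  }
}
```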

Test out interactive bubbles at NYU library.

Everything looked great until I tried to put it all together. First of all, Processing crashed every time I tried to import two libraries at the same time. I presented my work in class and got help with increasing the memory available to the sketch.

Secondly, when I was finally able to put everything together, the face detection results wouldn't show up on the screen. I tried multiple ways of rearranging the code, but none of them worked, so I had to get some help from the teaching assistant.

The TA was super helpful. He told me the problem was a mismatch between the video size and the OpenCV size; they must be exactly the same. He also showed me how to use pushMatrix() and popMatrix() to properly frame the images.

The TA also showed me how to make simple animations by using PVector to rotate the stickers.
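
The size fix is really just making sure the OpenCV object is created with the same width and height as the Capture, as in the face detection sketch above. The framing and spinning parts look roughly like this (using a PVector's heading for the angle is my stand-in for the TA's exact approach, and the file name is a placeholder):

```
// Rough sketch: pushMatrix()/popMatrix() isolate one sticker's transforms, and
// a PVector's heading drives a simple spin. ("frog.png" is a placeholder.)
PImage sticker;
PVector spin = new PVector(1, 0);

void setup() {
  size(640, 480);
  sticker = loadImage("frog.png");   // placeholder file name
  imageMode(CENTER);
}

void draw() {
  background(0);
  spin.rotate(0.05);                 // turn the vector a little every frame

  pushMatrix();                      // isolate this sticker's transforms
  translate(width/2, height/2);      // move to where the sticker should sit (e.g. a face)
  rotate(spin.heading());            // use the vector's angle as the sticker's angle
  image(sticker, 0, 0, 150, 150);
  popMatrix();                       // restore coordinates for anything drawn next
}
```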

The spinning frog emoji

In the final product, I didn't use the stickers to cover the user's face; instead, I let the characters from SpongeBob stand on the user's shoulder.

During the coding process, I also had some problems with the math, since the video needs to be scaled up by a factor of two (e.g. scale(2);) and the pixels need to be translated horizontally (e.g. translate(-200, 0);). When I was making the buttons, I had to plug in numbers by trial and error to check whether they were properly aligned.

Screenshot of the code.
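
For reference, the transform part boils down to something like this rough sketch; drawing the UI after popMatrix() (which I didn't fully do at the time, hence the trial and error) keeps the buttons in plain window coordinates (all sizes and positions here are placeholders):

```
// Rough sketch: a small capture scaled up to fill the window, with the buttons
// drawn outside the transform so mouseX/mouseY can be compared directly.
// (Capture size, button positions, and file names are placeholders.)
import processing.video.*;

Capture video;

void setup() {
  size(640, 480);
  video = new Capture(this, 320, 240);     // a small capture, half the window size
  video.start();
}

void draw() {
  if (video.available()) video.read();

  pushMatrix();
  scale(2);                                // the 320x240 capture now fills 640x480
  image(video, 0, 0);
  popMatrix();                             // back to plain window coordinates for the UI

  fill(255, 0, 0);
  rect(20, height - 60, 40, 40);           // placeholder "take a photo" button
  fill(0, 0, 255);
  rect(80, height - 60, 40, 40);           // placeholder "switch sticker" button
}

void mousePressed() {
  if (mouseX > 20 && mouseX < 60 && mouseY > height - 60 && mouseY < height - 20) {
    saveFrame("selfie-####.png");          // save the current frame as the selfie
  }
}
```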

I also confused myself a little bit with mouse-click handling. I imagined this prototype would eventually run on a smartphone, so I coded all the interactions around mouse clicks. When I tried to reset the screen with mousePressed(), I forgot it was also used for pushing away the bubbles, so I realized I needed a separate button for each function.
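
The fix boils down to checking, inside mousePressed(), where the click landed before deciding what it does, roughly like this (the button geometry and the helper name are placeholders):

```
// Rough sketch: one mousePressed() that dispatches to either the reset button
// or the bubble push, depending on where the click landed. (Geometry is a placeholder.)
float resetX = 20, resetY = 20, resetW = 80, resetH = 40;
boolean showSticker = true;

void setup() {
  size(640, 480);
}

void draw() {
  background(0, 60, 120);
  fill(230);
  rect(resetX, resetY, resetW, resetH);      // the reset button
  fill(0);
  text("reset", resetX + 22, resetY + 25);
  if (showSticker) {
    fill(255, 220, 0);
    ellipse(width/2, height/2, 120, 120);    // stand-in for a sticker
  }
}

void mousePressed() {
  boolean onResetButton = mouseX > resetX && mouseX < resetX + resetW &&
                          mouseY > resetY && mouseY < resetY + resetH;
  if (onResetButton) {
    showSticker = false;                     // clear the stickers from the screen
  } else {
    pushBubbles(mouseX, mouseY);             // anywhere else, a click still pokes the bubbles
  }
}

void pushBubbles(float x, float y) {
  // In the full sketch this shoves nearby bubbles away from (x, y),
  // as in the bubble example earlier.
}
```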

Bubbles

Fly away with Patrick Star

AR selfie at Jane’s Carousel in Dumbo.

In the end, I am very satisfied with the final product. I fulfilled my goal of making an AR selfie prototype that loads images based on face detection; the stickers can be switched with buttons; the bubbles respond to the user's touch; and photos can be saved with the camera icon. Now I am confident using variables, arrays, classes, and conditional statements, coding interactions, and even teaching myself things I didn't learn in class. I still can't believe I reached this point only halfway through the semester. Of course, there is still a lot to improve in terms of code structure and readability. For example, I could have created a class for the buttons instead of coding them one by one, and I could have left one more spot for resetting the screen with no stickers. Still, I am super satisfied with what I am capable of doing right now.