When people are in a long-distance relationship, they exist for each other only in a limited form, like a 5 cm character living inside the phone.
Created by Yundi Judy Zhu
Starring Lynn Li
I have been a reporter with China Daily for more than three years. One of the most heartbreaking stories that I covered was the kidnapping and murder of the 26-year-old Chinese student Yingying Zhang.
In this project, I created an AR version of the story. By scanning the news photos, readers can also watch the press conference video and see background information and court drawings.
Video by Yundi Judy Zhu
In March 2020, I started my position as a Multimedia Content Creator for LuminoCity, a multimedia entertainment company that transforms captivating stories into multidimensional experiences. The company hosts LuminoCity Festival, a month-long holiday event with an exhibition of spectacular light art displays, live performances, and a celebration of cultures.
To extend their IP storytelling, I created a “Which character are you in LuminoCity” Instagram filter.
The first step was to create a PNG file for each character; I photoshopped 18 characters in total.
Then I used Spark AR to build the interaction flow. Originally, I designed the filter to be triggered by tapping the screen.
I also added face distortion to retouch the face. (Yes, of course.) It turned out great!
Then I realized that the user would have to use two fingers at once, one to trigger the effect and one to record it. So I changed the patch:
I also added instruction text, “Press and hold the record button to launch,” in case the user doesn’t know how to play. Here is the final version:
Here is the final UI design for the filter:
In 2016, Pokémon Go became a phenomenon around the globe.
As a multimedia reporter, I wrote an in-depth story and created a video about the popularity of this AR game.
Article link: http://usa.chinadaily.com.cn/epaper/2016-07/15/content_26103284.htm
But I wanted to do more.
So, I ended up designing one myself.
I designed the AR game in Unity and Vuforia, using a NYC MetroCard as the target image; the card turns into a gun that automatically shoots Pikachus at Charmanders.
I know it looks very cheap, but it's very fun to play.
Of course, it kills a lot of time while waiting out MTA delays.
I have been exploring video and computer vision in Processing, and I find it a very interesting way to create human-computer interactions.
I work and live a few blocks away from Coolture Impact, an interactive public art platform at the Port Authority Bus Terminal. One of the interactive artworks featured recently is Stardust Wishes, which offers visitors a unique experience of this emerging art form. By moving, dancing, waving, or pointing, visitors create their own spectacular light show. Whether they are shooting holiday fireworks across the massive screen, effortlessly creating swirls of kaleidoscopic color with a wave of the hand, or swaying an abstract deco cityscape of light, they are essential participants in a unique artistic experience.
Every time I walked by the installation, I slowed down and interacted with the virtual elements on the screen. Even a small movement of a simple image can trigger a lot of fun, so I really wanted to make something just as simple and just as fun.
So my idea is very simple: I want to make an interactive program that can turn the user, or anything the user is holding, into a virtual object that interacts with the screen.
First of all, I tried motion tracking.
The core idea of motion tracking in Processing is to go over all the pixels and look for the thing I want to track, whether that is the brightness of a color or the difference between the previous frame’s pixels and the current frame’s pixels.
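Stripped down, that frame-differencing idea looks something like this (a simplified sketch, not my full code; the thresholds are arbitrary):

```
import processing.video.*;

Capture video;
PImage prevFrame;        // copy of the previous frame for comparison
float threshold = 50;    // how different a pixel must be to count as motion

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  prevFrame = createImage(width, height, RGB);
}

void captureEvent(Capture video) {
  // save the current frame before reading the new one
  prevFrame.copy(video, 0, 0, video.width, video.height,
                        0, 0, video.width, video.height);
  prevFrame.updatePixels();
  video.read();
}

void draw() {
  image(video, 0, 0);
  video.loadPixels();
  prevFrame.loadPixels();

  float sumX = 0, sumY = 0, count = 0;
  stroke(255);

  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      int loc = x + y * video.width;
      color current  = video.pixels[loc];
      color previous = prevFrame.pixels[loc];
      // distance between the two colors in RGB space
      float diff = dist(red(current), green(current), blue(current),
                        red(previous), green(previous), blue(previous));
      if (diff > threshold) {
        point(x, y);     // mark the changing pixel in white
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }

  // move the ball to the average location of the changing pixels
  if (count > 200) {
    fill(255, 0, 200);
    noStroke();
    ellipse(sumX / count, sumY / count, 40, 40);
  }
}
```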
It turned out like this:
My movement was mapped out in white points, and the colorful ball moved to wherever the pixels were changing.
Then I tried out simple color tracking.
Colors can only be compared in terms of their red, green, and blue components, so it’s necessary to separate out these values. Comparing two colors then means calculating the distance between two points with the Pythagorean theorem: think of a color as a point in three-dimensional space where, instead of (x, y, z), we have (r, g, b). If two colors are near each other in this color space, they are similar; if they are far apart, they are different.
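A minimal version of that color tracking looks something like this (a sketch, assuming the tracked color is picked by clicking on the video):

```
import processing.video.*;

Capture video;
color trackColor;    // the color we are looking for (picked with a mouse click)

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  trackColor = color(255, 0, 0);   // start by tracking red
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {
  video.loadPixels();
  image(video, 0, 0);

  float closestDistance = 500;     // larger than any possible RGB distance (~442)
  int closestX = 0;
  int closestY = 0;

  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      color current = video.pixels[x + y * video.width];
      // treat (r, g, b) as a point in 3D space and measure the distance
      float d = dist(red(current), green(current), blue(current),
                     red(trackColor), green(trackColor), blue(trackColor));
      if (d < closestDistance) {
        closestDistance = d;
        closestX = x;
        closestY = y;
      }
    }
  }

  // only mark the spot if the best match is reasonably close to the target color
  if (closestDistance < 60) {
    fill(trackColor);
    noStroke();
    ellipse(closestX, closestY, 30, 30);
  }
}

void mousePressed() {
  // pick a new color to track from the clicked pixel
  trackColor = video.pixels[mouseX + mouseY * video.width];
}
```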
OK, now it’s time to add a virtual element to interact with. The first thing that came to mind was a ball: I could make an AR Pong game to pay tribute to my first attempt at programming. (Pong was one of the very first video games.)
And now it’s time to turn my red pen into a paddle to hit the virtual ball:
Then I used ArrayList to create multiple virtual paddles:
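Roughly, the ArrayList part works like this (a simplified sketch where the mouse stands in for the tracked color; the Paddle fields are my own names):

```
ArrayList<Paddle> paddles = new ArrayList<Paddle>();

void setup() {
  size(640, 480);
}

void draw() {
  background(0);

  // here the mouse stands in for the tracked color position
  paddles.add(new Paddle(mouseX, mouseY));

  // keep only the most recent paddles so the old ones disappear
  while (paddles.size() > 5) {
    paddles.remove(0);
  }

  for (Paddle p : paddles) {
    p.display();
  }
}

// a square paddle spawned wherever the tracked color was detected
class Paddle {
  float x, y;
  float s = 60;    // side length of the square

  Paddle(float x, float y) {
    this.x = x;
    this.y = y;
  }

  void display() {
    rectMode(CENTER);
    noFill();
    stroke(255);
    rect(x, y, s, s);
  }
}
```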
To make the Pong game, I needed the ball to bounce in four different directions depending on which of the four edges of the square paddle it hits, so I divided the paddle into four parts:
The code looks like this:
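Boiled down, the four-way bounce works something like this (a simplified, standalone sketch where the mouse stands in for the tracked paddle; the variable names are my own):

```
float bx, by;           // ball position
float vx = 4, vy = 3;   // ball velocity
float s = 80;           // side length of the square paddle

void setup() {
  size(640, 480);
  bx = width / 2;
  by = height / 2;
}

void draw() {
  background(0);

  // move the ball and bounce it off the window edges
  bx += vx;
  by += vy;
  if (bx < 0 || bx > width)  vx *= -1;
  if (by < 0 || by > height) vy *= -1;

  // the mouse stands in for the tracked paddle position
  float px = mouseX;
  float py = mouseY;

  // the square is split into four regions by its diagonals;
  // the region containing the ball decides which way it bounces
  if (bx > px - s/2 && bx < px + s/2 && by > py - s/2 && by < py + s/2) {
    float dx = bx - px;   // offset from the paddle center
    float dy = by - py;
    if (abs(dx) > abs(dy)) {
      vx = (dx > 0) ? abs(vx) : -abs(vx);   // left or right edge
    } else {
      vy = (dy > 0) ? abs(vy) : -abs(vy);   // top or bottom edge
    }
  }

  rectMode(CENTER);
  noFill();
  stroke(255);
  rect(px, py, s, s);

  fill(255, 0, 200);
  noStroke();
  ellipse(bx, by, 24, 24);
}
```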
I also added a virtual explosion to exaggerate the ball-paddle collision.
Boom! It works!
Check out the AR Pong Game that can turn anything you have into a virtual paddle to hit the ball:
Of course there is a multiplayer mode:
It’s a little pathetic to play only by myself, so I invited my colleague to do a test run with me:
OK, I am done with the pen. Now it’s time to try turning my face into a paddle.
And of course I tried playing in multiplayer mode:
To add levels of difficulty, I coded a function that speeds up the ball when the score reaches 100, 300, 500, and 1000.
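The speed-up function is essentially a few thresholds (a minimal sketch; the score thresholds are from the game, the multiplier values here are placeholders):

```
int score = 0;
float baseSpeed = 4;    // starting ball speed
float vx = baseSpeed;   // horizontal ball velocity

void setup() {
  size(640, 120);
}

void draw() {
  background(0);
  score++;   // pretend the score keeps climbing
  vx = baseSpeed * speedMultiplier(score);
  fill(255);
  text("score: " + score + "   speed: " + nf(vx, 1, 2), 20, height / 2);
}

// speed up the ball each time the score crosses a threshold
float speedMultiplier(int score) {
  if (score >= 1000) return 2.0;
  if (score >= 500)  return 1.75;
  if (score >= 300)  return 1.5;
  if (score >= 100)  return 1.25;
  return 1.0;
}
```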
I am very satisfied with the result. I was able to use ArrayList to create multiple squares and explosions, core computer vision ideas to track motion and color, and functions and conditionals to make a simple game. Yeah!
First of all, I had the idea of making an AR selfie app for my UX Design class at New York University, which aims to make something for real people in Dumbo, Brooklyn, and to address climate change issues.
My plan was to make an AR selfie app that engages Dumbo visitors to explore the area by collecting AR selfie stickers at different spots. The stickers can only be unlocked on site. When the user takes a selfie, the app automatically generates a climate-change-themed poster saying something like, “This is how Dumbo will look in 80 years due to the sea level rise caused by climate change.” The app would be part of a “Dumbo Selfie Challenge” social campaign, like the Ice Bucket Challenge, meant to go viral on the internet and lead to a fundraiser to help protect our planet.
(Here is a very early prototype of the app)
After a round of user testing, I figured out that in order to make the stickers collectible, I would need to cooperate with a popular franchise, the way Pokémon Go does. So I chose SpongeBob: 2019 happens to be SpongeBob SquarePants’ 20th anniversary, and the undersea animation is a perfect match for showing Dumbo underwater.
To make a prototype for the selfie app, I found Daniel Shiffman's workshop materials on Face Detection in Processing.
There was a lot to read and watch, since everything was brand new to me. But after a week of playing back and forth with the preset examples, I figured out how to add images to my face and change the size of an image according to the distance between me and the camera.
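The core of it is to draw the image at the position and size of the detected face rectangle, which grows as the face gets closer to the camera. A stripped-down sketch using the OpenCV for Processing library (emoji.png is a placeholder file name):

```
import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;
PImage sticker;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);     // must match the video size
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  sticker = loadImage("emoji.png");        // placeholder image in the data folder
  video.start();
}

void draw() {
  opencv.loadImage(video);
  image(video, 0, 0);

  // draw the sticker over every detected face, sized to the face rectangle,
  // so it grows as the face moves closer to the camera
  Rectangle[] faces = opencv.detect();
  for (Rectangle face : faces) {
    image(sticker, face.x, face.y, face.width, face.height);
  }
}

void captureEvent(Capture c) {
  c.read();
}
```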
I tried out the traditional square that crops the faces, the emoji (of course), and even Kim Jong-un’s face.
To make the app more interactive, I made a group of bubbles using an array and PVector. The bubbles respond to mouse clicks and feel like real bubbles underwater.
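The bubbles boil down to an array of objects whose positions and velocities are PVectors (a simplified sketch; the push distance and drift values are arbitrary):

```
Bubble[] bubbles = new Bubble[20];

void setup() {
  size(640, 480);
  for (int i = 0; i < bubbles.length; i++) {
    bubbles[i] = new Bubble(random(width), random(height));
  }
}

void draw() {
  background(0, 60, 90);
  for (Bubble b : bubbles) {
    b.update();
    b.display();
  }
}

void mousePressed() {
  // push nearby bubbles away from the click
  for (Bubble b : bubbles) {
    b.push(new PVector(mouseX, mouseY));
  }
}

class Bubble {
  PVector pos;
  PVector vel;
  float r;

  Bubble(float x, float y) {
    pos = new PVector(x, y);
    vel = new PVector(0, random(-1.5, -0.5));   // bubbles drift upward
    r = random(10, 30);
  }

  void push(PVector from) {
    if (PVector.dist(pos, from) < 100) {
      PVector away = PVector.sub(pos, from);
      away.setMag(5);          // shove the bubble away from the click
      vel.add(away);
    }
  }

  void update() {
    pos.add(vel);
    vel.mult(0.95);            // friction so the shove dies down
    vel.y -= 0.02;             // gentle buoyancy
    if (pos.y < -r) pos.y = height + r;   // wrap back to the bottom
  }

  void display() {
    noFill();
    stroke(255, 180);
    ellipse(pos.x, pos.y, r * 2, r * 2);
  }
}
```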
Everything looked great until I tried to put it all together. First of all, Processing crashed every time I tried to import two libraries at the same time. I presented my work in class and got help with increasing the memory available to the sketch.
Secondly, when I was finally able to put everything together, the face detection results wouldn’t show up on the screen. I tried rearranging the code in multiple ways, but none of them worked, so I had to get help from the teaching assistant.
The TA was super helpful. He told me the problem was a mismatch between the video size and the OpenCV size: they must be exactly the same. He also showed me how to use pushMatrix() and popMatrix() to properly frame the images.
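In other words, the capture and the OpenCV object have to be constructed with identical dimensions, and the transformed drawing gets wrapped in pushMatrix()/popMatrix() so it doesn’t leak into the rest of the frame. A schematic sketch (not the TA’s exact code):

```
import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  // the capture and the OpenCV object must be created with the same size,
  // otherwise the detected face rectangles won't line up with the video
  video = new Capture(this, 320, 240);
  opencv = new OpenCV(this, 320, 240);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}

void draw() {
  opencv.loadImage(video);
  Rectangle[] faces = opencv.detect();

  // everything between pushMatrix() and popMatrix() is drawn in the
  // scaled coordinate system; the UI drawn afterwards is unaffected
  pushMatrix();
  scale(2);                    // the 320x240 video fills the 640x480 window
  image(video, 0, 0);
  noFill();
  stroke(0, 255, 0);
  for (Rectangle f : faces) {
    rect(f.x, f.y, f.width, f.height);   // shares the scale, so it stays aligned
  }
  popMatrix();

  // buttons and other UI go here, in normal screen coordinates
}

void captureEvent(Capture c) {
  c.read();
}
```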
The TA also showed me how to make simple animations by using PVector to rotate the stickers.
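A simple version of that kind of animation might look like this (my own sketch, not the TA’s exact code; the sticker position lives in a PVector and the rocking angle is arbitrary):

```
PImage sticker;
PVector stickerPos;   // where the sticker sits (would come from the detected face)
float angle = 0;

void setup() {
  size(640, 480);
  sticker = loadImage("sticker.png");   // placeholder image in the data folder
  stickerPos = new PVector(width / 2, height / 2);
  imageMode(CENTER);
}

void draw() {
  background(0);
  angle += 0.05;   // advance the animation every frame

  pushMatrix();
  translate(stickerPos.x, stickerPos.y);   // move the origin to the sticker
  rotate(sin(angle) * 0.3);                // gently rock back and forth
  image(sticker, 0, 0);
  popMatrix();
}
```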
In the final product, I didn’t use the stickers to cover the user’s face; instead, the characters from SpongeBob stand on the user’s shoulder.
During the coding process, I also had some problems with the math, since the video needs to be scaled up by two (e.g. scale(2);) and the pixels need to be translated horizontally (e.g. translate(-200, 0);). When I was making the buttons, I had to plug in numbers by trial and error to check whether everything lined up properly.
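One way to take the guesswork out of it is to write the scale and offset down once and convert coordinates with a helper (a sketch; the zoom and offset values are just the examples from above):

```
float zoom = 2;         // the video is drawn at twice its size
float offsetX = -200;   // horizontal shift applied inside the scaled space

void setup() {
  size(640, 480);
}

void draw() {
  background(0);

  pushMatrix();
  scale(zoom);
  translate(offsetX, 0);
  // the video and the face stickers would be drawn here, in "video" coordinates
  fill(255, 0, 200);
  rect(250, 100, 50, 50);   // stands in for a sticker
  popMatrix();

  // a button drawn in plain screen coordinates, outside the transform
  fill(200);
  rect(20, height - 60, 100, 40);
}

// convert a point from video coordinates to screen coordinates, so UI elements
// can be lined up with the transformed content without guessing numbers
float videoToScreenX(float vx) {
  return (vx + offsetX) * zoom;
}

float videoToScreenY(float vy) {
  return vy * zoom;
}
```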
I also confused myself a little bit with the mouse functions. I imagined this prototype eventually running on a smartphone, so I coded all the interaction through mouse presses. When I tried to reset the screen inside mousePressed(), I forgot it was already being used to push away the bubbles, so I realized I needed a separate button for each function.
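The structure I ended up needing is roughly this: one mousePressed() that checks the buttons first and only falls through to the bubbles otherwise (a schematic sketch; the button positions and the pushBubbles() helper are placeholders):

```
// button rectangles (placeholder positions)
int resetX = 20,  resetY = 420;
int saveX  = 120, saveY  = 420;
int btnW = 80, btnH = 40;

void setup() {
  size(640, 480);
}

void draw() {
  background(0, 60, 90);
  fill(200);
  rect(resetX, resetY, btnW, btnH);   // reset button
  rect(saveX, saveY, btnW, btnH);     // save-photo button
}

void mousePressed() {
  if (overButton(resetX, resetY)) {
    // reset the screen: clear stickers, respawn bubbles, etc.
  } else if (overButton(saveX, saveY)) {
    saveFrame("selfie-####.png");     // save the current frame
  } else {
    // not on a button: treat the press as a bubble interaction
    // pushBubbles(mouseX, mouseY);   // hypothetical helper from the bubble sketch
  }
}

boolean overButton(int bx, int by) {
  return mouseX > bx && mouseX < bx + btnW && mouseY > by && mouseY < by + btnH;
}
```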
In the end, I am very satisfied with the final product. I fulfilled my goal of making an AR selfie prototype that can load images based on face detection; the stickers can be switched with buttons; the bubbles respond to the user’s touch; and the photos can be saved with the camera icon... Now I am confident using variables, arrays, classes, and conditional statements, coding some sort of interaction, and even teaching myself things that I didn’t learn in class. I still can’t believe I reached this point only halfway through the semester. Of course, there is still a lot to improve in the code’s structure and readability. For example, I could have created a class for the buttons instead of coding them one by one, and I could have left one more option for resetting the screen with no stickers. Still, I am super satisfied with what I am capable of doing right now.