Week 4: Movement for Motion Capture

MOTION CAPTURE LAB DOCUMENTATION:

Project: Data Cleaning and Retargeting

Date: 2019.09.28 4 p.m.

Location: NYU Black Box Theatre

Participants: Yundi Jude Zhu, Chaoyue Huang, Dana Elkis, Chester Ma

Goals:

  • Find a partner or two (or 3 or 4). Review all videos before coming into the Black Box. Each group come up with 3 different scenes. Each scene must either have a restriction in the virtual world that must be dealt with in the physical world OR a skeleton must reveal part of its character (it has hip issues, it’s moving through a dense forest, it’s 3 years old, it’s part of the Royal family, half of its body is filled with helium, etc). Document the experience. Record & export the scene to bring into Unreal for further documentation.

Steps:

  • Record motion capture and clean the data

  • Make a 3D model with MakeHuman

  • Rig the model with Mixamo

  • Sync motion capture data with 3D model and export as .fbx

  • Use the data to make an animation in Unreal

Motion Capture and Data Cleaning

We were glad to have Lynn from the CS program as our performer again. We divided our roles as follows:

Performer: Chaoyue Huang, Lynn Li

Director: Dana Elkis

Motion capture technician: Chester Ma

Video documentation and production: Yundi Jude Zhu

Key Notes:

  • We were the last group to get into the lab that day. Probably because of that, the cameras were sensing a lot of ghost markers, which made the data cleaning very difficult. We spent Saturday 6–9 p.m. and Sunday 2–6 p.m. (7 hours!!) at the Black Box Theatre cleaning the data.

  • Make sure the interaction is necessary and as simple as it can be. Any intersection or overlap will cause gaps in the data.

  • Also, it’s important to make each take as short as possible. 15 seconds should be the maximum.

  • When cleaning the data, I found that deleting the rapid spikes in the curves and auto-filling the selected section with smoothing helps a lot.

  • Always have a video reference!
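As a rough sketch of that gap-fill-plus-smoothing step, here is how the idea could look in Python. This is my own illustration with NumPy, not MotionBuilder's actual algorithm: the function name, the window size, and the choice of linear interpolation followed by a moving average are all assumptions.

```python
import numpy as np

def fill_and_smooth(track, window=5):
    """Fill gaps (NaNs) in a 1-D marker coordinate track by linear
    interpolation, then tame rapid spikes with a centered moving
    average. `window` should be odd so the average stays centered."""
    track = np.asarray(track, dtype=float)
    idx = np.arange(track.size)
    good = ~np.isnan(track)
    # Bridge the dropouts (e.g. from occluded or ghost markers)
    filled = np.interp(idx, idx[good], track[good])
    # Edge-pad so the smoothed output keeps the original length
    kernel = np.ones(window) / window
    padded = np.pad(filled, window // 2, mode="edge")
    return np.convolve(padded, kernel, mode="valid")
```

The same idea applies per marker and per axis; in practice the real cleanup still needs a video reference to judge which spikes are noise and which are genuine fast motion.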

Chaoyue is getting dressed. You can see from the projected screen that Chaoyue’s head was not positioned correctly.

Scene 1: Cliff climbing

In this scene, Chaoyue is climbing a cliff and trying to reach Lynn.

We used a table, a ladder, and chairs to simulate a cliff for climbing. Dana also provided artificial support to make Chaoyue’s movement shakier.

The hand grab is very tricky. We had to be very careful not to cover the markers.

Scene 2: Run and jump over

After realizing we were running out of time, we decided to do a short scene: just run and jump over something. We used the diagonal of the space to maximize the running distance. We had to ask the performers to run multiple times to make sure their landing position was captured; it’s very easy to lose tracking when they are close to the edge.

Chaoyue and Lynn run and jump over stools.

Scene 3: Argument and kicking something

This scene worked really well. The performers were actually arguing with each other. To keep the take simple and short, Dana sat in front of the performers and clapped to remind them to stay with the plot. The interaction looks super real, and the data is almost perfect. It only took me 5 minutes to clean it.

Awesome acting! I wish we could have captured their facial expressions as well.


This is a screenshot of scene 1’s data, which literally took us 7 hours to clean:

Because of the ghost markers and the interaction between the two performers, the captured data has a lot of gaps and unlabeled markers.

Retargeting

I created two new avatars for these three scenes. Originally we wanted to make them two hikers, but I thought it would be funny to set all these interactions on a farm.

Here is my farmer avatar created in MakeHuman:

I should have made the skin color darker. The default skin color looks very pale in Unreal.

Retargeting the avatars to the motion capture data is very tricky. I found it easier to retarget one avatar at a time: because we rigged both avatars in Mixamo, their hip bones get shared if we retarget the two of them together.

Retargeting Chaoyue in MotionBuilder.

Retargeting Lynn in MotionBuilder.

3D animation in Unreal

When importing the .fbx into Unreal, materials are often missing. I found it works if I import the retargeted animation together with its mesh; everything stays neat and tidy in one place.

Here are my performers in Unreal! Man, woman, sheep, pig. What a paradise.

All done! Enjoy our final work: