This is a game developed with Java, Kinect and Maya 3D together with the Let’s Make guys: Simone Carcone, Giulia Dell’armi and Adriano Muraca. We created it for the Let’s Make inauguration, to entertain the people attending.
This is my last facial performance capture test.
In the previous test I sent the x,y position of each marker inside the frame (720p) and assigned those position values directly to the cubes. Now I send the offset vector of each marker, calculated by subtracting the rest position from the new position.
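The offset calculation is just a per-marker subtraction done each frame; a minimal sketch of the idea (class and method names are my own, not from the actual tool):

```java
// Minimal sketch of the per-marker offset calculation:
// offset = current position - rest position, computed each frame.
public class MarkerOffset {
    public static float[] offset(float restX, float restY, float curX, float curY) {
        // The rest pose is captured once (neutral face); every new frame
        // is expressed relative to it, so Maya receives deltas instead of
        // absolute 720p pixel coordinates.
        return new float[] { curX - restX, curY - restY };
    }

    public static void main(String[] args) {
        float[] d = offset(640f, 360f, 652f, 348f);
        System.out.println(d[0] + "," + d[1]); // prints 12.0,-12.0
    }
}
```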
In this new version of the software I created an interface to tell Maya how many markers I’m using in the current capture session, and which ones.
I tried to recreate Weta’s facial performance capture system using a helmet camera.
I wrote a simple piece of software in Java, based on color blob tracking, that searches for the markers placed on the face and sends their x,y positions to Maya. This data is sent over UDP, which is faster than TCP/IP because there is no packet delivery control; that’s why it’s used for streaming.
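Sending a marker position over UDP takes only a few lines in Java with `DatagramSocket`; a rough sketch (the host, port, and wire format here are placeholders of my own, not the ones the tool actually uses):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class MarkerSender {
    // Hypothetical wire format: "markerId:x,y" as UTF-8 text.
    static byte[] encode(int id, float x, float y) {
        return (id + ":" + x + "," + y).getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        // UDP is fire-and-forget: no connection setup and no
        // retransmission, which keeps latency low for streaming.
        try (DatagramSocket socket = new DatagramSocket()) {
            InetAddress maya = InetAddress.getByName("127.0.0.1");
            byte[] data = encode(3, 412.0f, 288.5f);
            socket.send(new DatagramPacket(data, data.length, maya, 6000));
        }
    }
}
```

A lost datagram here simply means one stale frame on the Maya side, which is harmless for this kind of live puppeteering.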
The hardware is a helmet camera I built myself from a skater helmet and an HD webcam.
It’s a simple version because I wrote it in a short time, but I’m happy with it anyway.
The next step is to connect it to the Kinect and get the z-depth as a gray-scale value instead of plain white or black.
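Mapping the Kinect’s depth reading to a gray value is a simple normalization; a sketch under the assumption of a roughly 800–4000 mm usable sensor range (those range values are my assumption, not measured):

```java
public class DepthToGray {
    static final int MIN_MM = 800;   // assumed near clip of the sensor
    static final int MAX_MM = 4000;  // assumed far clip of the sensor

    // Map a raw depth in millimetres to a 0-255 gray value:
    // near = bright, far = dark, out-of-range values clamped.
    static int toGray(int depthMm) {
        int clamped = Math.max(MIN_MM, Math.min(MAX_MM, depthMm));
        return 255 - (clamped - MIN_MM) * 255 / (MAX_MM - MIN_MM);
    }

    public static void main(String[] args) {
        System.out.println(toGray(800));   // prints 255 (nearest)
        System.out.println(toGray(4000));  // prints 0 (farthest)
    }
}
```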
Little problem… I don’t have a Kinect. Sigh.