All articles by alessandro.borelli

facial performance capture test #02

This is my latest facial performance capture test.

In the previous test I sent the x,y position of each marker inside the frame (720p) and assigned these position values directly to the cubes. Now I send the offset vector of each marker, calculated by subtracting the rest position from the current position.
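The offset-vector idea can be sketched as follows. This is a minimal illustration with hypothetical marker data, not the author's actual code: the rest positions would be captured once in a neutral pose, then subtracted from each tracked position every frame so that Maya receives motion relative to rest rather than raw pixel coordinates.

```java
// Hypothetical example: computing per-marker offset vectors from a rest pose.
public class MarkerOffset {
    public static void main(String[] args) {
        // Rest positions captured in a neutral pose (pixels, 720p frame) - example values
        float[][] rest = {{640f, 360f}, {600f, 400f}};
        // Positions tracked in the current frame - example values
        float[][] current = {{652f, 355f}, {598f, 410f}};

        for (int i = 0; i < rest.length; i++) {
            // Offset = current position minus rest position
            float dx = current[i][0] - rest[i][0];
            float dy = current[i][1] - rest[i][1];
            System.out.println("marker " + i + " offset: (" + dx + ", " + dy + ")");
        }
    }
}
```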

In this new version of the software I created an interface to tell Maya how many and which markers I'm using in the current capture session.


facial performance capture test #01

I tried to recreate Weta's system for facial performance capture using a helmet camera.


I wrote a simple piece of software in Java, based on color blob tracking, that searches for the markers placed on the face and sends their x,y positions to Maya. This information is sent over UDP, which is faster than TCP/IP because there is no delivery control of the packets; that's why it's commonly used for streaming.
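The UDP side can be sketched like this. It is a minimal, self-contained example (the port number, address, and payload format are assumptions for illustration, not the author's actual protocol): a marker's position is serialized as text and fired off in a single datagram, with no acknowledgement expected.

```java
// Hypothetical sketch: sending a marker's x,y position to Maya over UDP.
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class MarkerSender {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            // Example payload: marker id followed by its x,y pixel position
            String payload = "marker 0 325.0 410.0";
            byte[] data = payload.getBytes(StandardCharsets.UTF_8);
            DatagramPacket packet = new DatagramPacket(
                data, data.length,
                InetAddress.getByName("127.0.0.1"), 6000); // assumed address/port
            // Fire-and-forget: UDP gives no delivery guarantee, which is
            // exactly why it is lower-latency than TCP/IP.
            socket.send(packet);
            System.out.println("sent: " + payload);
        }
    }
}
```

On the Maya side, a listener on the same port would parse each payload and drive the scene objects.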

The hardware is a helmet camera I made myself, using a skater helmet and an HD webcam.

The markers are blue pieces of scotch tape for now, but I'm working on a version with a blue makeup pencil (stolen from my girlfriend :D) and a 3D head in place of the cubes.