In the previous test I sent the x,y position of each marker inside the frame (720p) and assigned those position values directly to the cubes. Now I send the offset vector of each marker, calculated by subtracting the rest position from the new position.
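The offset calculation is simple enough to sketch in a few lines of Java. This is only an illustration, not the original tracker code; the `Marker` class and its field names are hypothetical.

```java
// Hypothetical sketch of the offset-vector idea described above.
public class Marker {
    double restX, restY;   // rest-pose position, captured once at calibration
    double x, y;           // current tracked position in the 720p frame

    // Offset vector = current position minus rest position
    double offsetX() { return x - restX; }
    double offsetY() { return y - restY; }

    public static void main(String[] args) {
        Marker m = new Marker();
        m.restX = 640; m.restY = 360;  // example rest position (frame centre)
        m.x = 652;     m.y = 348;      // example new tracked position
        System.out.println(m.offsetX() + "," + m.offsetY()); // prints "12.0,-12.0"
    }
}
```

Sending offsets instead of absolute positions means the cubes (or face rig) move relative to their own rest pose, so the mapping no longer depends on where the face sits in the camera frame.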
In this new version of the software I created an interface to tell Maya how many markers I'm using in the current capture session, and which ones.
I tried to recreate Weta's system for facial performance capture using a helmet camera.
I wrote a simple piece of software in Java, based on color blob tracking, that searches for the markers placed on the face and sends their x,y positions to Maya. This data is sent over UDP, which is faster than TCP/IP because there is no acknowledgement or retransmission of packets; that's why it's commonly used for streaming.
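The UDP side can be sketched with Java's standard `DatagramSocket`. This is a minimal illustration, not the original code: the message format (`"index,x,y"` as plain text) and the port number are assumptions.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Hypothetical sketch: send one tracked marker position to a
// Maya-side UDP listener as a plain-text "index,x,y" message.
public class MarkerSender {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        InetAddress maya = InetAddress.getByName("127.0.0.1");
        int port = 5005; // assumed port of the listener inside Maya

        String message = "0,652,348"; // marker 0 at pixel (652, 348)
        byte[] data = message.getBytes("UTF-8");

        // UDP is fire-and-forget: no handshake, no retransmission,
        // which keeps latency low for per-frame streaming.
        socket.send(new DatagramPacket(data, data.length, maya, port));
        socket.close();
    }
}
```

A dropped packet here just means one missed frame of marker data, which the next frame overwrites anyway, so UDP's lack of delivery guarantees costs little for this use case.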
The hardware is a helmet camera I built myself from a skateboard helmet and an HD webcam.
The markers are blue pieces of tape for now, but I'm working on a version that uses a blue make-up pencil (stolen from my girlfriend) and a 3D head in place of the cubes.