MocoCompositor: First Real Test

So I finally got hold of a green screen cloth from Krishna (thanks Krishna!) and draped it over our office door. I set up the camera in front of it, picked a cool sci-fi desktop wallpaper to use as my background plate and the live camera viewfinder as the foreground plate, and ran the MocoCompositor with the green screen/chroma-key GLSL filter shader. The output from the compositor streams to a new viewfinder called compositor, so any browser can view the final composite as well.
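For the curious, here's a minimal sketch of the kind of chroma-key fragment shader involved. This isn't the exact MocoCompositor shader; the pure-green key color and the 0.6 threshold are illustrative hardcoded values (more on that below). Since pyglet compiles shaders from source strings, it's shown as one:

```python
# Sketch of a hardcoded chroma-key fragment shader (illustrative values,
# not the actual MocoCompositor shader source).
CHROMA_KEY_FRAG = """
uniform sampler2D foreground;  // live camera plate
uniform sampler2D background;  // background plate

void main() {
    vec2 uv = gl_TexCoord[0].st;
    vec4 fg = texture2D(foreground, uv);
    vec4 bg = texture2D(background, uv);
    // Crude keying: how far is this pixel from pure green?
    float d = distance(fg.rgb, vec3(0.0, 1.0, 0.0));
    // Within the threshold it's "green screen": show the background plate.
    gl_FragColor = d < 0.6 ? bg : fg;
}
"""
```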

The computers were set up so that the Dell Mini9 netbook was running the server, the background plate stream, and the MocoBot (the live camera feed) programs. Meanwhile my desktop (which is a bit more heavy duty, with an nVidia GPU card) was running the MocoCompositor, which connected to the server on the Mini9 to pull both the camera viewfinder feed and the background plate feed (oh yeah, I didn't mention that I also wrote a quick static image streamer/publisher that will later become a video/frame player in the UI). At each pyglet on_draw event the MocoCompositor blits the current images to the appropriate plates/textures and composites them using the shader. The resulting texture is then pulled from the GPU framebuffer, compressed to JPEG (via pyglet and PIL), and streamed out to the server via a publisher.
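A rough sketch of that per-frame loop, to make the flow concrete. The draw_composite() helper and the publish() call are stand-ins for the real MocoBot/MocoCompositor plumbing, which I haven't shown here:

```python
import io
import pyglet
from PIL import Image

window = pyglet.window.Window(640, 480, caption='compositor')

@window.event
def on_draw():
    window.clear()
    # 1. Blit the latest camera frame and background plate into their
    #    textures and draw a full-window quad with the chroma-key shader
    #    bound (elided; this is where the GLSL above runs).
    draw_composite()  # hypothetical helper

    # 2. Pull the composited frame back off the GPU framebuffer...
    buf = pyglet.image.get_buffer_manager().get_color_buffer()
    data = buf.get_image_data()
    raw = data.get_data('RGB', data.width * 3)

    # 3. ...compress it to JPEG with PIL (the framebuffer is stored
    #    bottom-up, so flip before encoding)...
    img = Image.frombytes('RGB', (data.width, data.height), raw)
    img = img.transpose(Image.FLIP_TOP_BOTTOM)
    jpeg = io.BytesIO()
    img.save(jpeg, format='JPEG', quality=80)

    # 4. ...and hand the bytes to the publisher that streams them to the
    #    server as the 'compositor' viewfinder.
    publish('compositor', jpeg.getvalue())  # hypothetical publisher

pyglet.app.run()
```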

It works surprisingly well and fast given the amount of network traffic it produces. And these are the results (I print-screen captured these while viewing the compositor viewfinder in a browser on yet another computer :). Note that the background plate uses a sci-fi wallpaper of the Earth I found on Google Images… (I claim no copyright to it, but do claim fair use for educational purposes):

Here’s me smiling cheesily (note: my left shoulder isn’t abnormally low; I was reaching for the print-screen button on the keyboard below 🙂 )

And here’s me looking into the Universe contemplating existence or looking for Dr. Who.

Obviously the lighting was crappy and uneven, hence the green pixels still showing through from the foreground plate. I need to come up with a way of passing the shader parameters to the server and to the UI so the user can fine-tune the settings, rather than hardcoding them into the shader like I do now (a sketch of what that might look like follows). Playing with green screening is turning out to be more fun than I’d expected… need to get back to work.
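One way those hardcoded values could become tunable: promote the key color and threshold to uniforms in the shader (uniform vec3 key_color; uniform float threshold;) and set them whenever new parameters arrive from the UI via the server. The program handle and the params dict format here are assumptions, not the actual MocoCompositor API:

```python
from pyglet.gl import (glUseProgram, glGetUniformLocation,
                       glUniform1f, glUniform3f)

def apply_shader_params(program, params):
    """Push user-tunable chroma-key settings into the shader's uniforms.

    params is a hypothetical message from the UI, e.g.
    {'key_color': (0.0, 1.0, 0.0), 'threshold': 0.6}.
    """
    glUseProgram(program)
    r, g, b = params['key_color']
    glUniform3f(glGetUniformLocation(program, b'key_color'), r, g, b)
    glUniform1f(glGetUniformLocation(program, b'threshold'),
                params['threshold'])
```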

Edit: I wonder if the JPEG compression/decompression could be done as a shader… NVIDIA’s site has an example of the DCT algorithm as a Cg shader (we use GLSL), but I’m not sure what the rest of the JPEG compression algorithm/format would need. A future extension to this software could be a JPEG compression shader, to get rid of PIL doing the JPEG compression and speed things up even more… just a thought, and obviously outside the scope of this project.
