Early this year the ever-inspiring team at Berg published ‘Lamps’, a research project they undertook for Google. Lamps speculates on a future of ambient media in which Google could inhabit the world around you with the help of computer vision and projection technologies.
I was fascinated by their theory of ‘smart light’, and especially by their use of fiducial markers as switches and knobs.
It reminded me of an interesting challenge we were facing at work (SI Labs): bringing large-scale prints to life by adding interactions to static media. So I set about using Berg’s techniques to create my own set of interactive tokens, with media and interactions embedded in them, brought to life by projection mapping over fiducial markers.
Here is a short clip showing a few of the interactions and the character of the setup, with a web camera placed over a projector.
I used a Processing sketch with the reacTIVision library to identify the fiducial markers and attach media and interactions to them. Using the projector, an individual marker is then brought to life on the opposite wall.
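The core of the sketch is a simple lookup from a marker’s symbol ID to the media it triggers. Here is a minimal sketch of that idea in Python rather than Processing; the symbol IDs and asset names are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Marker:
    symbol_id: int   # fiducial symbol ID reported by the tracker
    x: float         # normalised position in the camera frame (0..1)
    y: float
    angle: float     # rotation in radians

# Hypothetical mapping from fiducial symbol IDs to media assets.
MEDIA_FOR_MARKER = {
    0: "intro_video.mp4",
    1: "ambient_loop.mp3",
    2: "poster_overlay.png",
}

def media_for(marker: Marker):
    """Return the media asset to project over this marker, if any."""
    return MEDIA_FOR_MARKER.get(marker.symbol_id)
```

In the real setup the tracker reports each marker’s ID, position, and rotation every frame, and the sketch projects the associated media back onto (or beside) the marker’s location.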
We could develop a language for the use of these markers, some of which is seen in the video. With movement, angular displacement, proximity, and possibly touch, we could use the markers as switches, knobs, and dials for manipulating media. As shown below:
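The knob, switch, and proximity behaviours all reduce to simple functions of the marker data the tracker already reports. A rough sketch, with thresholds chosen arbitrarily for illustration:

```python
import math

def knob_value(angle_rad: float, lo: float = 0.0, hi: float = 1.0) -> float:
    """Marker as a knob: map its rotation (0..2*pi) linearly onto [lo, hi]."""
    frac = (angle_rad % (2 * math.pi)) / (2 * math.pi)
    return lo + frac * (hi - lo)

def is_switch_on(angle_rad: float) -> bool:
    """Marker as a two-state switch: 'on' when in the upper half-turn."""
    return (angle_rad % (2 * math.pi)) < math.pi

def proximity(x1: float, y1: float, x2: float, y2: float) -> float:
    """Distance between two markers, e.g. to cross-fade between two tracks."""
    return math.hypot(x2 - x1, y2 - y1)
```

Rotating a marker half a turn would, for instance, set a volume knob to its midpoint.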
One could also imagine these markers printed with ink visible only in the IR spectrum: easily picked up by cameras, but invisible to the naked eye. (Quick update: Disney Research have used this approach in their experiment linked below.)
We could also imagine controls like these distributed around a space, not just to control media (video, audio) but for other purposes — a universal physical interface, for example controlling all the electronic objects in your kitchen: a centralised hub with cameras for eyes, with or without projection.
There were, of course, difficulties with the setup.
1. Mapping the field of view of the camera onto the projected area is quite cumbersome and needed quite a bit of thought.
My final solution was an empirical one: I repeatedly adjusted the relative positions of camera and projector until the alignment was acceptable. As seen in the video, it is still not exact. I presume that with more work this could be resolved for a particular space.
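The manual alignment above could be partly automated: project two known reference points, find them in the camera image, and solve a per-axis scale and offset. This sketch assumes lens distortion and keystone are negligible (a full homography would handle those); all coordinates here are illustrative:

```python
def calibrate_axis(cam_a: float, cam_b: float,
                   proj_a: float, proj_b: float):
    """Solve proj = scale * cam + offset from two reference points on one axis."""
    scale = (proj_b - proj_a) / (cam_b - cam_a)
    offset = proj_a - scale * cam_a
    return scale, offset

def cam_to_proj(x: float, y: float, xcal, ycal):
    """Map a marker position in camera space into projector space."""
    sx, ox = xcal
    sy, oy = ycal
    return sx * x + ox, sy * y + oy
```

With the two axes calibrated once for a given room, every tracked marker position can be converted before drawing, instead of nudging the hardware by hand.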
2. There are issues with hardware. Camera fidelity and resolution, coupled with frame rate, are big factors in the setup and need to be carefully considered. So is the projector’s resolution, though this was a lesser issue compared to the camera.
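One way to reason about the camera constraint is to estimate how many pixels a marker spans. The numbers below are illustrative, and the minimum usable size depends on the tracker, but the arithmetic shows why camera resolution matters more than it first appears:

```python
def marker_pixels(camera_px: int, view_cm: float, marker_cm: float) -> float:
    """Pixels a marker spans, given the camera's horizontal resolution
    and the real-world width of its field of view."""
    return camera_px * marker_cm / view_cm
```

A 10 cm marker seen by a 640-pixel-wide camera covering a 1 m wide wall spans only 64 pixels; halve the marker or double the wall and the fiducial pattern quickly becomes too small to track reliably.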
This was a short three-day exploration at SI Labs. Thanks to Ben for giving me the time and space to experiment 🙂 I hope to use the Kinect in conjunction with the projector in the next stages, as I think it opens up more possibilities. Stay tuned for more!