Ash at CIID

Ashwin Rajan's blog while at the Copenhagen Institute of Interaction Design.

Posts Tagged ‘gaming’

Toy View Workshop – Computer Vision Project 1


Continuing from my last post describing the intent of the fascinating Toy View workshop, here are some images from the first experimental project we built: a collaborative dance game called Face-Off. The game was developed using motion tracking in Adobe Flash. It is played by two dancers who are prompted to dance or move based on visual feedback, while dance music plays in the background. Only the dancer who is prompted visually must move; the other must stay absolutely still. The team of two dancers wins if each is able to dance at the right time and keep the music going until the end of the song.
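The original was built with motion tracking in Flash; as a rough sketch of the underlying frame-differencing idea, here is the same logic in Python with OpenCV (the split-screen rule and the threshold values are my own illustrative assumptions, not the workshop code):

import cv2

cap = cv2.VideoCapture(0)                       # the laptop webcam
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)              # pixels that changed since the last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # split the frame between the two dancers: any motion detected on the
    # "frozen" player's half would end the round
    half = mask.shape[1] // 2
    left_motion = cv2.countNonZero(mask[:, :half])
    right_motion = cv2.countNonZero(mask[:, half:])
    prev = gray
    cv2.imshow("Face-Off", mask)
    if cv2.waitKey(30) == 27:                   # Esc quits
        break

cap.release()
cv2.destroyAllWindows()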

Initial sketches of the Face-Off game

Simple storyboards to work out the specifics of gameplay.

Working out details of motion tracking on the laptop's webcam.

Screenshot of the game, which is projected on a vertical surface such as a wall. A green box appears around the player whose turn it is to move.

Toy View Workshop – Computer Vision Project 2


In my last post I described the intent of the Toy View workshop held at CIID in December ’08, taught by Yaniv Steiner. Here are some of the projects we developed in the course of the workshop.

Reactable Game
Here is a video on the fascinating open-source Reactable technology – a collaborative music instrument with a multi-touch tangible surface-based interface – developed in Barcelona, Spain.

reacTIVision is an open source, cross-platform computer vision framework for the fast and robust tracking of fiducial markers attached onto physical objects, as well as for multi-touch finger tracking. The reacTIVision engine and sample code, as well as fiducials (the printable markers used for tracking, which can be attached to physical objects), can all be downloaded for free here.


How the reacTIVision technology works (courtesy http://mtg.upf.es/reactable/?software)


Annotated markers called 'fiducials' can be pasted onto physical objects to facilitate tracking of object location and movement. (Courtesy http://mtg.upf.es/reactable/?software)

We built our own reacTIVision setup to develop a simple game during the course of this short workshop.
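reacTIVision itself does the camera work and broadcasts each marker's identity, position and rotation as TUIO messages over OSC (UDP port 3333 by default). As a minimal sketch of a client listening for those fiducial updates, assuming the third-party python-osc library:

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_2dobj(address, *args):
    # reacTIVision sends /tuio/2Dobj messages; "set" messages carry the
    # session id, fiducial id, normalized x/y position and rotation angle
    if args and args[0] == "set":
        session_id, fiducial_id, x, y, angle = args[1:6]
        print(f"fiducial {fiducial_id}: x={x:.2f} y={y:.2f} angle={angle:.2f}")

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dobj", on_2dobj)
BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher).serve_forever()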

Experimenting with the reactable set-up.

We developed a simple game using images of CIID students and faculty, including Yaniv. The game mixed the heads, torsos and legs of different people depending on how the fiducials were placed and turned. A lot of fun and laughs all around!
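The mixing logic is simple to sketch: each of three fiducials controls one body band, and its rotation angle selects whose head, torso or legs appears. An illustrative reconstruction in Python with Pillow (the people and file names are hypothetical, not the workshop code):

import math
from PIL import Image

people = ["yaniv", "student_a", "student_b"]    # hypothetical image names

def pick(angle):
    # quantize a fiducial angle (radians) to one of the people
    return people[int(angle / (2 * math.pi) * len(people)) % len(people)]

def composite(head_angle, torso_angle, legs_angle):
    canvas = Image.new("RGB", (200, 600))
    parts = [("head", head_angle), ("torso", torso_angle), ("legs", legs_angle)]
    for slot, (part, angle) in enumerate(parts):
        # paste the chosen person's strip into its band of the composite
        strip = Image.open(f"{pick(angle)}_{part}.png").resize((200, 200))
        canvas.paste(strip, (0, slot * 200))
    return canvas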

Screenshot of the reactable game that mixes body parts from different pictures.

Written by Ashwin Rajan

February 1, 2009 at 10:11 pm

People as instruction processors – extended implications


An exercise we did some weeks back at CIID that I found deeply interesting was called ‘People as instruction processors’. Dennis and Patrick, designers at the unique ‘the-product’, gave us this brief:

“Write down three instruction sets. These instructions will then be dictated to three other participants. The other participants will process the instructions by drawing on a piece of paper with a red, green or blue marker. The exercise aims at introducing the participants to programming as an everyday exercise, a translation from intention into language into action. The result will be a set of very analog procedural drawings.”

A first-cut instruction I came up with ran something like this:
‘Start in the middle of the page. Mark the point.
Draw a circle with any one point of its circumference lying on the marked point.
Draw a square touching the circle …’ and so on

As you can guess, no sooner do you write out the first set of instructions than you realize you will need to be much more precise in further iterations. What exactly is the ‘middle’ of the page? Does that refer to the center point of the page as plotted from all four corners? Or, in the second and seemingly more explicit statement: what should the size of the circle be? Each detail provided can set the context and nudge the ‘instruction processor’ to execute results closer to the original intention. When the same instruction set is executed by multiple people, the results can be very similar to, or more often, radically different from one another, depending on the instructions given and on their understanding and execution by the subject.
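To make that ambiguity concrete, here is one way the same instruction set looks once every open choice is pinned down, as a sketch in Python's turtle module (the radius, square size and tangent point are all decisions the prose left to the processor):

import turtle

t = turtle.Turtle()
t.penup(); t.goto(0, 0); t.dot(5)       # "the middle of the page", marked
t.pendown()
t.circle(50)                            # a circle whose circumference passes
                                        # through the marked point (radius 50)
t.penup(); t.goto(0, 100); t.pendown()  # the circle's topmost point
for _ in range(4):                      # "a square touching the circle"
    t.forward(60)
    t.left(90)
turtle.done()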

Some of the results that came out of the exercise looked like this:

People as Instruction Processors - Results 01

People as Instruction Processors - Results 02

I thought this was a very powerful exercise because it communicated the fundamental challenges of providing instructions to processors – whether human or machine – in a manner that achieves intended outcomes, while also underlining the importance of ‘syntax’ or grammar, specificity and detail-orientation, interpretation and translation. The exercise also gave me a fascinating glimpse of how instructions and their interpretation can facilitate (or stifle) emergent phenomena.

Extended implications: I can think of at least two other domains where interaction designers can benefit greatly from exploring how people behave as instruction processors: user research, and robotics.

To elucidate the first, here are six examples of contexts from a user research perspective where I can see value in learning how people process instructions:
1. Road signals that control traffic and commuters.
2. Call centers, where operators perform (and are evaluated) based on a wide variety of parameters which are essentially instructional in nature, commencing with basic training.
3. E-learning: there is a reason the work of creating e-learning content is called ‘instructional design’.
4. Car rallies: driving based on navigation instructions.
5. Mass-coordinated, precision, time-sensitive operations such as the emergency evacuation of a building by a team of firefighters, or a combat situation, with an extended analogy into the virtual worlds of MMORPGs and team gaming; any number of examples can be given here.
6. The patient as an instruction processor who is required to follow the doctor’s prescription of the medication-diet-exercise-lifestyle mix as precisely as possible.

My second connection to this exercise is from a robotics perspective. I will keep it short by pointing you to this video by Rodney Brooks. He devotes a significant part of his presentation to human-robot interaction, so watch out for that. Halfway through his talk, the professor demonstrates how he and his team build artificial intelligence to mimic human instruction-processing capabilities by calling a member of the audience to the stage. Brooks’ AGI (artificial general intelligence) stance that ‘humans are essentially machines’ makes for compelling reading. His own take on the singularity contention is neatly summed up in his statement: “the singularity will be a period, not an event.”