Ash at CIID

Ashwin Rajan's blog while at the Copenhagen Institute of Interaction Design.

Frontline Gloves – prototype presented for TUI exhibition at CIID

This is the description of the Frontline gloves concept for firefighters as presented at the Tangible User Interface exhibition at CIID on 31st Jan 2009.

Frontline - Networked gloves for firefighters

What is it?
A pair of networked gloves that allow two firefighters to communicate with each other by using hand gestures in a firefighting situation.

Who is it for?
Firemen working in teams of two.

Why is it valuable?
Typically firemen need to operate as a tightly knit unit in a firefighting situation. Constant communication with one another and rapid assessment of the changing environment are key to their safety and effectiveness. The conditions can be extreme, with hazardous objects in their path, or with smoke so thick that visibility is too low to scope the size of the space they are operating in.

The Frontline Gloves enable firemen to quickly scope a zero-visibility space by means of direct visual feedback about obstacles and clearances. Further, the gloves allow them to send instructions to a teammate by means of simple hand gestures. This reduces the need for spoken communication, saving the firemen precious air that would otherwise be used up in talking, and overcoming radio challenges such as cross-talk.

How does it work?

Each glove contains custom-made electronics and sensors that allow communication between the pair via a wireless protocol. Ultra-bright LEDs built into the glove glow to indicate specific instructions.
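The idea above can be sketched in a few lines of code. This is a minimal Python simulation of the glove-to-glove signalling logic only: the gesture names, messages and LED patterns are hypothetical illustrations, not the actual protocol or vocabulary used in the prototype.

```python
# Hypothetical gesture vocabulary: a recognised gesture on one glove
# becomes a short message radioed to the partner glove.
GESTURE_TO_SIGNAL = {
    "fist": "STOP",       # clenched fist -> partner's LEDs flash red
    "point": "GO",        # pointing -> partner's LEDs glow green
    "flat_palm": "WAIT",  # flat palm -> partner's LEDs pulse amber
}

def send_gesture(gesture: str) -> str:
    """Translate a recognised gesture into the message sent wirelessly."""
    return GESTURE_TO_SIGNAL.get(gesture, "UNKNOWN")

def receive_signal(signal: str) -> str:
    """The partner glove maps a received message onto an LED pattern."""
    patterns = {"STOP": "red flash", "GO": "green glow", "WAIT": "amber pulse"}
    return patterns.get(signal, "off")  # unrecognised signals light nothing

print(receive_signal(send_gesture("fist")))  # -> red flash
```

The point of keeping the mapping this small is the same as in the gloves themselves: a handful of unambiguous signals that can be read at a glance in zero visibility.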

What were the key learnings?
– Tangible User Interfaces have vast potential to address challenges faced by small teams of rescue workers, such as firemen, scuba divers etc. Screen-based interfaces demand a high degree of attention to operate, often challenging in the conditions these users find themselves in. A powerful answer is replacing screen-driven interaction with natural and gestural interaction.
– Designing solutions for niche user contexts like this one demands thorough user research and prototype iteration, and these need to be intrinsic to the design process adopted in the project. We interviewed researchers in the field of wearable computing for firefighting at the Fraunhofer Institute, Germany. This went a long way toward validating and refining our core assumptions about both the context of use and the design of the product.

Team Members
Ashwin Rajan and Kevin Cannon

Toy View Workshop – Intent

Often the intent of an experimental workshop has everything to do with what its participants learn, and with the depth and innovation of its outcomes. One such workshop was ‘Toy View’, which we at CIID attended in late December ’08 with Yaniv Steiner of Nasty Pixel. It was an exercise in contemplating, iterating and building concepts for toys and games. I thought the intent of the workshop was very interesting and decided to reproduce it here:

Toy View – Description of the workshop

For thousands of years mankind has crafted toys, with very little change in their shape or form. Can a new generation of toys emerge, combining the physical aspect with the diversity of the digital world?

This workshop will provide students with both practical and theoretical knowledge in the field of computer vision, in relation to: Play, Games and Toys. The expected results are toys or games in which a person uses a physical object to interact with a digital environment.
Students will attempt to create innovative “magical” toys that are physical – mostly appearing as physical objects or artifacts made from different natural and synthetic materials – and at the same time serve as controllers and actuators for functions dealing with digital data. Digital data can span a wide set of elements, ranging from pure text to audio, video, images, and at times even social particles. The emphasis is on creating a new hybrid of physical computer games.

Illustration made during brainstorm about what makes toys playful and how interactivity plays a role in playfulness.

Structure
At first, students will learn to harness and manipulate different computer-vision tools using a camera, which provides machines with the ability to ‘see’. This part of the workshop will focus on building artificial systems that obtain information from images in order to understand their surrounding environment. The camera in this case corresponds to the human eye. However, the organ that actually decodes this information is the brain, not the eye – interpreting images into what humans grasp as ‘vision’. This first step will explore ways and techniques to craft such machine vision.
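A toy illustration of this "machine vision" step: one of the simplest ways a machine notices something in its surroundings is by differencing two successive grayscale frames. The workshop itself would have used a real camera and proper computer-vision tools; the tiny 4×4 "frames" and thresholds below are purely hypothetical data for the sketch.

```python
def frame_diff(prev, curr, threshold=30):
    """Return a binary mask marking pixels whose brightness changed noticeably."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

def motion_detected(mask, min_pixels=2):
    """Crude decision rule: enough changed pixels means something moved."""
    return sum(sum(row) for row in mask) >= min_pixels

# Two hypothetical 4x4 grayscale frames: a bright object appears top-right.
prev_frame = [[10, 10, 10, 10] for _ in range(4)]
curr_frame = [[10, 10, 200, 200],
              [10, 10, 200, 200],
              [10, 10, 10, 10],
              [10, 10, 10, 10]]

print(motion_detected(frame_diff(prev_frame, curr_frame)))  # -> True
```

The "brain" part of the analogy lives in `motion_detected`: the camera (the diff) only reports changed pixels, and it takes a second layer of interpretation to turn that into "something is moving here".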

The second part of the workshop will drive students into the world of Games and Play. By investigating classical computer games, physical games and toys, students will brainstorm the topic, experimenting with original ideas that combine physical ‘toy’ objects and the digital world. Conceptually speaking, at this point new interactions and games will emerge.

The third and final part of the workshop will focus on realizing and crafting the above concepts. At the end of the workshop students will have a physical working prototype of their idea – a new game that will be presented at the end of the course to fellow students and colleagues.

The whole workshop description, including technical and conceptual aspects, is here.

Rock is the new swivel

One of the chief goals of technology has been to make tasks more efficient and as a result save time. But this has only meant an increased pace of life, as we try harder to pack more into less time and effort.

The swivel chair is a classic ‘efficiency technology’ that has left aching backs and stress in its wake. In this prototype, we seek to introduce a powerful antidote into the domain the swivel dominates. We emphasize how ‘rock’, an interaction that’s all but disappeared from the modern ‘sitting’ context, can be a mantra to soothe frayed nerves, while at the same time serving a widely appreciated need, thanks to networked digital technology.

The theme we worked with was: ‘Guerilla free time – how can technology, which has been designed to heighten our efficiency and productivity, facilitate break time, helpful laziness, etc.’

Via this prototype, the everyday mundane act of fetching coffee during a hectic schedule is transformed into an act of relaxation, a forced ‘quiet time’ that encourages you to use every coffee drinking opportunity to take a break, listen to some music, and simply chill. 🙂

Rock is the new swivel - The completed prototype

How does it work?

The rocking chair has an Arduino microcontroller and an XBee wireless communications module attached under its seat, along with a small MP3 player preloaded with music and a small mobile-phone speaker. The coffee machine is controlled through a hacked power box, which contains a second XBee; the two XBees are in communication with each other. An accelerometer is also attached under the chair. When the chair is rocked, the Arduino sends a message via the XBees to turn on power to the coffee machine, thus starting the brewing process. The MP3 player is simultaneously turned on. The result is soothing music and a merry brew in the making, while the person rocks on!
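The control loop described above is simple enough to sketch. This is a Python simulation of the logic only – the real prototype runs on the Arduino and talks over XBee – and the threshold, sample values and state names are made-up illustrations rather than the prototype’s calibrated values.

```python
ROCK_THRESHOLD = 0.5  # hypothetical swing in accelerometer readings that counts as "rocking"

def is_rocking(accel_samples, threshold=ROCK_THRESHOLD):
    """Detect rocking as a large swing across successive accelerometer readings."""
    return max(accel_samples) - min(accel_samples) > threshold

def on_chair_update(accel_samples, state):
    """One pass of the control loop: rocking -> brew coffee and start music."""
    if is_rocking(accel_samples) and not state["brewing"]:
        state["brewing"] = True   # message sent via the XBees to the power box
        state["music_on"] = True  # MP3 player switched on
    return state

state = {"brewing": False, "music_on": False}
state = on_chair_update([0.0, 0.9, -0.2, 0.8], state)
print(state)  # -> {'brewing': True, 'music_on': True}
```

Checking `not state["brewing"]` before triggering matters on the real hardware too: the chair keeps rocking long after the first detection, and you only want to start the machine once per coffee.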

Mimi enjoys a rock accompanied by some soothing music, while making herself an afternoon coffee.

Early prototype of the rocking chair setup

Written by Ashwin Rajan

December 15, 2008 at 11:32 pm

Emotional Considerations in UbiComp

Mark Weiser’s seminal article has proved to be sweeping in title as well as consequence. It is widely argued today that “The Computer for the 21st Century” was the first of its kind to effectively capture the essence of what is today known variously as – ahem – pervasive computing/ubiquitous computing/ambient intelligence/physical computing/the internet of things/things that think/haptic computing, and so on and so forth. And why not? I love loaded words, especially those that also mean something!

But the point here is that this space gets more and more real all the time. It’s serious enough now to have prompted long-term projects such as the, again, quite sweepingly titled “The Disappearing Computer” initiative, while figures such as Bill Gates have made efforts in the past to publicly address this emerging domain.

Needless to say, I am excited about this stuff, and before jumping headlong into the embedded, gesturally-triggered world, it’s probably also worth asking (as Gwen Floyd brought up in class the other day): what technologies should be allowed to disappear, become hidden? And what artifacts should stay external, apparent, tactile? Which interactions provide those intrinsic emotional connections we love to have with the world of things around us? And what extrinsic behaviors exhibited by our products enrich our everyday experience in yet unnoticed and un-researched ways? How come some people want to pay all their bills with one flick of a wrist, only so they can go back to building that IKEA bookshelf one shelf at a time?