Maker Faire Observation

Introduction to Physical Computing

25 Sep 2018

This weekend I volunteered at the NYC Maker Faire 2018. It was a lot of fun to see the projects that makers from around the world are working on. I saw robotic dinosaurs, giant hands crushing cars, 8-year-olds teaching soldering, and much more. My volunteering duties involved helping students from ITP who were presenting their projects. I was assigned to Dan Oved's project, Presence.

[Embedded Instagram post by Dan (@stangogh) showing Presence]

Presence is a reactive piece that responds to the movements of the viewer. Tom Igoe would classify this project as a mirror-type object, as it serves as a reflection of the looker. It is a large display of tubes, each with a black stripe spiraling slightly down its length. The tubes are brought alive by a network of servos that rotates them, and at the top of the display a regular USB webcam points at the audience. The camera feed is processed by a JavaScript library called PoseNet, which detects people's body positions and relays the position points to TouchDesigner; TouchDesigner in turn drives the servos and matches the spiral arc to the viewer's position.
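The exact mapping Dan used lives in TouchDesigner, but the core idea — turning a detected body position into per-tube rotation angles — can be sketched roughly like this (the function names, the linear distance-to-angle mapping, and the tube layout are my assumptions for illustration, not the project's actual code):

```typescript
// A PoseNet-style keypoint: pixel coordinates plus a confidence score.
interface Keypoint { x: number; y: number; score: number; }

// PoseNet reports keypoints in pixels; normalize against the video width
// so the viewer's horizontal position falls in the range 0..1.
function normalizeX(kp: Keypoint, videoWidth: number): number {
  return Math.min(1, Math.max(0, kp.x / videoWidth));
}

// One servo angle per tube: tubes nearest the viewer rotate least,
// distant tubes rotate more, so the black stripes form an arc
// centered on wherever the viewer is standing.
function tubeAngles(viewerX: number, tubeCount: number, maxDeg = 180): number[] {
  const angles: number[] = [];
  for (let i = 0; i < tubeCount; i++) {
    const tubeX = i / (tubeCount - 1);          // tube position along display, 0..1
    const distance = Math.abs(tubeX - viewerX); // horizontal distance from viewer
    angles.push(distance * maxDeg);             // farther from viewer => more rotation
  }
  return angles;
}
```

In the real installation these angles would then be sent from TouchDesigner to the servo controllers each frame; the sketch only covers the position-to-angle step.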

At the faire I spent a lot of time watching people interact with the display, at times stepping in to instruct people on how to engage with the work. Certain people understood what to do right away and walked up and started waving their arms, but others were puzzled by the lack of instruction. Over the course of the weekend I saw scores of different interactions. Some people engaged with it for two seconds and walked away, while others were engrossed and stood there for several minutes. As I was manning the booth there was a mixture of tinkerers who tried to understand the mechanics of the work and others who accepted the piece as magic. I would like to discuss a few areas of failure where interaction did not work.

First, there were no instructions for the engagement; people were left to discover it on their own. This posed a problem for visitors who wanted direction and waited for me or Dan to tell them what to do. We would tell them to move and that the piece would react to them, but these loose instructions left people hesitantly waving their hands or making slight movements that were not picked up by the program. For these people, I think a small card or display telling the viewer to make large movements would be a solution.

Another area of failure was the installation of the piece. The piece worked best with a single person standing close to the webcam; too many people would confuse the camera. Dan installed the work with an X taped on the floor at the ideal distance. While I think this would generally work well for prototyping, the exhibition environment did not suit this setup. People often toured the faire in large groups and (unless instructed) would stop together in front of the display, which did not work. I think that isolating the actor with visual markers or physical barriers would get better results. The installation could have included a curtain or wall to clearly delineate where people should stand.

Talking with Dan about his material choices, he could have used different technology to isolate the actor. A Kinect or other depth-sensing camera would have let him isolate actors better. In this instance, Dan opted to use PoseNet because it was a technology he was working on with Google. I think this raises an interesting question about exhibitions: do you choose to display your technology or your project? This is particularly relevant when you are working with bleeding-edge tech where bugs are expected.