In introducing the second Physical Computing lab, Dan O'Sullivan demonstrated a variety of analog inputs: stretch sensors, photoresistors, flex sensors, force sensitive resistors, knobs of all kinds. An hour later, I found myself standing in front of the ITP shelf in the NYU computer store, staring blankly forward and trying to figure out what to build for this project. Some minutes passed and I realized, while staring and thinking about the sensors, I'd started making a vague pinching gesture with my right hand. I was imagining a flex sensor running along the inside of my index finger and an FSR on the pad of my thumb detecting all the nuances of a pinch. And using that gesture — so bodily intimate and musically evocative — to control sounds and movements.
So, the first thing to do was to learn to read in data from the two sensors. I started with the flex sensor, plugging it into the breadboard in the standard setup for analogRead, using a 10k ohm resistor as suggested in the physcomp wiki page on flex sensors. After a little tweaking and some experiments with Arduino's invaluable map() function, I was smoothly controlling a servo with a flex sensor:
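The heart of that smooth control is Arduino's map(), which does a linear rescale from the sensor's range to the servo's. Here's a minimal plain-C++ sketch of the idea; the 200-700 calibration bounds are made-up values (the real ones come from watching the serial monitor), and map()/constrain() are reimplemented so the snippet stands alone:

```cpp
// Plain-C++ stand-ins for Arduino's map() and constrain(), using the
// same integer arithmetic map() uses.
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

long clampRange(long x, long lo, long hi) {
    return x < lo ? lo : (x > hi ? hi : x);
}

// Convert a raw flex-sensor reading (analogRead reports 0-1023) into a
// servo angle (0-179). The 200-700 input bounds are hypothetical.
int flexToServoAngle(int raw) {
    long clamped = clampRange(raw, 200, 700);
    return (int)mapRange(clamped, 200, 700, 0, 179);
}
```

On the Arduino itself this collapses to one line: `servo.write(map(constrain(analogRead(flexPin), 200, 700), 200, 700, 0, 179));` — map() alone doesn't clamp, so constrain() keeps out-of-range readings from producing out-of-range angles.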
At this point, I wasn't sure what project I was actually trying to build. As you can see from my early concept sketch above, I had ideas about controlling sound as well as motion. Even though I eventually settled on sound as the more exciting thing to control, I'm still very interested in applying this interface to motion control and I intend to experiment more with that soon.
After I had the flex sensor working, I plugged in the FSR and got that working as well.
After watching the FSR's output over serial, I tuned the parameters to map() to normalize it. Once that was done, I decided to start experimenting with another kind of output: sound. I'd noticed an exchange on the Arduino mailing list about the Tone library for generating musical sounds and was excited to try it out. I had some old broken headphones sitting around in my junk box, so I tore one of them apart to extract the speaker.
And then soldered on my own wires so I could add it to the circuit:
Once the speaker was connected to the circuit, I followed Tom Igoe's tutorial on the Arduino Tone library and quickly got it working. After some trial and error, I settled on an initial interaction pattern where squeezing the FSR beyond a certain level would let sound through and the flex sensor would control its pitch. Since the speaker was designed to be held right up to the ear and since I was probably driving it at less than its normal voltage, the resulting audio was very quiet. If you listen closely, you can get a sense of it in this video:
A basic kind of theremin!
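That first interaction boils down to a gate plus a pitch map: the FSR opens the gate, the flex sensor picks the frequency. Here's a sketch of the decision logic in plain C++, with every constant (threshold, sensor bounds, pitch range) an assumed value rather than the real calibration:

```cpp
// Gate-plus-pitch logic: squeezing the FSR past a threshold lets sound
// through; the flex sensor sets the pitch. Returns a frequency in Hz,
// or 0 for silence. All constants here are illustrative.
const int kSqueezeThreshold = 300;          // FSR level that opens the gate
const int kFlexMin = 200, kFlexMax = 700;   // assumed flex-sensor bounds
const int kPitchMin = 220, kPitchMax = 880; // roughly A3 up to A5

int pinchToFrequency(int fsrRaw, int flexRaw) {
    if (fsrRaw < kSqueezeThreshold) return 0;    // not squeezing: silence
    if (flexRaw < kFlexMin) flexRaw = kFlexMin;  // clamp before mapping
    if (flexRaw > kFlexMax) flexRaw = kFlexMax;
    // same linear rescale Arduino's map() performs
    return (flexRaw - kFlexMin) * (kPitchMax - kPitchMin)
           / (kFlexMax - kFlexMin) + kPitchMin;
}
```

In the actual sketch, a nonzero result would feed the Tone library's play() and a zero would trigger stop().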
I quickly realized, though, that this setup was underutilizing the FSR, treating it more as a switch than an analog input. Granted, the FSR is much more difficult to control continuously than the flex sensor (it's much touchier and jumpier), but it still felt cheap to treat it as a simple switch.
So, I decided to experiment with controlling two continuous outputs, one with the FSR and one with the flex sensor. As before, I started with servos. This time, I mounted my tiny blue hobby servo to a slightly larger one at a perpendicular angle so that the two inputs could control both axes of motion of a single point. Here was the result:
You can see that it's much easier for me to move the top servo between its top and bottom positions than to stop at any point in between. That's another consequence of the FSR's touchiness.
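The control logic behind the two-axis motion is just two independent rescalings, one per sensor, each feeding its own servo. A plain-C++ sketch, with both sets of calibration bounds assumed rather than measured:

```cpp
// Two independent map()-style conversions: the flex sensor drives one
// servo axis and the FSR the other, giving X/Y control of a point.
struct Angles { int x; int y; };

// Rescale a raw reading to a servo angle (0-179), clamping first.
int toAngle(int raw, int inMin, int inMax) {
    if (raw < inMin) raw = inMin;
    if (raw > inMax) raw = inMax;
    return (raw - inMin) * 179 / (inMax - inMin);
}

// Hypothetical calibration bounds: 200-700 for the flex sensor,
// 100-900 for the touchier FSR.
Angles pinchToAngles(int flexRaw, int fsrRaw) {
    return Angles{ toAngle(flexRaw, 200, 700), toAngle(fsrRaw, 100, 900) };
}
```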
Having seen success with the servos, I decided to try to add a second speaker. I had noticed that the Tone library allowed notes in the tempered scale as well as continuous theremin-like sounds. I wanted to try out two-voice composition: singing a duet with just my two fingers.
I disassembled another spare headphone speaker, ripped out the actual speaker element, plugged it into the board and got it producing sound. For some reason, the second speaker was quieter than the first. Even more mysteriously, when I tried the first speaker in the second PWM output channel I'd set up, it was quieter as well. By every inspection, the two channels seemed identical. I was never able to figure out what caused the volume difference.
Anyway, once I had both speakers hooked up and working, the device was basically done. I dubbed it "Pinch Control" because it let you control two pitches with a pinch-like gesture. All that remained was to move it off the breadboard and into the control and listening interfaces I actually had in mind: the pinch interface for the hand and headphones for the ears.
I also went ahead and changed the software to generate specific notes instead of continuous pitches. I set up an array of note constants as provided by Tone and then used map() to reduce each input to an index into the note array (0-4) instead of an arbitrary value in the audible frequency range. I chose the notes selectively, giving each control a different part of a reduced scale (the flex sensor had the bass and the FSR the tenor or soprano bits). That way, no matter what combination of notes was chosen, the resulting music wouldn't become too dissonant. The result, when I listened to it and played it myself, was much like bagpipe music: each voice sustaining a pitch as a drone while the other melismatically danced around it. Here's how the code turned out:
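The original listing isn't reproduced here, but the note-selection logic it describes can be sketched in plain C++. The frequencies are rounded Hz stand-ins for the Tone note constants, and the particular scale degrees and calibration bounds are assumptions, not the actual choices:

```cpp
// Two five-note palettes, one per voice, chosen so any pairing stays
// consonant. Frequencies are rounded Hz; the real sketch used the
// Tone library's note constants instead.
const int kBassNotes[5]   = { 65, 98, 131, 165, 196 };   // C2 G2 C3 E3 G3
const int kTrebleNotes[5] = { 262, 330, 392, 440, 523 }; // C4 E4 G4 A4 C5

// Reduce a raw sensor reading to an index (0-4) into a note array,
// the same way map() was used in the sketch. inMin/inMax are
// per-sensor calibration bounds (hypothetical values here).
int readingToNoteIndex(int raw, int inMin, int inMax) {
    if (raw < inMin) raw = inMin;
    if (raw > inMax) raw = inMax;
    int idx = (raw - inMin) * 5 / (inMax - inMin);
    return idx > 4 ? 4 : idx;  // the very top reading maps to 5; clamp it
}
```

Each time through the loop, the flex reading picks from the bass palette and the FSR reading from the treble palette, and a change of index triggers a new note on that voice.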
After playing with the device myself for a while, I wanted to see some other people's reaction to the musical palette I was giving them. I soldered the speaker elements onto some longer bits of wire and stuck them inside my headphones. I asked Noah to come over and try the thing on.
I was excited to see him play it with fascination for a little while, getting lost in the experience.
The musical choices made and the sensors working, I started in on making the headphones more permanent. I went to the store and bought an NBA headband and a sewing kit. I'd saved the soft ear-protecting pads from the disassembled headphones, and I wanted to reattach the speaker elements to them to protect the ears while listening. Unfortunately, I had basically never sewn before. Thankfully, Chica was sitting right across from me and offered to teach me. With her help, I was able to sew each element back into its pad:
Once the speakers were reattached, what remained was to figure out how to put them on people's heads. The trick is that not everyone's head is the same size, so my headphone setup would need to be adjustable. In addition to the NBA headband, I'd bought some velcro. I attached circles of velcro to the back of each headphone.
The terry cloth of the headband made a perfect target for the velcro's little hooks and with the pads, you could affix the headphones at any point along the sweatband, making the whole setup adjustable for nearly any size head:
You can see how the velcro attached to the terry cloth here:
And here's me modeling the setup while playing with the controls on the breadboard:
Since I looked so visibly ridiculous in lab, a number of people started coming over wanting to play with the device themselves, and I was delighted to give them a chance:
I was fascinated to watch people's faces while they played this music that was audible only to them. Some people would smile, visibly delighted, but others, like Miriam here, would wrinkle their brows and take on intense looks of concentration:
I think there's something to this kind of extremely intimate interface, where only you can hear the results of your interactions, that causes the user to kind of disappear from the room in favor of the mental space they're constructing of how the device works and how the pitches move around.
Now that I'd gotten the headphone setup working, all that remained was to extract the sensors from the breadboard and get them hooked up to the hand. I soldered on some longer wires and started playing around. I'd bought an ace bandage, thinking I could use that to attach the flex sensor to the pad of the index finger and the palm so that it could have its full range of flexing.
After experimenting around some, this turned out to be much more difficult than I expected. It was hard to get the sensor to stay attached to the hand in a position that was comfortable and still allowed it its full range of motion to generate all of the possible notes. It was also extremely challenging to adjust the position of the flex sensor on my own hand with only one other hand free.
Eventually, I abandoned the notion of connecting the flex sensor to the palm of my hand and simply attached it to a loop of ace bandage into which I could slide my finger. The un-soldered end of the sensor would hit my palm in a straight line and then flex as I bent my finger, generating an acceptable range of inputs. This was far from a perfect solution: part of the sensor extended off the front of my finger and got in the way of squeezing the FSR, and the whole setup was just very awkward. I gave up on attaching the FSR to the thumb at all and merely held it in place with the pinch.
Also the long strands of wire I used to connect both sensors to the Arduino were quite annoying — their tension constantly pulling the sensors out of position. Even though the setup worked well enough to let me play with the pinch gesture control I was hoping for, this aspect of the project could be dramatically improved in a number of ways with further work.
Once I'd gotten to this stage, I started inviting interested passers-by to play with the device as well. Here's Mike giving it a go:
And Lisa Maria:
As a musician herself, Lisa Maria's fine motor control was very elegant and she couldn't get enough of the instrument. I basically had to pull it off of her head after five or ten minutes.
I, of course, had very much the same problem:
And, last but definitely not least, what did all of this sound like? I checked out an audio recorder and stereo mic from the ER and stuck them between the headphones. Here's the result: Pinch Control Audio Sample. Because of the difficulty of getting the mic equally close to both speakers, and the difference in volume between them to start with, it can be somewhat hard to hear the quieter high voice in this recording. If you listen on headphones, though, you'll hear that it is in the left channel; I intentionally made the recording in a way that preserves the very strong stereo separation the actual setup gives.
Finally, I look forward to getting some advice from Dan O' about how to improve the ergonomics of the hand interface and how to clean up some of the wire mess. After all, I'm going to be using this same interface for controlling some other things in the coming week, so it would be nice to improve it.