Tom Simmons (my tutor for the hybridized practices unit) told me about an event I might be interested in, involving MaxMSP programming and dance, similar to the project I worked on in the last unit. So on 12/1/07 I went to see the presentation of Sarah Rubridge and Stan Wijnan’s work at Chichester University. They had been working on a project investigating the spatialisation of sound. The system they had developed tracked the motion of an object in 3D space and used that information to affect sound within parameters set by the programmer/musician in a MaxMSP patch. The sound was played back in real time in surround sound. The dancers ‘composed’ the music, using their bodies to play the space like an instrument.
The setup was a four-pointed rig on the ceiling with one sensor at each of the points and one in the centre. The dancers carried sensors (‘crickets’, which Stan had developed with the help of ) in their hands, sometimes transferring them to other parts of their bodies. The sensors communicated with each other using ultrasound and radio frequencies to pinpoint the location in 3D space of the crickets the dancers were carrying. This worked fairly well, but a cricket needed to be in range of three sensors to be located accurately. The sensors also had a height range, so there was a space about 10cm from the floor that could not be seen. The crickets also had to be in the sensors’ line of sight – if a dancer got their body between the cricket and the sensor, the signal would be lost.
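As I understand it, needing three (or more) sensors in range is the classic trilateration constraint: each ultrasound measurement gives a distance from the cricket to a known sensor position, and intersecting those distance spheres fixes the point in 3D. The sketch below is purely my own illustration of that geometry (the sensor coordinates and solver are invented, not Stan's actual code); note that a perfectly flat ceiling rig would leave the height ambiguous, which may be one reason for the extra centre sensor.

```python
# Illustrative trilateration sketch: recover a cricket's 3D position from
# measured distances to four sensors at known (hypothetical) positions.
# Not the real system - just the geometry the journal entry describes.

def trilaterate(sensors, dists):
    """sensors: four non-coplanar (x, y, z) points; dists: distance to each."""
    (x1, y1, z1), d1 = sensors[0], dists[0]
    # Subtracting the first sphere equation from the others cancels the
    # quadratic terms, leaving a linear 3x3 system A @ p = b in position p.
    A, b = [], []
    for (xi, yi, zi), di in zip(sensors[1:], dists[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1), 2 * (zi - z1)])
        b.append(d1**2 - di**2
                 + xi**2 - x1**2 + yi**2 - y1**2 + zi**2 - z1**2)
    return solve3x3(A, b)

def solve3x3(A, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(A)
    solution = []
    for j in range(3):
        m = [row[:] for row in A]       # copy A, then swap column j for b
        for i in range(3):
            m[i][j] = b[i]
        solution.append(det(m) / D)
    return tuple(solution)
```

With exact distances from four non-coplanar sensors this returns the true position; in practice noisy ultrasound timings would call for a least-squares fit over more sensors instead.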
Interestingly, the dancers turned these limitations of the system to choreographic and sonic advantage, because they made it possible to hide: when they found a sound they liked, they could let it play continuously by disappearing, then reappear elsewhere when they wanted to change it.
The main influence on the sound seemed to be the location of the dancers. Raising or lowering the crickets, or moving around within the space, would alter pitch, frequency and modulation. But Stan had also introduced subtleties in the programming to encourage the dancers to alter their (improvised) choreography. He set up different areas, or hot spots: in one, the sensors were more sensitive to small movements; in another, movement triggered new sounds. Volume was altered depending on how close together the dancers (crickets) were.
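To make the mapping concrete for myself: once the patch has each cricket's position, turning it into sound parameters is just a set of functions of position. The toy sketch below is my own guess at the idea (all ranges, the hot-spot location and the parameter names are invented, and the real work would happen in MaxMSP, not Python): height drives pitch, a hot spot exaggerates small movements, and the distance between two crickets drives volume.

```python
# Toy sketch of position-to-sound mapping, as I imagine the patch worked.
# All numbers and parameter names are hypothetical illustrations.

def map_to_sound(pos, other_pos):
    """Map one cricket's (x, y, z) position, in metres, to sound parameters."""
    x, y, z = pos
    # Height controls pitch: say 100-1000 Hz over a 0-3 m range.
    pitch_hz = 100 + (z / 3.0) * 900
    # A 'hot spot' near (1, 1): inside it, small horizontal movements are
    # scaled up before feeding the modulation depth.
    in_hotspot = ((x - 1) ** 2 + (y - 1) ** 2) ** 0.5 < 0.5
    sensitivity = 4.0 if in_hotspot else 1.0
    modulation = min(1.0, sensitivity * (x + y) / 8.0)
    # Volume depends on how close the two dancers' crickets are:
    # full volume when touching, silent beyond 5 m apart.
    dx, dy, dz = (a - b for a, b in zip(pos, other_pos))
    dist = (dx * dx + dy * dy + dz * dz) ** 0.5
    volume = max(0.0, 1.0 - dist / 5.0)
    return {"pitch_hz": pitch_hz, "modulation": modulation, "volume": volume}
```

Even this crude version shows why the dancers' positions, rather than their gestures, dominated the sound: everything downstream is a function of where the cricket is, not of how the body moved to get it there.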
The dancers spoke a little afterwards about the experience. One thing I found interesting was that when their sounds were too similar to each other’s, they felt they somehow lost themselves, lost their identity. They also said they felt quite nurturing towards the expensive little pieces of hardware they were carrying, which affected their choreography.
As an outsider observing, it was very difficult to tell who was making which sound, but evidently the experience from within was quite different; I would have liked to try it myself. The technology is in its infancy, bringing together so many different technologies (the motion tracking, the sound creation, the surround sound, the rig, the programming…) that it needed a team of technicians. To have achieved this much with just two artists was quite something, but it was still disappointing to me that this far into the project (three years) the system was still not sensitive enough to pick up the subtleties being expressed by the bodies of the dancers. For example, the dancers might make an opening gesture that you would like to hear illustrated by the sound, or express an emotion with a movement that could be represented by a key change or a sequence of notes. These are things a dancer could pick up from music they were dancing to, but the system could not yet do the same the other way round; the dancers’ compositional power was limited.