You will need one or more of each of the following: performer, contact microphone, omnidirectional microphone, mic stand.
You will also need a computer running a real-time audio processing environment such as SuperCollider or Pure Data, and some way of outputting sound from the computer (built-in or external speakers, a PA system, etc.).
- The contact microphone is fixed to the floor on which the performance will take place, and the omnidirectional microphone is positioned on the mic stand somewhere out of the way.
- The performer begins to improvise movements.
- After a specified duration the audio processing environment analyses the acoustic data collected via the microphones (determining, for example, averages and ranges for volume and frequency since the beginning of the performance), and begins generating and outputting sounds based on this analysis.
- The performer adjusts their improvisation in response to the generated sounds.
- After a second specified duration, the audio processing environment performs another analysis; this time, the inputs from the microphones will include both sounds from the performer and the environment’s own generated sounds. The second analysis produces modifications to the environment’s outputs.
- The performer again adjusts their performance in response to the modifications made by the audio processing environment.
- Repeat the previous two steps — re-analysis by the environment, then adjustment by the performer — as many times as required.
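The analysis-and-response loop above could be sketched in code. The following is a minimal Python sketch, not a realisation of the score: it assumes mono audio samples as floats in [-1, 1], uses RMS and peak level for "volume", a zero-crossing rate as a crude frequency estimate, and an entirely arbitrary mapping from analysis to synthesis parameters (`osc_freq`, `osc_amp` are invented names). In practice SuperCollider or Pure Data would do this on live microphone input.

```python
import math

def analyse(samples, sample_rate):
    """Summarise a block of mono audio samples (floats in [-1, 1]).

    Returns rough volume and frequency statistics of the kind the
    score calls for.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    peak = max(abs(s) for s in samples)
    # Zero-crossing rate as a crude fundamental-frequency estimate:
    # a periodic signal crosses zero twice per cycle.
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    freq_estimate = crossings * sample_rate / (2 * len(samples))
    return {"rms": rms, "peak": peak, "freq": freq_estimate}

def next_output_params(stats):
    """Map analysis results to hypothetical synthesis parameters.

    The mapping here is arbitrary; designing it would be part of
    interpreting the score.
    """
    return {
        "osc_freq": stats["freq"] * 1.5,      # respond a fifth above
        "osc_amp": min(1.0, stats["rms"] * 2),
    }

# Demo on a synthetic "performance": one second of a 440 Hz sine.
sr = 44100
samples = [0.5 * math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
stats = analyse(samples, sr)
params = next_output_params(stats)
```

In a looped realisation, each pass would feed the microphone input (performer plus the environment's own previous output) back through `analyse`, so the system's behaviour drifts as the feedback accumulates.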
What is interesting about this score? For a long time Western art production was driven by principles of mimesis in which the physical characteristics of external objects and environments (such as appearance, sound, movements) set productive limits on what artists could and couldn’t do. In this score, the human performer is the external variable setting limits on the productive operations of the audio processing program; the sounds produced by the program constitute a representation of these limits. In a sense, the performer becomes the program’s Mont Sainte-Victoire.
- Replace or augment the program’s sonic outputs with other kinds of output, for example light.
- Experiment with different durations between analyses, including real-time analysis. At what point is the cadence of a movement phrase lost?
- How could the complexity of the performer’s movements be detected more accurately, without the need for cumbersome measuring instruments attached to the performer’s body?
If anyone is interested in contributing to a realisation of this score, please contact me.