Nava Waxman: Sound composition, creative vision, performative research
NewAgain is an audio-focused research project on embodied interaction between humans and machines. The user's body interacts with the computer in an ongoing feedback loop of movement and sound. The audio feedback is organised into dynamic compositions that react instantly to the movement and posture of the body. The technology runs on a modern smartphone (iPhone) and is completely mobile.
Rules of sound layers

Ambient
- A static loop.

Voice
- Raise hands over head: the voice gets stretched, with a delay effect.
- Move either hand: the voice gets clearer (lowpass and reverb effects are toned down).
- Distortion (bitcrusher effect) when the primary* hand is near the face (from a front view).
- The voice is muted when the player's back faces the camera, except when the hands are stretched over the head.

String pluck
- Played on a motion impulse of the secondary* hand.
- Delay effect activated by rotational movement.

Percussion
- Activated by rotational movement.
- Distortion (guitar amp) effect applied when the 2D distance between head and secondary* hand increases.
- Slowed down when the distance between knee and head decreases.

Glass
- Activated by motion of the primary* hand.

* "Primary" hand: the hand that was raised to give the start signal; the other hand is regarded as the "secondary" hand.
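The voice-layer rules above amount to a mapping from a pose snapshot to effect settings. The sketch below is only an illustration of that mapping, not the project's actual implementation; the `Pose` fields and function name are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Hypothetical, simplified pose snapshot; all fields are assumptions."""
    hands_over_head: bool
    hand_moving: bool
    primary_hand_near_face: bool
    back_to_camera: bool

def voice_controls(pose: Pose) -> dict:
    """Map a pose to voice-layer effect settings, following the rules above."""
    # Muted when the back faces the camera, unless hands are stretched overhead.
    muted = pose.back_to_camera and not pose.hands_over_head
    return {
        "muted": muted,
        "stretch_delay": pose.hands_over_head,      # raised hands: stretch + delay
        "lowpass_reverb": not pose.hand_moving,     # a moving hand clears the voice
        "bitcrusher": pose.primary_hand_near_face,  # distortion near the face
    }
```

For example, a player facing away from the camera with hands stretched overhead is still audible, because the overhead gesture overrides the back-to-camera mute.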
Final mix
  <- reverb <- bitcrusher <- delay <- timepitch <- lowpass <- voice
  <- delay <- string pluck
  <- guitar amp <- timepitch <- percussion
  <- glass
  <- ambient
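The mix diagram can be read as five parallel effect chains summed into the final mix. A minimal sketch of that routing (layer and effect names are taken from the diagram; representing the chains as a table and rendering them source-first is my own assumption):

```python
# Effect chains, listed source-first; every chain feeds the final mix.
CHAINS = {
    "voice": ["lowpass", "timepitch", "delay", "bitcrusher", "reverb"],
    "string pluck": ["delay"],
    "percussion": ["timepitch", "guitar amp"],
    "glass": [],   # dry, straight into the mix
    "ambient": [],
}

def signal_path(layer: str) -> str:
    """Render one layer's route into the final mix as an arrow chain."""
    return " -> ".join([layer, *CHAINS[layer], "final mix"])
```

For instance, `signal_path("string pluck")` renders as `string pluck -> delay -> final mix`.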
I WANT TO BE NEW
The ambient layer was created using the EXA VR Infinite Instrument, as well as the sounding jars, experimental percussion instruments that I've been using in my performances (pictured on the right).
The final two dynamic layers were created using the darbuka, a Middle Eastern drum.
The flag is really intriguing. When the flag moves and sound is produced, there is a beautiful contradiction or contrast between the flag's substance and the sound. It also reminds me that raising a flag can have physical consequences, which might be accompanied by all kinds of wounds.
I also feel that the sound stretching is effective (in the middle). It can take a while for it to announce itself, because of a "gap" at the end of the loop. I'll edit that and create a new version that works better. In general, though, I think the sound fits well with the stretching of the body.
I'm experimenting with screen recording on the third iteration. Cinzia and Mahsa are being followed by the camera in Mobius (2019).
The vocal gesture has been elevated to a whole new level.
The machine is clearly unable to recognize the physical element, converting the abstracted form into a fluid, open-horizon gesture.
The tracking is significant despite the imprecise translation.
Second composition, without stretching effect.
The camera senses my movement even when I am outside the frame. The drum slows down as I fall.