Even seemingly limited sound sources, such as the internal workings of a computer, mobile phone, or hard drive amplified with an inductive pickup, can be combined in ways that produce a more effective acousmatic composition.
I am currently exploring ways in which I can perform this composition live.
The immediacy of the correlation between audio and visual will inform the listener of the exact source of the sound. These sounds would not be audible to human hearing without amplification. The broad sound family (laptop and inductive pickup) has an instantly recognizable ‘electronic’ timbral quality, which helps the listener understand that the sound is sourced from a computer. These obvious causal aspects of the composition will often be blurred when transformations into other listening modes occur. I do not want the causal aspects to be removed entirely, hence the choice to perform the piece live and the kinetic nature of the pickup’s movement.
I intend reduced listening to occur at certain points within the composition. This will most commonly happen when multiple sounds are layered together, creating a series of sound objects that are complex in texture and rhythm and that do not correlate with the visual and kinetic movements of the pickup. These sound objects should sound autonomous from the visual frame (my physical actions) of the performance.