Shane Byrne: Mindful Music
As you may be aware by now, I am very much interested in finding new ways of interfacing with the computers and electronic devices that I use to compose my musical works. So far I have spoken about using infrared cameras, washing machines and the weather to help influence and direct the process of composing electronic music. All of these tools can be seen as an extension of the performer or composer into the digital world of ones and zeros, a means for a new kind of discourse between art and technology. With that in mind, I want to speak briefly about further advancements in refining the art of conversation between humans and machines.
I recently became aware of a project undertaken by Australian composer Guy Ben-Ary. He has taken the notion of extensions and interfacing with technology to the next level, employing disciplines ranging from electrophysiology to audio synthesis to stem cell research in his latest project, CellF. He began this work in 2012 by taking fibroblasts, which are skin cells essential to the healing of wounds and the production of collagen, from his arm, growing them and having them cryogenically frozen. These 10,000,000 or so cells were then sent to the Pluripotency Laboratory in Barcelona to be reprogrammed into stem cells. These newly transformed stem cells were then encouraged to develop into neural stem cells, with a view to constructing a neural network of cells that behaves in much the same way as our brains do.
This newly constructed neural network was then hooked up to a matrix of electrodes that serves both to stimulate the network and to gather information about the stimuli the neurons receive. The output signals from this network were then boosted by amplifiers and subsequently sent to an analog synth. This reinterpretation of data occurs in a manner analogous to the Weather for the Blind device I wrote about in a previous post. An array of microphones is used to detect sound produced by other musicians, and these acoustic signals are then transduced into electrical signals that in turn stimulate the neurons in the neural network. Certain conditions must be met for the neurons to act upon the received electrical signal, much like the way we react to sound whilst performing music (does the musical call require a response? what volume is suitable to respond to the given gesture? etc.).
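To make that call-and-response idea concrete, here is a minimal sketch of how such a signal chain might be modelled in code. This is purely my own illustration, not Ben-Ary's actual system: the function names, the 0-to-5-volt stimulation range and the loudness threshold are all invented assumptions, and the real neural response is emergent rather than a fixed formula.

```python
# Hypothetical sketch of a CellF-like signal flow: acoustic input is
# transduced into electrical stimulation, and the network's response is
# only acted upon when the "musical call" meets certain conditions.
# All names, ranges and thresholds here are assumptions for illustration.

RESPONSE_THRESHOLD = 0.2  # assumed minimum input level that warrants a reply


def transduce(mic_level):
    """Map an acoustic input level (0.0-1.0) to a stimulation voltage."""
    return mic_level * 5.0  # assumed 0-5 V stimulation range


def neural_response(stim_voltage):
    """Stand-in for the neural network: the real response is emergent,
    not a fixed function. Here we simply echo a scaled value."""
    return stim_voltage * 0.6


def synth_control(mic_level):
    """Decide whether the incoming gesture warrants a response; if so,
    return a control value for the analog synth, otherwise None."""
    if mic_level < RESPONSE_THRESHOLD:
        return None  # too quiet: no response, like a rest between gestures
    return neural_response(transduce(mic_level))


print(synth_control(0.1))  # quiet input: no response
print(synth_control(0.8))  # strong gesture: a synth control value
```

The point of the threshold check is the conditional behaviour described above: not every stimulus produces a reply, just as a performer does not answer every sound in the room.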
I mentioned in another previous post that I often employ generative techniques to create musical passages based on a set of rules and algorithms. Although such routines may give the illusion that the computer is writing the music, it is still just obeying a set of programmed instructions. CellF is perhaps the first bio-technological generative synth and represents a step toward instruments that can truly think for themselves. While it's true that at this stage the external brain used in CellF also needs a certain amount of programming in order for it to make meaningful and musical decisions, it is certainly a step towards the future of generative synthesis. Indeed it is possible to imagine that ethnographer Ana Viseu could have been speaking about this very project when she made this statement in her article Simulation and Augmentation: Issues of Wearable Computers:
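As a small illustration of what I mean by "obeying a set of programmed instructions", here is a toy generative routine of my own devising (the scale choice and stepping rules are assumptions, not anything from CellF): the output may sound varied, yet every note follows deterministically from the rules and a seed.

```python
# A minimal rule-based generative passage: a random walk over a scale.
# The scale and the stepping rules are arbitrary choices for illustration.
import random

C_MINOR_PENT = [60, 63, 65, 67, 70]  # MIDI note numbers, C minor pentatonic


def generate_passage(length=8, seed=0):
    """Walk the scale by rule: step down, repeat, or step up, chosen
    pseudo-randomly but fully determined by the seed."""
    rng = random.Random(seed)
    idx = 0
    notes = []
    for _ in range(length):
        notes.append(C_MINOR_PENT[idx])
        # clamp the walk so it never leaves the scale
        idx = max(0, min(len(C_MINOR_PENT) - 1, idx + rng.choice([-1, 0, 1])))
    return notes


# The same seed always yields the same "composition": the computer
# is not deciding anything, only executing instructions.
print(generate_passage(seed=1))
```

Contrast this with CellF, where the responding network is living tissue rather than a seeded algorithm, which is precisely why it feels like a different category of instrument.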
“The body is not simply extended by information and communication technologies (ICTs), but also becomes their intimate host. This represents a new step in the conceptualisation of the synergy between individual (body) and technology (environment), and also affects the ways in which the role and nature of each actor are defined”
Below is a link to Guy Ben-Ary’s website where he describes the project in greater detail: