What is this about?
I wonder if it's possible to create a musical experience where music robots can behave and make decisions based on their interpretation of human gestures captured in real time. How can this type of interaction influence and model the way music can be written and performed?
I would like to explore and find out if it's possible to provide an environment where human creative expression along with programmed robots can yield new ways of interacting through sounds and gestures.
Until now, there have been few examples of this type of experiment, mostly due to technological limitations and cost. But recent advances in computer processing power and increasing access to new technologies make for fertile ground for such explorations.
The objective is to introduce new technologies for programming, composing music, and capturing human gestures through different physical interfaces, in an attempt to interpret human emotions and express them through sound.
Naturally, one of the first things I set out to do was to write a musical piece that I could perform alongside music robots. This exercise would surface the most basic considerations and issues in writing for physical musical machines, as well as in performing with them.
I was faced with many obstacles and problems that do not arise when playing with humans, such as mechanical latency, which translates into timing problems, and the question of how to implement dynamics on a machine.
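To make the latency problem concrete, here is a minimal sketch (my illustration, not a method from the text) of one common workaround: if a robot's actuation delay can be measured, scheduled note events can simply be shifted earlier by that amount so the sound lands on the beat. The latency value and event format below are hypothetical.

```python
# Hypothetical measured actuation delay of the robot, in seconds.
ROBOT_LATENCY_S = 0.120

def compensate(events, latency=ROBOT_LATENCY_S):
    """Shift each (time, pitch, velocity) event earlier by the robot's
    mechanical latency, clamping at zero so no event is scheduled in
    the past."""
    return [(max(0.0, t - latency), pitch, vel) for t, pitch, vel in events]

# A tiny example score: onset time in seconds, MIDI pitch, velocity.
score = [(0.0, 60, 100), (0.5, 64, 80), (1.0, 67, 60)]
adjusted = compensate(score)
```

Dynamics pose a similar mapping problem: a velocity value has no inherent meaning to an actuator, so some calibrated curve from velocity to strike force would be needed as well.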