With the release of the iPhone 8 and iPhone X, new technologies became available. What will facial recognition and control offer to my system? Of particular interest are the possibilities of this technology in low light, which is critical for dance. Could it be used to control synth parameters? To trigger samples? Will the device become even more integral to the performance? Time will tell, as developers look for ways to put this functionality to use.
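As a minimal sketch of the idea, the snippet below assumes a face-tracking system that reports a normalised coefficient between 0.0 and 1.0 (for example, how open the jaw is, or how raised a brow is) and maps it onto a synth filter cutoff and a MIDI control-change value. The function names and ranges are illustrative assumptions, not part of any particular device's API.

```python
import math

def coeff_to_cutoff(coeff, low_hz=200.0, high_hz=8000.0):
    """Map a hypothetical normalised face-tracking coefficient (0.0-1.0)
    onto a filter cutoff frequency using an exponential curve,
    so equal facial movement produces roughly equal musical change."""
    coeff = min(max(coeff, 0.0), 1.0)  # clamp to the valid range
    return low_hz * (high_hz / low_hz) ** coeff

def coeff_to_midi_cc(coeff):
    """Map the same coefficient onto a 7-bit MIDI CC value (0-127),
    which most software synths can receive directly."""
    return round(min(max(coeff, 0.0), 1.0) * 127)

# A neutral face (0.0) sits at the low end of the filter range;
# a fully raised coefficient (1.0) opens the filter completely.
print(coeff_to_cutoff(0.0))   # 200.0
print(coeff_to_cutoff(1.0))   # 8000.0
print(coeff_to_midi_cc(0.5))  # 64
```

In practice the coefficient stream would arrive many times per second, so some smoothing would likely be needed before the values reach the synth.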
In my professional practice as a musician, composer and educator, I have worked extensively with dance students to create original musical works for choreography. As rewarding as this has been, there has been limited input into the musical content from the dancers' perspective. At best, I may get an emotive description of what the dancers are going for, and in most cases the artist would prefer the musical content to be composed in isolation. There may be some review, but generally the dancer/choreographer will choreograph to the fixed composition. I have become increasingly interested in the dancer as a musician: musical content created in real time by the performer. This is not a new concept, and there are many examples of systems that allow dancers to control sonic media through the movement of their body within a defined space. The prohibitive factor in many of these systems is the requirement for both costly hardware and software. My research looks to provide a suite of tools for movement-controlled and composed sonic media using readily available and cost-effective solutions.
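To make the idea of movement within a defined space concrete, here is a small illustrative sketch. It assumes a tracking system that reports a dancer's position in metres within a rectangular performance area, and maps that position to a stereo pan and a gain that falls away as the dancer moves upstage. The space dimensions and mapping are assumptions for the sake of the example, not a description of any specific system.

```python
def position_to_pan_gain(x, y, width=5.0, depth=5.0):
    """Map a dancer's position in a hypothetical performance space
    (x across the stage, y upstage from the audience, in metres)
    to a stereo pan (-1.0 = hard left, 1.0 = hard right) and a
    gain (0.0-1.0) that decreases with distance from the front."""
    pan = (min(max(x, 0.0), width) / width) * 2.0 - 1.0
    gain = 1.0 - min(max(y, 0.0), depth) / depth
    return pan, gain

# Centre stage, downstage: centred pan, full volume.
print(position_to_pan_gain(2.5, 0.0))  # (0.0, 1.0)
# Far stage right, fully upstage: hard right, silent.
print(position_to_pan_gain(5.0, 5.0))  # (1.0, 0.0)
```

A real system would feed these values continuously to an audio engine (for example over OSC or MIDI), but the core design question stays the same: which spatial dimensions map to which sonic parameters.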