With the Artificial Intelligence revolution well underway, numerous and varied machine learning techniques are already in general use. An unexpected offshoot of one branch of this research is ‘Google Deepdream’, which runs Google’s image-classification networks in reverse: rather than training the network, it iteratively adjusts an input image so as to accentuate whatever features the network has learned to recognise, be they animals, faces or any other predetermined subject, much as humans see shapes in clouds.
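The core of that process can be illustrated with a deliberately tiny sketch. In DeepDream the “feature detector” is a layer of a deep convolutional network; here it is stood in for by a single hypothetical linear filter `w`, which keeps the gradient trivial to write by hand. The principle is the same: hold the detector fixed and perform gradient ascent on the input itself.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # frozen "learned feature" (stand-in for a network layer)
x = rng.normal(size=64)   # the input image, flattened to a vector for simplicity

def activation(v):
    # Detector response; in DeepDream this would be a layer activation.
    return w @ v

before = activation(x)
for _ in range(50):
    grad = w              # d(activation)/dx for a linear detector
    x = x + 0.1 * grad    # ascend: alter the *input* to amplify the feature
after = activation(x)
```

After the loop the detector fires far more strongly on the altered input (`after > before`); applied to every filter of a real network, this is what fills DeepDream images with half-recognised animals and faces.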
My project intends to take inspiration from the Deepdream paradigm and apply a similar technique to musical composition using spectrograms (visual representations of an audio signal). By training an AI system to recognise musical components in these spectrograms, such as instrumentation, timbre and potentially genre signifiers, we will be able to alter existing spectrograms so as to blend these musical features in different ways, or to generate new pieces altogether.
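As a minimal sketch of the spectrogram representation itself: a spectrogram is built from a short-time Fourier transform, slicing the signal into overlapping windowed frames and taking the magnitude of each frame's FFT. The frame size and hop length below are illustrative choices, not values from the project.

```python
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Magnitude spectrogram: rows are time frames, columns are frequency bins."""
    window = np.hanning(frame_size)
    frames = [signal[i:i + frame_size] * window
              for i in range(0, len(signal) - frame_size + 1, hop)]
    # rfft keeps only the positive-frequency bins (frame_size // 2 + 1 of them).
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

# Synthetic test tone: one second of a 440 Hz sine at an 8 kHz sample rate.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
peak_hz = spec.mean(axis=0).argmax() * sr / 256  # frequency of the loudest bin
```

Each row of `spec` is a snapshot of the spectrum at one moment, so instrumentation and timbre appear as visual textures, which is what makes an image-oriented technique like Deepdream applicable to audio at all.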
Beyond purely compositional aims, this technique could be applied to AI speech recognition and reproduction to help better understand accents and inflections, or to the creation of immersive apps that blend existing musical pieces or styles together in fun and novel ways.
Supervision Team: Professor Eduardo Miranda (Director of Studies), Dr Alexis Kirke and Dr Edward Braund