Learning and generation of slow sequences: an application to music composition

Human brains handle sequences with temporal dependencies on a broad range of timescales, many of which are several orders of magnitude longer than neuronal timescales. Here we introduce an artificial intelligence that learns and produces the complex structure of music, a specific type of slow sequence. Our model combines a separation of fundamental features with multi-layer networks of gated recurrent units. We separate the information contained in monophonic melodies into rhythm and melody features. The model processes these features in parallel while modelling the relation between them, effectively splitting the joint distribution over note duration and pitch into conditional probabilities. With this approach, we automatically learned the temporal dependencies inherent in a large corpus of Irish folk songs. The extracted structural rules can be used to generate interesting complete melodies, or to suggest continuations of melody fragments that are coherent with the characteristics of the fragments themselves.
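The factorization described above can be sketched in a few lines. This is a toy illustration only: the probability tables below are invented for the example, whereas the actual model parameterizes these distributions with gated recurrent units trained on the folk-song corpus.

```python
# Toy sketch of the factorization
#   p(duration, pitch) = p(duration) * p(pitch | duration)
# All probability values here are made up for illustration.

# Marginal distribution over note durations (in beats).
p_duration = {0.5: 0.4, 1.0: 0.5, 2.0: 0.1}

# Conditional distribution over pitch (MIDI note numbers) given duration.
p_pitch_given_duration = {
    0.5: {60: 0.5, 62: 0.3, 64: 0.2},
    1.0: {60: 0.3, 62: 0.4, 64: 0.3},
    2.0: {60: 0.2, 62: 0.2, 64: 0.6},
}

def joint(duration, pitch):
    """Joint probability of a single (duration, pitch) note event."""
    return p_duration[duration] * p_pitch_given_duration[duration][pitch]

# Because each conditional is normalized, the conditionals recover a
# properly normalized joint distribution over note events:
total = sum(joint(d, p) for d in p_duration for p in p_pitch_given_duration[d])
print(round(total, 10))  # → 1.0
```

In the model, the same chain-rule split means the rhythm network can be sampled first and its output fed to the pitch network, so the two feature streams are generated in parallel yet remain statistically coupled.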

Gerstner, Wulfram
Presented at:
Lemanic Neuroscience Annual Meeting 2016, Les Diablerets, Switzerland, September 2-3, 2016 (poster).

 Record created 2017-02-01, last modified 2019-03-17
