Neural Generation of Structured and Coherent Music
Samples generated from RNN-based language models rarely exhibit structure or coherence. In the music domain, we posit that directly modeling repetition, a fundamental property of music, will improve a model's ability to generate coherent and mellifluous music. To this end, we present results from novel neural architectures trained on both synthetic and real music data.
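For concreteness, below is a minimal sketch of the kind of RNN-based language model the abstract critiques: an LSTM trained over a symbolic music token stream, sampled autoregressively one token at a time. Every name here (MusicLM, sample, the 128-token MIDI-pitch vocabulary) is an illustrative assumption, not the paper's architecture; the point is that nothing in this setup models repetition explicitly, so long-range structure must emerge implicitly from the recurrent state.

```python
# Minimal autoregressive LSTM language model over music tokens.
# Assumed encoding: one token per MIDI pitch (128 tokens); all names
# are hypothetical and for illustration only.
import torch
import torch.nn as nn

VOCAB_SIZE = 128  # assumed: one token per MIDI pitch


class MusicLM(nn.Module):
    def __init__(self, vocab_size=VOCAB_SIZE, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        # tokens: (batch, time) integer token ids
        x = self.embed(tokens)
        out, state = self.lstm(x, state)
        return self.head(out), state  # logits over the next token


@torch.no_grad()
def sample(model, prime, length=64, temperature=1.0):
    """Autoregressively sample `length` tokens after a priming sequence."""
    model.eval()
    logits, state = model(prime)  # consume the priming sequence
    out = []
    for _ in range(length):
        # Sample the next token from the softmax over the last step's logits.
        probs = torch.softmax(logits[:, -1] / temperature, dim=-1)
        tok = torch.multinomial(probs, 1)  # shape (1, 1)
        out.append(tok.item())
        logits, state = model(tok, state)  # feed the sample back in
    return out


model = MusicLM()
prime = torch.randint(0, VOCAB_SIZE, (1, 8))  # random prime, for illustration
print(sample(model, prime, length=16))
```

Because each sampled token depends only on the recurrent state summarizing the past, such a baseline has no explicit mechanism for repeating earlier material, which motivates the repetition-aware architectures the abstract proposes.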