Mulling over your post, I realized a parallel with our modern-day (actually, not so modern anymore) synthesizer models: FM, additive, subtractive, wavetable, etc. The keyboard instrument was such a game changer, and 18th/19th century engineers jumped in to push their own models and make their fortunes. Same with synth designers of the last few decades.
Down the road a decade or two (sooner?), robotic music engineers will be promoting their competing visions of how AI should make music: copycat (learned), template (learned style), fusion, random, iconoclast, etc.
Bonus points: will AI music algorithms of the future make choices about the style of synthesis they use/prefer? Will they stick to keyboard-centric models (as many AI music simulators do today), or will they favor string/fretted, wind, or percussive models? And, if keyboard-centric, will they subscribe to preferences for certain historic keyboard escapements, or come up with their own "Nannette Stein-Streicher Grand Action" diagrams?