For me, the point of making music is making it myself. If I want to have something done for me, I could just play someone else's record and pretend I made it.
This is the age-old parochialism in music. "Oh, he's just in a cover band, he doesn't write anything" / "Oh, she's just a composer, she can't even play the stuff she writes" / "Oh, he writes and plays his own stuff but knows fuck all about theory so it's not real music" / etc.
Me, I'm having a blast with claude code, MCP, and Ableton. I'm directing harmony and asking for arrangements and variations in rhythm, mixing, and production. Don't know if that counts as "making it myself", but then I was writing music before I could actually play any instrument at all, so :shrug:
Previous generations might have said the same thing about Ableton itself, vs playing a physical instrument. In that regard, AI might become just another power tool for creative expression.
I’ve always said that the more divergent the input is from the resulting output, the less personal expression you have. For me, in order of moving away from meaningful control in generative models, it goes: “text → code,” “text → picture,” and, at the very bottom, “text → music.”
For me personally, music composition begins and ends with the motif - the melody itself. It’s the part I enjoy the most, and it’s also the part I have the most individual control over since I can sing.
Everybody makes music differently, but if you lack the ability to play an instrument and you also can’t whistle or sing, it’s hard for me to imagine how you’d have any meaningful control over the melody.
How would a non‑musician express an actual melody that they came up with (beyond simple things like instrumentation and general “feelings”) in text? RED RED RED BLUE. (Sorry couldn't resist a Mission Hill reference here.)
With all that out of the way, there's still lots of room for using AI in music. I’ve used it to take some of my existing songs, mostly pianistic in nature, and swap out instrumentation and arrangements just to play around with different soundscapes. It's like Band-in-a-Box on steroids.
Agree to some extent. At some point though we jump the thin line between creative expression and… magic?
Like if at some point I can just say “Generate a song similar to Smooth Criminal, different enough to not trigger copyright claims” and it just works, and everyone loves it… well is that creative thinking?
I will caveat my first comment by also noting that I am well versed in computer music history, and read many many papers in CMJ[1] and elsewhere about generative and automatic composition tools such as Emily Howell[2]. I do NOT have a problem with generative, algorithmic and automatic composition in this sense, as an extension of the creative intentions of the human composer, in the right context. See also Autechre[3] for what can be done with Markov chains and good taste. What we are discussing here is the musical equivalent of a dishwasher.
[1] http://www.computermusicjournal.org/
[2] https://en.wikipedia.org/wiki/David_Cope#Emily_Howell
[3] http://autechre.ws/
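For readers unfamiliar with the technique mentioned above, a first-order Markov chain over pitches is about the simplest form of this kind of algorithmic composition: learn which note tends to follow which from a source melody, then walk the transition table. Here's a minimal illustrative sketch in Python; the note names and seed melody are invented for illustration, not drawn from any artist mentioned here:

```python
import random
from collections import defaultdict

def train(melody):
    """Build a first-order transition table: note -> list of notes that followed it."""
    table = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, rng=random):
    """Walk the chain from a starting note, stopping early at a dead end."""
    notes = [start]
    for _ in range(length - 1):
        choices = table.get(notes[-1])
        if not choices:
            break
        notes.append(rng.choice(choices))
    return notes

# Invented seed melody (scientific pitch notation).
seed = ["C4", "D4", "E4", "G4", "E4", "D4", "C4", "E4", "G4", "A4", "G4", "E4"]
table = train(seed)
line = generate(table, "C4", 16)
```

The "good taste" part is everything this sketch omits: the composer chooses the training material, the chain order, and what to keep, which is why the human remains the author.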
Addendum: I would highly recommend the Margaret Boden book referenced in the wiki on David Cope/Emily Howell, which is an absolutely fascinating read and was incredibly far-sighted in its enquiries on this topic.
Can I ask what the specific markers / qualifiers are for you to consider (let's call them) 'classical' generative and algorithmic techniques fair game in creative composition, but LLM agent based techniques not so?
To me, it seems like the "do it for me" aspect is similar, just at different levels of abstraction.
Firstly, they all came to the use of those techniques after having been through years of work the 'hard way', often being able to play to a conservatoire standard, and had a very extensive grounding in the tradition that came with that. Then they owned* or designed the thing they were asking to 'do it for me' and could modify it at their discretion, effectively making it an integral element of the composition. The prior training was crucial in getting anything good out of any of it IMO (high level reflection based on canon knowledge and deeply considered personal sensibility, etc.)
* I suppose in the early days, running on a mainframe would belie the definition of ownership per se, as it required access and was limited to that specific machine/institution, but then we are talking about a time when personal computing wasn't available.
The main difference is tweakability: With classical generative and algorithmic composition, the human can change parameters in real time and more closely guide the shape of the piece.
This as well. Most 'classical' algorithmic music had an element of expressiveness allowed to the composer in the moment.
I get why people make gut statements like this, and to me something does feel different about AI.
But I realize I have not seen any criticisms of AI generated music that are meaningfully different from criticisms I've heard of other advances/changes in music technology, whether performance or recording.
Sampling, scratching, drum machines, autotune, electric guitars even.
Well, in "traditional" music production every individual component of a song has the creative intent of the artist in it. With AI you have no idea if there is any intent or if it's just something an LLM spat out.
If all you care about is the raw sound file created and you don't care about the connection you might feel with the artist behind it then maybe intent isn't relevant to you.
Welcome to the era of instant gratification.