I can see that future generations are going to think I'm a boomer for preferring the performances of real actors to AI slop.

The music industry already went through this with AutoTune and we know how that turned out.

I don't get the autotune argument. It's like saying we shouldn't use electronic instruments because they're not real, or digital audio instruments because they're not real, etc.

It's just a way to get a different kind of sound. It won't make good tracks for you.

I think AI is starting to verge on making actually good music. The latest Suno release is wild.

An example here: https://v.redd.it/fqlqrgumo5rf1

I find this one interesting because Rap has classically been difficult for these models (I think because it's technically difficult to find the right rhythms and flow for a given set of lyrics).

I wonder what the prompt was for that.

The instrumental part is quite interesting but the lyrics/vocals...

it's just AI slop, like the median

like if you just put a bunch of words together and shipped that. Quantity was never what people wanted imo.

It is impressive if the instrumental track was made with just some prompts though

I actually think the vocals from ~2:00-~2:35 are pretty impressive there. It's wild to me that the models can play with tempo like that.

I've been listening to this across a variety of genres though, maybe these lyrics and vocals are more to your taste:

(similar to Opeth) https://suno.com/song/9ab8da05-c3f2-412d-80b4-c7d0b3ae840f?s...

(indie rock) https://suno.com/song/756dd139-4cba-4e40-b29c-03ace1c69673

I don't know, but it doesn't impress me one bit? Like, I'm not trying to hate, but it just seems like the model is given the track and then tries to follow it by matching words and spitting them out, as if it could talk about making a sandwich over some epic track and it'd sound the same?

Like, LLMs are fantastic at generating patterns, so words that match, and the same with images, etc.

But there's not much uniqueness? It's "impressive" in a savant-like way that it can come up with rap at all, but it doesn't really produce something I'd want to listen to..?

I listened to the metal thing and it's kind of the same?

It's very high fidelity, and the quality of the drums etc. is quite impressive, but the vocals seem off? It's like a poem being read by TTS and then transformed into a "metal voice"

and it's kind of just an averaging of "metal music" into a track, like stock photos; very formulaic

not to mention that many metal bands do formulaic stuff too, especially if they have an identifying kind of hit

To me this is cool tech, but I wouldn't listen to it

I've listened to music for a long time, though I don't listen to a wide variety today. With pop, for example, it can be very complex or very simple, but average or "almost" really won't make a good song. It can seem simple in hindsight, but blood, sweat, and tears probably went into such songs, or creative energy that might never come back as strong.

just my raw thoughts though. It could be me being biased knowing it's AI, but I don't think so. I think my brain has adapted to the point where I can feel if something is AI because it always seems super "average"/mid?

I'd love to see a blind study comparing a wide spectrum of these AI tracks to lesser known real artists (so the participants don't just recognize the songs) to see whether people can tell or if knowledge of the source biases them. I'm genuinely curious as to the results.

I don't think people would think anything strange of a lot of these tracks if they just randomly heard them on the radio.

When you listen to music that has been AutoTuned, you don't know if the singer can actually... sing. If you put them in a room and asked them to sing a song without artificial aid, would you actually enjoy their performance or not? You don't know!

This marked a divergence from thousands of years of vocal performances where singing ability and enjoyment of the music were one and the same.

AutoTune was the first slop, and the general population seems to like it.

The arguments against auto-tune are typically different, since it's obvious to anyone that autotune can't make you sound like a soprano if you're nowhere near one - so skill is still required.

The problem with autotune is more that it removes a lot of nuance from singers' voices; it's like listening to MIDI instead of a real piano. This is, however, something that can be improved. Synthesizers can produce wonderful musical effects, and there's a lot of highly virtuosic music made on synthesizers (including voice distortions, technically pretty similar to autotune) for those who are into it. Prog rock, for example, was all about using new technology in complex and extremely interesting ways. Maybe more interestingly for your particular objection, you can look at early electronic music, say Vangelis or Isao Tomita or Kraftwerk. For at least parts of their songs, they could have just programmed their synthesizers ahead of time and played concerts without even being on stage - but that doesn't take away from the music itself.

Ultimately, if the music sounds good and elicits some feelings and thoughts, it's good music. Whether the musicians can reproduce it live or it's done 90% in a studio doesn't really matter here. Of course, it does mean it may not be worth going to a live show from some particular performer, and it also means that the performer is not necessarily the most relevant artist - the person programming the "auto"tune should at least be considered part of the band.
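To make the "removes nuance" point concrete, here's a rough sketch in Python (with made-up numbers and a hypothetical correct_pitch helper, not any particular product's algorithm): snapping every instant of a sung pitch contour to the nearest equal-tempered semitone flattens the vibrato and drift that give a voice its character.

    import numpy as np

    def hz_to_midi(f):
        # continuous pitch in Hz -> fractional MIDI note number
        return 69 + 12 * np.log2(f / 440.0)

    def midi_to_hz(m):
        return 440.0 * 2 ** ((m - 69) / 12.0)

    def correct_pitch(contour_hz, strength=1.0):
        # pull each instantaneous pitch toward the nearest semitone;
        # strength=1.0 is the hard snap, lower values keep some nuance
        midi = hz_to_midi(contour_hz)
        target = np.round(midi)
        return midi_to_hz(midi + strength * (target - midi))

    # a synthetic 1-second A4 with 5 Hz vibrato and a little upward drift
    t = np.linspace(0, 1, 1000)
    sung = 440 * 2 ** ((0.3 * np.sin(2 * np.pi * 5 * t) + 0.1 * t) / 12)

    hard = correct_pitch(sung, strength=1.0)
    print(f"original:  {sung.min():.1f}-{sung.max():.1f} Hz")  # wobbles around 440
    print(f"corrected: {hard.min():.1f}-{hard.max():.1f} Hz")  # pinned to exactly 440

Softer settings (strength below 1.0) keep more of the original movement, which presumably is closer to how it's used on most mainstream tracks where listeners don't even notice it.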

That's like saying movies are not good because they're not live-action-only performances

For me the biggest thing is actually the production; there are usually many people involved, and sometimes real magic gets made, and that magic might not even contain any vocals at first

like what is acceptable music? only raw vocals & acoustic instruments?

> The music industry already went through this with AutoTune and we know how that turned out.

Yeah, it turned out that almost all mainstream tracks nowadays have post-processing on vocals (the extent varying between genres and styles).

> The music industry already went through this with AutoTune and we know how that turned out.

They use it, everyone uses it, and it got better to the point where most people don't know it's used. Ever heard of Melodyne? Well, AI made it even better.

And then there have been about 20 years of people using it even as their style of music, notably in hip hop, reggaeton, urbano, country, etc.

Boomers like to think it was just an annoying fad in 2008-2011 or something, but it never went away; now everyone uses it, whether it's obvious or not

I don't understand why you think you'll be able to tell that far from now.