There’s a lot to it, but one big change is the reliance, in the most popular genres, on digital instruments to create sounds. These instruments are almost entirely programmed, not performed in real time. It turns out that digital instruments are not very “expressive” as performance instruments.
Absolutely untrue. The first professional digital synth to sell in the thousands was the Yamaha DX7. It uses FM synthesis, which enables very large changes in the harmonic structure during the course of a note, as well as changes with velocity (how hard a key is struck) and the modulation wheel (whose position affects a note that has already started), and it came with a breath controller (again, controlling notes that have already been triggered). Many professional keyboards also have "aftertouch," which means the note is modulated (changed) depending on how hard the keys are pressed after the notes have been triggered.
It's possible with digital synths to have an essentially arbitrarily complicated system of modulation which allows for as much expressiveness as desired.
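To make that concrete, here is a minimal two-operator FM sketch in Python. It is only an illustration, not the DX7's actual design (the DX7 uses six operators and, strictly, phase modulation), and the function and parameter names are my own. Velocity scales a time-varying modulation index, so a harder "keystroke" starts brighter and the spectrum thins out as the note decays:

```python
import math

def fm_note(freq, dur, sr=44100, velocity=1.0, ratio=2.0, index_peak=5.0):
    """Two-operator FM sketch: the carrier's phase is modulated by a sine
    whose depth (the modulation index) decays over the note, so the
    harmonic content evolves during the note. Velocity scales the
    starting index, so harder notes sound brighter."""
    n = int(sr * dur)
    out = []
    for i in range(n):
        t = i / sr
        env = math.exp(-3.0 * t)             # shared amplitude/index envelope
        index = index_peak * velocity * env  # time-varying modulation index
        mod = index * math.sin(2 * math.pi * ratio * freq * t)
        out.append(env * math.sin(2 * math.pi * freq * t + mod))
    return out

samples = fm_note(220.0, 0.5)
```

Hook the index up to aftertouch or a breath controller instead of a fixed envelope and you have exactly the kind of continuous, post-trigger expressiveness described above.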
It is true that the basic controller interface (and feedback, or lack thereof) hasn't really progressed beyond the piano-style keyboard (plus knobs). Different kinds of controller do exist, for example the Haken Continuum, but the market for "different" controllers has tended to be, shall we say, modest. The breath controller that came with the DX7 wasn't used much, either.
(Caveat: I admit I'm not entirely up to speed with the market, e.g. there are various controllers from the likes of Native Instruments.)
Digital instruments are also abstract themselves. They are embodied in matter, but there is no “analog” between the motion of the molecules in the instrument (a computer) and the sound that comes out of the speakers. They don’t actually make sound, they make numbers. In many cases, when listening to electronic music you are hearing first-generation sound production.
In response to user input, they generate PCM streams which are a digitally coded form of the sound, yes.
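"Numbers" here just means quantized sample values. As a hedged sketch (the helper name is my own), converting floating-point samples into the signed 16-bit little-endian PCM bytes that eventually feed a DAC looks like this:

```python
import math
import struct

def to_pcm16(samples):
    """Quantize floats in [-1.0, 1.0] to little-endian signed 16-bit PCM,
    the raw byte stream handed on toward a DAC or written into a WAV file."""
    clipped = [max(-1.0, min(1.0, s)) for s in samples]
    return struct.pack("<%dh" % len(clipped),
                       *(int(s * 32767) for s in clipped))

sr = 8000
sine = [math.sin(2 * math.pi * 440 * i / sr) for i in range(sr)]
pcm = to_pcm16(sine)  # 2 bytes per sample
```

The "sound" only becomes air pressure at the very last step, in the converter and speaker.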
BTW, synthesis actually began in the analogue world. (The old modular Moogs and the like barely kept in tune; some have said they were a real pain to get and keep in tune. So much for "expressive"!)
Also, "programming" can be done at different stages: when creating a sound itself (e.g. how it evolves and how it responds to modulation commands), or when inputting notes into a sequencer, where modulation can be added to control the sound as well.
Anyway, this is at risk of completely losing the plot. There have definitely been changes to the way music is made, but "studio" recordings are not "live" recordings. Every trick in the book has always been used just as soon as it was possible to do so: compiling multiple vocal takes, double-tracking vocals, erasing the start of a bass guitar note to get it in time with the kick, and, long before Auto-Tune, adjusting tape speed to get vocals in tune!
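The tape-speed trick has a straightforward digital analogue: resample the recording. Pitch scales directly with playback speed, so a ratio of 2**(1/12) (about a 6% speed-up) raises a vocal by one semitone. A sketch under those assumptions (names are my own; linear interpolation for brevity):

```python
import math

def vari_speed(samples, ratio):
    """'Tape speed' trick: read the recording back at `ratio` times the
    original speed, linearly interpolating between samples. Pitch scales
    with speed, so ratio = 2 ** (1 / 12) is one semitone up (and, just
    like real tape, the result is correspondingly shorter)."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

sr = 8000
tone = [math.sin(2 * math.pi * 220 * i / sr) for i in range(sr)]
semitone_up = vari_speed(tone, 2 ** (1 / 12))  # shorter and higher-pitched
```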
For example, Blondie apparently were NOT happy with producer Mike Chapman when recording "Parallel Lines," as never-ending takes were required to get everything perfectly timed.
As a quick reference to what I’m talking about, compare any modern pop to the greats of classic rock. Apples and oranges.
"Pop" music, at least by my definition, is not "rock" (i.e. guitar band) music.
e.g.
This is pop music...
This is not:
OK, so I've cited "bubblegum" pop vs. "progressive" rock. But the point remains: Teenage girls don't buy Pink Floyd records and sing along to them with a hairbrush... wrong market!