Instead of doing the power math, it is more intuitive to do the gain math.
0 dB is 2Vrms on my DAC
+ 86 dB is the sensitivity of Revel towers at 2V
+ 26 dB is the gain of my amp for RCA input
+ 6 dB for stereo speakers playing in phase
- 6 dB for sitting at 2 meters
The result is that full blast on my system is 40 V at the speakers, which on modern recordings from my CD player works out to about 114 dB in-room (at 1 m/speaker) = 50 W peak.
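The dB chain above can be sanity-checked in a few lines of Python. The sensitivity, gains, and distance terms are the post's own numbers; the exact total depends on rounding and load-impedance assumptions, so treat this as a back-of-envelope sketch:

```python
# Voltage at the amp output: 2 Vrms in, +26 dB of voltage gain
dac_v = 2.0
amp_gain_db = 26.0
amp_out_v = dac_v * 10 ** (amp_gain_db / 20)   # about 39.9 Vrms, i.e. "40 V"

# SPL stack, all terms in dB (numbers from the post)
stack = {
    "sensitivity @ 2 V": 86,
    "amp gain (RCA input)": 26,
    "stereo pair in phase": 6,
    "2 m listening distance": -6,
}
in_room_spl = sum(stack.values())  # 112 dB, in the ballpark of the ~114 dB quoted

print(f"amp output: {amp_out_v:.1f} V, in-room peak: {in_room_spl} dB SPL")
```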
I absolutely use -40 dB of attenuation, which gets me into the 60 dB range for background music, TV, etc. And I don't even have particularly sensitive speakers, nor do I use my amp's high-gain switch.
A 60 dB average has peaks between 75 dB and 80 dB, so you would need around 35 dB of attenuation.
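The attenuation figure is just the difference between the full-blast peak level and where you want the peaks to land. A quick check, using the ~114 dB full-blast peak figure from the gain math above:

```python
full_blast_peak = 114          # dB SPL at full volume, from the gain stack
for target_peak in (75, 80):   # peak levels that go with a ~60 dB average
    needed = full_blast_peak - target_peak
    print(f"peaks at {target_peak} dB SPL -> {needed} dB of attenuation")
# 39 dB and 34 dB respectively, i.e. "around 35 dB"
```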
But that is for a DAC with 2 V out, which may have an S/N ratio (white or even shaped noise) on the order of 100 dB to 120 dB or so.
In this case there is no preamp, so the only added noise is the self-noise of the power amp, which is likely to be slightly worse than that of the source.
This means that when you reach 114 dB at full blast, the S/N ratio will be around 100 dB or so, putting the noise floor at 14 dB SPL. 100 dB is loud, and 14 dB is inaudible around those (100 dB peak) levels, as the usable dynamic range of human hearing is around 70 dB.
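In other words, the acoustic noise floor is simply the peak SPL minus the system S/N ratio. A minimal sketch, using the numbers from the post:

```python
peak_spl = 114   # dB SPL at full blast (from the gain stack)
snr_db = 100     # assumed S/N of the source + power amp chain

noise_floor_spl = peak_spl - snr_db
print(f"noise floor: {noise_floor_spl} dB SPL")  # 14 dB SPL, below audibility
```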
When the volume control is turned down -35 dB (background music), that S/N ratio will have dropped and is now determined solely by the self-noise of the power amp.
When you hear no 'noise' from your speakers (at 2m distance) that noise floor is below your hearing threshold. It is not a numbers game anymore.
Here's the thing: when you can hear some faint noise with your ear close to the speaker (say 10 cm), then at the listening spot it will be roughly 30 dB down.
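For an ideal point source, the inverse-square law puts the 10 cm to 2 m drop at 20·log10(20) ≈ 26 dB; a real speaker is not a point source at 10 cm, so the ~30 dB working figure is in the same ballpark. A sketch of the calculation:

```python
import math

def distance_drop_db(near_m, far_m):
    # Inverse-square law for a point source: 20*log10(distance ratio)
    return 20 * math.log10(far_m / near_m)

drop = distance_drop_db(0.10, 2.0)
print(f"drop from 10 cm to 2 m: {drop:.1f} dB")  # about 26 dB quieter
```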
Things differ when, as the OP mentioned, a vinyl (phono) pre is used.
Most phono pres put out around 400 mV (at 5 mV in @ 1 kHz), which is -14 dB relative to 2 V.
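That -14 dB figure is just the voltage ratio expressed in dB:

```python
import math

phono_out_v = 0.4   # 400 mV from a typical phono pre (5 mV in @ 1 kHz)
dac_out_v = 2.0     # the 2 V DAC used as the 0 dB reference

rel_db = 20 * math.log10(phono_out_v / dac_out_v)
print(f"phono output relative to DAC: {rel_db:.1f} dB")  # about -14 dB
```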
The S/N ratio for a good phono pre is around 72 dB. That is a rather high noise level, but it is mainly low-frequency (pink) noise from the RIAA feedback, so not as objectionable. If we looked only at 1 kHz, it would be around 90 dB.
But your phono pre would put out 14 dB less, so your attenuation would not be -35 dB but around -20 dB.
The S/N ratio thus would be 72 dB, as it is determined by the first stage.
Now... there is studio noise and vinyl surface noise, which will be higher than the noise of any decent pre. So the noise of the phono pre, while higher than that of the CDP, is nothing compared to that noise and thus moot, just as it is moot for CD, where the noise in the recording is already higher than that of the player.
Just drop the needle in a blank groove and suddenly you hear noise where, with a good phono pre (at 'normal' to 'somewhat loud' volume control levels), you otherwise won't.
At full blast you would now be getting 95 dB peaks (with the volume wide open), which isn't that loud, as the average would be around 80 dB; the noise level would be 8 dB SPL, and mainly low-frequency, so not audible to begin with.
The problem here thus isn't the self-noise of the second gain stage (the power amp), nor that of a good phono pre, but rather the surface noise and the noise in the recording.
But S/N-ratio-wise, the phono pre definitely determines the measured S/N ratio. In practice (with a good phono pre) it is good enough, and the audible S/N is set by the vinyl noise, and for CD by the noise in the recording itself.
In the end, the noise level you actually hear is what matters, and that depends on perception (which is also age dependent), on speaker sensitivity, and on listening distance.
Beyond a certain (still measurable) level it all becomes irrelevant and that's what counts.
The numbers game is fun, but this is audio, and perception is the arbiter.