Those tools affect the audible range (on purpose).
It appears we have different definitions of "audible range". Let me try explaining once again what I mean, from another angle. Apologies if you already know all of this - what follows could still be interesting to others.
Imagine a PCM file at 192 kHz containing the repeating four-value pattern [0, +Max, 0, -Max, ...]. Let's say this sequence is 2,000 samples long. What would you hear when it is played? Obviously, this is a representation of a 192,000/4 = 48,000 Hz (48 kHz) sinusoid. Even if your audio chain could reproduce it, you wouldn't be able to hear it. Clearly, this signal, lasting slightly over 10 milliseconds, is outside the human audible range.
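A minimal numpy sketch of this thought experiment (sample values and lengths as in the description above; variable names are mine) confirms that the dominant frequency of the repeating pattern is indeed 48 kHz:

```python
import numpy as np

fs = 192_000                       # sample rate, Hz
pattern = [0.0, 1.0, 0.0, -1.0]    # one period of the repeating sequence
signal = np.array(pattern * 500)   # 2,000 samples total, about 10.4 ms

# Find the dominant frequency via the spectrum: it is fs/4 = 48 kHz,
# well above the roughly 20 kHz limit of human hearing.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)  # 48000.0
```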
Let's start removing samples one by one from the tail of the sequence, and play the signal again. At some point you'd start hearing clicks, and by the time only three samples remain [0, +Max, 0], we'd arrive at a signal that is very clearly within the human audible range, while being derived by a simple procedure from a signal outside the audible range.
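The spectral reason for the click is easy to see with a quick numpy sketch (my own illustration, not from any of the cited papers): the three remaining samples form a single-sample impulse, whose spectrum is essentially flat, so it carries energy across the whole audible band as well.

```python
import numpy as np

fs = 192_000
pulse = np.array([0.0, 1.0, 0.0])  # the three remaining samples

# Zero-pad for frequency resolution and inspect the spectrum:
# an isolated impulse has equal magnitude at every frequency,
# including everything below 20 kHz.
n = 4096
spectrum = np.abs(np.fft.rfft(pulse, n=n))
freqs = np.fft.rfftfreq(n, d=1 / fs)
audible = spectrum[freqs <= 20_000]
print(audible.min(), audible.max())  # both ~1.0: flat in the audible band
```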
Some people argued - and this is partially valid - that what we actually hear are oscillations in the transducer(s), excited by the pulse. To check for that, very accurate and highly damped transducers were substituted for regular ones, reproducing the pulse without subsequent ringing. The click was still heard.
The next argument was that the oscillating device we actually hear is the conglomeration of bones in the middle ear. Experiments were conducted (on rodents) in which the bones were removed and the cochlea was excited with a pulse directly. The measured neuro-physiological effect was still consistent with the animal hearing a click.
The next argument after that was that it is the cochlea itself that vibrates after being excited with a pulse, and that's what we actually hear. This turned out to be true, yet only partially. The cochlea does get excited by the pulse, yet it doesn't settle into a vibration pattern prolonged enough to be detected by the regular frequency-sensitive neural machinery.
What was ultimately found is described in the peer-reviewed papers I referenced in this thread, and in many other papers: there is a different, anatomically distinct neuro-physiological mechanism, specifically tuned to detect very short transients, which are out of audible range of the frequency-sensitive neural machinery.
The evolutionary value of this mechanism is clear: an animal able to detect a very faint and very short transient - for instance, a dry leaf breaking under the foot of an approaching predator still hidden from view - has a better chance of surviving than an animal only capable of detecting relatively long-running harmonic oscillations.
With the transients removed, or morphed into harmonic waveforms and/or noise, music subjectively changes - sometimes for the better, actually, but in some genres hugely for the worse. For instance, removal of the sound of a nail hitting a close-miked acoustic guitar string is often appreciated. Removal of sharp attacks in rock music usually is not.
Deliberately misusing click-removal plugins can demonstrate this effect. Since the effect depends on your audio reproduction chain and on the particular piece of music, it is not a universally applicable ABX test, but rather a "slowly turn the knob until you suddenly hear a qualitative change" personal experience.