So let's review your claim:
Unheard frequencies cause effects in the audible range. But when we record and play back what's in the audible range, the effect is gone.
Getting kind of tired of *Audio Science* Review participants promoting such nonsense.
A vinyl record, for example, can record up to about 15–17 kHz, because the tip of the cutting stylus is not small enough to make a clear 'drawing' of frequencies above this. But it does record harmonics of higher frequencies as periodic small notches (which is what they become when cut into the surface of a vinyl master) that, when played back, add substance to the sound.
Tape records these higher frequencies naturally and adds them back during playback.
Harmonics occur in a sound wave and in an audio signal in different fashions (although both are explainable with the same principles). A wave occurs because it carries a certain amplitude, sufficient to create the next cycle of itself in the longitudinal direction of propagation. Waves do not isolate their energy in one portion of the medium; they propagate in all directions. A secondary oscillation arising from a central one carries some of the energy of the main wavelength and has a very similar frequency. In the space of the medium between the two, if they carry enough amplitude, some higher harmonics will form. Most sound waves indeed carry higher harmonics; lower harmonics, always. The differences and correlations in phase and amplitude between any fundamental and its harmonics are very specific, and no current digital system is able to capture them, although this can be remedied to a point using higher sampling rates and more bit depth.
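To make the phase-and-amplitude relationship concrete, here is a minimal Python sketch (the sample rate, harmonic numbers, and amplitudes are arbitrary choices of mine, not measurements): two tones built from the same fundamental and the same harmonic amplitudes, but with the third harmonic shifted in phase, produce visibly different waveforms even though their frequency content is identical.

```python
import math

def tone(freq_hz, harmonics, sample_rate=48000, duration_s=0.01):
    """Synthesize a tone as a fundamental plus harmonics.

    harmonics: list of (multiple, amplitude, phase_rad) tuples,
    e.g. (3, 0.3, 0.0) is the 3rd harmonic at 30% amplitude.
    """
    n = int(sample_rate * duration_s)
    samples = []
    for i in range(n):
        t = i / sample_rate
        s = sum(a * math.sin(2 * math.pi * freq_hz * m * t + p)
                for (m, a, p) in harmonics)
        samples.append(s)
    return samples

# Same fundamental (440 Hz) and same harmonic amplitudes, but the
# 3rd harmonic's phase differs by 90 degrees between the two takes:
a = tone(440, [(1, 1.0, 0.0), (3, 0.3, 0.0)])
b = tone(440, [(1, 1.0, 0.0), (3, 0.3, math.pi / 2)])

# The sample values differ even though both contain only 440 Hz
# and 1320 Hz at the same amplitudes:
print(max(abs(x - y) for x, y in zip(a, b)))
```

The waveform's shape depends on the harmonic phases, not just on which frequencies are present and how strong they are.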
Another part of the story is that there is no feasible way for any current digital system to match its sampling clock or microprocessor clock to a particular phase value of an audio frequency. Say you have a tone of constant frequency coming out of a violin: an A440. This tone passes through all 360 degrees of phase 440 times per second. No digital system can tell at which phase value it started recording. Or, in other words, when the ADC generates a code value, it is not delivering the phase value of the voltage of the audio signal.
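The starting-phase scenario can be sketched in a few lines of Python (the 44.1 kHz rate and 64-sample capture length are illustrative assumptions of mine): two captures of the same steady A440 tone, begun at different moments in its cycle, produce different code values from the very first sample.

```python
import math

SAMPLE_RATE = 44100  # samples per second
FREQ = 440.0         # A440, cycles per second

def record(start_phase_rad, n_samples=64):
    """Sample a 440 Hz sine that was already mid-cycle when capture began."""
    return [math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE + start_phase_rad)
            for i in range(n_samples)]

# Two captures of the same tone, started at different points in its cycle:
take_1 = record(0.0)            # capture begins at a zero crossing
take_2 = record(math.pi / 3)    # capture begins 60 degrees later

# The first code values already differ:
print(take_1[0], take_2[0])
```

Nothing in either sequence of code values announces where in the cycle the capture began; that has to be inferred from the samples themselves.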
(Certainly you can see in many DAWs a beautiful graphic which is somehow an approximation of the voltage and phase of a signal. But the margin of error is huge; depending on how you want to calculate it, it can be as large as 600–1200% or more.)
Most sound fields are composed of different tones at different frequencies and different amplitudes. When these frequencies travel together through the same medium and reach the element of a microphone together, they superpose on each other 'transiently' at integer points and create numerous intermodulations.
Similarly to the effect of two equal tones cancelling or reinforcing each other, any two simultaneous frequencies will alter each other in some way, even if they are far apart in the spectrum. And the values of the resulting intermodulations are very phase-specific (and not always a factorial of 2×2×3×3×5×5×7×7).
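The reinforce-and-cancel effect for two close tones is easy to demonstrate in Python (the 440 Hz and 445 Hz pair and the sample rate are my own illustrative choices): summed at equal amplitude, the pair beats at the 5 Hz difference frequency, swinging between near-doubling and near-cancellation depending on their momentary phase relationship.

```python
import math

RATE = 48000  # sample rate in Hz

def mix(f1_hz, f2_hz, duration_s=0.25):
    """Sum two equal-amplitude sine tones, as they would superpose
    at a microphone element."""
    n = int(RATE * duration_s)
    return [math.sin(2 * math.pi * f1_hz * i / RATE) +
            math.sin(2 * math.pi * f2_hz * i / RATE)
            for i in range(n)]

mixed = mix(440.0, 445.0)

# Where the tones line up in phase they reinforce (amplitude near 2).
# At t = 0.1 s they are in antiphase and nearly cancel. The pattern
# repeats at the difference frequency, |445 - 440| = 5 Hz.
peak = max(abs(s) for s in mixed)
near_null = max(abs(s) for s in mixed[4795:4805])  # samples around t = 0.1 s
print(round(peak, 2), round(near_null, 3))
```

The instantaneous result of the superposition depends entirely on the phase relationship between the two tones at each moment, not just on their frequencies.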
Oftentimes these transient harmonics, as well as the natural harmonics, can be 'seen' in a graph appearing for a couple of milliseconds every five or six, ten or more milliseconds.
And the precise reproduction of this harmonic content is what makes a recording sound more natural. Most digital systems cut the frequency spectrum at 20 kHz, and no digital system is able to deliver a neat reproduction of the harmonic content of a recording. That does not mean it is all lost, though: loudspeakers and preamps will, to some extent, artificially re-create the lost harmonic content by reinforcement of the fundamentals.
And anyway, sound quality is not everything. I would rather listen to my favorite song at a listenable level of compression than to a recording I don't like, no matter the resolution.