NP
You can look at an online calculator to assess SPL vs. input power for a given loudspeaker sensitivity, listening distance, and placement with respect to walls (boundaries). But you don't really need that for this. You can ignore loudness curves for an initial approximation, and for a typical speaker with reasonably flat response just assume the same power input at any frequency leads to the same output SPL. That is, if the speaker has flat frequency response and is rated for 90 dB/W/m, then 1 W input at any frequency produces 90 dB SPL when you are 1 m from the speaker. The level falls off with distance at a rate determined by the speaker's directivity (dispersion pattern), your room's size and treatments, and so on. You can ignore that too; the fall-off is usually greater at high frequencies than at low frequencies, but you are asking for a relative comparison between a single amp and multiple amps split across frequency bands.
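If you want to play with the numbers yourself, here is a minimal sketch of what those online calculators do, under a free-field point-source assumption (6 dB drop per doubling of distance); the function name and example values are mine, and real rooms and speaker directivity will change the result:

```python
import math

def spl_at_listener(sensitivity_db, power_w, distance_m):
    """Estimate SPL for a speaker of given sensitivity (dB at 1 W, 1 m).

    Assumes a point source in free field: +10*log10(P) for power,
    -20*log10(d) for distance. Boundaries, room gain, and directivity
    are ignored, so treat this as a rough starting point only.
    """
    return sensitivity_db + 10 * math.log10(power_w) - 20 * math.log10(distance_m)

# 90 dB/W/m speaker, 1 W input, listener at 3 m
print(round(spl_at_listener(90, 1, 3), 1))   # ~80.5 dB
```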
Just think about 1 W at any frequency producing 90 dB at 1 m from your speaker. This is very simplistic but good enough for a quick hand-waving analysis. Now if you use two amps and they are identical, then the signal producing 1 W at 100 Hz or 10 kHz should be the same. If the amps are different, you need to determine their gain, and decide if you need to adjust their levels. This is where it gets complicated and requires some math.
The parameters most likely to be specified are input sensitivity and power output, which together determine the gain of the amp (unless you get one that actually lists gain as well). Looking at an example:
Amp A = 100 W into 8 ohms with 1 V input
Amp B = 25 W into 8 ohms with 1 V input
Gain in V/V is Vout/Vin. Vin is specified as 1 V (rms), and since power output Pout = Vout^2 / R, then Vout = sqrt(Pout * R), where R = 8 ohms.
Thus Amp A gain = 28.28 V/V (29 dB) and Amp B gain = 14.14 V/V (23 dB). Since 1 V is a common sensitivity spec, reflecting the typical 1 Vrms output of many consumer products, the amps have different gains to achieve full output power from a 1 V input. Note 6 dB is a pretty large difference, so if you put the same input level into both amps, the smaller amp will be 6 dB quieter -- until it clips. That is likely the "high" amp and likely does not need as much power, but you do not want the lower mids and bass to be that much louder than the upper mids and highs, either.
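If you want to check the arithmetic, or run it for your own amps' specs, a quick sketch using only the relations above (the 100 W / 25 W / 1 V numbers are just the hypothetical Amp A and Amp B from this example):

```python
import math

def amp_gain(rated_power_w, load_ohms, input_sensitivity_vrms):
    """Voltage gain (V/V and dB) from rated power, load, and input sensitivity."""
    vout = math.sqrt(rated_power_w * load_ohms)   # Vout = sqrt(Pout * R)
    gain = vout / input_sensitivity_vrms          # V/V
    return gain, 20 * math.log10(gain)            # dB = 20*log10(V/V)

gain_a, db_a = amp_gain(100, 8, 1.0)   # Amp A: ~28.28 V/V, ~29 dB
gain_b, db_b = amp_gain(25, 8, 1.0)    # Amp B: ~14.14 V/V, ~23 dB
print(round(db_a - db_b, 1))           # ~6.0 dB gain difference
```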
To be at the same level you'll need to add 6 dB of attenuation before the 100 W ("bass") amp. Either add a series attenuator of 6 dB, turn the level control down 6 dB, or adjust the trims or channel levels in your preamp or processor to align the bass and treble gains (-6/0, -3/3, etc. in dB). Then 1 V from the preamp into each channel will produce 25 W, though at the amps' inputs there will be 1 V at the 25 W amp and only 0.5 V at the 100 W amp's input connectors. When actually using the system, loudness curves say the bass will be much higher (10~30 dB) in level than the treble, so you may not lose power by aligning the gains. That is, when the preamp output is 2 V, the bass amp will get 1 V and drive to 100 W, while the tweeter amp is likely still getting only a fraction of a volt, because higher frequencies sound louder to us at the same SPL and thus require less signal (power).
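To see what each amp delivers once the 6 dB pad sits in front of the higher-gain amp, here is a rough sketch with the same hypothetical amps; the only "clipping" model is clamping at rated power, which is obviously simplistic:

```python
import math

def power_out(preamp_vrms, gain_vv, load_ohms, rated_power_w, pad_db=0.0):
    """Power delivered to the load for a given preamp voltage.

    pad_db is attenuation inserted before the amp's input (e.g. 6 dB
    in front of the higher-gain 100 W amp). Output is clamped at the
    rated power as a crude stand-in for clipping.
    """
    vin = preamp_vrms * 10 ** (-pad_db / 20)   # voltage after the pad
    vout = vin * gain_vv
    return min(vout ** 2 / load_ohms, rated_power_w)

gain_a = math.sqrt(100 * 8) / 1.0   # 100 W "bass" amp, ~28.28 V/V
gain_b = math.sqrt(25 * 8) / 1.0    # 25 W "treble" amp, ~14.14 V/V

for v in (0.5, 1.0, 2.0):
    pa = power_out(v, gain_a, 8, 100, pad_db=6.0)  # padded bass amp
    pb = power_out(v, gain_b, 8, 25)               # treble amp
    print(f"{v} V preamp -> bass {pa:.1f} W, treble {pb:.1f} W")
```

With the pad in place both amps deliver about the same power for the same preamp voltage, until the preamp swings high enough to use the bass amp's extra headroom.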
Since "passive" bi-amping using an AVR sends the same full-range signal to both amplifiers, i.e. no crossover inside the AVR in this case, you must use identical amps, or attenuate he amp with higher gain. Attenuation fixes the level problem, but since both amps have the same signal, you don't gain anything by using a low- and high-powered amp for highs and lows because the input signal is the same from the AVR, there is no frequency splitting to let lows be louder than highs without clipping the AVR's output. Trinnov and other high-end processors allow you to set crossovers inside the box so you could use different amps. But it's a lot of effort for generally minimal if any gains.
As for audibility, I have usually said a 1 dB change is not really noticeable when changing the volume, but in this case a fraction of a dB can be audible since it is applied over a very broad bandwidth (the range each amp is covering). For this, I would use the listening-test criterion of 0.1 dB or less between amp channels to avoid perceptible changes in frequency response. And note changes can be good or bad; many people prefer non-flat response, and adjusting the response to account for the room and your preference is perfectly valid, though it may not be desirable from the standpoint of altering what the artist intended for you to hear.
Again, all my handwaving, YMMV - Don