If you used an HPF and LPF, the track itself would probably sound narrower (less wide), because engineers use EQ to spread things around the stereo image, which is what I refer to as soundstage, although EQ can also make sounds seem higher and lower, not just wider. So say you have two instruments panned in close proximity to one another; you might add some 8 kHz to one of them to separate it a little more. Cut that with an LPF and the separation is lessened, along with the ear's ability to discern each instrument's position in the stereo field.
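To make that concrete, here's a rough sketch in Python (numpy/scipy) of what I mean. The signals are just noise stand-ins for real instrument tracks, and the gain and pan numbers are arbitrary:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100
n = fs  # one second of audio

# Hypothetical stand-ins for two instruments panned close together
# (in a real session these would be the recorded tracks).
inst_a = 0.1 * np.random.randn(n)
inst_b = 0.1 * np.random.randn(n)

def highpass(x, fc, fs, order=2):
    b, a = butter(order, fc / (fs / 2), btype="high")
    return lfilter(b, a, x)

def lowpass(x, fc, fs, order=2):
    b, a = butter(order, fc / (fs / 2), btype="low")
    return lfilter(b, a, x, axis=0)

# "Add some 8 kHz" to one instrument only, so it stands apart from the other.
inst_a_eq = inst_a + 0.5 * highpass(inst_a, 8000, fs)

def pan(x, p):
    """Simple linear pan, p in [-1, 1]: -1 = hard left, +1 = hard right."""
    return np.stack([x * (1 - p) / 2, x * (1 + p) / 2], axis=1)

# Both instruments sit just right of centre, a touch apart.
mix = pan(inst_a_eq, 0.2) + pan(inst_b, 0.3)

# Low-pass the whole mix at 7 kHz: the 8 kHz boost that was separating
# the two instruments is gone, so they blur back together.
mix_lpf = lowpass(mix, 7000, fs)
```

The point is only that the 8 kHz boost is the cue that lets your ears tell the two positions apart; once the LPF removes it, the cue goes with it.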
Adding stereo reverb to a mono sound and creating a stereo track will affect the stereo image and thus the soundstage (I consider them the same, or at least closely linked). If the track remains mono, then no, the soundstage will be static, as there is no stereo field.
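A quick sketch of what I mean by that, again in Python (numpy/scipy); the decaying noise bursts are just a crude stand-in for a real stereo reverb's impulse responses:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
mono = 0.1 * np.random.randn(fs)  # stand-in for a mono source

# Two *different* decaying noise bursts acting as a crude stand-in for the
# left and right channels of a stereo reverb impulse response.
decay = np.exp(-np.arange(fs // 2) / (0.3 * fs))
ir_left = np.random.randn(fs // 2) * decay
ir_right = np.random.randn(fs // 2) * decay

wet_l = fftconvolve(mono, ir_left)[: len(mono)]
wet_r = fftconvolve(mono, ir_right)[: len(mono)]

# The dry signal is identical in both channels (dead centre); only the
# reverb tails differ between left and right, and that difference is what
# turns the mono source into something with a stereo image at all.
stereo = np.stack([mono + 0.3 * wet_l, mono + 0.3 * wet_r], axis=1)
```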
Not sure what to say about the question on the headphones. All I can say is that the AKGs sound wider, like speakers in a room 10 feet apart, and the ATs sound like the speakers are 6 feet apart. I know that's not the best description, but hopefully it makes sense.
I consider a mono signal to be static, as it cannot move anywhere; it can only get louder or quieter. Using reverb can make something feel further away, but it would still feel like the sound is in front of you, not to the sides, etc. Imagine a band placed in front of you playing acoustically: drummer in the centre, guitar on the right with a sax a few feet further right, and piano on the left with a percussionist a few feet further left. A good soundstage maintains that image. A mono signal could not convey that at all; everything would sit in the centre.
Not sure what you mean by the last question. Do you mean a recording is made in two different locations, a church and a sound booth? Or the same recording is listened to in both locations on the same headphones?
If you record a band in a booth and then in a church, the soundstage of the two recordings will be different.
Now I'm more confused. So frequency response also has something to do with soundstage, in your view? But I thought you said it was stereo separation? And I agreed with that when you brought up headphones (because you're listening to music mastered for speakers, it sounds wider if you don't apply crossfeed). Okay, how about this: same track, HPF at 500 Hz and LPF at 7 kHz versus HPF at 1000 Hz and LPF at 19 kHz. Which one, under your definition, has more "soundstage"?
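For reference, here's roughly what the two settings I'm describing would look like, assuming plain Butterworth filters (the cutoffs are the ones from the question; everything else is arbitrary):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100

def band_limit(x, hpf_hz, lpf_hz, fs, order=4):
    """High-pass then low-pass, keeping only the hpf_hz..lpf_hz band."""
    sos_hp = butter(order, hpf_hz / (fs / 2), btype="high", output="sos")
    sos_lp = butter(order, lpf_hz / (fs / 2), btype="low", output="sos")
    return sosfilt(sos_lp, sosfilt(sos_hp, x, axis=0), axis=0)

track = 0.1 * np.random.randn(fs, 2)  # stand-in for the stereo track

version_a = band_limit(track, 500, 7000, fs)    # HPF 500 Hz, LPF 7 kHz
version_b = band_limit(track, 1000, 19000, fs)  # HPF 1 kHz, LPF 19 kHz
```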
The next thing I don't understand is this "higher and lower" thing. Stereo effects only occur on a horizontal plane when recording (unless you're recording with one mic higher than the other, which no one does). There is no way to tell whether music is coming from a higher or lower source with just a stereo recording (without visual aids). This actually also happens with real-life sounds (have someone stand behind you and play a sound with your eyes closed; you're not going to be able to tell whether it's coming from above your head or below it). It's one of the downsides of having ears that are horizontally matched. Owls can actually localize sounds up and down since their ears are vertically displaced. We mostly use visual and other clues (like time delay) to approximate where a sound is. So when you say messing with EQ makes the sound go up and down in a positional sense, that's not really possible in playback without other auditory clues.
As for mono + reverb not contributing to soundstage: I assumed soundstage was more than simply the perceived distance of a sound to the left and right of your ears. I would have thought distance in a cone, even directly away from your face, contributes to it as well. You bring up an example of live players arranged in front of you and a mono signal not being able to maintain that imaging. I agree, but now you've implicated imaging in this whole ordeal, when I had asked (and, as before, pleaded) that you keep soundstage separate from as many established concepts as possible. This only makes understanding it more complex.
The last question that you didn't understand: I meant a recording made in two different locations, with both recordings listened to on the same headphones in the same home, for instance.
You replied that with a band playing in a church versus a booth, the soundstage would be different. But that doesn't square with the original claim you made (you said soundstage is stereo separation, and that something like headphones inherently provides more of it than speakers, which I agreed with). So how is it possible that two recordings of the same band, with the members in the same positions, end up with different soundstage if no other factor changed?
Given the understanding we had about mono recordings, how are my conclusions about soundstage not exactly the same as yours? I said originally that soundstage is best summed up as the combination of recording type (binaural, mono, etc.) and, obviously, quality, along with pinna activation, but most importantly post-processing effects like echo, reverb, and channel panning.
You're slowly describing everything I initially said, aside from the unintuitive claim that echo and reverb don't add soundstage because they simply move the recording further away, and the odd "lower or higher" positioning from simply changing frequency response (though I'd argue up-and-down position can't be discerned unless you have one ear higher than the other on the horizontal plane of listening).
Okay, so, do we agree at least on this: that DACs and amps definitionally have no appreciable relation to soundstage? And that headphones can have an effect because they present the music differently from how it was originally mastered (more stereo separation, since each ear hears only its own channel in isolation, which speakers can't do)? But that, more importantly, most of the soundstage differences people describe come down to recording type and recording setting, along with post-processing techniques?
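Just to pin down what I mean by crossfeed and headphone separation, here's a very rough sketch (not any particular plugin's algorithm; the amount, cutoff, and delay values are made up):

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100

def crossfeed(stereo, fs, amount=0.3, cutoff=700, delay_ms=0.3):
    """Feed a quiet, low-passed, slightly delayed copy of each channel into
    the opposite one, shrinking the extra separation headphones give you
    compared to speakers (where each ear hears both speakers anyway)."""
    left, right = stereo[:, 0], stereo[:, 1]
    b, a = butter(1, cutoff / (fs / 2), btype="low")
    delay = int(fs * delay_ms / 1000)

    def bleed(x):
        y = lfilter(b, a, x)
        return np.concatenate([np.zeros(delay), y[: len(y) - delay]])

    out_l = left + amount * bleed(right)
    out_r = right + amount * bleed(left)
    return np.stack([out_l, out_r], axis=1)

track = 0.1 * np.random.randn(fs, 2)   # stand-in for a mastered stereo track
narrower = crossfeed(track, fs)        # image pulled in toward the centre
```

Without something like this, headphones play the mastered channels fully isolated, which is exactly the extra separation I was agreeing headphones provide over speakers.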