By definition, we will never "understand" our subjective understanding - there is an unresolvable recursion here. At the same time, I will never know whether you are already an AI avatar, nor will you know whether I am one. The Turing test no longer works.
Ah, but if that happens, you will know that you don't know. And that differentiates you from a Turing machine.
I'm not so sure we'll never understand these mental states. Even if it is infinite recursion (maybe it is, maybe it is something else), we can understand infinite recursion with mathematical precision. Sometimes infinite recursion asymptotically approaches a finite limit. Sometimes it explodes. We also understand different sizes of infinities, at micro and macro scales. And not only understand them, but describe and manipulate them with mathematical precision in number theory and topology.
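The claim that some infinite recursions approach a finite limit while others explode can be made concrete with a small sketch. The two recursions below are my own illustrative choices, not from the discussion: Newton's iteration for the square root of 2, which converges, and repeated squaring, which diverges for any starting value above 1.

```python
# Two recursions with opposite limiting behavior (illustrative examples).

def converging(x, steps):
    # x_{n+1} = (x_n + 2 / x_n) / 2 -- Newton's iteration for sqrt(2).
    # The infinite recursion asymptotically approaches a finite limit.
    for _ in range(steps):
        x = (x + 2 / x) / 2
    return x

def exploding(x, steps):
    # x_{n+1} = x_n ** 2 -- for |x| > 1 the recursion explodes.
    for _ in range(steps):
        x = x * x
    return x

print(converging(1.0, 20))  # approaches sqrt(2) ~ 1.41421356...
print(exploding(2.0, 5))    # 2^(2^5) = 4294967296.0
```

The point is only that "infinite recursion" is not a conversation-stopper: both behaviors are fully describable, and distinguishable, with mathematical precision.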
An example may bring this abstractness down to Earth. We've all had the "AHA" experience: you are studying something difficult or complex, and you suddenly have a flash of insight and finally understand it. Before that point, you solved problems by blindly following formulas without understanding them. But suddenly, you understand at a new level. And you can apply this understanding to gain new insight into the problems you were already solving, and to solve problems of greater complexity, in new ways and with greater efficiency.
If our brains are nothing more than Turing machines, this AHA experience is nothing more than code, or a set of formal instructions. What is the code that implements this "AHA" experience? Now, if the AHA experience were just an emotion, you could say the code is something like: fire this neuron to trigger that gland to release some serotonin. But the AHA experience is more than an emotion. It involves the discovery and storage of a new intuitive abstract model that you store in your memory and apply at increasing levels of abstraction to solve new problems.
We must all consider the possibility that such code might exist, but if so, nobody has ever devised what that code is. Before coding it, we would first have to understand what the AHA experience is in the first place, and we haven't even gotten that far. That is why all AI and ML are based on pattern recognition. We understand how pattern recognition works, well enough to devise coded instructions that can run on a Turing machine. But for other human cognitive states, like "geometric intuition" or "cause-and-effect understanding", nobody understands what they are, let alone how to code them. That is why no AI or ML has these other modes of human cognition. Which raises the obvious question of whether it is even possible to encode them as formal instructions for a Turing machine.
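To make the contrast concrete: here is pattern recognition reduced to formal, Turing-machine-runnable instructions. This is a classic perceptron learning rule, chosen by me purely as an illustration; the function names and parameters are my own, not any particular production system. It learns the AND pattern from four labeled examples, and every step is an explicit, mechanical instruction, exactly the kind of code nobody has been able to write for the "AHA" experience.

```python
# A minimal perceptron: pattern recognition as explicit formal instructions
# (illustrative sketch; names and parameters are hypothetical).

def train_perceptron(samples, epochs=20, lr=0.1):
    # samples: list of ((x1, x2), label) pairs with label 0 or 1.
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred
            # The entire "learning" step is three mechanical updates.
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Learn the AND pattern from four examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
```

Nothing here discovers a new abstract model of anything; it adjusts three numbers until the examples are classified correctly. That gap, between mechanical weight updates and genuine insight, is the one the paragraph above is pointing at.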