That is impossible because the models are entirely trained by humans for a very focused application case. I wonder if this is likely to happen anytime soon. Let's rule out omniscience or whatever, and just say the AI has to do as well as a smart, unbiased human at establishing whether a piece of information is true or not.
...
ChatGPT ... for many corner cases, we train it collectively, because the model does not have a lot of data yet. If you pick an obscure question, you can argue and debate with ChatGPT. Ask it something like "When was the earliest Ming Dynasty jade vase made?", push back with any bizarre data point, and it'll start using your input until someone else corrects it. AI has zero intelligence of its own; it's basically specialized automation on top of a big data lake.