To me, AI will be damaging unless we can be sure it is offering up factual information.
I wonder whether this is likely to happen anytime soon. Let's rule out omniscience and just say that the AI has to do as well as a smart, unbiased human at establishing whether a given piece of information is true or not.
This is not trivial. If a given factoid is not well established, you have to search many sources of information, judge the credibility of each one, draw upon existing knowledge (sometimes in many fields), and finally come to a conclusion as simple as "yes", "no", or "maybe".
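Just to make the shape of that process concrete, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in: the Source records, the credibility scores, and the 0.7 agreement threshold are placeholders for illustration, not a real fact-checking system.

```python
# Hypothetical sketch of the verification loop described above:
# gather sources, weight each by credibility, aggregate to yes/no/maybe.
from dataclasses import dataclass

@dataclass
class Source:
    claim_supported: bool   # does this source support the claim?
    credibility: float      # 0.0 (untrusted) .. 1.0 (highly trusted)

def verify(sources: list[Source], threshold: float = 0.7) -> str:
    """Aggregate credibility-weighted evidence into "yes", "no", or "maybe"."""
    total = sum(s.credibility for s in sources)
    if total == 0:
        return "maybe"  # no credible evidence either way
    support = sum(s.credibility for s in sources if s.claim_supported) / total
    if support >= threshold:
        return "yes"
    if support <= 1 - threshold:
        return "no"
    return "maybe"

# Made-up evidence for some claim: two credible supporters, one weak dissenter.
print(verify([Source(True, 0.9), Source(True, 0.8), Source(False, 0.2)]))  # -> "yes"
```

Even this toy version shows where the hard part hides: the credibility numbers are doing all the work, and assigning those is exactly the judgment call the rest of this post is about.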
AFAIK the current GPT-style generative tools do not do this kind of work, and I am not sure whether such a tool exists or is close to existing.
Also, there are plenty of people with vested interests in making sure the AIs DON'T offer up factual information. In some countries, reporting certain facts gets you thrown in prison; elsewhere, people have a lot riding on spreading misinformation, and I'm not just referring to Shunyata here.
And then there is, of course, the depressingly large set of facts that are up for political or religious debate, for whatever reason.
To give an extreme example, some people maintain it is a fact that slavery in the USA was good for the slaves; most others maintain the opposite. Few will be pedantic enough to point out that "good" and "bad" are subjective judgments and so aren't "facts" per se, and almost nobody would be satisfied with that response anyway. What should an AI do about this, let alone the people who have to answer for its output?
I think we can eventually make AIs avoid spouting total fantasies or outright lies. The status of unconfirmed and controversial facts, however, will probably always be up to people.