Not to dwell overly on that subject, but you are correct, of course. Unless one is focussing on a narrow technical question or function, generating text about renewable energy while avoiding discussion of climate change (a significant factor in renewable energy economics, rationales and strategies) would require some odd constraints and/or distortions, and those would be political, not scientific. Arguing for 'AI' without guardrails, then insisting that certain subject matter be avoided, is inconsistent nonsense.
We don't disagree. But don't conflate societal norms with trendy motives; that's a straw man and an oversimplification.
Selective facts are certainly problematic, but 'pure facts' are also insufficient. For example, most societies hold as normative that slavery and genocide are unacceptable. We won't go into detail on this forum, but suffice it to consider that one social group can gain competitive advantage by eliminating another, or that fully functional economies can be built on slave labour. Acceptability or otherwise rests primarily on morals and ethics, rather than on facts. A probabilistic text generator can obviously refer to those things in context, but Microsoft and others won't let their 'AI' products advocate them, and the constraints that must be applied are normative, not factual. It's impossible to avoid value judgements when developing and implementing functional 'AI' products in our societies.
So we have two broad problem areas. Firstly, there is no comprehensive training corpus composed of 'pure facts' to start with. Secondly, some normative constraints will necessarily be applied, requiring value judgements, some of which will be contested.