If science is about repeatability, what does it mean when the science comes in the form of nothing but a super-complex computer model with no way to test its accuracy? How do you meaningfully review such a computer model? How would another scientist test it for repeatability? Can the hypothesis being tested by the model be falsified?
(In case you think I'm being political - everyone's super-sensitive to politics these days - I really am just thinking of the general idea of modelling to predict any system that cannot be tested directly (e.g. global scale), such as for epidemiology, ocean currents, tectonic plate movements, insect population spread, bird migration, etc.)
In principle it's no different from predicting the motion of the planets using mathematics. But in practice, that field is arguably rendered scientific because there are 'axioms' - well-established physical laws - against which the model can be judged. It's also likely to be simple enough that a person can hold the principle of the whole thing in their head.
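To make that concrete, here's a toy sketch of my own (not anything from a real modelling field): a two-body orbit integrated numerically, then judged against an independent invariant - conservation of total energy - rather than against mere "reasonableness". The integrator, step sizes, and units are all illustrative choices.

```python
import math

def simulate_orbit(steps=10000, dt=0.001):
    """Integrate a two-body orbit with semi-implicit (symplectic) Euler.

    Normalised units: GM = 1, planet starts on a circular orbit of radius 1,
    so its speed is sqrt(GM/r) = 1. Returns (initial energy, final energy).
    """
    gm = 1.0
    x, y = 1.0, 0.0
    vx, vy = 0.0, 1.0

    def energy(x, y, vx, vy):
        # Total energy per unit mass: kinetic + gravitational potential.
        r = math.hypot(x, y)
        return 0.5 * (vx ** 2 + vy ** 2) - gm / r

    e0 = energy(x, y, vx, vy)
    for _ in range(steps):
        r3 = math.hypot(x, y) ** 3
        # Update velocity first, then position (semi-implicit Euler).
        vx -= gm * x / r3 * dt
        vy -= gm * y / r3 * dt
        x += vx * dt
        y += vy * dt
    return e0, energy(x, y, vx, vy)

e0, e1 = simulate_orbit()
drift = abs(e1 - e0) / abs(e0)
print(drift)  # relative energy drift over the run
```

The point is not the code itself but that the model's output can be checked against something external to the model: if the energy drift were large, we would know the model is wrong without waiting decades to compare its predictions against the real sky. A bee-population model in 2050 has no equivalent invariant to check against.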
If, instead, I want to predict the global bee population in the year 2050, can this ever really be 'scientific'? At the end of the day, the most that can be said about such a model seems to be "it looks reasonable". But if so, where does the 'progress' - the standing on the shoulders of giants - come from? If a consensus forms around a "reasonable" prediction, and further models are then developed that also confirm the prediction, it looks to me from the outside as though an edifice is being constructed on a very flimsy foundation. And there is no incentive for anyone, anywhere to break the consensus: there is no glory in saying "my model is even more reasonable than the consensus", and no reason for the scientific establishment to dismantle its previous edifice only to build a new one on equally flimsy foundations.
Any views?