I'm not so sure - or at least not any more than a human judge would at a camera club.

Alas, I think ChatGPT's response was a failure. That image should have received a score of 0 out of 5 in any competition. Heck, even in a non-competition setting, that picture would be devoid of artistic value. That the AI gave it 3.5 out of 5 means it is still aiming to please.
I've never seen a club judging where even an image as artistically meritless as the one I gave to ChatGPT gets scored zero. Typically scores range from around 40 or 50% of the maximum for the worst images up to 100% for exceptional ones. The judge only needs to score a poor image so that it comes bottom, or close to bottom, of the pack. And a judge nearly always finds something positive to say amongst the "constructive feedback". And I've seen submissions as bad as my cake.
There is an unspoken intent not to humiliate people. Especially when the real point of the club is to help people grow their photographic skills.
In particular this:
# The Important Question
Is this:
* A quick phone snap?
* Or intended as a competition food image?
Because if this is a casual snap — it's perfectly fine.
If this is for competition — it needs styling, lighting control, and compositional intent.
would be a damning indictment if stated in front of the club members - effectively saying "this should never have been submitted".
I think the AI gave a remarkably human, tactful response. Pretty much perfect for a club environment, where the judgement is given in front of the members. Perhaps less appropriate for an anonymous "scoring only" assessment - for example, the sort of competition where photos are submitted, scored, and the results handed out. Though in that type of mass-submission competition, it is often only the top 20% or so that are actually scored, with the rest falling into the "runner up" category. Runner up meaning "it didn't even make it into the 'these are the ones we will score' pile."