Be sure to check the working. With many LLMs, even if the model can set up the right calculation and arrive at the right answer, it will likely tell you an incorrect value up front, because it's making the number up and then retrospectively trying to justify it.
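One cheap way to actually check the working is to rerun the model's own calculation outside the model. A minimal Python sketch of that idea, where the claimed value and the compound-interest example are made up for illustration rather than taken from any real transcript:

    import math

    def recheck(claimed_value, compute, rel_tol=1e-9):
        """Recompute the working independently and compare it to the stated answer."""
        actual = compute()
        ok = math.isclose(actual, claimed_value, rel_tol=rel_tol)
        print(f"claimed={claimed_value}, recomputed={actual}, {'OK' if ok else 'MISMATCH'}")
        return ok

    def working():
        # the calculation the model actually set up (a compound-interest example)
        return 1000 * (1 + 0.05) ** 10

    claimed = 1643.62          # the value the model stated up front (hypothetical)
    recheck(claimed, working)  # prints MISMATCH (recomputed is about 1628.89)

The snippet itself doesn't matter; the point is to check the stated number against the shown working instead of trusting them together.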
Except that it can't do maths and will spew nonsense at you instead.

Gemini:
I had a different sort of nonsense when I tried DeepSeek on a puzzle: it applied a few methods incorrectly, got it wrong, tried to verify its answer, hit a dead end, then concluded "I know the answer because it's been published, therefore it's this. QED." I suppose at least it was the correct number?
I think basic maths will be a solved problem in a later generation of models, but for now it's the thing I've had the least success with.