"If you have the ability to check the result, it can be a very powerful tool that can save you a lot of time. If you have to trust it blindly, all bets are off."
Just quoting this for emphasis, it's the thing everyone needs to know about the current crop of GPT / LLM tools.
They are only really useful for two types of question - ones where you already know the answer, and ones where there is no controversy and it is very hard to get the wrong answer.
These tools "know" things by having "read" them on the internet. As we well know, there are often conflicting answers on the internet. The GPTs of the world do not know the difference between correct and incorrect, and in fact have no way of determining it. They're basically the ultimate software-based implementation of "fake it 'til you make it".
If you use them to calculate speaker dimensions, the results are likely to be correct, but that's far from guaranteed.
Doesn't that defeat the purpose?
Well, yes. Huh, turns out these things are somewhat overhyped, are you surprised?
"If you have arrived at a state in life where you need an assistant,"
The last thing I need is an assistant who constantly gets things wrong, but states its answers as confidently as it says 2+2=4.
I actually tried using ChatGPT for my job: from time to time I need to analyze large tracts of interview transcripts. ChatGPT COULD summarize this content reasonably well, or at least it seems like it can. That would save me a lot of time.
In practice it's unable to process more than about one page of content at a time, which for me is a nonstarter; it needs to take in the interview as a whole to be useful. (This is due to the token limit on GPT-3.5, not inherent to the technology as far as I know; the models only accept a certain size of input right now.) What's worse, it shows no obvious sign of failure when it hits that limit: it simply starts making things up ("hallucinating") or missing things.
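The obvious workaround is to chunk the input: split the transcript into pieces that fit under the token limit and summarize each one separately. Here's a rough sketch in Python; the four-characters-per-token ratio is a common rule of thumb for English, not an exact tokenizer:

```python
def chunk_transcript(text, max_tokens=3000, chars_per_token=4):
    """Split text into chunks that should fit under a model's token limit.

    Assumes roughly 4 characters per token, a rule of thumb for English.
    A single paragraph longer than the budget is kept as one oversized
    chunk, to keep the sketch simple.
    """
    max_chars = max_tokens * chars_per_token
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        # Start a new chunk if adding this paragraph would overflow.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk then gets summarized separately and the summaries stitched together, at the cost of losing cross-chunk context, which is exactly the problem I described above.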
If you ask it to do something simple, like "count how many times they say 'dog' in this document", it will fail miserably if the document is longer than a few pages. That's something MS Word has been able to do for 30 years. It's not actually good at extracting factual information like this; it just really seems like it would be.
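For contrast, here's the deterministic version of that exact task, which never hallucinates no matter how long the document is ("interview.txt" is just a placeholder filename):

```python
import re

def count_word(text, word):
    # Whole-word, case-insensitive match, so "dog" doesn't count "dogma".
    pattern = r"\b" + re.escape(word) + r"\b"
    return len(re.findall(pattern, text, re.IGNORECASE))

with open("interview.txt", encoding="utf-8") as f:
    print(count_word(f.read(), "dog"))
```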
But in other ways, it's incredibly powerful. It can tell who said what in the interview. If you ask what the Interviewer thinks vs. the Interviewee, it can answer reasonably well. It can follow who is talking in the transcript and summarize each person's statements fairly competently: something MS Word has never done, something I have never seen a computer do before.
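To make that concrete, this is roughly the kind of call I mean, written against the pre-1.0 openai Python client (the client API has since changed, and summarize_by_speaker is just my name for the helper):

```python
import openai  # pre-1.0 client; reads OPENAI_API_KEY from the environment

def summarize_by_speaker(transcript_chunk):
    # Ask the model to attribute statements to each speaker,
    # the part of the task it actually does well.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You summarize interview transcripts. "
                        "Keep the Interviewer's and the Interviewee's "
                        "points strictly separate."},
            {"role": "user",
             "content": "Summarize what each speaker said:\n\n"
                        + transcript_chunk},
        ],
    )
    return response["choices"][0]["message"]["content"]
```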
It's early days, but I think once there is more "fact-checking" built into these things, they will really become the useful assistant you're describing. For example, people are already working on integrating Wolfram Alpha with GPT tools, which would seem to be a tidy solution to the "it gets math wrong" problem.
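You can see the shape of that fix in miniature without Wolfram Alpha: recognize the parts of a question that have a deterministic answer and hand them to a tool instead of the model. A toy example for arithmetic, using Python's ast module to evaluate only plain math expressions (my illustration of the idea, not how the actual integration works):

```python
import ast
import operator

# Whitelisted operators; anything else is rejected.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr):
    """Evaluate a plain arithmetic expression deterministically."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("2 + 2"))  # 4, and it is never confidently wrong
```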