AI doesn't "think" or "understand" like (some) humans do. Every ML algorithm essentially boils down to pattern recognition. Yet these algorithms can be quite sophisticated, and even if they never actually think or understand, they may someday become functionally indistinguishable from humans.
The definition has 3 key parts (see the sketch below the list):
1. Given data, discovers patterns on its own that the programmers neither specified nor anticipated.
2. Uses these patterns to make predictions or decisions.
3. Given more data, makes better predictions or decisions.
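To make those 3 parts concrete, here is a minimal sketch in Python. It's my own toy example, not part of the definition: NumPy's least-squares fit stands in for "learning", and `true_fn` is an arbitrary hidden pattern the program is never told about.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # The hidden relationship; the program never sees this directly.
    return 3.0 * x + 1.0

def fit(x, y):
    # (1) Discover a pattern from the data alone: a least-squares line.
    # The slope and intercept are learned, not hard-coded.
    slope, intercept = np.polyfit(x, y, deg=1)
    return slope, intercept

def predict(model, x):
    # (2) Use the discovered pattern to make a prediction.
    slope, intercept = model
    return slope * x + intercept

for n in (10, 10_000):
    x = rng.uniform(0, 10, n)
    y = true_fn(x) + rng.normal(0, 1.0, n)
    model = fit(x, y)
    error = abs(predict(model, 5.0) - true_fn(5.0))
    # (3) With more data, the prediction typically gets better.
    print(f"n={n:6d}  error at x=5: {error:.4f}")
```

Nothing in `fit` knows the true slope is 3 or the intercept is 1; it recovers them from examples, and the estimate sharpens as the sample grows.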
Every practical computer is, in essence, a Turing machine, which means that ultimately everything it does boils down to 4 operations: READ, WRITE, IF, GOTO (sketched in the code below).

AI/ML attracts a lot of hype from Musk et al. Most experts, like experts in any other field, have a strong curiosity about and fascination with the field (otherwise they wouldn't have invested years of their lives studying it), so they focus on capabilities rather than limitations. Some get so entranced by the possibilities that they redefine their own humanity in terms of it, with notions like, "my mind is nothing more than a Turing machine, and any understanding or consciousness I have is just an illusion, however real it might seem." But some experts do talk about limitations, and aren't afraid to discuss the idea that meaning might be deeper than formal logic, that truth may be deeper than formal proof; Gary Marcus and Emily Bender are among them. For a more balanced perspective on this subject, Google them and read some of their stuff.
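Here is a toy interpreter built from exactly those four operations. This is my own illustrative sketch, not a formal Turing machine: the single register and the instruction encoding are arbitrary choices made to keep it short.

```python
def run(program, tape):
    """Interpret a program made only of READ, WRITE, IF, and GOTO."""
    pc, reg = 0, 0          # program counter and a single register
    while pc < len(program):
        op, arg = program[pc]
        if op == "READ":    # load tape[arg] into the register
            reg = tape[arg]
        elif op == "WRITE": # store the register into tape[arg]
            tape[arg] = reg
        elif op == "IF":    # skip the next instruction if the register is 0
            if reg == 0:
                pc += 1
        elif op == "GOTO":  # jump to instruction number arg
            pc = arg
            continue
        pc += 1
    return tape

# "If tape[0] is nonzero, copy tape[1] to tape[3]; otherwise copy tape[2]."
branch = [
    ("READ", 0),   # 0: reg = tape[0]
    ("IF", None),  # 1: if reg == 0, skip the GOTO below
    ("GOTO", 5),   # 2: nonzero case
    ("READ", 2),   # 3: zero case: reg = tape[2]
    ("GOTO", 6),   # 4: jump past the nonzero case
    ("READ", 1),   # 5: nonzero case: reg = tape[1]
    ("WRITE", 3),  # 6: tape[3] = reg
]
print(run(branch, [1, 10, 20, 0]))  # -> [1, 10, 20, 10]
print(run(branch, [0, 10, 20, 0]))  # -> [0, 10, 20, 20]
```

Everything a computer does, however impressive, is layer upon layer of moves like these.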
Ultimately, AI/ML requires human supervision. These systems can outperform humans on tasks that match their training, but their performance becomes unpredictable and poor as soon as they encounter situations outside it. Only a human can understand the context and meaning of a situation and act accordingly outside of instructions, or disobey an order on realizing that carrying it out would be unethical or contrary to the mission objective.
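As a toy illustration of that brittleness (my own construction; the sine data and polynomial model are arbitrary stand-ins): a model that fits its training range closely can fail badly just outside it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Train only on inputs between 0 and 3.
x_train = rng.uniform(0, 3, 200)
y_train = np.sin(x_train) + rng.normal(0, 0.05, 200)

# A flexible model (degree-5 polynomial) fits the training region well.
model = np.poly1d(np.polyfit(x_train, y_train, deg=5))

for x in (1.5, 2.5, 6.0, 10.0):
    where = "inside " if x <= 3 else "OUTSIDE"
    error = abs(model(x) - np.sin(x))
    print(f"x={x:5.1f}  {where} training range  error={error:.3f}")
```

Inside the training range the error is tiny; a few units outside it, the polynomial veers off wildly, and nothing in the model signals that its answers have stopped being trustworthy. That judgment still falls to a human.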