Maybe you meant to say that current Large Language Models don't understand what they output, but understanding is not an all-or-nothing affair. Even people don't always fully understand what we hear or read, especially when we are not fluent in the language. And computer programs can indeed be constructed that partly understand natural language.
Consider a self-driving car that is programmed to accept requests such as "Take me to the airport." If you give it such a command, and it does in fact take you to the airport, then it certainly has understood your request -- better than a person who doesn't know English would. It doesn't matter that the car doesn't know what an airport is, other than a place on its map. It just has to understand your request well enough to carry it out. Partial understanding is enough, for both people and computer programs; how much is needed depends on the task.
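The point can be made concrete with a toy sketch (purely illustrative; the function and place names are hypothetical, not drawn from any real self-driving system). The program "understands" a request only in the sense that it can map it to an entry on its map and act on it:

```python
# Minimal sketch of partial understanding: map a spoken request to an
# action the car can carry out. The car knows nothing about what an
# airport *is* -- "airport" is just a label for a point on its map.

def parse_request(utterance, known_places):
    """Match a request like 'Take me to the airport' to a destination."""
    words = utterance.lower().rstrip(".!?").split()
    for place in known_places:
        if place in words:
            return {"action": "navigate", "destination": place}
    # Not enough understanding to act on this request.
    return {"action": "ask_for_clarification"}

result = parse_request("Take me to the airport.", {"airport", "station"})
print(result)  # the car understood just enough to carry out the request
```

If the request mentions no known place, the sketch falls back to asking for clarification -- which is also roughly what a person with partial understanding would do.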