How about LLMs never using responses like "I understand"? An LLM is not capable of understanding, and having it use human-like idiosyncrasies is part of what makes people turn to the LLM instead of real humans who can actually help them.