Beware of Gell-Mann amnesia, validation bias, and plain nonsense written into the summaries LLMs produce.
I once fed ChatGPT a PDF of activity codes from a local tax authority and asked how I could classify some activities I was interested in doing. It invented codes that didn't exist.
I would be very careful about asking any LLM to organize data and trusting the output unchecked. If you do it anyway, validate everything it returns against the source document, as in the sketch below.
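A minimal sketch of that kind of guardrail in Python, assuming the real codes can be pulled out of the document text with a regex. The code format, file name, and example codes here are all made up for illustration; you'd adjust the pattern to whatever your authority's codes actually look like.

```python
import re

def extract_codes(source_text: str) -> set[str]:
    # Collect every activity code that actually appears in the source.
    # Hypothetical format 'NNNN-N/NN'; change the pattern for your document.
    return set(re.findall(r"\b\d{4}-\d/\d{2}\b", source_text))

def find_invented_codes(llm_codes: list[str], valid_codes: set[str]) -> list[str]:
    # Return every code the model gave you that is NOT in the source.
    return [code for code in llm_codes if code not in valid_codes]

# Usage: 'activity_codes.txt' stands in for text extracted from the PDF.
source = open("activity_codes.txt", encoding="utf-8").read()
valid = extract_codes(source)
invented = find_invented_codes(["6201-5/01", "9999-9/99"], valid)
if invented:
    print("Model invented codes:", invented)
```

The point is just that the set of valid codes comes from the document itself, never from the model, so a hallucinated code fails loudly instead of slipping into your records.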
As for the "life advice" type of thing, they are very sycophantic. I wouldn't go for life advice to a friend who enthusiastically agrees with everything I say; that sort of yes-man behavior is quite toxic.