> It tried to build something simple, and while it got the job done, the thinking displayed did not fill me with confidence. It was pages and pages of "actually no", "hang on", "wait, that makes no sense". It was like the model was having a breakdown.

It has probably been trained to assess its own "thoughts" regularly and to output the results of that assessment. I wouldn't worry much about the contents of the reasoning text. It's actually nice to have it, in contrast to the closed models' "summaries", since it makes it easier to see what's going on.