Extracting readable thoughts from intermediate representations is a great step for transparency. It makes debugging model behavior much more tractable.