That’s a gross overgeneralization. Some of the insurance data here suggests AI is being used to make underwriting decisions. Several states have regulations that could pull these agent solutions into their regulatory oversight if the industry uses them to affect insurance outcomes.

The Odd Lots podcast had an interesting snippet about a financial institution that uses AI to make loan decisions. The guest said they only use it on applicants who were rejected in the traditional sequence, and then use the AI to accept them if possible. That way there's an articulable reason for every rejection, but the non-deterministic AI lets an extra person through. Since the laws about loans are mostly about not discriminating against people, companies are (generally) welcome to accept whoever they like.
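
A rough sketch of how that flow might look, just to make the asymmetry concrete (everything here is hypothetical; the podcast didn't describe an actual implementation):

    # Minimal sketch of the decision flow described above; all names and
    # thresholds are hypothetical, not the institution's actual system.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        approved: bool
        reason: str  # the articulable reason required for a rejection

    def rules_based_underwriting(applicant: dict) -> Decision:
        # Deterministic checks, each with an explainable reason.
        if applicant["credit_score"] < 650:
            return Decision(False, "credit score below minimum threshold")
        if applicant["debt_to_income"] > 0.45:
            return Decision(False, "debt-to-income ratio too high")
        return Decision(True, "meets standard underwriting criteria")

    def ai_second_look(applicant: dict) -> bool:
        # Stand-in for the non-deterministic model. It is only consulted
        # on rejections and can only flip them to approvals.
        return False  # stub

    def decide(applicant: dict) -> Decision:
        decision = rules_based_underwriting(applicant)
        if decision.approved:
            return decision
        # Only rejected applicants reach the AI, so every final rejection
        # still carries the deterministic reason from the rules above.
        if ai_second_look(applicant):
            return Decision(True, "approved on secondary (AI) review")
        return decision

The point is that the AI never produces a rejection, so the hard-to-explain model only ever expands the approved pool.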

That depends on the credit laws of the country in question, though. In Australia it cuts both ways: you cannot unreasonably discriminate (e.g. on race, gender, etc.), but at the same time you are forbidden from issuing credit to applicants who cannot meet the affordability requirements for that credit. Issuing a loan to a customer who provably cannot afford it is a breach of the NCC, and the company is held responsible for it. As a credit provider you must make reasonable enquiries into a customer's financial position; failing to do so is itself a breach.

You must also be able to explain and justify the decision to issue credit if challenged by the civil regulator (AFCA, who are granted significant power in addressing this) on the basis of a customer complaint, and they most certainly do not accept "human said no but the computer then said yes" without hard facts such as proven positive income flow (pay slips, bank statements), known expenses, liabilities, and a reliable credit history.
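
For what it's worth, a toy version of the kind of serviceability check being described might look like this. The real NCC / responsible-lending tests are far more involved, and every figure and field name below is invented:

    # Toy affordability check: verified income must cover declared expenses,
    # existing liabilities, and the new repayment with a buffer applied.
    # Purely illustrative; not the actual NCC test.
    def can_afford(verified_monthly_income: float,
                   declared_monthly_expenses: float,
                   existing_monthly_liabilities: float,
                   proposed_monthly_repayment: float,
                   buffer: float = 1.25) -> bool:
        committed = (declared_monthly_expenses
                     + existing_monthly_liabilities
                     + proposed_monthly_repayment * buffer)
        return verified_monthly_income >= committed

    # The lender would also need to keep the evidence behind each input
    # (pay slips, bank statements, credit file) so the decision can be
    # justified if AFCA asks.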