Most mortgage processing delays aren’t due to risk — they’re due to manual workflows.

We’ve been working on SimplAI, an AI-driven system designed for banking and financial services, starting with mortgage operations.

The problem we kept seeing:

15–22 day processing timelines

Heavy manual document handling (500+ pages per loan)

Repetitive data entry + verification loops

Underwriters spending hours on non-decision work

So we built a set of AI agents that handle the operational layer:

Document AI (IDP) → classifies + extracts data from loan docs in minutes

Income analysis models → parse tax returns, payslips, and variable income

Verification integrations → real-time employment + financial checks

AI-assisted underwriting → pre-validates files and generates conditions

Compliance engine → continuously checks against regulatory rules
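To make the document-intake step above concrete, here is a deliberately naive sketch of classify-then-extract, assuming keyword scoring and regex field pulls; the names (`classify_page`, `extract_fields`, `DOC_TYPES`) are illustrative and not SimplAI's actual API, which presumably uses trained models rather than keyword matching:

```python
import re

# Hypothetical keyword lists per document type; a production IDP system
# would use a trained classifier instead of keyword counts.
DOC_TYPES = {
    "paystub": ["gross pay", "net pay", "pay period"],
    "w2": ["wages, tips", "employer identification"],
    "bank_statement": ["beginning balance", "ending balance"],
}

def classify_page(text: str) -> str:
    """Pick the doc type whose keywords appear most often (very naive)."""
    lowered = text.lower()
    scores = {
        doc_type: sum(kw in lowered for kw in keywords)
        for doc_type, keywords in DOC_TYPES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def extract_fields(doc_type: str, text: str) -> dict:
    """Pull simple dollar amounts; a real extraction model would do far more."""
    amounts = re.findall(r"\$[\d,]+\.\d{2}", text)
    return {"doc_type": doc_type, "amounts": amounts}
```

The point of the sketch is the shape of the pipeline (classify each page, then run type-specific extraction), not the matching logic itself.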

What we’re seeing in production:

End-to-end processing: ~18 days → 3–5 days

Data extraction accuracy: 97%+

Underwriting review time: 3–4 hrs → <45 mins

Cost per loan: reduced by ~40–50%

We’re not replacing underwriters — we’re removing the operational bottlenecks around them.

Still early, but we’re exploring:

Agent-based workflows across lending lifecycle

Better handling of edge cases (self-employed borrowers, non-QM loans)

Explainability in underwriting decisions

Would love feedback from folks in fintech, lending, or anyone building AI systems in regulated environments.

Promising impact, but in regulated domains like mortgages the real challenge isn’t speed; it’s proving reliability and auditability at scale.

Having worked in AFC-related compliance, I do think there is a lot of potential for AI to make a huge impact on efficiency if the emphasis is put on making relevant data more visible to the humans making decisions. It seems like you've identified the real bottlenecks in your processes and made significant improvements.

In the bank I worked for (and for the regulator it was responding to), one of the big pushbacks against using AI for compliance-related processes was the "black box" effect. If no one understands _why_ the machine made the decision it did, you can get into real legal trouble if the action is contested by the person affected by that decision. Generally speaking, any automated decision needs an audit trail that records specifically which rule was in place when the entity was processed and what decision path was taken under that rule.

Another thing is that this decision track record should also be visible to the human who ultimately makes the approve-or-reject decision. If, for example, your AI system performs a real-time employment check and flags something as problematic, the person reviewing that record should be able to click through the audit logs, verify that the AI flagged that entry based on the March 2026 version of the ruleset and not the now-outdated January 2026 ruleset, and check what the differences between those rulesets are.
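A minimal sketch of what such an audit record might look like, assuming a versioned ruleset; every field name here (`ruleset_version`, `rule_id`, `decision_path`, etc.) is hypothetical rather than drawn from any real compliance system:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    """Immutable record tying a decision to the exact rule and ruleset version."""
    entity_id: str
    ruleset_version: str   # e.g. "2026-03", so reviewers can diff against "2026-01"
    rule_id: str           # the specific rule that fired
    decision_path: tuple   # ordered checks evaluated on the way to the outcome
    outcome: str           # "flagged", "cleared", ...
    decided_at: str        # UTC timestamp of the automated decision

def flag_employment_check(entity_id: str, ruleset_version: str) -> AuditRecord:
    """Record which rule, under which ruleset version, produced the flag."""
    return AuditRecord(
        entity_id=entity_id,
        ruleset_version=ruleset_version,
        rule_id="EMP-017",  # illustrative rule identifier
        decision_path=("employment_verified", "income_mismatch", "EMP-017"),
        outcome="flagged",
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
```

The key design choice is that the record is written at decision time and is immutable, so a later ruleset update can never silently change what the log says was in force.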

So yeah, the TL;DR is that regulators want to see logs.

Could you share more about your compliance engine? Where are you sourcing the data from?

Happy to connect, and I would like to learn more about the explainability work.