Congratulations on the launch. I think it's a smart move not to use MCP here, because your LLM really needs to understand how the different integrations work together.
Question: you say you do semantic search. If I understand correctly, that means you must somehow index all the data (Gmail, GDrive, ...), otherwise the AI would have to "download/scan" thousands of files each time you ask a question. So how do you do the indexing?
For some background: I'm working on something similar. My clients are architects. They have about 300k files for just one building, plus around 50k issues and a couple thousand emails. And don't forget all the subcontractors.
Would Slashy be able to handle that?
Not sure; we haven't ever handled volume that size for one person, but in theory we should be able to!
We use indexing similar to Glean's (but a bit less elegant, without the ACLs).
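For anyone curious what that kind of indexing typically looks like, here's a generic sketch (not Slashy's actual pipeline): each document gets embedded into a vector at ingest time and stored in an index, so a query only compares against the index rather than re-scanning 300k source files. The `embed` function is a stand-in for whatever embedding model you'd actually call, and the brute-force index would be an ANN library (FAISS, HNSW, etc.) in production.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: call your real embedding model here.
    Returns a unit-normalized vector; this stand-in just hashes the text."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

class VectorIndex:
    """Toy brute-force index; swap in an ANN index for real workloads."""
    def __init__(self):
        self.vectors: list[np.ndarray] = []
        self.metadata: list[dict] = []

    def add(self, text: str, meta: dict) -> None:
        # Embed once at ingest, so queries never touch the source files.
        self.vectors.append(embed(text))
        self.metadata.append(meta | {"text": text})

    def search(self, query: str, k: int = 5) -> list[dict]:
        q = embed(query)
        # Cosine similarity reduces to a dot product on unit-norm vectors.
        scores = np.stack(self.vectors) @ q
        top = np.argsort(scores)[::-1][:k]
        return [self.metadata[i] for i in top]

# Ingest: runs once per document/email as connectors sync.
index = VectorIndex()
index.add("Q3 structural report for building A", {"source": "gdrive"})
index.add("RFI: rebar spacing on level 2", {"source": "email"})

# Query: only the index is consulted, not the raw files.
for hit in index.search("rebar issues"):
    print(hit["source"], "-", hit["text"])
```

At architect scale (hundreds of thousands of files), the ingest step is the expensive part; queries stay cheap because they're just nearest-neighbor lookups.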
Happy to talk more about your use case if you'd like.
Send me a text at 262-271-5339