I looked at this product and found some concerning issues.

First, you're using Firecrawl as your crawling infrastructure, but Firecrawl explicitly blocks Reddit. Yet one of your examples mentions "Check if user complaints about Figma's performance on large files have increased in the last 3 months. Search forums like Hacker News, Reddit, and Figma's community site..."

How are you accomplishing this? The comment about whether it's legal to crawl Reddit remains unanswered in this thread.

Second, you're accepting credit cards without providing any Terms of Service. This seems like a significant oversight for a YC company.

Third, as another commenter mentioned, GPT-5 can already do this faster and more effectively, and Claude has similar capabilities. I'm struggling to see the value proposition beyond a thin wrapper around existing LLM capabilities with some agent orchestration. We're past the point where prompts alone count as meaningful IP, or am I wrong?

Perhaps most concerning is the lack of basic account management: there's no way to delete an account after creation. I'd ask for clarification, but honestly there's nothing stopping me from coding this up with Codex to run locally and doing it myself (with a local crawler that can actually crawl Reddit, even).

Hey, appreciate the feedback. Will address all your points.

Regarding Reddit: we have a custom handler for Reddit URLs that goes through the official Reddit API, and we're billed once we exceed the free-tier limits.
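To make that concrete, here's a simplified sketch of the routing idea (not our actual code; the handler names and host list are illustrative): Reddit URLs get dispatched to an API-backed handler, and everything else goes to the generic crawler.

```python
from urllib.parse import urlparse

# Illustrative sketch: route Reddit URLs to a dedicated API-backed handler
# instead of the generic crawling infrastructure.
REDDIT_HOSTS = {"reddit.com", "www.reddit.com", "old.reddit.com"}

def pick_handler(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host in REDDIT_HOSTS or host.endswith(".reddit.com"):
        return "reddit_api"   # official Reddit API (billed past free limits)
    return "generic_crawler"  # e.g. Firecrawl for everything else
```

So `pick_handler("https://www.reddit.com/r/figma/...")` routes to the API handler, while a Hacker News or Figma community URL falls through to the generic crawler.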

For Terms of Service, you're right, that is definitely an oversight on our part. We just published both our Terms of Service and Privacy Policy on the website.

On the comparison with GPT-5 and Claude: we believe our prompting, agent orchestration, and other core parts of the product, such as parallel search-result analysis and parallel agents, are real improvements over GPT-5 and Claude alone, and they let Webhound run at much lower cost on significantly smaller models. Our v1, built months ago, was essentially what GPT-5 thinking with web search does today. Since then we've made the explicit choice to prioritize data quality, user controllability, and cost efficiency over latency. So while a frontier model may return faster results and work well for smaller datasets, both we and our users have found Webhound to work better for siloed sources and larger datasets.

Regarding account deletion, that is also a fair point. So far we've had people email us when they want their account deleted, but we will add account deletion ASAP.

Criticism like this helps us continue to hold ourselves to a high standard, so thanks for taking the time to write it up.