Hello everyone, thank you for the intense feedback over the last hour.
I see two main concerns emerging, and I want to be completely transparent:
1. .so Files and IP Protection
The core MCA algorithm ships as a compiled .so to protect the IP while I seek $100k pre-seed funding; this is the "lottery ticket" I need to cash to scale. I did not retrain any model: the system's SOTA performance comes entirely from the proprietary MCA-first Gate logic. Reproducibility is still guaranteed, because you can run the exact binary that produced the 80.1% SOTA results and verify all the logs.
2. Overfit vs. Architectural Logic
All LLM and embedding components are off-the-shelf; the success comes purely from the VAC architecture. MCA is a general solution designed to combat semantic drift in multi-hop, conversational memory. If I were overfitting by tuning boundaries, I would be reporting 95%+ accuracy, not 80.1%. The remaining ~20% of failures are real limitations.
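On the reproducibility point: one practical way to confirm you are running the exact released binary is to compare its SHA-256 hash against a published value before running the benchmark. A minimal sketch (the .so filename here is hypothetical; check the repo for the actual artifact name):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large binaries aren't loaded into RAM at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical filename -- compare the printed digest to the one published in the repo.
# print(sha256_of("mca_core.so"))
```

If the digest matches the published one, any benchmark numbers you reproduce came from the same compiled artifact as the reported results.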
Call to Action: Next Benchmarks
I need your recommendations: what are the toughest long-term conversation benchmarks you know? What else should I test the VAC Memory System on to truly prove its generalizability?
GitHub: https://github.com/vac-architector/VAC-Memory-System
I appreciate the honesty of the HN community and your help in validating my work.