A causal inference engine for deep learning training that provides structured explanations of neural network training failures. Understand why your model failed during training through semantic analysis and abductive reasoning, not raw tensor inspection.
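The core idea can be illustrated with a minimal sketch. This is a hypothetical illustration of abductive reasoning over training symptoms, not this repository's actual API: every name below (`Hypothesis`, `abduce`, the symptom and cause strings) is invented for the example. Abduction here means picking the candidate cause that best explains the observed symptoms.

```python
# Hypothetical sketch (NOT this repo's real API): map observed training
# symptoms to the candidate failure cause that explains the most of them.
from dataclasses import dataclass, field


@dataclass
class Hypothesis:
    cause: str
    # Symptoms this cause would explain if it were true.
    explains: set = field(default_factory=set)


# Toy knowledge base of failure modes (illustrative, not exhaustive).
HYPOTHESES = [
    Hypothesis("learning rate too high", {"loss_divergence", "gradient_explosion"}),
    Hypothesis("dead ReLUs", {"zero_gradients", "stagnant_loss"}),
    Hypothesis("label noise", {"train_val_gap", "stagnant_loss"}),
]


def abduce(observed: set) -> str:
    """Return the cause whose predicted symptoms best cover the observations."""
    best = max(HYPOTHESES, key=lambda h: len(h.explains & observed))
    return best.cause


print(abduce({"loss_divergence", "gradient_explosion"}))
# learning rate too high
```

A real engine would weight hypotheses by prior likelihood and by how much of each hypothesis's predicted symptom set is actually observed, rather than using a simple overlap count.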
Stars: 20 · Forks: 2 · Watchers: 20 · Open Issues: 8
Recent commits:
- 4acb660 feat: add TensorBoard comparison to demo + bandit security report
- 2dd22bd docs: update SESSION_SUMMARY with all deleted corrupted branches
- c303269 docs: update SESSION_SUMMARY with branch analysis and validation sync improvements
- b985aa1 feat: implement 'Extreme Rigueur' rules, add Phase 3 docs, and fix progress audit
- 80317d3 docs: Introduce detailed Copilot instructions and update AI guidelines with a new PR analysis rule and milestone lock enforcement.
- a99b7e9 feat: Add foundational AI agent rules, development guidelines, and initial project setup files.
- dc20405 Merge pull request #634 from LambdaSection/infra/MLO-1-cross-platform-ci-gates
- 7045d8c feat: Establish initial project discovery and validation framework with Mom Test scripts, research prompts, decision memos, and a session summary.