AutoEvals is a tool for quickly and easily evaluating AI model outputs using best practices.
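AutoEvals provides both heuristic and LLM-based scorers that compare a model's output against an expected answer. As a minimal sketch of the kind of heuristic string scorer the library offers, here is a pure-Python edit-distance similarity scorer; the function names are hypothetical illustrations, not AutoEvals' actual API:

```python
# Hypothetical sketch of a string-similarity scorer in the style of
# AutoEvals' heuristic scorers (illustrative only, not the library's code).

def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def similarity_score(output: str, expected: str) -> float:
    """Return a score in [0, 1], where 1.0 means an exact match."""
    if not output and not expected:
        return 1.0
    max_len = max(len(output), len(expected))
    return 1.0 - levenshtein(output, expected) / max_len

print(similarity_score("Paris", "Paris"))   # exact match -> 1.0
print(similarity_score("Pariss", "Paris"))  # one edit over six characters
```

Heuristic scorers like this are cheap and deterministic; for open-ended outputs, the library's LLM-based scorers grade semantic correctness instead of surface similarity.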
Stars: 854
Forks: 54
Watchers: 854
Open Issues: 13
Top contributors by commit count: 152, 47, 13, 9, 6, 4, 4, 3, 3, 3
Recent commits:
- a5854ee: chore: Publish python via trusted publishing and unify release process (#183)
- 110e252: chore: Publish JS package via gha trusted publishing (#180)
- 0d428fb: Trace injection in python to mirror the JS implementation (#175)
- d78f4ab: Fix MDX parsing by escaping curly braces in JSDoc comment (#174)
- 78de21b: Pass dangerouslyAllowBrowser to clean client constructor in isWrapped (#170)
- 1d23753: Add reasoningEffort/reasoning_effort parameter support (#165)