AgentEval is a comprehensive .NET toolkit for AI agent evaluation: tool-usage validation, RAG quality metrics, stochastic evaluation, and model comparison. It is built first for the Microsoft Agent Framework (MAF) and Microsoft.Extensions.AI. What RAGAS, PromptFoo, and DeepEval do for Python, AgentEval does for .NET.
Stars: 78 · Forks: 8 · Watchers: 78 · Open Issues: 7
Recent commits:
- 5dbfb47 fix: update AgentEval package reference to version 0.6.0-beta for validation
- 43d02c1 fix: update AgentEval package reference to version 0.5.4-beta
- 651296c refactor: update conversation evaluation to use IEvaluableAgent and MAFAgentAdapter
- bc1d22f docs: update commercial section to clarify open-source status and remove outdated information
- 6f5095c fix: update repository references from joslat to AgentEvalHQ across documentation and samples
- 127e0d5 ci: re-trigger after fixing Actions permissions
- 224d2bd chore: remove redundant CLI pack step from release workflow and add CODEOWNERS file for review assignments
- 068a066 Remove obsolete CLI tests for ListCommand, MetricSelection, Program, RedTeamCommand, and StochasticFlag
- f2b1759 Add comprehensive tests for CLI commands and evaluation options
- 9fb13b2 Merge pull request #10 from joslat/joslat-maf-upgrade-to-rc2
- 8034350 fix: update naming conventions for original filename and improve error handling in EvalCommand
- bf3d73a fix: update documentation for original filename handling in DirectoryExporter and improve font resolver fallback logic
- ecd6fbd chore: update documentation for directory export format and improve font resolver comments