Found 2 repositories (showing 2)
mohdibrahimaiml
A comprehensive human-in-the-loop evaluation platform for Large Language Models, built for AI alignment and safety research. This Flask-based application enables human evaluators to provide structured feedback on LLM outputs across multiple quality dimensions.
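A structured-feedback endpoint like the one described could look roughly like the sketch below. This is a minimal illustration, not the repository's actual code: the route name `/feedback`, the quality dimensions (`helpfulness`, `honesty`, `harmlessness`), and the in-memory store are all assumptions for the example.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical quality dimensions; the real platform's rubric may differ.
DIMENSIONS = ("helpfulness", "honesty", "harmlessness")
feedback_store = []  # in-memory stand-in for a real database


@app.route("/feedback", methods=["POST"])
def submit_feedback():
    """Accept one evaluator's structured ratings for an LLM output."""
    data = request.get_json()
    record = {dim: data.get(dim) for dim in DIMENSIONS}
    feedback_store.append(record)
    return jsonify({"stored": len(feedback_store)})
```

An evaluator's client would POST a JSON object with one score per dimension; the endpoint echoes back how many records have been stored.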
lzn87591
A triangular multi-agent evaluation skill for Large Language Models, where a Worker, Leader, and Auditor collaboratively assess reasoning quality, factual correctness, and execution reliability through adversarial verification.
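The Worker/Leader/Auditor triangle described above can be sketched as three functions where the Auditor independently re-scores the answer and flags disagreement with the Leader. This is a deterministic stub under assumed role names and thresholds, not the skill's actual implementation; a real system would back each role with an LLM call.

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    score: float      # 0.0-1.0 quality estimate
    rationale: str


def worker(task: str) -> str:
    """Worker: drafts an answer (stubbed for illustration)."""
    return f"draft answer for: {task}"


def leader(task: str, answer: str) -> Verdict:
    """Leader: scores the Worker's answer for reasoning quality."""
    score = 0.9 if task in answer else 0.2
    return Verdict(score, "on-topic" if score > 0.5 else "off-topic")


def auditor(task: str, answer: str, verdict: Verdict) -> Verdict:
    """Auditor: adversarially re-checks the Leader's verdict."""
    independent = 0.9 if task in answer else 0.2
    if abs(independent - verdict.score) > 0.3:
        # Disagreement between Auditor and Leader: take the pessimistic
        # score and flag the case for human review.
        return Verdict(min(independent, verdict.score), "disagreement: flag for review")
    return Verdict((independent + verdict.score) / 2, "verdicts agree")


def evaluate(task: str) -> Verdict:
    """Run the full triangle: Worker answers, Leader scores, Auditor verifies."""
    answer = worker(task)
    return auditor(task, answer, leader(task, answer))
```

The adversarial step lives in `auditor`: it never trusts the Leader's score outright, recomputing its own and escalating when the two diverge.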