Found 28 repositories (showing 28)
mlabonne: Automatically evaluate your LLMs in Google Colab
monkeydrupy: Automatically evaluate your LLMs in Google Colab
godfreyjason: Automatically evaluate your LLMs in Google Colab
khoroumenate: Automatically evaluate your LLMs in Google Colab
lucasjiang-aoi: Automatically evaluate your LLMs in Google Colab
NineIT420: No description available
civillibertarian-stressincontinence617: 🛠️ Simplify LLM evaluation with our Colab notebook; just name your model, choose a benchmark, and run for automated insights.
frontsail-ai: A minimal, vitest-native evals library for LLM applications. Built-in metrics, G-Eval LLM-as-judge, autoevals integration, and pretty console output.
Hamxea: LLM_AutoEval
johniebaptora: Automatically evaluate your LLMs in Google Colab
puntasmiling: Automatically evaluate your LLMs in Google Colab
iffystrayer: No description available
thedoylan: Automatically evaluate your LLMs in Google Colab
topplusskill: Automatically evaluate your LLMs in Google Colab
incredibledevpy: Automatically evaluate your LLMs in Google Colab
monk1337: Auto eval Multimedqa
LewisDeBenoisIV: No description available
BaddestNinja: Automatically evaluate your LLMs in Google Colab
plaban1981: result of auto evaluation for the Mixture of models created
sebutz: Automatically evaluate your LLMs in Google Colab
apsingmax: Automatically evaluate your LLMs in Google Colab
rairegalon: Automatically evaluate your LLMs in Google Colab
PainAsFuel: Automatically evaluate your LLMs in Google Colab
MagnusLabonne: Automatically evaluate your LLMs in Google Colab
sailplatform: No description available
tevfik94: An Automated LLM-as-a-Judge Framework specialized for Arabic & English evaluation using LLM
karinaolaru: No description available
one-aalam: TypeScript-first LLM evaluation library built on Autoevals with Vitest integration