Found 4 repositories (showing 4)
Automatic Evaluation Code for Measuring Dialogue Generation Model Performance
benjaminchang7
Leveraging Large Language Models for text generation and dialogue systems, with advanced evaluation and retrieval techniques.
The project investigates follow-up question generation in medical dialogues by fine-tuning large language models on synthetic, ICF-grounded data. This repository contains the script for generating the synthetic training data, along with the training data itself, the evaluation data, the evaluation scripts, and the experiment outputs.
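Evaluation scripts in setups like this commonly score generated follow-up questions against reference questions with automatic metrics. The following is a minimal sketch, assuming the Hugging Face evaluate library and ROUGE as the metric; the repository's actual metrics and data are not specified here, and the example questions are hypothetical.

import evaluate

# Load an automatic overlap metric; ROUGE is an assumed choice, not necessarily
# what the repository's evaluation scripts use.
rouge = evaluate.load("rouge")

# Hypothetical generated follow-up questions and gold reference questions.
predictions = ["Does the pain get worse when you climb stairs?"]
references = ["Is the pain worse when walking up stairs?"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum scores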
sreetejadusi
Fine-tuning and evaluating large language models (LLaMA-2 7B and Mistral 7B) using LoRA on the Empathetic Dialogues dataset, complete with quantization, auto-resume, generation, and metrics visualization.
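For context, LoRA fine-tuning of a quantized 7B model usually follows the pattern below. This is a minimal sketch assuming the Hugging Face transformers, peft, and bitsandbytes stack, with an assumed base model and illustrative hyperparameters rather than the repository's actual configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "mistralai/Mistral-7B-v0.1"  # assumed base model, not confirmed by the repo

# Load the base model in 4-bit precision to keep memory usage low.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)

# Attach LoRA adapters; only these small low-rank matrices are trained.
lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update (illustrative)
    lora_alpha=32,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms only the adapters are trainable

The wrapped model can then be passed to a standard Trainer loop on the Empathetic Dialogues data; checkpointing that loop is what makes auto-resume possible.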