This repo explores multi-label emotion classification using pretrained encoders (RoBERTa, DistilBERT) and LoRA-tuned decoder models (Gemma, Llama-3.2-1B, Stella). It compares dense classification heads against generative approaches, with both optimized for the F1-Macro score. Key techniques include class weighting and 10-fold stratified cross-validation to improve performance on sparse classes.
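The two techniques named above can be sketched briefly. This is a minimal illustration (not the repo's actual code), assuming scikit-learn for the metric; the toy arrays and weight formula are hypothetical stand-ins for the repo's dataset and loss setup. Macro-F1 averages per-class F1 scores unweighted, so rare emotion classes count as much as common ones, and positive-class weights up-weight rare labels in a BCE-style loss:

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical toy multi-label targets: 4 samples, 3 emotion classes.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 0],
                   [0, 0, 1]])
y_pred = np.array([[1, 0, 1],
                   [0, 1, 1],
                   [1, 0, 0],
                   [0, 0, 0]])

# Macro F1: per-class F1 averaged with equal weight per class,
# so sparse classes influence the score as much as frequent ones.
macro_f1 = f1_score(y_true, y_pred, average="macro")

# One common class-weighting scheme (e.g. for BCEWithLogitsLoss's
# pos_weight): ratio of negatives to positives per label.
pos_counts = y_true.sum(axis=0)
neg_counts = len(y_true) - pos_counts
pos_weight = neg_counts / np.maximum(pos_counts, 1)

print(macro_f1, pos_weight)
```

Class 1 appears only once here, so it receives the largest weight; the same idea, applied per emotion label over the full training set, counteracts label sparsity.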
Stars: 2 · Forks: 1 · Watchers: 2 · Open Issues: 0
64 commits. Latest:

- ec91bac — Rename llama_3_2_3B_it_Multilabel__emotion_generation.ipynb to instruction_tuned/Llama_3.2_3B_it.ipynb
- 0d8e0e3 — Rename QWEN_14B_Instruct_multilabel_generation_emotion.ipynb to instruction_tuned/QWEN_14B_Instruct.ipynb