Found 10 repositories (showing 10)
hiyouga
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
ArtificialZeng
No description available
ArtificialZeng
ChatGLM-Efficient-Tuning-New-explained
ZeyuanGuo
No description available
LKS9090
We build an intelligent analysis system based on ChatGLM-6B for e-commerce comment mining. It realizes multi-label classification and SPO extraction with lightweight fine-tuning and efficient inference, helping merchants improve feedback analysis and decision-making efficiency.
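SPO (subject–predicate–object) extraction turns free-text comments into structured triples. A minimal illustration of the target data structure, with a hypothetical comment and schema not taken from the repository:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SPOTriple:
    """One subject-predicate-object fact extracted from a comment."""
    subject: str
    predicate: str
    object: str


# Hypothetical e-commerce comment and the triples an extractor might emit:
comment = "The battery lasts long but the screen scratches easily."
triples = [
    SPOTriple("battery", "has_property", "long-lasting"),
    SPOTriple("screen", "has_property", "scratches easily"),
]
for t in triples:
    print(f"({t.subject}, {t.predicate}, {t.object})")
```

Downstream, triples like these can be aggregated per product aspect to support the feedback analysis the description mentions.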
stoneisstar
Test error report: File "E:\UEDemo\UnrealDemo2023\ChatGLM\ChatGLM2_6B_main\ChatGLM_Efficient_Tuning_main\src\glmtuner\tuner\core\adapter.py", line 65, in init_adapter: assert os.path.exists(os.path.join(model_args.checkpoint_dir[0], WEIGHTS_NAME)) — AssertionError: Provided path (path_checkpointLittle) does not contain a LoRA weight.
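The assertion above fires when the supplied checkpoint directory lacks the saved LoRA weight file. A minimal sketch of the kind of check that raises this error, assuming WEIGHTS_NAME is the LoRA weight filename (the actual adapter.py and constant may differ):

```python
import os

# Assumed filename for the saved LoRA adapter; the real constant may differ.
WEIGHTS_NAME = "adapter_model.bin"


def check_lora_checkpoint(checkpoint_dirs):
    """Verify each checkpoint directory actually contains a LoRA weight file,
    mirroring the assertion reported in the error above."""
    for ckpt_dir in checkpoint_dirs:
        weight_path = os.path.join(ckpt_dir, WEIGHTS_NAME)
        if not os.path.exists(weight_path):
            raise AssertionError(
                f"Provided path ({ckpt_dir}) does not contain a LoRA weight."
            )
    return True
```

In practice this error usually means the path passed as the checkpoint directory points somewhere other than the folder where the adapter weights were saved.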
PKUwangchen
No description available
pythonsvc
No description available
wu55246842
Cloned from: https://github.com/hiyouga/ChatGLM-Efficient-Tuning.git
DDDdxy527
Parameter-efficient fine-tuning for ChatGLM-6B using P-Tuning v2 and LoRA. Reduces trainable parameters by 99% while maintaining performance. Supports single GPU training.
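The "99% fewer trainable parameters" figure follows from LoRA's low-rank factorization: instead of updating a full d×k weight matrix, only two rank-r factors (d×r and r×k) are trained. A sketch of the arithmetic, using a hidden size of 4096 and a LoRA rank of 8 as illustrative values (the repo's actual rank setting is not stated here):

```python
def lora_trainable_fraction(d: int, k: int, r: int) -> float:
    """Fraction of parameters trained when a d x k weight matrix is
    replaced by a rank-r LoRA update (A: d x r, B: r x k)."""
    full_params = d * k          # parameters in the frozen weight
    lora_params = r * (d + k)    # parameters in the two low-rank factors
    return lora_params / full_params


# Illustrative: a 4096 x 4096 projection with LoRA rank 8.
frac = lora_trainable_fraction(4096, 4096, 8)
print(f"trainable fraction: {frac:.4%}")  # ~0.39%, i.e. >99% reduction
```

The fraction shrinks further as the weight matrix grows, which is why low ranks remain effective on large models and why single-GPU training becomes feasible.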
All 10 repositories loaded