modelscope/ms-swift (Python)

ms-swift

Use PEFT or full-parameter training to run CPT/SFT/DPO/GRPO on 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, Qwen3-Omni, InternVL3.5, Ovis2.5, GLM4.5v, Llava, Phi4, ...) (AAAI 2025).

Score: 88.0/100
Stars: 13.0K · Forks: 1.2K

Similar Projects

LlamaFactory

Score: 92

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

Python · Stars: 68.0K

unsloth

Score: 93

Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM.

Python · Stars: 53.5K

Langchain-Chatchat

Score: 72

Langchain-Chatchat (formerly langchain-ChatGLM): a local-knowledge-base RAG and Agent application built with Langchain on LLMs such as ChatGLM, Qwen, and Llama.

Python · Stars: 37.4K

peft

Score: 91

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

Python · Stars: 20.7K
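Most of the projects listed here (ms-swift, LlamaFactory, unsloth, PEFT) build on low-rank adapters (LoRA) for parameter-efficient fine-tuning. The core idea can be sketched in plain NumPy; this is a minimal illustration of the math, not PEFT's actual API, and the sizes chosen are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 64, 8, 16              # hidden size, LoRA rank, scaling factor
W = rng.normal(size=(d, d))          # frozen pretrained weight (not updated)
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

def lora_forward(x):
    # h = W x + (alpha / r) * B (A x)
    # Because B starts at zero, the adapted model initially matches the base model.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d)
assert np.allclose(lora_forward(x), W @ x)   # identical to the base model at init

# Only A and B are trained: 2*d*r parameters instead of d*d for full fine-tuning.
trainable, full = 2 * d * r, d * d
print(trainable, "trainable params vs", full, "for full fine-tuning")
```

The zero-initialized `B` is what makes LoRA safe to bolt onto a pretrained model: training starts from the base model's behavior and only the small `A`/`B` matrices receive gradients.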