Pinned
- Dao-AILab/flash-attention: Fast and memory-efficient exact attention
- IDEA-CCNL/Fengshenbang-LM: Fengshenbang-LM (封神榜大模型) is an open-source large model ecosystem led by the Cognitive Computing and Natural Language Research Center of the IDEA Research Institute, serving as infrastructure for Chinese AIGC and cognitive intelligence.
- MagiAttention (forked from SandAI-org/MagiAttention): A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training (Python)
- slime (forked from THUDM/slime): slime is an LLM post-training framework for RL scaling (Python)



