thu-ml/SageAttention · Cuda

SageAttention

[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves a 2-5x speedup over FlashAttention without losing end-to-end metrics across language, image, and video models.

Score: 62.4/100
Stars: 3.3K · Forks: 399
View on GitHub · Homepage
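
The speedup comes from swapping a standard attention kernel for SageAttention's quantized one, with no retraining of the surrounding model. A minimal usage sketch, assuming the sageattention package is installed (pip install sageattention), a CUDA GPU with fp16 support, and that this release of sageattn accepts the tensor_layout and is_causal keywords shown in the repo's README; check the README for the exact signature of your version:

import torch
import torch.nn.functional as F
from sageattention import sageattn

# q/k/v in (batch, heads, seq_len, head_dim) layout ("HND"), fp16 on GPU.
q = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Quantized attention kernel, used as a drop-in replacement for the
# stock scaled-dot-product-attention call.
out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)

# Reference output from PyTorch's built-in kernel, for a sanity check.
ref = F.scaled_dot_product_attention(q, k, v, is_causal=False)
print("max abs diff vs SDPA:", (out - ref).abs().max().item())

Because the quantization happens inside the kernel, no calibration or fine-tuning step is needed, which is what the "without losing end-to-end metrics" claim refers to.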

Similar Projects

SpargeAttn

Score: 57

[ICML2025] SpargeAttention: a training-free sparse attention method that accelerates inference for any model.

Cuda · Stars: 982

how-to-optim-algorithm-in-cuda

Score: 60

How to optimize various algorithms in CUDA.

Cuda · Stars: 2.9K

raft

Score: 79

RAFT contains fundamental, widely used algorithms and primitives for machine learning and information retrieval. The algorithms are CUDA-accelerated and serve as building blocks for writing high-performance applications more easily.

Cuda · Stars: 996

cuvs

Score: 77

cuVS - a library for vector search and clustering on the GPU

Cuda · Stars: 736