Notice: This resource is provided by a third-party author. Please review the code with AI tools or manually before use to ensure security and compatibility.
mirage-project/mirage (Cuda)

Mirage Persistent Kernel: Compiling LLMs into a MegaKernel

Score: 72.5/100
Stars: 2.2K · Forks: 198

Similar Projects

SageAttention

Score: 62

[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves speedup of 2-5x compared to FlashAttention, without losing end-to-end metrics across language, image, and video models.

Cuda · 3.3K stars

how-to-optim-algorithm-in-cuda

Score: 60

How to optimize some algorithms in CUDA.

Cuda · 2.9K stars

rtp-llm

Score: 80

RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications.

Cuda · 1.1K stars

raft

Score: 79

RAFT contains fundamental, widely used algorithms and primitives for machine learning and information retrieval. The algorithms are CUDA-accelerated and serve as building blocks for writing high-performance applications more easily.

Cuda · 996 stars