Notice: This resource is provided by a third-party author. Please review the code with AI tools or manually before use to ensure security and compatibility.
Python · NVIDIA/TensorRT-LLM

TensorRT-LLM

TensorRT LLM provides an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also contains components for building Python and C++ runtimes that orchestrate inference execution in a performant way.
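As a minimal sketch of that Python API, adapted from the project's quickstart: the example below loads a model and runs batched generation. The model id, prompts, and sampling values are illustrative placeholders, and actual usage may vary by version.

```python
from tensorrt_llm import LLM, SamplingParams

def main():
    # Prompts to batch through the engine.
    prompts = [
        "Hello, my name is",
        "The capital of France is",
    ]
    # Sampling settings; the values here are illustrative.
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # Load from a Hugging Face model id or a local checkpoint path
    # (placeholder model; substitute your own).
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    # Generate completions for all prompts in one call.
    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")

if __name__ == "__main__":
    main()
```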

Score: 89.2/100
Stars: 13.0K · Forks: 2.2K
View on GitHub · Homepage →

Similar Projects

vllm (Score: 93)
A high-throughput and memory-efficient inference and serving engine for LLMs
Python · 72.4K stars

sglang (Score: 90)
SGLang is a high-performance serving framework for large language models and multimodal models.
Python · 24.2K stars

flashinfer (Score: 84)
FlashInfer: Kernel Library for LLM Serving
Python · 5.1K stars

LMCache (Score: 87)
Supercharge Your LLM with the Fastest KV Cache Layer
Python · 7.6K stars