Notice: This resource is provided by a third-party author. Please review the code with AI tools or manually before use to ensure security and compatibility.
C++ · tensorflow/serving

serving

A flexible, high-performance serving system for machine learning models

91.6/100
Stars: 6.3K · Forks: 2.2K
View on GitHub · Homepage →

Similar Projects

MNN

94

MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI.

C++ · 15.0K

rocketride-server

78

High-performance AI pipeline engine with a C++ core and 50+ Python-extensible nodes. Build, debug, and scale LLM workflows with 13+ model providers, 8+ vector databases, and agent orchestration, all from your IDE. Includes VS Code extension, TypeScript/Python SDKs, and Docker deployment.

C++ · 1.9K

Serving

66

A flexible, high-performance serving framework for machine learning models (a service deployment framework for PaddlePaddle 『飞桨』)

C++ · 921

model_server

85

A scalable inference server for models optimized with OpenVINO™

C++ · 859