Python · jundot/omlx

omlx

LLM inference server with continuous batching and SSD caching for Apple Silicon, managed from the macOS menu bar. A hedged client sketch follows below.

Score: 83.7/100
Stars: 1.3K · Forks: 98
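As a quick smoke test, here is a minimal sketch of querying a local server like omlx, assuming it exposes an OpenAI-compatible /v1/chat/completions endpoint (common for local LLM servers, but not confirmed for omlx); the port, path, and model id are hypothetical.

```python
# Hypothetical sketch: assumes omlx serves an OpenAI-compatible
# /v1/chat/completions endpoint. Port, path, and model id are guesses,
# not confirmed omlx defaults; check the menu-bar app for real values.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # hypothetical address
    json={
        "model": "local-model",  # placeholder model id
        "messages": [{"role": "user", "content": "Say hello."}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```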

Similar Projects

vllm-mlx

Score: 62

OpenAI- and Anthropic-compatible server for Apple Silicon. Run LLMs and vision-language models (Llama, Qwen-VL, LLaVA) with continuous batching, MCP tool calling, and multimodal support. Native MLX backend, 400+ tok/s. Works with Claude Code. A minimal client sketch follows.

Python · 545 stars
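Since the project advertises OpenAI compatibility, a client sketch using the official openai package should look like this; the port and model id are assumptions, so check the server's startup output for the real values.

```python
# Minimal sketch against an OpenAI-compatible local server such as
# vllm-mlx. The base_url port and the model id are assumptions; the
# server prints the real values when it starts.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="mlx-community/Llama-3.2-3B-Instruct-4bit",  # assumed model id
    messages=[{"role": "user", "content": "Summarize MLX in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```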

mlx-vlm

Score: 81

MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. A usage sketch follows below.

Python · 2.3K stars
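A hedged sketch of the Python API: load and generate are the package's documented entry points, but argument details have shifted across releases, so treat the ordering, the model id, and the image path here as illustrative and consult the mlx-vlm README.

```python
# Illustrative sketch of mlx-vlm's Python API. load()/generate() are
# the documented entry points, but signatures vary by release; check
# the mlx-vlm README for the current form.
from mlx_vlm import load, generate

# Assumed quantized VLM from the mlx-community hub
model, processor = load("mlx-community/Qwen2-VL-2B-Instruct-4bit")

output = generate(
    model,
    processor,
    "Describe this image.",   # prompt
    ["path/to/image.jpg"],    # hypothetical local image path
    max_tokens=100,
)
print(output)
```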

gorilla

Score: 86

Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls). A generic tool-schema sketch follows.

Python · 12.7K stars
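For context, the function-calling pattern Gorilla trains and benchmarks looks like this in the common OpenAI-style tools format; the tool name and schema below are invented for illustration, not taken from the Gorilla repo.

```python
# Generic illustration of the function-calling pattern Gorilla targets:
# the caller advertises a JSON schema per tool, and the model answers
# with a structured call. Names are made up for the example.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# A well-formed tool call from the model then names the function and
# passes JSON-encoded arguments:
tool_call = {"name": "get_weather", "arguments": '{"city": "Berlin"}'}
```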

json_repair

Score: 85

A Python module to repair invalid JSON from LLMs. A usage example follows.

Python · 4.6K stars
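repair_json is the package's core entry point; a short example of fixing a typical LLM output glitch (single quotes, unclosed brace):

```python
# json_repair exposes repair_json(), which takes a malformed JSON string
# and returns a valid one (it can also parse straight to a Python object
# via json_repair.loads()).
from json_repair import repair_json

broken = "{'model': 'llama', 'temperature': 0.7"  # single quotes, unclosed brace
fixed = repair_json(broken)
print(fixed)  # expected: {"model": "llama", "temperature": 0.7}
```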