gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling
Python - Updated Dec 4, 2025
A High-Performance LLM Inference Engine with vLLM-Style Continuous Batching
OpenAI-compatible server with continuous batching for MLX on Apple Silicon
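The repositories above all build on continuous (iteration-level) batching. As a rough orientation, the sketch below shows the core scheduling loop in plain Python: new requests are admitted into the running batch at every decode step, and finished requests free their slot immediately. All names here (Request, model_step, serve, max_batch_size) are hypothetical and illustrative only; production engines such as vLLM layer paged KV caches, preemption, and token-budget throttling on top of this idea.

```python
# Minimal illustrative sketch of continuous (iteration-level) batching.
# Hypothetical names throughout; not the API of any repository listed above.
from dataclasses import dataclass, field
from collections import deque
import random

@dataclass
class Request:
    rid: int
    max_new_tokens: int
    generated: list = field(default_factory=list)

def model_step(batch):
    # Stand-in for one forward pass producing one token per running request.
    return {r.rid: random.randint(0, 31999) for r in batch}

def serve(requests, max_batch_size=8):
    waiting = deque(requests)
    running, finished = [], []
    while waiting or running:
        # Admit new requests at every iteration, not only when the batch drains:
        # this is what distinguishes continuous batching from static batching.
        while waiting and len(running) < max_batch_size:
            running.append(waiting.popleft())
        tokens = model_step(running)
        still_running = []
        for r in running:
            r.generated.append(tokens[r.rid])
            # A finished request leaves immediately, freeing its slot for the queue.
            (finished if len(r.generated) >= r.max_new_tokens else still_running).append(r)
        running = still_running
    return finished

if __name__ == "__main__":
    reqs = [Request(i, max_new_tokens=random.randint(2, 6)) for i in range(20)]
    print(f"completed {len(serve(reqs))} requests")
```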