vllm v0.15.1 (PyPI, Python)
A high-throughput and memory-efficient inference and serving engine for LLMs
by vLLM Team, released Feb 5, 2026
74,348 GitHub Stars · 0 reviews

Scores: Security 45 · Quality 22 · Maintenance 35 · Overall 36
Community Reviews
No reviews yet
Dependencies
aiohttp 3.13.3
anthropic 0.86.0
blake3 1.0.8
cachetools 7.0.5
cbor2 5.9.0
cloudpickle 3.1.2
compressed-tensors 0.13.0
depyf 0.20.0
diskcache 5.6.3
einops 0.8.2
fastapi 0.135.2
filelock 3.25.2
flashinfer-python 0.6.1
gguf 0.18.0
grpcio 1.78.1
grpcio-reflection 1.78.1
ijson 3.5.0
lark 1.2.2
llguidance 1.3.0
lm-format-enforcer 0.11.3
and 38 more