Commit cc5d3b7

fix comment

Signed-off-by: Sage Moore <sage@neuralmagic.com>

1 parent: 0f2d197

File tree

1 file changed: +5 −4 lines

vllm/model_executor/layers/fused_moe/shared_fused_moe.py

Lines changed: 5 additions & 4 deletions
@@ -28,10 +28,11 @@ def __init__(
         super().__init__(**kwargs)
         self._shared_experts = shared_experts
 
-        # Disable shared expert overlap if we are using eplb or not using
-        # flashinfer + DP since there is nothing to be gained in this case.
-        # Disabling the overlap optimization also prevents the shared experts
-        # from being hidden from torch.compile.
+        # Disable shared expert overlap if we are using eplb, because there
+        # are correctness issues, or not using flashinfer + DP since there
+        # is nothing to be gained in this case. Disabling the overlap
+        # optimization also prevents the shared experts from being hidden
+        # from torch.compile.
         self.use_overlapped = (
             use_overlapped
             and not (
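
For readers skimming the diff, here is a minimal, self-contained sketch of the gating the updated comment describes. It is not the exact code from this file; the flag names enable_eplb, use_flashinfer, and dp_size are assumptions for illustration.

def should_overlap_shared_experts(
    use_overlapped: bool,
    enable_eplb: bool,
    use_flashinfer: bool,
    dp_size: int,
) -> bool:
    # Overlap is disabled when EPLB is enabled (correctness issues), or
    # when we are not running flashinfer with data parallelism (DP),
    # since there is nothing to be gained in that case.
    disable = enable_eplb or not (use_flashinfer and dp_size > 1)
    return use_overlapped and not disable

# Overlap survives only with flashinfer + DP and EPLB off.
assert should_overlap_shared_experts(True, False, True, 2)
assert not should_overlap_shared_experts(True, True, True, 2)    # eplb enabled
assert not should_overlap_shared_experts(True, False, False, 2)  # no flashinfer
assert not should_overlap_shared_experts(True, False, True, 1)   # no DP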
