
Commit cf88eef

Update vllm/model_executor/layers/fused_moe/shared_fused_moe.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Sage Moore <sagemoore@utexas.edu>
Signed-off-by: Sage Moore <sage@neuralmagic.com>
1 parent cc5d3b7 commit cf88eef

File tree

1 file changed (+2 −2 lines)


vllm/model_executor/layers/fused_moe/shared_fused_moe.py

Lines changed: 2 additions & 2 deletions
@@ -28,8 +28,8 @@ def __init__(
         super().__init__(**kwargs)
         self._shared_experts = shared_experts
 
-        # Disable shared expert overlap if we are using eplb, because there
-        # are correctness issues, or not using flashinfer + DP since there
+        # Disable shared expert overlap if we are using eplb, because of
+        # correctness issues, or if using flashinfer with DP, since there
         # is nothing to be gained in this case. Disabling the overlap
         # optimization also prevents the shared experts from being hidden
         # from torch.compile.
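
For context, the comment being reworded documents when the shared-expert overlap optimization gets turned off. The snippet below is a minimal sketch of how such a gate might be expressed; it is not the actual SharedFusedMoE implementation, and the class name SharedFusedMoESketch plus the parameters enable_eplb, use_flashinfer_dp, and use_overlapped are illustrative assumptions.

from typing import Optional

import torch.nn as nn


class SharedFusedMoESketch(nn.Module):
    """Illustrative sketch only; names and condition are assumptions."""

    def __init__(
        self,
        shared_experts: Optional[nn.Module],
        enable_eplb: bool = False,
        use_flashinfer_dp: bool = False,
        use_overlapped: bool = True,
    ):
        super().__init__()
        self._shared_experts = shared_experts
        # Keep the overlap optimization only when EPLB is off (correctness),
        # flashinfer + DP is not in use (no benefit), and there are actually
        # shared experts to overlap with.
        self.use_overlapped = (
            use_overlapped
            and not (enable_eplb or use_flashinfer_dp)
            and shared_experts is not None
        )

Instantiating the sketch with enable_eplb=True or use_flashinfer_dp=True leaves use_overlapped as False, which is the behavior the reworded comment describes.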

0 commit comments
