
Support EP mcore import for TE Spec and Fix mamba moe config #1342

Open
jenchen13 wants to merge 3 commits into main from jennifchen/te_ep_import

Conversation

Contributor

@jenchen13 jenchen13 commented Apr 24, 2026

What does this PR do?

Type of change: Bug fix

  • Enable EP (expert parallelism) import from HF to MCore when using the TE spec
  • Fix a bug in the mamba MoE config that does not skip attention layers properly in MCore (MCore uses different names for attention layers than HF); a small naming sketch follows below
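
A minimal illustration of that naming mismatch (the module names below are representative examples, and whether the quantizer config uses fnmatch-style wildcard matching internally is an assumption here; the wildcard patterns themselves come from the config change in this PR):

from fnmatch import fnmatch

# Representative HF-style and MCore-style attention module names (illustrative only).
hf_name = "model.layers.0.self_attn.q_proj"
mcore_name = "decoder.layers.0.self_attention.linear_qkv"

hf_patterns = ["*q_proj*", "*k_proj*", "*v_proj*", "*o_proj*"]
mcore_patterns = ["*self_attention.linear_qkv*", "*self_attention.linear_proj*"]

print(any(fnmatch(hf_name, p) for p in hf_patterns))        # True
print(any(fnmatch(mcore_name, p) for p in hf_patterns))     # False, so attention was not skipped
print(any(fnmatch(mcore_name, p) for p in mcore_patterns))  # True with the added MCore patterns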

Usage

# In Megatron-LM/examples/post_training/modelopt
 MLM_EXTRA_ARGS="--export-default-te-spec --trust-remote-code --moe-router-dtype fp32" EP=4 HF_MODEL_CKPT=</path/to/hf> MLM_MODEL_SAVE=<save/path> ./convert.sh nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16

Testing

Before your PR is "Ready for review"

Make sure you read and follow Contributor guidelines and your commits are signed (git commit -s -S).

Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded trust_remote_code=True, torch.load(..., weights_only=False), pickle, etc.).

  • Is this change backward compatible?: ✅ / ❌ / N/A
  • If you copied code from any other sources or added a new PIP dependency, did you follow guidance in CONTRIBUTING.md: ✅ / ❌ / N/A
  • Did you write any new necessary tests?: ✅ / ❌ / N/A
  • Did you update Changelog?: ✅ / ❌ / N/A

Additional Information

Summary by CodeRabbit

  • Bug Fixes

    • Fixed expert-slice assignment so each expert-parallel rank imports the correct expert slice.
    • Improved recognition of pipeline-parallel layer indices in submodule names.
  • Improvements

    • Relaxed constraints between local and global expert counts for grouped-local-expert imports.
    • Extended Mamba-MoE quantizer config to cover additional attention naming patterns.
    • Exporter now accepts an additional hybrid model type for export when available.

Signed-off-by: Jennifer Chen <jennifchen@nvidia.com>
@jenchen13 jenchen13 requested review from a team as code owners April 24, 2026 19:34
@jenchen13 jenchen13 requested a review from jingyu-ml April 24, 2026 19:34
Contributor

coderabbitai Bot commented Apr 24, 2026

📝 Walkthrough

Remaps safetensor global expert indices into TEGroupedMLP local weight slots per expert-parallel rank, requires the global expert count to be divisible by the local expert count, relaxes layer-index detection for PP submodule names, extends the disabled-quantizer patterns, and accepts Megatron HybridModel in exporter input validation.

Changes

Cohort / File(s) | Summary

MoE Expert Import Logic (modelopt/torch/export/plugins/megatron_importer.py)
_grouped_mlp_merging now iterates local slots and maps global_expert_id = init_expert_id + local_id when loading safetensor keys into weight{local_id}. _import_transformer_layer computes init_index from get_expert_model_parallel_rank() * num_local_experts and enforces that the total number of global experts is divisible by the number of local experts.

Exporter model acceptance (modelopt/torch/export/unified_export_megatron.py)
Conditionally imports HybridModel (falling back to MambaModel) and updates constructor validation to accept HybridModel instances alongside the existing allowed model types (see the sketch after this table).

Quantizer configuration (modelopt/torch/quantization/config.py)
Adds two disabled Mamba-MoE quantizer patterns for MCore-style attention names, *self_attention.linear_qkv* and *self_attention.linear_proj*; preserves the HF-style *q_proj*, *k_proj*, *v_proj*, *o_proj* entries and clarifies comments.

PP layer-index handling (modelopt/torch/distill/plugins/megatron.py)
_adjust_layer_index_for_pp now recognizes numeric layer indices appearing as the final token in submodule names (a dot followed by a number at end-of-string), in addition to the prior patterns.
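
A minimal sketch of the conditional-import pattern described for the exporter (the import paths are assumptions for illustration, not necessarily the exporter's actual imports):

# Hypothetical sketch of a guarded HybridModel import with a MambaModel fallback.
from megatron.core.models.mamba import MambaModel  # assumed import path

try:
    from megatron.core.models.hybrid import HybridModel  # assumed import path
except ImportError:
    HybridModel = MambaModel  # fall back so the isinstance check below still works

def _validate_model(model):
    # Accept HybridModel instances when the class is available, alongside MambaModel.
    if not isinstance(model, (MambaModel, HybridModel)):
        raise ValueError(f"Unsupported model type: {type(model).__name__}")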

Sequence Diagram(s)

sequenceDiagram
    participant Rank as ExpertParallelRank
    participant Importer as MegatronImporter
    participant Store as SafeTensorStore
    participant MLP as TEGroupedMLP

    Rank->>Importer: compute init_index = get_expert_model_parallel_rank() * num_local_experts
    Importer->>Store: for local_id in 0..num_local_experts-1 load key for global_expert_id = init_index + local_id
    Store-->>Importer: return expert weights (global_expert_id)
    Importer->>MLP: write expert weights into `weight{local_id}` slots
    Note right of MLP: requires total_global_experts % num_local_experts == 0
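
To make the index math concrete, here is a small standalone sketch (the expert counts are made up for the example and are not taken from the PR):

# Standalone illustration of the expert-slice math shown in the diagram above.
num_global_experts = 8
ep_size = 4
num_local_experts = num_global_experts // ep_size  # 2 local experts per EP rank

for ep_rank in range(ep_size):
    init_index = ep_rank * num_local_experts
    for local_id in range(num_local_experts):
        global_expert_id = init_index + local_id
        # e.g. rank 2 loads global experts 4 and 5 into weight0 and weight1
        print(f"rank {ep_rank}: weight{local_id} <- global expert {global_expert_id}")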

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

Check name | Status | Explanation / Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 50.00%, which is insufficient; the required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
Title check | ❓ Inconclusive | The title partially addresses the main changes (mamba moe config fix, EP mcore import) but is unclear and contains what appears to be a typo ('TE Spec' with extra space), making it vague about the full scope. Clarify the title to be more specific and readable, e.g., 'Fix mamba MoE config for mcore and support EP import with TE spec', and correct spacing issues.
✅ Passed checks (4 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled.
Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request.
Security Anti-Patterns | ✅ Passed | No security anti-patterns detected in modified files. No torch.load with weights_only=False, numpy.load with allow_pickle=True, hardcoded trust_remote_code=True, eval/exec on external input, nosec comments, or non-permissive dependencies found.


Contributor

github-actions Bot commented Apr 24, 2026

PR Preview Action v1.8.1


🚀 View preview at
https://NVIDIA.github.io/Model-Optimizer/pr-preview/pr-1342/

Built to branch gh-pages at 2026-04-24 20:44 UTC.
Preview will be ready when the GitHub Pages deployment is complete.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: f87a3cc7-689f-4332-a931-506c6d7f0513

📥 Commits

Reviewing files that changed from the base of the PR and between 5887410 and 4fa630a.

📒 Files selected for processing (2)
  • modelopt/torch/export/plugins/megatron_importer.py
  • modelopt/torch/quantization/config.py

Comment on lines +662 to +668
assert num_global_experts % num_local_experts == 0, (
    "num_global_experts must be divisible by num_local_experts "
    "during MoE import"
)
init_index = 0
# Each EP rank owns a contiguous slice of global experts:
# [ep_rank * num_local_experts, (ep_rank + 1) * num_local_experts).
init_index = get_expert_model_parallel_rank() * num_local_experts
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n modelopt/torch/export/plugins/megatron_importer.py | sed -n '650,680p'

Repository: NVIDIA/Model-Optimizer

Length of output: 2049


🏁 Script executed:

# Check imports and function definitions
cat -n modelopt/torch/export/plugins/megatron_importer.py | head -50

Repository: NVIDIA/Model-Optimizer

Length of output: 2174


🏁 Script executed:

# Search for where these functions are defined or imported
rg "get_expert_model_parallel_rank|get_expert_model_parallel_world_size" modelopt/torch/export/plugins/megatron_importer.py -B 2 -A 2

Repository: NVIDIA/Model-Optimizer

Length of output: 650


Replace assert with explicit EP-topology validation.

Line 662 uses assert for runtime validation, which Python removes when executed with optimization flags (-O). Additionally, divisibility alone (num_global_experts % num_local_experts == 0) does not guarantee correct expert mapping. The code distributes experts by EP rank using the formula [ep_rank * num_local_experts, (ep_rank + 1) * num_local_experts), which requires num_global_experts == num_local_experts * ep_size. Without this constraint, mismatches between expert count and EP topology can silently produce incorrect indexing.
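
To make the gap concrete, here is a small standalone sketch (the 16/4/2 numbers echo the duplicate comment further down and are not taken from any real configuration) where the divisibility check passes but half of the global experts are never imported:

# Standalone illustration: divisibility holds, but the EP topology is mismatched.
num_global_experts = 16
num_local_experts = 4
ep_size = 2  # only 2 EP ranks, so only 2 * 4 = 8 expert slots exist in total

assert num_global_experts % num_local_experts == 0  # passes

imported = set()
for ep_rank in range(ep_size):
    init_index = ep_rank * num_local_experts
    imported.update(range(init_index, init_index + num_local_experts))

print(sorted(imported))                    # [0, 1, 2, 3, 4, 5, 6, 7]
print(num_global_experts - len(imported))  # 8 global experts silently left unimported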

Suggested fix
-                        assert num_global_experts % num_local_experts == 0, (
-                            "num_global_experts must be divisible by num_local_experts "
-                            "during MoE import"
-                        )
-                        # Each EP rank owns a contiguous slice of global experts:
-                        # [ep_rank * num_local_experts, (ep_rank + 1) * num_local_experts).
-                        init_index = get_expert_model_parallel_rank() * num_local_experts
+                        ep_rank = get_expert_model_parallel_rank()
+                        ep_size = get_expert_model_parallel_world_size()
+                        if num_global_experts != num_local_experts * ep_size:
+                            raise ValueError(
+                                "Expected num_global_experts == num_local_experts * ep_size "
+                                f"for TEGroupedMLP import, got {num_global_experts=}, "
+                                f"{num_local_experts=}, {ep_size=}."
+                            )
+                        # Each EP rank owns a contiguous slice of global experts:
+                        # [ep_rank * num_local_experts, (ep_rank + 1) * num_local_experts).
+                        init_index = ep_rank * num_local_experts
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelopt/torch/export/plugins/megatron_importer.py` around lines 662 - 668,
Replace the fragile assert-based check with an explicit validation that raises a
clear exception: verify both that num_global_experts is divisible by
num_local_experts and that num_global_experts == num_local_experts * ep_size
(where ep_size is the EP topology size used by
get_expert_model_parallel_rank()); if these conditions fail, raise a ValueError
with a descriptive message mentioning num_global_experts, num_local_experts and
ep_size so the expert slice computation (init_index =
get_expert_model_parallel_rank() * num_local_experts) cannot proceed with an
invalid topology.


codecov Bot commented Apr 24, 2026

Codecov Report

❌ Patch coverage is 30.00000% with 7 lines in your changes missing coverage. Please review.
✅ Project coverage is 73.67%. Comparing base (0678136) to head (67e021d).
⚠️ Report is 11 commits behind head on main.

Files with missing lines | Patch % | Lines
modelopt/torch/export/plugins/megatron_importer.py | 0.00% | 6 Missing ⚠️
modelopt/torch/distill/plugins/megatron.py | 0.00% | 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1342      +/-   ##
==========================================
- Coverage   74.46%   73.67%   -0.80%     
==========================================
  Files         464      481      +17     
  Lines       50089    52610    +2521     
==========================================
+ Hits        37300    38759    +1459     
- Misses      12789    13851    +1062     
Flag | Coverage Δ
examples | 41.49% <20.00%> (+5.45%) ⬆️
gpu | 58.57% <30.00%> (-0.52%) ⬇️
regression | 14.85% <0.00%> (+0.05%) ⬆️
unit | 52.72% <0.00%> (+0.27%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

Signed-off-by: Jennifer Chen <jennifchen@nvidia.com>
Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
modelopt/torch/export/plugins/megatron_importer.py (1)

661-667: ⚠️ Potential issue | 🟠 Major

Replace assert with an explicit EP-topology check against ep_size.

Two concerns on this block that are still present:

  1. assert at line 661 is removed under python -O, so the only guard against a misconfigured EP topology disappears in optimized runs.
  2. num_global_experts % num_local_experts == 0 is necessary but not sufficient. The slicing [ep_rank * num_local_experts, (ep_rank + 1) * num_local_experts) is only well-defined when num_global_experts == num_local_experts * ep_size. Divisibility alone (e.g. num_global=16, num_local=4, ep_size=2) will silently leave 8 global experts unimported and can also produce overlapping/duplicate reads on other shapes.

This mirrors the pattern already used elsewhere in the same codebase (see modelopt/torch/export/plugins/mcore_custom.py lines 416–445, which explicitly takes ep_size = get_expert_model_parallel_world_size() and raises ValueError on mismatch).

🔧 Suggested fix
-                        num_local_experts = experts.num_local_experts
-                        num_global_experts = experts.config.num_moe_experts
-                        assert num_global_experts % num_local_experts == 0, (
-                            "num_global_experts must be divisible by num_local_experts "
-                            "during MoE import"
-                        )
-                        # Each EP rank owns a contiguous slice of global experts:
-                        # [ep_rank * num_local_experts, (ep_rank + 1) * num_local_experts).
-                        init_index = get_expert_model_parallel_rank() * num_local_experts
+                        num_local_experts = experts.num_local_experts
+                        num_global_experts = experts.config.num_moe_experts
+                        ep_rank = get_expert_model_parallel_rank()
+                        ep_size = get_expert_model_parallel_world_size()
+                        if num_global_experts != num_local_experts * ep_size:
+                            raise ValueError(
+                                "TEGroupedMLP import requires "
+                                "num_global_experts == num_local_experts * ep_size, got "
+                                f"{num_global_experts=}, {num_local_experts=}, {ep_size=}."
+                            )
+                        # Each EP rank owns a contiguous slice of global experts:
+                        # [ep_rank * num_local_experts, (ep_rank + 1) * num_local_experts).
+                        init_index = ep_rank * num_local_experts

This will also require importing get_expert_model_parallel_world_size alongside get_expert_model_parallel_rank at line 42.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelopt/torch/export/plugins/megatron_importer.py` around lines 661 - 667,
Replace the fragile assert with an explicit EP-topology validation: import
get_expert_model_parallel_world_size alongside get_expert_model_parallel_rank,
compute ep_size = get_expert_model_parallel_world_size(), then check both that
num_global_experts % num_local_experts == 0 and that num_global_experts ==
num_local_experts * ep_size; if the check fails raise a ValueError with a clear
message; keep using get_expert_model_parallel_rank() to compute init_index only
after the validation passes.
🧹 Nitpick comments (2)
modelopt/torch/quantization/config.py (1)

239-250: LGTM — patterns correctly target both HF and Mcore attention layers.

The added *self_attention.linear_qkv* and *self_attention.linear_proj* entries correctly complement the existing HF-style *q_proj*/*k_proj*/*v_proj*/*o_proj* patterns to cover Mcore's attention module naming, matching the structures seen in modelopt/torch/export/unified_export_megatron.py and modelopt/torch/prune/plugins/mcore_minitron.py. Since all entries only toggle enable: False, ordering among themselves is not a concern.

Minor nit (optional): the comment on line 250 reads "Skip QKV Output Projection (Mcore naming)" — self_attention.linear_proj is the attention output projection, not a QKV output projection. Consider rewording to "Skip Attention Output Projection (Mcore naming)" for clarity and consistency with line 242's HF counterpart.

✏️ Optional comment tweak
     {
         "quantizer_name": "*self_attention.linear_proj*",
         "enable": False,
-    },  # Skip QKV Output Projection (Mcore naming)
+    },  # Skip Attention Output Projection (Mcore naming)
 ]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelopt/torch/quantization/config.py` around lines 239 - 250, Update the
comment for the quantizer entry with "quantizer_name":
"*self_attention.linear_proj*" to accurately describe it as the attention output
projection (e.g., change "Skip QKV Output Projection (Mcore naming)" to "Skip
Attention Output Projection (Mcore naming)"); locate the entry by the unique
symbol "*self_attention.linear_proj*" (and optionally mirror wording with the HF
counterpart "*o_proj*") and replace the comment text accordingly.
modelopt/torch/export/plugins/megatron_importer.py (1)

298-307: Global→local expert remap looks correct.

The loop correctly maps each HF global expert init_expert_id + local_id into the TEGroupedMLP local slot weight{local_id}, which is what the comment describes and matches the rank-derived init_index set by the caller. The parallel_config parameter is accepted but intentionally unused here since ETP is routed through the use_packed_local_experts branch at lines 685–695.

One small follow-up to consider (optional, not blocking): the # TODO handle weight_scale means quantized-MoE import via TEGroupedMLP will silently skip scales. Please make sure this path is currently only exercised for unquantized HF checkpoints, or gate it with an explicit error when a weight_quantizer._scale is present on the module so a quantized grouped-MLP import doesn't silently produce an unscaled model.
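
A minimal sketch of such a guard around the remap loop (the function name and the weight_quantizer/_scale attribute access are assumptions based on the wording above, not verified against the actual TEGroupedMLP or importer code):

# Hypothetical guard plus remap loop, following the description above.
def load_grouped_experts(module, hf_expert_weights, init_expert_id, num_local_experts):
    quantizer = getattr(module, "weight_quantizer", None)
    if quantizer is not None and getattr(quantizer, "_scale", None) is not None:
        raise NotImplementedError(
            "Quantized grouped-MLP import is not supported here: weight scales would be dropped."
        )
    state_dict = {}
    for local_id in range(num_local_experts):
        global_expert_id = init_expert_id + local_id
        # Map the HF global expert tensor into the local TEGroupedMLP slot.
        state_dict[f"weight{local_id}"] = hf_expert_weights[global_expert_id]
    module.load_state_dict(state_dict)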

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelopt/torch/export/plugins/megatron_importer.py` around lines 298 - 307,
The current import loop that maps HF global experts to TEGroupedMLP local slots
(for local_id in range(num_local_experts) ... state_dict[f"weight{local_id}"] =
tensor; module.load_state_dict(state_dict)) ignores any weight scales; add a
guard before this remapping to detect quantized grouped-MLP modules (e.g., check
module.weight_quantizer._scale or similar attribute) and either raise a clear
error or assert if a scale is present so we don't silently import a quantized
checkpoint without applying scales; ensure the check references the actual
module attribute (weight_quantizer._scale) and short-circuits this code path
when quantization metadata exists, leaving the existing parallel_config handling
unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: be965339-147e-4389-8ddc-526a753fbe9d

📥 Commits

Reviewing files that changed from the base of the PR and between 4fa630a and 948e4de.

📒 Files selected for processing (2)
  • modelopt/torch/export/plugins/megatron_importer.py
  • modelopt/torch/quantization/config.py

Signed-off-by: Jennifer Chen <jennifchen@nvidia.com>
@jenchen13 jenchen13 requested a review from a team as a code owner April 24, 2026 20:40
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
modelopt/torch/distill/plugins/megatron.py (1)

166-175: Regex expansion looks correct; consider scoping the replacement to the matched span.

Expanding the lookahead to (?=\.|$) correctly covers submodule names where the numeric layer index is the last token (e.g., decoder.layers.5). One pre-existing fragility worth noting now that more names flow through this path: Line 175 uses submodule_name.replace(match.group(0), str(new_layer_idx)), which replaces every occurrence of that digit substring, not just the matched layer index. For names like decoder.layers.5.mlp.experts.5 this would rewrite both. A targeted substitution using the match span is safer:

♻️ Proposed refactor
-    new_submodule_name = submodule_name.replace(match.group(0), str(new_layer_idx))
+    start, end = match.span()
+    new_submodule_name = submodule_name[:start] + str(new_layer_idx) + submodule_name[end:]
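
A small standalone demonstration of the difference (the regex below approximates the pattern described above, and the layer offset of 2 is made up for the example; only the module name comes from the comment):

# Standalone demo: str.replace rewrites every matching substring,
# while slicing on the match span only rewrites the layer index.
import re

submodule_name = "decoder.layers.5.mlp.experts.5"
match = re.search(r"(?<=\.)\d+(?=\.|$)", submodule_name)
new_layer_idx = 5 + 2  # hypothetical PP offset of 2

print(submodule_name.replace(match.group(0), str(new_layer_idx)))
# decoder.layers.7.mlp.experts.7  (the expert index is rewritten too)

start, end = match.span()
print(submodule_name[:start] + str(new_layer_idx) + submodule_name[end:])
# decoder.layers.7.mlp.experts.5  (only the layer index changes)
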
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelopt/torch/distill/plugins/megatron.py` around lines 166 - 175, The
replacement currently uses submodule_name.replace(match.group(0), ...) which
will replace every occurrence of the matched digits; update the logic to replace
only the matched span: use the match span (match.start(), match.end()) to
construct new_submodule_name (e.g., slice before + new index + slice after) or
call re.sub with the compiled pattern and count=1 so only the found occurrence
is replaced; keep use of TransformerLayer._get_layer_offset(model_cfg), match,
and new_layer_idx unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 2fde340e-abbc-441d-9228-4c410e1a2f36

📥 Commits

Reviewing files that changed from the base of the PR and between 948e4de and 67e021d.

📒 Files selected for processing (2)
  • modelopt/torch/distill/plugins/megatron.py
  • modelopt/torch/export/unified_export_megatron.py
