From 3ff56942758df744d16fd5a7ba90e3ec020380e6 Mon Sep 17 00:00:00 2001
From: samithcsachi
Date: Fri, 24 Apr 2026 00:31:00 +0400
Subject: [PATCH 1/2] Fix: #3833 Feedback about Multi GPU training with DDP

---
 beginner_source/ddp_series_multigpu.rst | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/beginner_source/ddp_series_multigpu.rst b/beginner_source/ddp_series_multigpu.rst
index ef6549d4de0..43ed42ae87b 100644
--- a/beginner_source/ddp_series_multigpu.rst
+++ b/beginner_source/ddp_series_multigpu.rst
@@ -202,6 +202,8 @@ Running the distributed training job
 Here's what the code looks like:
 
 .. code-block:: python
+
+
     def main(rank, world_size, total_epochs, save_every):
         ddp_setup(rank, world_size)
         dataset, model, optimizer = load_train_objs()

From 6df2585b9a9f0173ffe1b14ea685ee50cd79ca9f Mon Sep 17 00:00:00 2001
From: samithcsachi
Date: Fri, 24 Apr 2026 01:51:32 +0400
Subject: [PATCH 2/2] Fix: resolve lint issues in DDP multi-GPU tutorial

---
 beginner_source/ddp_series_multigpu.rst | 2 --
 1 file changed, 2 deletions(-)

diff --git a/beginner_source/ddp_series_multigpu.rst b/beginner_source/ddp_series_multigpu.rst
index 43ed42ae87b..f0f67e59309 100644
--- a/beginner_source/ddp_series_multigpu.rst
+++ b/beginner_source/ddp_series_multigpu.rst
@@ -202,7 +202,6 @@ Running the distributed training job
 Here's what the code looks like:
 
 .. code-block:: python
-
 
     def main(rank, world_size, total_epochs, save_every):
         ddp_setup(rank, world_size)
@@ -220,7 +219,6 @@ Here's what the code looks like:
 
     mp.spawn(main, args=(world_size, total_epochs, save_every,), nprocs=world_size)
 
-
 Further Reading
 ---------------
 