diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 2a16a98d..8b9b42d0 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -8,7 +8,7 @@ Please refer to [MONAI Bundle Specification](https://docs.monai.io/en/latest/mb_ The [get started](https://github.com/Project-MONAI/tutorials/blob/main/bundle/get_started.md) notebook is a step-by-step tutorial to help developers easily get started to develop a bundle. And [bundle examples](https://github.com/Project-MONAI/tutorials/tree/main/bundle) show the typical bundle for 3D segmentation, how to use customized components in a bundle, and how to parse bundle in your own program as "hybrid" mode, etc. -As for the path related varibles within config files (such as "bundle_root"), we suggest to use path that do not include personal information (such as `"/home/your_name/"`).The following is an example of path using: +As for the path related variables within config files (such as "bundle_root"), we suggest using paths that do not include personal information (such as `"/home/your_name/"`). The following is an example of a path definition: `"bundle_root": "/workspace/data/"`. @@ -44,7 +44,7 @@ If a bundle has large files, please upload those files into a publicly accessibl 1. `path`, relative path of the large file in the bundle. 2. `url`, URL link that can download the file. 3. `hash_val`, (**optional**) expected hash value of the file. -4. `hash_type`, (**optional**) hash type. Supprted hash type includes "md5", "sha1", "sha256" and "sha512". +4. `hash_type`, (**optional**) hash type. Supported hash types include "md5", "sha1", "sha256" and "sha512". The template is as follow, and you can also click [here](https://github.com/Project-MONAI/model-zoo/blob/dev/models/spleen_ct_segmentation/large_files.yml) to see an actual example of `spleen_ct_segmentation`: diff --git a/docs/readme_template.md b/docs/readme_template.md index 5f22f86e..9ff0ef7c 100644 --- a/docs/readme_template.md +++ b/docs/readme_template.md @@ -1,14 +1,14 @@ # Model Title ### **Authors** -*Anyone who should be attributed as part of the model. If multiple people or companies, use a comma seperated list* +*Anyone who should be attributed as part of the model. If multiple people or companies, use a comma separated list* Example: Firstname1 LastName1, Firstname2 Lastname2, Affiliation1 ### **Tags** -*What tags describe the model and task performed? Use a comma seperated list* +*What tags describe the model and task performed? Use a comma separated list* Example: @@ -45,7 +45,7 @@ This model achieves the following results on COCO 2017 validation: a box AP (ave For more details regarding evaluation results, we refer to table 5 of the original paper. -## **Additinal Usage Steps** (Optional) +## **Additional Usage Steps** (Optional) *If your bundle requires steps outside the normal flow of usage, describe those here in bash style commands.* Example: @@ -67,7 +67,7 @@ Example: The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## **Limitations** (Optional) -Are there general limitations of what this model should be used for? Has this been approved for use in any clinicial systems? Are there any things to watch out for when using this model? +Are there general limitations of what this model should be used for? Has this been approved for use in any clinical systems? Are there any things to watch out for when using this model? Example: *This training and inference pipeline was developed by NVIDIA. 
It is based on a segmentation model created by NVIDIA researchers. This research use only software that has not been cleared or approved by FDA or any regulatory agency. Clara’s pre-trained models are for developmental purposes only and cannot be used directly for clinical procedures.* diff --git a/models/brats_mri_axial_slices_generative_diffusion/docs/README.md b/models/brats_mri_axial_slices_generative_diffusion/docs/README.md index bb954f22..bb1d8745 100644 --- a/models/brats_mri_axial_slices_generative_diffusion/docs/README.md +++ b/models/brats_mri_axial_slices_generative_diffusion/docs/README.md @@ -20,7 +20,7 @@ An example result from inference is shown below: **This is a demonstration network meant to just show the training process for this sort of network with MONAI. To achieve better performance, users need to use larger dataset like [BraTS 2021](https://www.synapse.org/#!Synapse:syn25829067/wiki/610865).** ## Data -The training data is BraTS 2016 and 2017 from the Medical Segmentation Decathalon. Users can find more details on the dataset (`Task01_BrainTumour`) at http://medicaldecathlon.com/. +The training data is BraTS 2016 and 2017 from the Medical Segmentation Decathlon. Users can find more details on the dataset (`Task01_BrainTumour`) at http://medicaldecathlon.com/. - Target: Image Generation - Task: Synthesis @@ -114,7 +114,7 @@ This result is benchmarked under: ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). ### Execute Autoencoder Training diff --git a/models/brats_mri_generative_diffusion/docs/README.md b/models/brats_mri_generative_diffusion/docs/README.md index f8e068b5..7c2ccbf4 100644 --- a/models/brats_mri_generative_diffusion/docs/README.md +++ b/models/brats_mri_generative_diffusion/docs/README.md @@ -20,7 +20,7 @@ An example result from inference is shown below: **This is a demonstration network meant to just show the training process for this sort of network with MONAI. To achieve better performance, users need to use larger dataset like [Brats 2021](https://www.synapse.org/#!Synapse:syn25829067/wiki/610865) and have GPU with memory larger than 32G to enable larger networks and attention layers.** ## Data -The training data is BraTS 2016 and 2017 from the Medical Segmentation Decathalon. Users can find more details on the dataset (`Task01_BrainTumour`) at http://medicaldecathlon.com/. +The training data is BraTS 2016 and 2017 from the Medical Segmentation Decathlon. Users can find more details on the dataset (`Task01_BrainTumour`) at http://medicaldecathlon.com/. - Target: Image Generation - Task: Synthesis @@ -112,7 +112,7 @@ This result is benchmarked under: In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). 
+For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). ### Execute Autoencoder Training diff --git a/models/brats_mri_segmentation/docs/README.md b/models/brats_mri_segmentation/docs/README.md index 6aef7784..3749f9ea 100644 --- a/models/brats_mri_segmentation/docs/README.md +++ b/models/brats_mri_segmentation/docs/README.md @@ -94,7 +94,7 @@ This result is benchmarked under: ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). #### Execute training: diff --git a/models/cxr_image_synthesis_latent_diffusion_model/docs/README.md b/models/cxr_image_synthesis_latent_diffusion_model/docs/README.md index 32a8ee23..90abd5bf 100644 --- a/models/cxr_image_synthesis_latent_diffusion_model/docs/README.md +++ b/models/cxr_image_synthesis_latent_diffusion_model/docs/README.md @@ -1,6 +1,6 @@ # Description -A diffusion model to synthetise X-Ray images based on radiological report impressions. +A diffusion model to synthesise X-Ray images based on radiological report impressions. # Model Overview This model is trained from scratch using the Latent Diffusion Model architecture [1] and is used for the synthesis of @@ -20,7 +20,7 @@ original images to have a format of 512 x 512 pixels. ## Preprocessing We resized the original images to make the smallest sides have 512 pixels. When inputting it to the network, we center cropped the images to 512 x 512. The pixel intensity was normalised to be between [0, 1]. The text data was obtained -from associated radiological reports. We randoomly extracted sentences from the findings and impressions sections of the +from associated radiological reports. We randomly extracted sentences from the findings and impressions sections of the reports, having a maximum of 5 sentences and 77 tokens. The text was tokenised using the CLIPTokenizer from transformers package (https://github.com/huggingface/transformers) (pretrained model "stabilityai/stable-diffusion-2-1-base") and then encoded using CLIPTextModel from the same package and pretrained diff --git a/models/endoscopic_inbody_classification/docs/README.md b/models/endoscopic_inbody_classification/docs/README.md index ee927be7..b8dbb1b0 100644 --- a/models/endoscopic_inbody_classification/docs/README.md +++ b/models/endoscopic_inbody_classification/docs/README.md @@ -97,7 +97,7 @@ This result is benchmarked under: ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). 
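As a rough illustration of that CLI pattern, the following is a minimal sketch; the config file name and the `dataset_dir` override key are placeholders that differ from bundle to bundle, so check each bundle's own README for the exact invocation.

```bash
# Minimal sketch of the shared MONAI bundle CLI pattern (paths and keys are placeholders).
# Run the workflow defined in a bundle's training config:
python -m monai.bundle run --config_file configs/train.json

# The same run with a config entry overridden at runtime:
python -m monai.bundle run --config_file configs/train.json --dataset_dir /path/to/dataset
```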
#### Execute training: diff --git a/models/endoscopic_tool_segmentation/docs/README.md b/models/endoscopic_tool_segmentation/docs/README.md index d923cd9c..cc9ef1df 100644 --- a/models/endoscopic_tool_segmentation/docs/README.md +++ b/models/endoscopic_tool_segmentation/docs/README.md @@ -33,7 +33,7 @@ Since datasets are private, existing public datasets like [EndoVis 2017](https:/ ### Preprocessing When using EndoVis or any other dataset, it should be divided into "train", "valid" and "test" folders. Samples in each folder would better be images and converted to jpg format. Otherwise, "images", "labels", "val_images" and "val_labels" parameters in `configs/train.json` and "datalist" in `configs/inference.json` should be modified to fit given dataset. After that, "dataset_dir" parameter in `configs/train.json` and `configs/inference.json` should be changed to root folder which contains "train", "valid" and "test" folders. -Please notice that loading data operation in this bundle is adaptive. If images and labels are not in the same format, it may lead to a mismatching problem. For example, if images are in jpg format and labels are in npy format, PIL and Numpy readers will be used separately to load images and labels. Since these two readers have their own way to parse file's shape, loaded labels will be transpose of the correct ones and incur a missmatching problem. +Please notice that loading data operation in this bundle is adaptive. If images and labels are not in the same format, it may lead to a mismatching problem. For example, if images are in jpg format and labels are in npy format, PIL and Numpy readers will be used separately to load images and labels. Since these two readers have their own way to parse file's shape, loaded labels will be transpose of the correct ones and incur a mismatching problem. ## Training configuration The training as performed with the following: @@ -92,7 +92,7 @@ This result is benchmarked under: ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). #### Execute training: diff --git a/models/lung_nodule_ct_detection/docs/README.md b/models/lung_nodule_ct_detection/docs/README.md index ea888ae5..8be47b2e 100644 --- a/models/lung_nodule_ct_detection/docs/README.md +++ b/models/lung_nodule_ct_detection/docs/README.md @@ -23,7 +23,7 @@ In these files, the values of "box" are the ground truth boxes in world coordina The raw CT images in LUNA16 have various of voxel sizes. The first step is to resample them to the same voxel size. In this model, we resampled them into 0.703125 x 0.703125 x 1.25 mm. -Please following the instruction in Section 3.1 of https://github.com/Project-MONAI/tutorials/tree/main/detection to do the resampling. +Please follow the instruction in Section 3.1 of https://github.com/Project-MONAI/tutorials/tree/main/detection to do the resampling. ### Data download The mhd/raw original data can be downloaded from [LUNA16](https://luna16.grand-challenge.org/Home/). 
The DICOM original data can be downloaded from [LIDC-IDRI database](https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI) [3,4,5]. You will need to resample the original data to start training. @@ -93,7 +93,7 @@ This result is benchmarked under: ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). #### Execute training: diff --git a/models/maisi_ct_generative/scripts/find_masks.py b/models/maisi_ct_generative/scripts/find_masks.py index f8f3a14d..1bb757ff 100644 --- a/models/maisi_ct_generative/scripts/find_masks.py +++ b/models/maisi_ct_generative/scripts/find_masks.py @@ -61,8 +61,8 @@ def find_masks( mask_foldername: str = "./datasets/masks/", ): """ - Find candidate masks that fullfills all the requirements. - They shoud contain all the anatomies in `anatomy_list`. + Find candidate masks that fulfill all the requirements. + They should contain all the anatomies in `anatomy_list`. If there is no tumor specified in `anatomy_list`, we also expect the candidate masks to be tumor free. If check_spacing_and_output_size is True, the candidate masks need to have the expected `spacing` and `output_size`. Args: @@ -74,7 +74,7 @@ def find_masks( database_filepath: path for the json file that stores the information of all the candidate masks. mask_foldername: directory that saves all the candidate masks. Return: - candidate_masks, list of dict, each dict contains information of one candidate mask that fullfills all the requirements. + candidate_masks, list of dict, each dict contains information of one candidate mask that fulfills all the requirements. """ # check and preprocess input if isinstance(anatomy_list, int): diff --git a/models/multi_organ_segmentation/docs/README.md b/models/multi_organ_segmentation/docs/README.md index 3b26bbfa..e18bedc8 100644 --- a/models/multi_organ_segmentation/docs/README.md +++ b/models/multi_organ_segmentation/docs/README.md @@ -49,7 +49,7 @@ Mean Dice = 88.6% ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). #### Execute model searching: diff --git a/models/pancreas_ct_dints_segmentation/docs/README.md b/models/pancreas_ct_dints_segmentation/docs/README.md index ce11419f..b41e3c83 100644 --- a/models/pancreas_ct_dints_segmentation/docs/README.md +++ b/models/pancreas_ct_dints_segmentation/docs/README.md @@ -4,7 +4,7 @@ A neural architecture search algorithm for volumetric (3D) segmentation of the p ![image](https://developer.download.nvidia.com/assets/Clara/Images/clara_pt_net_arch_search_segmentation_workflow_4-1.png) ## Data -The training dataset is the Pancreas Task from the Medical Segmentation Decathalon.
Users can find more details on the datasets at http://medicaldecathlon.com/. +The training dataset is the Pancreas Task from the Medical Segmentation Decathlon. Users can find more details on the datasets at http://medicaldecathlon.com/. - Target: Pancreas and pancreatic tumor - Modality: Portal venous phase CT @@ -32,7 +32,7 @@ The neural architecture search was performed with the following: - Initial Learning Rate: 0.025 - Loss: DiceCELoss -### Optimial Architecture Training Configuration +### Optimal Architecture Training Configuration The training was performed with the following: - AMP: True @@ -112,7 +112,7 @@ Users can install Graphviz for visualization of searched architectures (needed i ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). #### Execute model searching: diff --git a/models/pathology_nuclei_classification/docs/README.md b/models/pathology_nuclei_classification/docs/README.md index 5fb03872..3b274311 100644 --- a/models/pathology_nuclei_classification/docs/README.md +++ b/models/pathology_nuclei_classification/docs/README.md @@ -167,7 +167,7 @@ This result is benchmarked under: ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). #### Execute training: diff --git a/models/pathology_nuclei_segmentation_classification/docs/README.md b/models/pathology_nuclei_segmentation_classification/docs/README.md index a68cfc1c..28d10b31 100644 --- a/models/pathology_nuclei_segmentation_classification/docs/README.md +++ b/models/pathology_nuclei_segmentation_classification/docs/README.md @@ -8,7 +8,7 @@ The model is trained to simultaneously segment and classify nuclei, and a two-st There are two training modes in total. If "original" mode is specified, [270, 270] and [80, 80] are used for `patch_size` and `out_size` respectively. If "fast" mode is specified, [256, 256] and [164, 164] are used for `patch_size` and `out_size` respectively. The results shown below are based on the "fast" mode. -In this bundle, the first stage is trained with pre-trained weights from some internal data. The [original author's repo](https://github.com/vqdang/hover_net) and [torchvison](https://pytorch.org/vision/stable/_modules/torchvision/models/resnet.html#ResNet18_Weights) also provide pre-trained weights but for non-commercial use. +In this bundle, the first stage is trained with pre-trained weights from some internal data. The [original author's repo](https://github.com/vqdang/hover_net) and [torchvision](https://pytorch.org/vision/stable/_modules/torchvision/models/resnet.html#ResNet18_Weights) also provide pre-trained weights but for non-commercial use. 
Each user is responsible for checking the content of models/datasets and the applicable licenses and determining if suitable for the intended use. If you want to train the first stage with pre-trained weights, just specify `--network_def#pretrained_url ` in the training command below, such as [ImageNet](https://download.pytorch.org/models/resnet18-f37072fd.pth). @@ -33,10 +33,10 @@ unzip consep_dataset.zip ### Preprocessing -After download the [datasets](https://warwick.ac.uk/fac/cross_fac/tia/data/hovernet/consep_dataset.zip), please run `scripts/prepare_patches.py` to prepare patches from tiles. Prepared patches are saved in ``/Prepared. The implementation is referring to . The command is like: +After download the [datasets](https://warwick.ac.uk/fac/cross_fac/tia/data/hovernet/consep_dataset.zip), please run `scripts/prepare_patches.py` to prepare patches from tiles. Prepared patches are saved in ``/Prepared. The implementation is referring to . The command is like: ``` -python scripts/prepare_patches.py --root +python scripts/prepare_patches.py --root ``` ## Training configuration @@ -121,7 +121,7 @@ This result is benchmarked under: ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). #### Execute training, the evaluation during the training were evaluated on patches: Please note that if the default dataset path is not modified with the actual path in the bundle config files, you can also override it by using `--dataset_dir`: diff --git a/models/pathology_nuclick_annotation/docs/README.md b/models/pathology_nuclick_annotation/docs/README.md index 38336cb3..2a1eeec1 100644 --- a/models/pathology_nuclick_annotation/docs/README.md +++ b/models/pathology_nuclick_annotation/docs/README.md @@ -153,7 +153,7 @@ This result is benchmarked under: ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). #### Execute training: diff --git a/models/pathology_tumor_detection/docs/README.md b/models/pathology_tumor_detection/docs/README.md index cf04c829..e849e5ac 100644 --- a/models/pathology_tumor_detection/docs/README.md +++ b/models/pathology_tumor_detection/docs/README.md @@ -113,7 +113,7 @@ This result is benchmarked under: In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). 
+For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). #### Execute training diff --git a/models/pediatric_abdominal_ct_segmentation/docs/README.md b/models/pediatric_abdominal_ct_segmentation/docs/README.md index a4bd4c52..17bcad7c 100644 --- a/models/pediatric_abdominal_ct_segmentation/docs/README.md +++ b/models/pediatric_abdominal_ct_segmentation/docs/README.md @@ -75,7 +75,7 @@ Four channel CT label ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). #### Execute training: diff --git a/models/renalStructures_UNEST_segmentation/scripts/networks/nest_transformer_3D.py b/models/renalStructures_UNEST_segmentation/scripts/networks/nest_transformer_3D.py index 73f11ec3..36020f50 100755 --- a/models/renalStructures_UNEST_segmentation/scripts/networks/nest_transformer_3D.py +++ b/models/renalStructures_UNEST_segmentation/scripts/networks/nest_transformer_3D.py @@ -33,7 +33,7 @@ # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -""" Nested Transformer (NesT) in PyTorch +"""Nested Transformer (NesT) in PyTorch A PyTorch implement of Aggregating Nested Transformers as described in: 'Aggregating Nested Transformers' - https://arxiv.org/abs/2105.12723 diff --git a/models/segmentation_template/docs/README.md b/models/segmentation_template/docs/README.md index e524b4ed..c1826914 100644 --- a/models/segmentation_template/docs/README.md +++ b/models/segmentation_template/docs/README.md @@ -7,7 +7,7 @@ so doesn't do anything useful on its own. The purpose is to demonstrate the base bundles compatible with MONAILabel amongst other things. To use this bundle, copy the contents of the whole directory and change the definitions for network, data, transforms, -or whatever else you want for your own new segmentation bundle. Some of the names are critical for MONAILable but +or whatever else you want for your own new segmentation bundle. Some of the names are critical for MONAI Label but otherwise you're free to change just about whatever else is defined here to suit your network. This bundle should also demonstrate good practice and design, however there is one caveat about definitions being diff --git a/models/spleen_ct_segmentation/docs/README.md b/models/spleen_ct_segmentation/docs/README.md index a8d9bb13..3d964b13 100644 --- a/models/spleen_ct_segmentation/docs/README.md +++ b/models/spleen_ct_segmentation/docs/README.md @@ -6,7 +6,7 @@ This model is trained using the runner-up [1] awarded pipeline of the "Medical S ![model workflow](https://developer.download.nvidia.com/assets/Clara/Images/clara_pt_spleen_ct_segmentation_workflow.png) ## Data -The training dataset is the Spleen Task from the Medical Segmentation Decathalon. Users can find more details on the datasets at http://medicaldecathlon.com/. +The training dataset is the Spleen Task from the Medical Segmentation Decathlon. Users can find more details on the datasets at http://medicaldecathlon.com/. 
- Target: Spleen - Modality: CT @@ -79,7 +79,7 @@ This result is benchmarked under: ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). #### Execute training: diff --git a/models/spleen_deepedit_annotation/docs/README.md b/models/spleen_deepedit_annotation/docs/README.md index d6a2fa1e..18f88008 100644 --- a/models/spleen_deepedit_annotation/docs/README.md +++ b/models/spleen_deepedit_annotation/docs/README.md @@ -6,7 +6,7 @@ DeepEdit is an algorithm that combines the power of two models in one single arc The model was trained on 32 images and validated on 9 images. ## Data -The training dataset is the Spleen Task from the Medical Segmentation Decathalon. Users can find more details on the datasets at http://medicaldecathlon.com/. +The training dataset is the Spleen Task from the Medical Segmentation Decathlon. Users can find more details on the datasets at http://medicaldecathlon.com/. - Target: Spleen - Modality: CT @@ -88,7 +88,7 @@ If you face memory issues with CacheDataset, you can either switch to a regular ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). #### Execute training: diff --git a/models/swin_unetr_btcv_segmentation/docs/README.md b/models/swin_unetr_btcv_segmentation/docs/README.md index 7b61daa1..6c881b63 100644 --- a/models/swin_unetr_btcv_segmentation/docs/README.md +++ b/models/swin_unetr_btcv_segmentation/docs/README.md @@ -99,7 +99,7 @@ This result is benchmarked under: ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). #### Execute training: diff --git a/models/ventricular_short_axis_3label/docs/README.md b/models/ventricular_short_axis_3label/docs/README.md index c2152fb8..772ec23a 100644 --- a/models/ventricular_short_axis_3label/docs/README.md +++ b/models/ventricular_short_axis_3label/docs/README.md @@ -19,7 +19,7 @@ The network is trained with this data in conjunction with a series of augmentati Free-form deformation is applied to vary the shape of the heart and its surrounding tissues which mimics to a degree deformation like what would be observed through the cardiac cycle. 
This of course does not replicate the heart moving through plane during the cycle or represent other observed changes but does provide enough variation that full-cycle segmentation is generally acceptable. -Smooth fields are used to vary contrast and intensity in localised regions to simulate some of the variation in image quality caused by acquisition artefacts. Guassian noise is also added to simulate poor quality acquisition. These together force the network to learn to deal with a wider variation of image quality and partially to account for the difference between scanner vendors. +Smooth fields are used to vary contrast and intensity in localised regions to simulate some of the variation in image quality caused by acquisition artefacts. Gaussian noise is also added to simulate poor quality acquisition. These together force the network to learn to deal with a wider variation of image quality and partially to account for the difference between scanner vendors. Training is invoked with the following command line: diff --git a/models/vista2d/docs/README.md b/models/vista2d/docs/README.md index afcdb12a..374dcae8 100644 --- a/models/vista2d/docs/README.md +++ b/models/vista2d/docs/README.md @@ -86,7 +86,7 @@ python -m monai.bundle run_workflow "scripts.workflow.VistaCell" --config_file c You can override the `basedir` to specify a different dataset directory by using the following command: ```bash -python -m monai.bundle run_workflow "scripts.workflow.VistaCell" --config_file configs/hyper_parameters.yaml --basedir +python -m monai.bundle run_workflow "scripts.workflow.VistaCell" --config_file configs/hyper_parameters.yaml --basedir ``` #### Quick run with a few data points diff --git a/models/vista3d/docs/README.md b/models/vista3d/docs/README.md index 0043fc85..0888861a 100644 --- a/models/vista3d/docs/README.md +++ b/models/vista3d/docs/README.md @@ -1,5 +1,5 @@ # Model Overview -Vista3D model fintuning/evaluation/inference pipeline. VISTA3D is trained using over 20 partial datasets with more complicated pipeline. To avoid confusion, we will only provide finetuning/continual learning APIs for users to finetune on their +Vista3D model finetuning/evaluation/inference pipeline. VISTA3D is trained using over 20 partial datasets with more complicated pipeline. To avoid confusion, we will only provide finetuning/continual learning APIs for users to finetune on their own datasets. To reproduce the paper results, please refer to https://github.com/Project-MONAI/VISTA/tree/main/vista3d # Installation Guide @@ -68,7 +68,7 @@ Example code for 5 fold cross-validation generation can be found [here](data.md) Note the data is not the absolute path to the image and label file. The actual image file will be `os.path.join(dataset_dir, data["training"][item]["image"])`, where `dataset_dir` is defined in `configs/train_continual.json`. Also 5-fold cross-validation is not required! `fold=0` is defined in train.json, which means any data item with fold==0 will be used as validation and other fold will be used for training. So if you only have train/val split, you can manually set validation data with "fold": 0 in its datalist and the other to be training by setting "fold" to any number other than 0. ``` ## Step2: Changing hyperparameters -For continual learning, user can change `configs/train_continual.json`. More advanced users can change configurations in `configs/train.json`. Most hyperparameters are straighforward and user can tell based on their names. 
The users must manually change the following keys in `configs/train_continual.json`. #### 1. `label_mappings` ``` "label_mappings": { @@ -94,10 +94,10 @@ For continual learning, user can change `configs/train_continual.json`. More adv Change `data_list_file_path` to the absolute path of your data json split. Change `dataset_dir` to the root folder that combines with the relative path in the data json split. #### 3. Optional hyperparameters and details are [here](finetune.md). -Hyperparameteers finetuning is important and varies from task to task. +Hyperparameter finetuning is important and varies from task to task. ## Step3: Run finetuning -The hyperparameters in `configs/train_continual.json` will overwrite ones in `configs/train.json`. Configs in the back will overide the previous ones if they have the same key. +The hyperparameters in `configs/train_continual.json` will overwrite the ones in `configs/train.json`. Configs that appear later override earlier ones if they share the same key. Single-GPU: ```bash diff --git a/models/vista3d/docs/data.md b/models/vista3d/docs/data.md index 45595a80..878399e2 100644 --- a/models/vista3d/docs/data.md +++ b/models/vista3d/docs/data.md @@ -1,5 +1,5 @@ ### Best practice to generate data list -User can use monai to generate the 5-fold data lists. Full exampls can be found in VISTA3D open source [codebase](https://github.com/Project-MONAI/VISTA/blob/main/vista3d/data/make_datalists.py) +Users can use MONAI to generate the 5-fold data lists. Full examples can be found in the VISTA3D open-source [codebase](https://github.com/Project-MONAI/VISTA/blob/main/vista3d/data/make_datalists.py) ```python from monai.data.utils import partition_dataset from monai.bundle import ConfigParser diff --git a/models/vista3d/docs/finetune.md b/models/vista3d/docs/finetune.md index 855350a1..a59a6379 100644 --- a/models/vista3d/docs/finetune.md +++ b/models/vista3d/docs/finetune.md @@ -4,7 +4,7 @@ #### Best practice to set label_mapping -For a class that represent the same or similar class as the global index, directly map it to the global index. For example, "mouse left lung" (e.g. index 2 in the mouse dataset) can be mapped to the 28 "left lung upper lobe"(or 29 "left lung lower lobe") with [[2,28]]. After finetuning, 28 now represents "mouse left lung" and will be used for segmentation. If you want to segment 4 substructures of aorta, you can map one of the substructuress to 6 aorta and the rest to any value, [[1,6],[2,133],[3,134],[4,135]]. +For a class that represents the same or a similar class as the global index, directly map it to the global index. For example, "mouse left lung" (e.g. index 2 in the mouse dataset) can be mapped to 28 "left lung upper lobe" (or 29 "left lung lower lobe") with [[2,28]]. After finetuning, 28 now represents "mouse left lung" and will be used for segmentation. If you want to segment 4 substructures of the aorta, you can map one of the substructures to 6 (aorta) and the rest to any value, [[1,6],[2,133],[3,134],[4,135]]. ``` NOTE: Do not map to global index value >= 255. `num_classes=255` in the config only represent the maximum mapping index, while the actual output class number only depends on your label_mapping definition.
The 255 value in the inference output is also used to represent 'NaN' value. ``` @@ -18,7 +18,7 @@ Users can disable if the validation takes too long. In `train_continual.json`, only `n_train_samples` and `n_val_samples` are used for training and validation. #### `patch_size` -The patch size parameter is defined in `configs/train_continual.json`: `"patch_size": [128, 128, 128]`. For finetuning purposes, this value needs to be changed acccording to user's task and GPU memory. Usually a larger patch_size will give better final results. `[192,192,128]` is a good value for larger memory GPU. +The patch size parameter is defined in `configs/train_continual.json`: `"patch_size": [128, 128, 128]`. For finetuning purposes, this value needs to be changed according to user's task and GPU memory. Usually a larger patch_size will give better final results. `[192,192,128]` is a good value for larger memory GPU. #### `resample_to_spacing` The resample_to_spacing parameter is defined in `configs/train_continual.json` and it represents the resolution the model will be trained on. The `1.5,1.5,1.5` mm default is suitable for large CT organs, but for other tasks, this value should be changed to achive the optimal performance. diff --git a/models/vista3d/docs/inference.md b/models/vista3d/docs/inference.md index 220a6211..15919106 100644 --- a/models/vista3d/docs/inference.md +++ b/models/vista3d/docs/inference.md @@ -9,7 +9,7 @@ All the configurations for inference is stored in inference.json, change those p - The `label_prompt` is a list of length `B`, which can perform `B` foreground objects segmentation, e.g. `[2,3,4,5]`. If `B>1`, Point prompts must NOT be provided. - The `points` is of shape `[N, 3]` like `[[x1,y1,z1],[x2,y2,z2],...[xN,yN,zN]]`, representing `N` point coordinates **IN THE ORIGINAL IMAGE SPACE** of a single foreground object. `point_labels` is a list of length [N] like [1,1,0,-1,...], which matches the `points`. 0 means background, 1 means foreground, -1 means ignoring this point. `points` and `point_labels` must pe provided together and match length. -- **B must be 1 if label_prompt and points are provided together**. The inferer only supports SINGLE OBJECT point click segmentatation. +- **B must be 1 if label_prompt and points are provided together**. The inferer only supports SINGLE OBJECT point click segmentation. - If no prompt is provided, the model will use `everything_labels` to segment 117 classes: ```Python diff --git a/models/wholeBody_ct_segmentation/docs/README.md b/models/wholeBody_ct_segmentation/docs/README.md index f7440476..51629328 100644 --- a/models/wholeBody_ct_segmentation/docs/README.md +++ b/models/wholeBody_ct_segmentation/docs/README.md @@ -179,7 +179,7 @@ This result is benchmarked under: ## MONAI Bundle Commands In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file. -For more details usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). +For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html). 
#### Execute training: diff --git a/models/wholeBrainSeg_Large_UNEST_segmentation/docs/README.md b/models/wholeBrainSeg_Large_UNEST_segmentation/docs/README.md index 82aff737..6a9e705b 100644 --- a/models/wholeBrainSeg_Large_UNEST_segmentation/docs/README.md +++ b/models/wholeBrainSeg_Large_UNEST_segmentation/docs/README.md @@ -39,7 +39,7 @@ Fig.2 - The network architecture of UNEST Base model ## Data -The training data is from the Vanderbilt University and Vanderbilt University Medical Center with public released OASIS and CANDI datsets. +The training data is from Vanderbilt University and Vanderbilt University Medical Center with the publicly released OASIS and CANDI datasets. Training and testing data are MRI T1-weighted (T1w) 3D volumes coming from 3 different sites. There are a total of 133 classes in the whole brain segmentation task. Among 50 T1w MRI scans from Open Access Series on Imaging Studies (OASIS) (Marcus et al., 2007) dataset, 45 scans are used for training and the other 5 for validation. The testing cohort contains Colin27 T1w scan (Aubert-Broche et al., 2006) and 13 T1w MRI scans from the Child and Adolescent Neuro Development Initiative (CANDI) @@ -52,7 +52,7 @@ The data should be in the MNI305 space before inference. If your images are already in MNI space, skip the registration step. -You could use any resitration tool to register image to MNI space. Here is an example using ants. +You could use any registration tool to register images to MNI space. Here is an example using ANTs. Registration to MNI Space: Sample suggestion. E.g., use ANTS or other tools for registering T1 MRI image to MNI305 Space. ``` @@ -158,8 +158,8 @@ With 10 fine-tuned labels, the training process converges fast. | 132 : Left-TTG---transverse-temporal-gyrus | -## Bundle Integration in MONAI Lable -The inference and training pipleine can be easily used by the MONAI Label server and 3D Slicer for fast labeling T1w MRI images in MNI space. +## Bundle Integration in MONAI Label +The inference and training pipeline can be easily used by the MONAI Label server and 3D Slicer for fast labeling T1w MRI images in MNI space. ![](./3DSlicer_use.png)
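As a minimal sketch of that MONAI Label integration, assuming the `monaibundle` sample app and a local folder of T1w images (the names and paths below are placeholders to adapt), the server could be started along these lines:

```bash
# Hypothetical setup: download the monaibundle app and serve this bundle with MONAI Label.
pip install monailabel
monailabel apps --download --name monaibundle --output apps
monailabel start_server --app apps/monaibundle --studies /path/to/t1w_images \
    --conf models wholeBrainSeg_Large_UNEST_segmentation
```

3D Slicer with the MONAI Label plugin can then connect to the running server to request segmentations and submit corrected labels back for training.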
diff --git a/models/wholeBrainSeg_Large_UNEST_segmentation/scripts/networks/nest_transformer_3D.py b/models/wholeBrainSeg_Large_UNEST_segmentation/scripts/networks/nest_transformer_3D.py index 73f11ec3..36020f50 100755 --- a/models/wholeBrainSeg_Large_UNEST_segmentation/scripts/networks/nest_transformer_3D.py +++ b/models/wholeBrainSeg_Large_UNEST_segmentation/scripts/networks/nest_transformer_3D.py @@ -33,7 +33,7 @@ # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -""" Nested Transformer (NesT) in PyTorch +"""Nested Transformer (NesT) in PyTorch A PyTorch implement of Aggregating Nested Transformers as described in: 'Aggregating Nested Transformers' - https://arxiv.org/abs/2105.12723