    label_source = [0, 1]
    label_target = [0, 255]

Evaluation
----------

To evaluate a model's prediction results against the ground truth, use the
``pymic_eval_seg`` and ``pymic_eval_cls`` commands for segmentation and
classification tasks, respectively. Both accept a configuration file that
specifies the evaluation metrics, the predicted results, the ground truth
and other information.

For example, for segmentation tasks, run:

.. code-block:: none

    pymic_eval_seg evaluation.cfg

The configuration file looks like the following (an example from
``PYMIC_examples/seg_ssl/ACDC``):

.. code-block:: none

    [evaluation]
    metric = dice
    label_list = [1,2,3]
    organ_name = heart

    ground_truth_folder_root = ../../PyMIC_data/ACDC/preprocess
    segmentation_folder_root = result/unet2d_em
    evaluation_image_pair    = config/data/image_test_gt_seg.csv

See :mod:`pymic.util.evaluation_seg.evaluation` for details of the required
configuration.
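
As a point of reference, the ``dice`` metric above is the Dice similarity
coefficient, computed per label in ``label_list``. Below is a minimal NumPy
sketch of that computation for a single label; the function and variable
names are illustrative only and are not part of PyMIC's API.

.. code-block:: python

    import numpy as np

    def binary_dice(pred, gt, label):
        """Dice score of ``pred`` vs. ``gt`` for one label value.

        ``pred`` and ``gt`` are integer label arrays of the same shape,
        e.g. 2D or 3D segmentation masks.
        """
        p = (pred == label)
        g = (gt == label)
        intersection = np.logical_and(p, g).sum()
        denominator = p.sum() + g.sum()
        # Convention: two empty masks count as a perfect match.
        if denominator == 0:
            return 1.0
        return 2.0 * intersection / denominator

    # Tiny example with labels {0, 1, 2, 3}, mirroring
    # label_list = [1,2,3] in the configuration above.
    pred = np.array([[0, 1, 1], [2, 2, 3]])
    gt   = np.array([[0, 1, 2], [2, 2, 3]])
    for lab in [1, 2, 3]:
        print(lab, binary_dice(pred, gt, lab))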

For classification tasks, run:

.. code-block:: none

    pymic_eval_cls evaluation.cfg

The configuration file looks like the following (an example from
``PYMIC_examples/classification/CHNCXR``):

.. code-block:: none

    [evaluation]
    metric_list = [accuracy, auc]
    ground_truth_csv = config/cxr_test.csv
    predict_csv = result/resnet18.csv
    predict_prob_csv = result/resnet18_prob.csv

See :mod:`pymic.util.evaluation_cls.main` for details of the required
configuration.
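
For reference, the ``accuracy`` and ``auc`` metrics listed above can be
reproduced with scikit-learn from the predicted labels and class
probabilities. A minimal sketch, assuming a binary task where the
probability refers to the positive class; the arrays below are placeholders
standing in for the contents of the CSV files named in the configuration:

.. code-block:: python

    import numpy as np
    from sklearn.metrics import accuracy_score, roc_auc_score

    # Placeholder arrays standing in for the ground truth CSV,
    # the predicted-label CSV and the predicted-probability CSV.
    y_true = np.array([0, 0, 1, 1, 1])            # ground-truth labels
    y_pred = np.array([0, 1, 1, 1, 0])            # predicted labels
    y_prob = np.array([0.2, 0.6, 0.9, 0.8, 0.4])  # P(class 1) per sample

    print("accuracy:", accuracy_score(y_true, y_pred))  # 3/5 = 0.6
    print("auc:", roc_auc_score(y_true, y_prob))        # 5/6 = 0.833...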