This repository contains the code for our paper:
TreeGrad-Ranker: Feature Ranking via
TreeStab provides a numerically stable implementation of these methods. The repository comprises:

- TreeGrad: the backbone algorithm, which computes gradients, including weighted Banzhaf values, in $O(L)$ time.
- TreeGrad-Shap: a stable implementation of Beta Shapley values with integer parameters.
- TreeGrad-Ranker: an optimized feature ranking tool.
- TreeStab: the combination of TreeGrad and TreeGrad-Shap.
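For intuition, a weighted Banzhaf value weights each marginal contribution of feature $i$ by $p^{|S|}(1-p)^{n-1-|S|}$ over coalitions $S$. The sketch below computes this definition by brute force on a toy value function; it is exponential in $n$ and is only meant to illustrate the quantity TreeGrad computes in $O(L)$ time, not the paper's algorithm. The `scores` example is hypothetical.

```python
from itertools import combinations

def weighted_banzhaf(value, n, p):
    """Brute-force weighted Banzhaf values for a value function `value`
    defined on subsets of {0, ..., n-1}. A coalition of size s gets
    weight p**s * (1 - p)**(n - 1 - s); p = 0.5 recovers the classical
    Banzhaf value. Exponential in n: illustration only, not the
    paper's O(L) tree-based algorithm."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for s in range(n):
            w = p ** s * (1 - p) ** (n - 1 - s)
            for S in combinations(others, s):
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy additive value function: v(S) = sum of fixed per-feature scores.
# For an additive game every marginal contribution of feature i equals
# scores[i], so the weighted Banzhaf value recovers the scores exactly.
scores = [1.0, 2.0, 3.0]
v = lambda S: sum(scores[j] for j in S)
print(weighted_banzhaf(v, 3, 0.5))  # -> [1.0, 2.0, 3.0]
```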
The `test/` directory contains several scripts used to verify the correctness of our implementation.
Follow the steps below to train the models, generate the experimental results, and reproduce the figures from the paper.
First, train all the required (gradient boosting) decision trees:
```shell
python createTreeModel.py
```
To generate the experimental results, use the `-p` flag followed by the number of available CPUs:

```shell
python main.py -p <number_of_cpus>
```
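The `-p` flag simply controls how many worker processes the experiments are spread across. A typical pattern for this kind of parallelism (a sketch only; `run_experiment` is a hypothetical stand-in, not the repository's actual code) is:

```python
from multiprocessing import Pool

def run_experiment(config):
    # Hypothetical stand-in for one experimental run; the real
    # main.py defines its own work items.
    return config ** 2

if __name__ == "__main__":
    n_cpus = 4  # the value you would pass via -p
    with Pool(processes=n_cpus) as pool:
        results = pool.map(run_experiment, range(8))
    print(results)
```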
Once the results are ready, the figures can be plotted using:
```shell
python plot_comparison.py
python plot_hyperparameters.py
```
To replicate the numerical stability analysis and visualize the precision gap, run:
```shell
python plot_inaccuracy.py -p <number_of_cpus>
```
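The precision gap being measured is the generic floating-point effect below: naively summing many terms of very different magnitudes loses low-order bits, while a compensated (correctly rounded) sum does not. This is a self-contained illustration of the phenomenon, not the repository's actual experiment.

```python
import math

# Each triple (1e16, 1.0, -1e16) should contribute exactly 1.0, but in
# a naive left-to-right double-precision sum the 1.0 is absorbed by
# 1e16 (whose ulp is 2.0) and then cancelled away entirely.
terms = [1e16, 1.0, -1e16] * 1000
naive = sum(terms)        # accumulates to 0.0: every 1.0 is lost
exact = math.fsum(terms)  # correctly rounded sum: 1000.0
print(naive, exact)       # -> 0.0 1000.0
```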