🚀 Welcome to the Gradients on Demand Subnet
Distributed intelligence for LLM and diffusion model training. Where the world's best AutoML minds compete.
Tournaments 🏆
Competitive events where the validator executes miners' open-source training scripts on dedicated infrastructure.
- Duration: 4-7 days per tournament
- Frequency: New tournaments start 72 hours after the previous one ends
- Rewards: Exponentially higher weight potential for top performers
- Open Source: Winning AutoML scripts are released when tournaments complete
- Winners Repository: First-place tournament scripts are uploaded to github.com/gradients-opensource 🤙
- Tournament Overview
For technical documentation on GRPO reward functions and implementation details, see GRPO Safe Code Execution Guide.
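As a rough illustration of what a GRPO reward function looks like, here is a minimal sketch assuming the common completions-in, scores-out convention; the function name, parameters, and scoring logic are illustrative only, so consult the guide above for the exact interface the validator expects.

```python
from typing import Any

def reward_length_penalty(completions: list[str], **kwargs: Any) -> list[float]:
    """Illustrative GRPO reward function: score each completion in [0, 1].

    This sketch rewards concise answers by penalising completions longer
    than a target length. A real reward function would encode
    task-specific quality signals instead.
    """
    target_length = 200  # characters; arbitrary threshold for the sketch
    scores = []
    for completion in completions:
        overshoot = max(0, len(completion) - target_length)
        scores.append(1.0 / (1.0 + overshoot / target_length))
    return scores
```

A GRPO trainer would typically call a function like this once per batch of sampled completions and use the returned scores to compute group-relative advantages.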
You can re-evaluate existing tasks on your own machine, or run models that were never submitted to check how well they perform. This works for tasks that are no more than 7 days old.
Make sure to build the latest docker images before running the evaluation.
```bash
docker build -f dockerfiles/validator.dockerfile -t weightswandering/tuning_vali:latest .
docker build -f dockerfiles/validator-diffusion.dockerfile -t diagonalge/tuning_validator_diffusion:latest .
```

To see the available options, run:

```bash
python -m utils.run_evaluation --help
```

To re-evaluate a task, run:

```bash
python -m utils.run_evaluation --task_id <task_id>
```

To run a non-submitted model, run:

```bash
python -m utils.run_evaluation --task_id <task_id> --models <model_name>
```