G.O.D Subnet

🚀 Welcome to the Gradients on Demand Subnet

Distributed intelligence for LLM and diffusion model training. Where the world's best AutoML minds compete.

Tournaments 🏆

Competitive events where the validator executes miners' open-source training scripts on dedicated infrastructure.

  • Duration: 4-7 days per tournament
  • Frequency: New tournaments start 72 hours after the previous one ends
  • Rewards: Exponentially higher weight potential for top performers
  • Open Source: Winning AutoML scripts are released when tournaments complete
  • Winners Repository: First-place tournament scripts are uploaded to github.com/gradients-opensource 🤙
  • Tournament Overview

Setup Guides

Developer Resources

For technical documentation on GRPO reward functions and implementation details, see the GRPO Safe Code Execution Guide.
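As a quick illustration, GRPO-style reward functions in the common convention used by TRL-based trainers take a batch of completions and return one score per completion; the exact signature and sandboxing rules this subnet enforces are covered in the guide above. The function name and scoring logic below are purely hypothetical, a minimal sketch rather than the subnet's actual implementation.

# Hypothetical GRPO-style reward function (illustrative only; see the guide
# above for the exact signature and safe-execution constraints).
def length_penalty_reward(completions: list[str], **kwargs) -> list[float]:
    """Score each completion; answers closer to a target length score higher."""
    target_len = 200  # hypothetical target length in characters
    return [max(0.0, 1.0 - abs(len(c) - target_len) / target_len) for c in completions]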

Recommended Compute Requirements

Compute Requirements

Miner Advice


Running evaluations on your own

You can re-evaluate existing tasks on your own machine, or run non-submitted models to check how they would score. This works only for tasks no older than 7 days.

Make sure to build the latest Docker images before running an evaluation.

docker build -f dockerfiles/validator.dockerfile -t weightswandering/tuning_vali:latest .
docker build -f dockerfiles/validator-diffusion.dockerfile -t diagonalge/tuning_validator_diffusion:latest .

To see the available options, run:

python -m utils.run_evaluation --help

To re-evaluate a task, run:

python -m utils.run_evaluation --task_id <task_id>

To run a non-submitted model, run:

python -m utils.run_evaluation --task_id <task_id> --models <model_name>
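For example (the task ID and model name below are placeholders for illustration, not real values):

python -m utils.run_evaluation --task_id 3fa85f64-5717-4562-b3fc-2c963f66afa6 --models myorg/my-finetuned-model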
