feat: add GenEval benchmark #507
base: main
Conversation
…mpts benchmark

- Introduced `from_benchmark` method in `PrunaDataModule` to create instances from benchmark classes.
- Added `Benchmark`, `BenchmarkEntry`, and `BenchmarkRegistry` classes for managing benchmarks.
- Implemented `PartiPrompts` benchmark for text-to-image generation with various categories and challenges.
- Created utility function `benchmark_to_datasets` to convert benchmarks into datasets compatible with `PrunaDataModule`.
- Added integration tests for benchmark functionality and data module interactions.
…filtering

- Remove heavy benchmark abstraction (`Benchmark` class, registry, adapter, 24 subclasses)
- Extend `setup_parti_prompts_dataset` with `category` and `num_samples` params
- Add `BenchmarkInfo` dataclass for metadata (metrics, description, subsets)
- Switch PartiPrompts to `prompt_with_auxiliaries_collate` to preserve Category/Challenge
- Merge tests into `test_datamodule.py`

Reduces 964 lines to 128 lines (87% reduction)

Co-authored-by: Cursor <cursoragent@cursor.com>
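For illustration, a minimal sketch of what the extended `setup_parti_prompts_dataset` could look like; the Hugging Face dataset id `nateraw/parti-prompts` and the exact signature are assumptions, not taken from the diff:

```python
from datasets import load_dataset


def setup_parti_prompts_dataset(category: str | None = None,
                                num_samples: int | None = None):
    """Load PartiPrompts, optionally filtered by category and truncated."""
    # Dataset id is an assumption; the PR may load the prompts differently.
    ds = load_dataset("nateraw/parti-prompts", split="train")
    if category is not None:
        # Keep only rows whose Category column matches the requested subset.
        ds = ds.filter(lambda row: row["Category"] == category)
    if num_samples is not None:
        # Truncate deterministically to the first num_samples rows.
        ds = ds.select(range(min(num_samples, len(ds))))
    return ds
```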
Add GenEval benchmark for fine-grained compositional evaluation of text-to-image models. Fetches prompts from GitHub and generates questions.

- Add `setup_geneval_dataset` with 6 subcategories
- Categories: single_object, two_object, counting, colors, position, color_attr
- Generates evaluation questions from metadata
- Register in `base_datasets` with `prompt_with_auxiliaries_collate`
- Add `BenchmarkInfo` with metrics: `["qa_accuracy"]`
- Add tests

Co-authored-by: Cursor <cursoragent@cursor.com>
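As a rough sketch of the fetch-and-filter logic this commit describes; the raw-file URL, JSONL format, and `tag` field are assumptions based on the upstream GenEval repository, not on the PR diff:

```python
import json
import urllib.request

GENEVAL_CATEGORIES = (
    "single_object", "two_object", "counting",
    "colors", "position", "color_attr",
)

# Assumed location of the GenEval prompt metadata on GitHub.
GENEVAL_URL = (
    "https://raw.githubusercontent.com/djghosh13/geneval/main/"
    "prompts/evaluation_metadata.jsonl"
)


def setup_geneval_dataset(category: str | None = None,
                          num_samples: int | None = None) -> list[dict]:
    """Fetch GenEval prompt metadata, optionally filtered by subcategory."""
    if category is not None and category not in GENEVAL_CATEGORIES:
        raise ValueError(f"Unknown GenEval category: {category!r}")
    with urllib.request.urlopen(GENEVAL_URL) as resp:
        rows = [json.loads(line)
                for line in resp.read().decode().splitlines() if line]
    if category is not None:
        # Each metadata row is assumed to carry a 'tag' naming its subcategory.
        rows = [row for row in rows if row.get("tag") == category]
    return rows if num_samples is None else rows[:num_samples]
```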
Document all dataclass fields per Numpydoc PR01 with summary on new line per GL01. Co-authored-by: Cursor <cursoragent@cursor.com>
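Concretely, numpydoc rule PR01 flags parameters missing from the docstring and GL01 requires the summary to start on the line after the opening quotes. A compliant `BenchmarkInfo` might therefore read as follows (field list taken from the earlier commit; wording illustrative):

```python
from dataclasses import dataclass, field


@dataclass
class BenchmarkInfo:
    """
    Metadata describing a benchmark dataset.

    Parameters
    ----------
    metrics : list[str]
        Names of the metrics used to evaluate the benchmark.
    description : str
        Human-readable summary of what the benchmark measures.
    subsets : list[str]
        Category filters the benchmark supports, if any.
    """

    metrics: list[str]
    description: str = ""
    subsets: list[str] = field(default_factory=list)
```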
Cursor Bugbot has reviewed your changes and found 2 potential issues.
- Add `list_benchmarks()` to filter benchmarks by task type
- Add `get_benchmark_info()` to retrieve benchmark metadata
- Add COCO, ImageNet, WikiText to `benchmark_info` registry
- Fix metric names to match `MetricRegistry` (`clip_score`, `clipiqa`)

Co-authored-by: Cursor <cursoragent@cursor.com>
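A possible shape for the two lookup helpers, reusing the `BenchmarkInfo` dataclass sketched above; the registry entries and the `task` attribute used for filtering are assumptions:

```python
# BenchmarkInfo as sketched in the earlier block; entries are illustrative,
# with metric names mirroring the MetricRegistry identifiers named above.
_BENCHMARK_INFO: dict[str, BenchmarkInfo] = {
    "GenEval": BenchmarkInfo(metrics=["qa_accuracy"]),
    "PartiPrompts": BenchmarkInfo(metrics=["clip_score", "clipiqa"]),
}


def list_benchmarks(task: str | None = None) -> list[str]:
    """Return registered benchmark names, optionally filtered by task type."""
    return [
        name for name, info in _BENCHMARK_INFO.items()
        # A 'task' attribute on BenchmarkInfo is assumed for the filter.
        if task is None or getattr(info, "task", None) == task
    ]


def get_benchmark_info(name: str) -> BenchmarkInfo:
    """Look up the BenchmarkInfo metadata for a registered benchmark."""
    try:
        return _BENCHMARK_INFO[name]
    except KeyError:
        raise ValueError(f"Unknown benchmark: {name!r}") from None
```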
Use a `None` default and check both that `pos` exists and that its first element is non-empty, to avoid malformed questions.

Co-authored-by: Cursor <cursoragent@cursor.com>
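In context, the fix might look like the following inside `_generate_geneval_question`; all metadata field names here are hypothetical, since the diff is not shown:

```python
def _generate_geneval_question(meta: dict) -> str:
    """Build an evaluation question from one GenEval metadata row (sketch)."""
    # None default instead of indexing: a missing 'pos' key no longer raises.
    pos = meta.get("pos")
    # Guard both conditions the commit names: the key must exist AND the
    # first element must be non-empty, or the question would be malformed
    # (e.g. "Is the dog  the cat?").
    if pos and pos[0]:
        return f"Is the {meta['obj_a']} {pos[0]} the {meta['obj_b']}?"
    return f"Does the image contain {meta.get('obj_a', 'the object')}?"
```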
Closes #514
Summary
Changes
- Added `setup_geneval_dataset` in `src/pruna/data/datasets/prompt.py`
- Added `_generate_geneval_question` helper for question generation
- Registered in `base_datasets` in `src/pruna/data/__init__.py`
- Added `BenchmarkInfo` with metrics: `["qa_accuracy"]`

Test plan
- `test_geneval_with_category_filter` passes
- `test_dm_from_string[GenEval-...]` passes
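End to end, the test plan implies the new dataset is reachable through the datamodule string API, roughly as below; the import path and the dataloader method follow common Lightning-style conventions and are not confirmed by the PR text:

```python
from pruna.data.pruna_datamodule import PrunaDataModule  # import path assumed

# Mirrors test_dm_from_string[GenEval-...]: build the datamodule by name.
dm = PrunaDataModule.from_string("GenEval")

# prompt_with_auxiliaries_collate should keep the category metadata alongside
# each prompt, so a batch carries both the text and its GenEval subcategory.
batch = next(iter(dm.train_dataloader()))
```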