Conversation

@davidberenstein1957 (Member) commented Jan 31, 2026

Closes #514

Summary

  • Add GenEval benchmark for fine-grained compositional evaluation
  • Fetches prompts from GitHub and generates evaluation questions
  • Supports 6 subcategories: single_object, two_object, counting, colors, position, color_attr (see the loading sketch below)
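
A minimal loading sketch, assuming the `PrunaDataModule.from_string` entry point exercised by the test plan; the import path and dataloader call are inferred from the repo layout, not confirmed against this PR:

```python
from pruna.data.pruna_datamodule import PrunaDataModule

# "GenEval" is the name this PR registers in base_datasets.
dm = PrunaDataModule.from_string("GenEval")

# With prompt_with_auxiliaries_collate, each batch carries the prompt
# plus its auxiliaries (tag, questions, include metadata).
batch = next(iter(dm.test_dataloader()))
```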

Changes

  • Add setup_geneval_dataset in src/pruna/data/datasets/prompt.py
  • Add _generate_geneval_question helper for question generation (sketched after this list)
  • Register in base_datasets in src/pruna/data/__init__.py
  • Add BenchmarkInfo with metrics: ["qa_accuracy"]
  • Auxiliaries preserve the `tag`, the generated `questions`, and the `include` metadata
  • Add tests for basic loading and category filtering
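
A hypothetical shape for the question helper: the `tag`/`include`/`pos` field names follow the upstream GenEval metadata format, but the templates and control flow below are assumptions, not the PR's actual code:

```python
def _generate_geneval_question(tag, include, pos=None):
    """Turn one GenEval metadata entry into an evaluation question (sketch)."""
    if tag == "counting":
        obj = include[0]
        return f"Are there exactly {obj['count']} {obj['class']}(s) in the image?"
    if tag == "position" and pos is not None and pos[0]:
        # pos is assumed to follow the upstream ["left of", ref_index] shape;
        # the None default plus first-element check mirrors the fix further down.
        return f"Is the {include[1]['class']} {pos[0]} the {include[0]['class']}?"
    # single_object, two_object, colors, color_attr: fall back to a presence check.
    names = " and ".join(obj["class"] for obj in include)
    return f"Does the image show {names}?"
```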

Test plan

  • `test_geneval_with_category_filter` passes
  • `test_dm_from_string[GenEval-...]` passes

davidberenstein1957 and others added 4 commits January 22, 2026 10:58
…mpts benchmark

- Introduced `from_benchmark` method in `PrunaDataModule` to create instances from benchmark classes.
- Added `Benchmark`, `BenchmarkEntry`, and `BenchmarkRegistry` classes for managing benchmarks.
- Implemented `PartiPrompts` benchmark for text-to-image generation with various categories and challenges.
- Created utility function `benchmark_to_datasets` to convert benchmarks into datasets compatible with `PrunaDataModule`.
- Added integration tests for benchmark functionality and data module interactions.
…filtering

- Remove heavy benchmark abstraction (Benchmark class, registry, adapter, 24 subclasses)
- Extend setup_parti_prompts_dataset with category and num_samples params
- Add BenchmarkInfo dataclass for metadata (metrics, description, subsets); sketched below
- Switch PartiPrompts to prompt_with_auxiliaries_collate to preserve Category/Challenge
- Merge tests into test_datamodule.py

Reduces 964 lines to 128 lines (87% reduction)

Co-authored-by: Cursor <cursoragent@cursor.com>
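
A minimal sketch of such a dataclass; only the three fields named in the commit message are grounded, and the defaults are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class BenchmarkInfo:
    """Benchmark metadata attached to a dataset entry (sketch)."""
    metrics: list[str]                                # e.g. ["qa_accuracy"] for GenEval
    description: str = ""
    subsets: list[str] = field(default_factory=list)  # e.g. GenEval's 6 subcategories
```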
Add GenEval benchmark for fine-grained compositional evaluation of
text-to-image models. Fetches prompts from GitHub and generates questions.

- Add setup_geneval_dataset with 6 subcategories
- Categories: single_object, two_object, counting, colors, position, color_attr
- Generates evaluation questions from metadata
- Register in base_datasets with prompt_with_auxiliaries_collate (registry sketch below)
- Add BenchmarkInfo with metrics: ["qa_accuracy"]
- Add tests

Co-authored-by: Cursor <cursoragent@cursor.com>
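
How the registration might look; the exact record shape used by `base_datasets` in src/pruna/data/__init__.py is an assumption:

```python
# Illustrative registry entry; the setup function and collate function
# are the names added by this PR, the tuple shape is assumed.
base_datasets["GenEval"] = (
    setup_geneval_dataset,            # loader added in prompt.py
    prompt_with_auxiliaries_collate,  # keeps tag/questions/include alongside prompts
)
```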
Document all dataclass fields per Numpydoc PR01 with summary on new line per GL01.

Co-authored-by: Cursor <cursoragent@cursor.com>
@cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 2 potential issues.

Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.

Comment `@cursor review` or `bugbot run` to trigger another review on this PR

davidberenstein1957 and others added 2 commits January 31, 2026 16:28
- Add list_benchmarks() to filter benchmarks by task type
- Add get_benchmark_info() to retrieve benchmark metadata (both helpers sketched below)
- Add COCO, ImageNet, WikiText to benchmark_info registry
- Fix metric names to match MetricRegistry (clip_score, clipiqa)

Co-authored-by: Cursor <cursoragent@cursor.com>
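
Plausible shapes for the two helpers; the `benchmark_info` registry name comes from the commit above, while the `task` field used for filtering is an assumption:

```python
# `benchmark_info` is the module-level registry named in the commit above;
# stubbed here so the sketch runs stand-alone.
benchmark_info: dict = {}

def list_benchmarks(task=None):
    """Names of registered benchmarks, optionally filtered by task type (sketch)."""
    return [
        name
        for name, info in benchmark_info.items()
        if task is None or info.task == task  # `task` attribute is assumed
    ]

def get_benchmark_info(name):
    """Look up a benchmark's BenchmarkInfo by its registry name (sketch)."""
    return benchmark_info[name]
```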
Use a None default and check both that pos exists and that its first element is non-empty, to avoid malformed questions (guard sketched below).

Co-authored-by: Cursor <cursoragent@cursor.com>
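
The guard, restated as a sketch; the helper name and question wording are hypothetical:

```python
def _build_position_question(include, pos=None):  # hypothetical helper name
    # Require both that pos was supplied and that its first element is non-empty,
    # e.g. pos == ["left of", 0] in the upstream GenEval metadata.
    if pos is None or not pos[0]:
        return None  # skip rather than emit a malformed question
    return f"Is the {include[1]['class']} {pos[0]} the {include[0]['class']}?"
```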
@davidberenstein1957 changed the base branch from feat/add-partiprompts-benchmark-to-pruna to main January 31, 2026 16:04
@github-actions

This PR has been inactive for 10 days and is now marked as stale.

@github-actions bot added the stale label Feb 11, 2026

Development

Successfully merging this pull request may close these issues.

[BENCHMARK] Add GenEval benchmarks
