
Centralize pagination test, add constraints#313

Merged
PGijsbers merged 11 commits into main from pagination-test
Apr 14, 2026

Conversation

@PGijsbers
Contributor

@PGijsbers PGijsbers commented Apr 14, 2026

For now I think a limit of 1000 seems reasonable, but we'll have to evaluate this later (and also whether or not we should have per-endpoint limits).

@PGijsbers PGijsbers changed the title Centralize pagination test Centralize pagination test, add constraints Apr 14, 2026
Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • The test_pagination_invalid_type docstring and parameter name no longer match its behavior (it now also checks range violations, not just types); consider renaming the test and updating the docstring to reflect that it validates both type and bounds.
  • Since Pagination.limit and offset now enforce specific bounds (gt=0, le=1000, ge=0), you might want to add a small test explicitly covering the boundary values (e.g., limit=1, limit=1000, offset=0) to ensure the edges are accepted as intended.
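A boundary-value test along the lines suggested above might look like the following sketch. The `Pagination` model is re-declared here as a stand-in (assuming the `gt=0`, `le=1000`, `ge=0` constraints and the `LIMIT_DEFAULT`/`LIMIT_MAX` constants this PR describes), so the example is illustrative rather than the repository's actual test code:

```python
import pytest
from pydantic import BaseModel, Field

# Stand-ins for the constants and model this PR introduces (assumed values).
LIMIT_DEFAULT = 100
LIMIT_MAX = 1000


class Pagination(BaseModel):
    limit: int = Field(default=LIMIT_DEFAULT, gt=0, le=LIMIT_MAX)
    offset: int = Field(default=0, ge=0)


@pytest.mark.parametrize(
    ("limit", "offset"),
    [(1, 0), (LIMIT_MAX, 0), (LIMIT_DEFAULT, 0)],
    ids=["limit_lower_bound", "limit_upper_bound", "offset_lower_bound"],
)
def test_pagination_accepts_boundary_values(limit: int, offset: int) -> None:
    # The edge values satisfy gt=0 / le=LIMIT_MAX / ge=0 and must not raise.
    pagination = Pagination(limit=limit, offset=offset)
    assert pagination.limit == limit
    assert pagination.offset == offset
```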


@coderabbitai
Contributor

coderabbitai bot commented Apr 14, 2026

Warning

Rate limit exceeded

@PGijsbers has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 53 minutes and 37 seconds before requesting another review.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 745abf81-ccf5-4509-8a19-86bb0d0bf444

📥 Commits

Reviewing files that changed from the base of the PR and between 8b6832c and d20d4cc.

📒 Files selected for processing (1)
  • tests/routers/openml/datasets_list_datasets_test.py

Walkthrough

The Pagination model in src/routers/dependencies.py was changed to use pydantic.Field constraints: offset now has ge=0, and limit now uses gt=0 and le=LIMIT_MAX with new module-level constants LIMIT_DEFAULT (100) and LIMIT_MAX. New unit tests were added to validate Pagination instantiation and error reporting for invalid limit/offset inputs. Multiple end-to-end pagination tests were removed or adjusted, and several tests were updated to import and use LIMIT_DEFAULT/LIMIT_MAX instead of hardcoded limits.
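Concretely, the constraint change described in the walkthrough could look like the following sketch. This is not the repository's actual code: the constant values come from the walkthrough above, and the error-reporting check mirrors what the new unit tests are said to validate:

```python
from pydantic import BaseModel, Field, ValidationError

# Module-level constants as described in the walkthrough (values assumed).
LIMIT_DEFAULT = 100
LIMIT_MAX = 1000


class Pagination(BaseModel):
    limit: int = Field(default=LIMIT_DEFAULT, gt=0, le=LIMIT_MAX)
    offset: int = Field(default=0, ge=0)


# An out-of-range limit is rejected at instantiation, and the ValidationError
# identifies the offending field in each error's "loc" tuple.
try:
    Pagination(limit=2000, offset=0)
except ValidationError as exc:
    assert exc.errors()[0]["loc"] == ("limit",)

# Omitting both fields falls back to the defaults.
assert Pagination().limit == LIMIT_DEFAULT
assert Pagination().offset == 0
```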

Possibly related PRs

  • openml/server-api PR 305: Modifies the same pagination model and related test behaviors (validation and limit constants), showing a direct code-level overlap.
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 60.00%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title 'Centralize pagination test, add constraints' directly matches the main changes: consolidating pagination validation tests and introducing pagination field constraints. |
| Description check | ✅ Passed | The description discusses pagination limit evaluation (the 1000 limit and per-endpoint limits), which relates to the constraint additions and test centralization in the changeset. |



Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
tests/dependencies/pagination_test.py (1)

19-31: Add an explicit boundary case for limit=0.

You validate negative and oversized limits, but not the exact lower boundary introduced by gt=0. Adding limit=0 improves regression protection.

✅ Suggested test addition

```diff
 @pytest.mark.parametrize(
     ("kwargs", "expected_field"),
     [
         ({"limit": "abc", "offset": 0}, "limit"),
+        ({"limit": 0, "offset": 0}, "limit"),
         ({"limit": -5, "offset": 0}, "limit"),
         ({"limit": 2000, "offset": 0}, "limit"),
         ({"limit": 5, "offset": "xyz"}, "offset"),
         ({"limit": 5, "offset": -5}, "offset"),
     ],
     ids=[
         "bad_limit_type",
+        "zero_limit",
         "negative_limit",
         "limit_too_large",
         "bad_offset_type",
         "negative_offset",
     ],
 )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/dependencies/pagination_test.py` around lines 19 - 31, Add an explicit
boundary case for limit=0 to the invalid-params parametrization: include the
tuple ({"limit": 0, "offset": 0}, "limit") in the list of test cases and add the
corresponding id "zero_limit" to the ids list so the test asserts that a zero
limit (violating gt=0) is treated as invalid; update the parameter list entries
and ids to include these new items in tests/dependencies/pagination_test.py.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 59c00b5d-556c-4bcd-9351-278ae5478b65

📥 Commits

Reviewing files that changed from the base of the PR and between 3879b5b and c8868d8.

📒 Files selected for processing (3)
  • src/routers/dependencies.py
  • tests/dependencies/pagination_test.py
  • tests/routers/openml/task_list_test.py
💤 Files with no reviewable changes (1)
  • tests/routers/openml/task_list_test.py

@codecov

codecov bot commented Apr 14, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 93.58%. Comparing base (ce2431f) to head (d20d4cc).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #313   +/-   ##
=======================================
  Coverage   93.58%   93.58%           
=======================================
  Files          72       73    +1     
  Lines        3117     3119    +2     
  Branches      220      221    +1     
=======================================
+ Hits         2917     2919    +2     
  Misses        143      143           
  Partials       57       57           

☔ View full report in Codecov by Sentry.

@PGijsbers
Contributor Author

Closes #302

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/routers/openml/datasets_list_datasets_test.py`:
- Around line 110-112: The test currently forces PHP results to LIMIT_DEFAULT by
doing php_json = php_json[:LIMIT_DEFAULT], which ignores an explicit requested
limit; change this to slice using the effective requested limit: read the test's
requested limit (e.g. from the variable used to build the request such as params
or limit), parse it to an int if present, and set effective_limit =
requested_limit if provided else LIMIT_DEFAULT, then do php_json =
php_json[:effective_limit] so php_json and py_json are compared using the same
limit; keep references to php_json, py_json and LIMIT_DEFAULT to locate the
change.
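The fix described above amounts to slicing the PHP response with whatever limit the test actually requested, falling back to `LIMIT_DEFAULT` when none was given. A minimal sketch (the helper and the `params` name are hypothetical, echoing the names the review comment references rather than the test file's real code):

```python
# Assumed to match the default in src/routers/dependencies.py.
LIMIT_DEFAULT = 100


def effective_limit(params: dict) -> int:
    """Return the limit the server would apply for the given query parameters."""
    requested = params.get("limit")
    return int(requested) if requested is not None else LIMIT_DEFAULT


# In the test, both responses would then be compared at the same length:
# php_json = php_json[:effective_limit(params)]
```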

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: af95c178-862d-4a5f-85f9-9abf21660286

📥 Commits

Reviewing files that changed from the base of the PR and between c8868d8 and 23664ba.

📒 Files selected for processing (5)
  • src/routers/dependencies.py
  • tests/dependencies/pagination_test.py
  • tests/routers/openml/datasets_list_datasets_test.py
  • tests/routers/openml/migration/tasks_migration_test.py
  • tests/routers/openml/task_list_test.py
💤 Files with no reviewable changes (1)
  • tests/routers/openml/task_list_test.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/dependencies/pagination_test.py

@PGijsbers PGijsbers merged commit 169a224 into main Apr 14, 2026
9 checks passed
@PGijsbers PGijsbers deleted the pagination-test branch April 14, 2026 12:11
