
fix(py): relocate tools of model-config test and sample-flow test #4669

Draft

MengqinShen wants to merge 1 commit into main from elisa/fix/model-config-test-tool

Conversation

MengqinShen (Contributor) commented Feb 14, 2026

No description provided.

The github-actions bot added the docs, js, python, config, fix, and test labels on Feb 14, 2026.
gemini-code-assist bot (Contributor) commented:

Summary of Changes

Hello @MengqinShen, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the Python sample testing infrastructure by relocating the testing tools into dedicated subdirectories. This change aims to improve the modularity and maintainability of the test suite, making it easier to manage and extend specific types of tests. The update also includes the necessary adjustments to paths and dependency configurations, and improves the output format of the model configuration tests.

Highlights

  • Test Tool Reorganization: The sample testing tools have been restructured into two distinct subdirectories: flow-test for flow-related tests and model-config-test for model configuration tests, improving organization and clarity.
  • Path and Configuration Updates: All relevant scripts and configuration files, including test_sample_flows and pyproject.toml, were updated to reflect the new directory structure and ensure correct execution and dependency resolution.
  • Dependency Management Enhancements: The pyproject.toml and uv.lock files for the model configuration tests were updated to correctly manage dependencies, including the addition of google-cloud-logging.
  • Improved Model Test Output: The run_single_model_test.py script now emits test results as structured JSON delimited by start and end markers, making them easier to parse programmatically (see the sketch below).

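The marker-delimited JSON output mentioned above might look roughly like the sketch below. The result fields (success, response, error, timing) and the ---JSON_RESULT_START---/---JSON_RESULT_END--- markers come from the review suggestion further down, and run_model_test is the function name the review refers to; its signature, arguments, and body here are assumptions for illustration, not the actual implementation.

    # Hedged sketch of how a single-model test script could emit a
    # marker-delimited JSON result; run_model_test's body is a placeholder.
    import asyncio
    import json
    import time


    async def run_model_test(model: str, config: dict) -> dict:
        """Hypothetical runner that always returns a result dict, success or not."""
        start = time.monotonic()
        try:
            response = f'ok: {model} with {config}'  # stand-in for a real model call
            return {'success': True, 'response': response, 'error': None,
                    'timing': time.monotonic() - start}
        except Exception as exc:
            return {'success': False, 'response': None, 'error': str(exc),
                    'timing': time.monotonic() - start}


    if __name__ == '__main__':
        result = asyncio.run(run_model_test('some-model', {'temperature': 0.7}))
        # The markers let the parent process find the payload in mixed stdout.
        print(f'---JSON_RESULT_START---\n{json.dumps(result)}\n---JSON_RESULT_END---')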

Changelog
  • py/bin/test_sample_flows
    • Updated the execution path for the review_sample_flows.py script to its new flow-test subdirectory.
  • py/pyproject.toml
    • Added sample-test as a workspace member.
    • Updated the tool.uv.workspace.members list to include the new samples/sample-test/model-config-test path.
  • py/samples/sample-test/flow-test/README.md
    • Added a new README file providing documentation for the sample flow testing tool.
  • py/samples/sample-test/flow-test/review_sample_flows.py
    • Renamed and moved from samples/sample-test/ to samples/sample-test/flow-test/.
  • py/samples/sample-test/flow-test/run_single_flow.py
    • Renamed and moved from samples/sample-test/ to samples/sample-test/flow-test/.
  • py/samples/sample-test/model-config-test/README.md
    • Renamed and moved from samples/sample-test/ to samples/sample-test/model-config-test/.
    • Updated the title from 'Model Performance Testing Tool' to 'Model Config Testing Tool'.
  • py/samples/sample-test/model-config-test/model_performance_test.py
    • Renamed and moved from samples/sample-test/ to samples/sample-test/model-config-test/.
    • Adjusted internal path resolution logic to account for the deeper directory (see the sketch after this changelog).
  • py/samples/sample-test/model-config-test/pyproject.toml
    • Renamed and moved from samples/sample-test/ to samples/sample-test/model-config-test/.
    • Updated plugin dependencies to use workspace references instead of relative paths.
    • Added google-cloud-logging as a new dependency.
  • py/samples/sample-test/model-config-test/run_single_model_test.py
    • Renamed and moved from samples/sample-test/ to samples/sample-test/model-config-test/.
    • Adjusted internal path resolution logic.
    • Implemented JSON output for test results with start and end markers.
  • py/samples/sample-test/model-config-test/server.py
    • Renamed and moved from samples/sample-test/ to samples/sample-test/model-config-test/.
    • Adjusted internal path resolution logic.
  • py/samples/sample-test/model-config-test/static/index.html
    • Renamed and moved from samples/sample-test/static/ to samples/sample-test/model-config-test/static/.
  • py/samples/sample-test/model-config-test/static/script.js
    • Renamed and moved from samples/sample-test/static/ to samples/sample-test/model-config-test/static/.
  • py/samples/sample-test/model-config-test/static/style.css
    • Renamed and moved from samples/sample-test/static/ to samples/sample-test/model-config-test/static/.
  • py/samples/sample-test/model-config-test/uv.lock
    • Renamed and moved from samples/sample-test/ to samples/sample-test/model-config-test/.
    • Updated to include google-cloud-logging and its transitive dependencies.
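
Several of the moved scripts adjusted their internal path resolution because they now sit one directory deeper. A minimal sketch of that kind of adjustment, assuming the scripts anchor on the py/samples directory relative to their own file; the variable names are hypothetical and not taken from the actual diff:

    # Hedged sketch of the path fix implied by moving a script from
    # samples/sample-test/ down into samples/sample-test/model-config-test/.
    from pathlib import Path

    HERE = Path(__file__).resolve().parent

    # Before the move the script lived directly in samples/sample-test/:
    #   SAMPLES_DIR = HERE.parent        # -> py/samples
    # After the move it lives one level deeper, so it must walk up twice:
    SAMPLES_DIR = HERE.parent.parent     # -> py/samples

    if __name__ == '__main__':
        print(f'Resolving samples under: {SAMPLES_DIR}')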
Activity
  • No specific activity (comments, reviews, or progress updates) was provided in the context for this pull request.

MengqinShen requested a review from yesudeep on February 14, 2026 at 08:10.
gemini-code-assist bot left a comment:

Code Review

This pull request refactors the model configuration and sample flow testing tools by relocating them into subdirectories. The changes are mostly file moves and path updates in scripts and configuration files. I've provided one suggestion to improve the error handling in one of the test scripts to make it more robust.

Comment on lines 200 to 201
    except Exception:  # noqa: S110 - intentionally silent, error handled by returning result dict
        pass

Severity: high

The except Exception block currently uses pass, which means if an unexpected error occurs in the try block (e.g., during json.loads or asyncio.run), the script will exit silently without producing any output. The parent process, which expects a JSON output, will then fail with a less informative JSON decoding error. It would be more robust to handle this exception by printing a JSON object that indicates failure, similar to how errors are handled within the run_model_test function. This will ensure the parent script always receives a parsable JSON response and can report the specific error that occurred.

Suggested change:

-    except Exception:  # noqa: S110 - intentionally silent, error handled by returning result dict
-        pass
+    except Exception:  # noqa: S110 - error is captured and reported as JSON
+        import traceback
+        result = {
+            'success': False,
+            'response': None,
+            'error': f'Unexpected error in test script:\n{traceback.format_exc()}',
+            'timing': 0.0,
+        }
+        print(f'---JSON_RESULT_START---\n{json.dumps(result)}\n---JSON_RESULT_END---')

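On the consuming side, the markers make it easy for the parent process to recover the JSON payload even when other log output surrounds it. A minimal sketch, assuming the child's stdout has already been captured into a string; the helper name and the fallback result shape are assumptions, not code from this pull request:

    # Hedged sketch: extract the marker-delimited JSON result from captured stdout.
    import json


    def extract_json_result(stdout: str) -> dict:
        """Return the dict between the JSON result markers, or a failure dict."""
        start_marker = '---JSON_RESULT_START---'
        end_marker = '---JSON_RESULT_END---'
        start = stdout.find(start_marker)
        end = stdout.find(end_marker)
        if start == -1 or end == -1 or end <= start:
            return {'success': False, 'response': None,
                    'error': 'No JSON result markers found in output', 'timing': 0.0}
        payload = stdout[start + len(start_marker):end].strip()
        try:
            return json.loads(payload)
        except json.JSONDecodeError as exc:
            return {'success': False, 'response': None,
                    'error': f'Malformed JSON result: {exc}', 'timing': 0.0}


    if __name__ == '__main__':
        sample = 'log noise\n---JSON_RESULT_START---\n{"success": true}\n---JSON_RESULT_END---\n'
        print(extract_json_result(sample))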

Labels: config, docs, fix, js, python, test
Projects: No status
2 participants