Add utility to write timestamped CSV performance reports #4
base: main
Conversation
perfTestRunner.py (outdated):

```python
# Save results after successful test execution
save_results(test_data)

if "train" in test_data["results"] and "match" in test_data["results"]:
```
Can you make this generic, please? So that if we have other tests, it just loops through them and creates the appropriate columns?
Also, can we have reports per test per year? That may be easier to feed into the visualization.
Thanks for the feedback! I’ve updated the implementation to make CSV generation fully generic by dynamically iterating over all phases, and organized reports under perf_reports/testName/year. The runner now passes the complete results dictionary to the CSV writer. Please let me know if this looks good.
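For reference, a minimal sketch of what such a generic writer could look like; the function name and the shape of `results` are illustrative assumptions, not the exact code in this PR:

```python
import csv
import os
from datetime import datetime

def write_perf_csv(test_name, results, base_dir="perf_reports"):
    """Write a timestamped CSV under <base_dir>/<test_name>/<year>/.

    `results` is assumed to map phase names (e.g. "train", "match")
    to a numeric metric such as elapsed seconds.
    """
    now = datetime.now()
    out_dir = os.path.join(base_dir, test_name, str(now.year))
    os.makedirs(out_dir, exist_ok=True)
    csv_path = os.path.join(out_dir, f"{test_name}_{now:%Y%m%d_%H%M%S}.csv")

    # Columns come from whatever phases are present in the results,
    # so adding a new test phase requires no changes here.
    phases = sorted(results)
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp"] + phases)
        writer.writerow([now.isoformat()] + [results[p] for p in phases])
    return csv_path
```

Called as, say, `write_perf_csv("febrl_120K", {"train": 41.2, "match": 12.7})`, this would produce something like `perf_reports/febrl_120K/2026/febrl_120K_20260106_085938.csv`.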
@padam-prakash please review
Pull request overview
This PR refactors the performance test runner to simplify CSV generation from existing JSON test reports. Instead of executing tests and comparing results, the runner now reads a pre-existing JSON report and generates a timestamped CSV file for easier tracking and analysis.
Key changes:
- Replaced test execution logic with JSON-to-CSV conversion
- Added dedicated CSV writer utility with input validation
- Integrated timestamped CSV generation into CI workflows
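
A rough sketch of that JSON-to-CSV flow (the input path and the flat report structure are illustrative assumptions, not the actual contents of perf_csv_writer.py):

```python
import csv
import json
from datetime import datetime

# Load a pre-existing JSON test report (path is an assumed example).
with open("results/febrl_120K_report.json") as f:
    report = json.load(f)

# Timestamp the CSV name like the example outputs in results/,
# e.g. febrl_120K_20260106_085938.csv.
stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
csv_path = f"results/febrl_120K_{stamp}.csv"

# Assumes a flat {metric_name: value} report; a nested report
# would need flattening first.
with open(csv_path, "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=sorted(report))
    writer.writeheader()
    writer.writerow(report)
```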
Reviewed changes
Copilot reviewed 6 out of 7 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| perf_csv_writer.py | New utility to validate and convert JSON test reports to CSV format |
| perfTestRunner.py | Refactored from test executor to JSON-to-CSV converter |
| results/febrl_120K_20260106_085938.csv | Example CSV output with performance metrics |
| results/febrl_120K_20260106_085906.csv | Example CSV output (duplicate content) |
| .github/workflows/runWorkfow_file.yml | New workflow to run performance tests and commit CSV results |
| .github/workflows/run-performance.yml | Workflow to generate CSV from upstream FEBRL test reports |
.github/workflows/runWorkfow_file.yml:

```diff
@@ -0,0 +1,37 @@
+name: Run Performance Tests and Save CSV
```
Copilot AI · Jan 7, 2026
Corrected spelling of 'Workfow' to 'Workflow' in filename.
```bash
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
git add perf_reports || true
```
Copilot AI · Jan 7, 2026
The workflow attempts to add 'perf_reports' directory, but the code writes to 'results/' directory (line 49 in perfTestRunner.py). This should be 'git add results/' to match the actual output location.
Suggested change:

```diff
-git add perf_reports || true
+git add results || true
```
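
As a side note on the existing step, the `|| true` guard keeps the workflow from failing when the target directory does not exist yet, for example on a run that produced no CSV output.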
perf_csv_writer.py (outdated):

```python
# ----------------------------------------------------

output_dir = os.path.dirname(csv_path)
os.makedirs(output_dir, exist_ok=True)
```
Copilot AI · Jan 7, 2026
When csv_path is just a filename without directory components, os.path.dirname() returns an empty string, and os.makedirs('') raises FileNotFoundError instead of creating anything. Add a check: only call makedirs if output_dir is non-empty.
Suggested change:

```diff
-os.makedirs(output_dir, exist_ok=True)
+if output_dir:
+    os.makedirs(output_dir, exist_ok=True)
```
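
A quick illustration of the edge case (standard library behavior):

```python
import os

print(os.path.dirname("results/report.csv"))  # "results"
print(os.path.dirname("report.csv"))          # "" (no directory part)

# os.makedirs("") raises FileNotFoundError rather than silently
# succeeding, which is why the guard above is needed.
```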
Where are the performance tests actually being executed now? If test execution has moved to another repository or workflow in this PR, could we document and link to the job that produces the JSON report?
Remove emojis.
This PR adds a timestamped CSV performance report per run and integrates it into the existing perfTestRunner without affecting existing JSON outputs.