Conversation

@Srishti-8976 Srishti-8976 commented Dec 17, 2025

This PR adds a timestamped CSV performance report per run and integrates it into the existing perfTestRunner without affecting existing JSON outputs.
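As a sketch of the timestamped naming this PR describes (hypothetical helper name; the `results/` directory and `YYYYMMDD_HHMMSS` stamp are inferred from the example outputs committed in the PR, e.g. `results/febrl_120K_20260106_085938.csv`):

```python
from datetime import datetime
from pathlib import Path

def timestamped_csv_path(test_name, results_dir="results"):
    """Build a per-run CSV path like results/febrl_120K_20260106_085938.csv."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return Path(results_dir) / f"{test_name}_{stamp}.csv"
```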

# Save results after successful test execution
save_results(test_data)

if "train" in test_data["results"] and "match" in test_data["results"]:
Member

Can you make this generic, please? If we add other tests, it should just loop through them and create the appropriate columns.

Member

Also, can we have reports per test per year? That may be easier to feed into the visualization.

Author

@Srishti-8976 Srishti-8976 Dec 24, 2025

Thanks for the feedback! I’ve updated the implementation to make CSV generation fully generic by dynamically iterating over all phases, and organized reports under perf_reports/testName/year. The runner now passes the complete results dictionary to the CSV writer. Please let me know if this looks good.
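A minimal sketch of what the generic writer described above could look like (hypothetical names; assumes `results` maps each phase name, e.g. `"train"` or `"match"`, to a metric value, and mirrors the `perf_reports/testName/year` layout):

```python
import csv
from pathlib import Path

def write_perf_csv(test_name, year, results, base_dir="perf_reports"):
    """Write one row per run, with one column per phase, under base_dir/testName/year."""
    out_dir = Path(base_dir) / test_name / str(year)
    out_dir.mkdir(parents=True, exist_ok=True)
    phases = sorted(results)  # derive columns from the data, not a hard-coded list
    out_path = out_dir / f"{test_name}.csv"
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=phases)
        writer.writeheader()
        writer.writerow({phase: results[phase] for phase in phases})
    return out_path
```

Because the column list is derived from the results dictionary, adding a new test phase requires no change to the writer.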

Member

@padam-prakash please review

Copilot AI left a comment

Pull request overview

This PR refactors the performance test runner to simplify CSV generation from existing JSON test reports. Instead of executing tests and comparing results, the runner now reads a pre-existing JSON report and generates a timestamped CSV file for easier tracking and analysis.

Key changes:

  • Replaced test execution logic with JSON-to-CSV conversion
  • Added dedicated CSV writer utility with input validation
  • Integrated timestamped CSV generation into CI workflows

Reviewed changes

Copilot reviewed 6 out of 7 changed files in this pull request and generated 3 comments.

Summary per file:
  perf_csv_writer.py: New utility to validate and convert JSON test reports to CSV format
  perfTestRunner.py: Refactored from test executor to JSON-to-CSV converter
  results/febrl_120K_20260106_085938.csv: Example CSV output with performance metrics
  results/febrl_120K_20260106_085906.csv: Example CSV output (duplicate content)
  .github/workflows/runWorkfow_file.yml: New workflow to run performance tests and commit CSV results
  .github/workflows/run-performance.yml: Workflow to generate CSV from upstream FEBRL test reports


@@ -0,0 +1,37 @@
name: Run Performance Tests and Save CSV
Copilot AI Jan 7, 2026

The filename misspells 'Workflow' as 'Workfow'; consider renaming runWorkfow_file.yml to runWorkflow_file.yml.

Copilot uses AI. Check for mistakes.
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
git add perf_reports || true
Copilot AI Jan 7, 2026

The workflow attempts to add 'perf_reports' directory, but the code writes to 'results/' directory (line 49 in perfTestRunner.py). This should be 'git add results/' to match the actual output location.

Suggested change
git add perf_reports || true
git add results || true

# ----------------------------------------------------

output_dir = os.path.dirname(csv_path)
os.makedirs(output_dir, exist_ok=True)
Copilot AI Jan 7, 2026

When csv_path is just a filename without directory components, os.path.dirname() returns an empty string, causing os.makedirs to create a directory with an empty name in the current working directory. Add a check: only call makedirs if output_dir is not empty.

Suggested change
os.makedirs(output_dir, exist_ok=True)
if output_dir:
    os.makedirs(output_dir, exist_ok=True)

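A self-contained sketch of the guarded version (hypothetical helper name; the edge case is that `os.path.dirname` returns an empty string for a bare filename):

```python
import os

def ensure_parent_dir(csv_path):
    # os.path.dirname("report.csv") is "", so only create a directory
    # when the path actually contains one.
    output_dir = os.path.dirname(csv_path)
    if output_dir:
        os.makedirs(output_dir, exist_ok=True)
```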
@padam-prakash
Contributor

Where are the performance tests actually being executed now?

In this PR, perfTestRunner.py has been refactored from a test executor into a JSON-to-CSV converter. The workflow .github/workflows/run-performance.yml checks out zinggAI/zingg_community_performance_reports and consumes a pre-generated JSON at INPUT=community_reports/perf_test/perf_test_report/testReport_febrl120K.json, then writes a timestamped CSV; it doesn't run train/match here.

If test execution has moved to another repository or workflow, could we document and link to the job that produces the testReport JSON? Alternatively, if tests should run in this repo, we might add a job that executes the Zingg scripts to generate the JSON before conversion. Also note that .github/workflows/runWorkfow_file.yml calls perfTestRunner.py without setting INPUT, so it wouldn’t run successfully.
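A minimal sketch of a fail-fast guard for the missing-INPUT case mentioned above (hypothetical function name; assumes the report is plain JSON):

```python
import json
import os
import sys

def load_report_or_exit():
    """Exit with a clear message if INPUT does not point at a testReport JSON."""
    input_path = os.environ.get("INPUT")
    if not input_path:
        sys.exit("INPUT is not set; expected a path to the testReport JSON")
    with open(input_path) as f:
        return json.load(f)
```

With this guard, a workflow that forgets to export INPUT fails immediately with an actionable message instead of an obscure traceback.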

@padam-prakash
Contributor

remove emojis.
