feat: add support for haystack pipeline benchmarking#11033

Open
srini047 wants to merge 2 commits into deepset-ai:main from srini047:benchmark-pipeline

Conversation

@srini047
Contributor

@srini047 srini047 commented Apr 3, 2026

Related Issues

Proposed Changes:

  • Benchmark results for both the entire pipeline (normal and async) and at the per-component level
  • Uses percentiles rather than only the average, since percentiles give a more accurate, user-centric view of how the pipeline actually performs in the real world. So p50, p90, and p99 are reported alongside avg and total.
  • Displays the benchmark result in a user-friendly report, as JSON, and as a PipelineBenchmarkResult object
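The percentile aggregation described above can be sketched with a nearest-rank percentile over the collected per-run timings. This is a minimal illustration, not the PR's actual implementation; `percentile` and `summarize_timings` are hypothetical names:

```python
import math


def percentile(sorted_vals, pct):
    """Nearest-rank percentile: smallest value covering pct% of the samples."""
    rank = math.ceil(pct / 100 * len(sorted_vals))
    return sorted_vals[max(rank - 1, 0)]


def summarize_timings(timings_ms):
    """Aggregate a list of per-run timings (ms) into the reported statistics."""
    vals = sorted(timings_ms)
    return {
        "p50": percentile(vals, 50),
        "p90": percentile(vals, 90),
        "p99": percentile(vals, 99),
        "avg": sum(vals) / len(vals),
        "total": sum(vals),
    }
```

Note that with only three runs, as in the sample output, p90 and p99 both collapse to the maximum observed timing, which is why those two columns are identical there.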

How did you test it?

Added test cases specific to benchmarking.

Notes for the reviewer

Please find the sample output:

# `result.report()`
==========================================================
 Pipeline Benchmark Results
==========================================================

name         p50       p90       p99       avg       total
-----------  --------  --------  --------  --------  --------
pipeline     0.583 ms  0.620 ms  0.620 ms  0.593 ms  1.779 ms

add_two      0.028 ms  0.033 ms  0.033 ms  0.029 ms  0.086 ms
add_default  0.025 ms  0.027 ms  0.027 ms  0.025 ms  0.076 ms
double       0.018 ms  0.022 ms  0.022 ms  0.019 ms  0.057 ms

  Runs               : 3
  Fastest run        : 0.577 ms
  Slowest run        : 0.620 ms
  Slowest component  : add_two
==========================================================

# `result.to_json()`
{
  "pipeline": {
    "p50": 0.5826249980600551,
    "p90": 0.6195829919306561,
    "p99": 0.6195829919306561,
    "avg": 0.5930136636986086,
    "total": 1.779040991095826
  },
  "components": {
    "add_two": {
      "p50": 0.027582995244301856,
      "p90": 0.03283399564679712,
      "p99": 0.03283399564679712,
      "avg": 0.02870833365401874,
      "total": 0.08612500096205622
    },
    "add_default": {
      "p50": 0.024833003408275545,
      "p90": 0.02658400626387447,
      "p99": 0.02658400626387447,
      "avg": 0.025403001927770674,
      "total": 0.07620900578331202
    },
    "double": {
      "p50": 0.01824999344535172,
      "p90": 0.021500003640539944,
      "p99": 0.021500003640539944,
      "avg": 0.019166662241332233,
      "total": 0.0574999867239967
    }
  },
  "slowest_component": "add_two",
  "fastest_run": 0.5768330011051148,
  "slowest_run": 0.6195829919306561,
  "num_runs": 3,
  "pipeline_name": "Pipeline"
}
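The per-run timings behind numbers like these can be collected with a simple wall-clock loop. A minimal sketch, assuming a zero-argument callable that runs the pipeline; the `benchmark` helper and its signature are illustrative, not the PR's actual API:

```python
import time


def benchmark(run_fn, num_runs=3):
    """Time run_fn() num_runs times and return per-run durations in ms.

    Hypothetical helper for illustration only.
    """
    timings_ms = []
    for _ in range(num_runs):
        start = time.perf_counter()
        run_fn()
        timings_ms.append((time.perf_counter() - start) * 1000.0)
    return timings_ms
```

`time.perf_counter()` is a monotonic, high-resolution clock, which matters at the sub-millisecond scale shown in the report above, where `time.time()` resolution could dominate the measurement.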

Checklist

  • I have read the contributors guidelines and the code of conduct.
  • I have updated the related issue with new insights and changes.
  • I have added unit tests and updated the docstrings.
  • I've used one of the conventional commit types for my PR title: fix:, feat:, build:, chore:, ci:, docs:, style:, refactor:, perf:, test: and added ! in case the PR includes breaking changes.
  • I have documented my code.
  • I have added a release note file, following the contributors guidelines.
  • I have run pre-commit hooks and fixed any issue.

@srini047 srini047 requested a review from a team as a code owner April 3, 2026 12:25
@srini047 srini047 requested review from bogdankostic and removed request for a team April 3, 2026 12:25
@vercel

vercel bot commented Apr 3, 2026

@srini047 is attempting to deploy a commit to the deepset Team on Vercel.

A member of the Team first needs to authorize it.


Development

Successfully merging this pull request may close these issues.

Benchmark Haystack Pipeline