
Fix: Add mlflowconfig to eval base model #5745

Open
mollyheamazon wants to merge 2 commits into aws:master from mollyheamazon:fix/eval_mlflowconfig

Conversation

@mollyheamazon
Contributor

@mollyheamazon mollyheamazon commented Apr 10, 2026

Description

Fix MLflow tracking support for base-model-only evaluations.

The three _BASE_MODEL_ONLY pipeline templates (DETERMINISTIC, CUSTOM_SCORER, LLMAJ) were missing the MlflowConfig block that their fine-tuned counterparts already had. The auto-resolve infrastructure in base_evaluator.py was already wiring up mlflow_resource_arn for all evaluator types, but the resolved ARN was silently dropped because the templates had no MlflowConfig block to render it into.

Changes:

  • Added MlflowConfig block to LLMAJ_TEMPLATE_BASE_MODEL_ONLY, DETERMINISTIC_TEMPLATE_BASE_MODEL_ONLY, and CUSTOM_SCORER_TEMPLATE_BASE_MODEL_ONLY in pipeline_templates.py
  • Updated unit tests in test_pipeline_templates.py to assert that MlflowConfig is present (they previously, and incorrectly, asserted it was absent)
  • Uncommented mlflow_resource_arn in test_benchmark_evaluator.py integ test for base model only evaluation to validate the fix end-to-end
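To illustrate the failure mode the changes above address, here is a minimal sketch. It assumes the pipeline templates are plain string templates rendered with a dict of resolved values; the template names mirror the PR, but the YAML shape, field names, and `render` helper are illustrative, not the repository's actual code. The point is that a resolved `mlflow_resource_arn` is silently dropped unless the template has a placeholder for it:

```python
# Hypothetical sketch -- not the real pipeline_templates.py.
# Before the fix, the base-model-only template had no MlflowConfig
# block, so a resolved ARN had nowhere to land and was dropped.
from string import Template

# Template as it was (no MlflowConfig block):
LLMAJ_TEMPLATE_BASE_MODEL_ONLY_OLD = Template(
    "evaluation:\n"
    "  task: llm_as_judge\n"
    "  model: $model_id\n"
)

# Template after the fix, matching the fine-tuned counterpart:
LLMAJ_TEMPLATE_BASE_MODEL_ONLY = Template(
    "evaluation:\n"
    "  task: llm_as_judge\n"
    "  model: $model_id\n"
    "  MlflowConfig:\n"
    "    MlflowResourceArn: $mlflow_resource_arn\n"
)

def render(template: Template, **values: str) -> str:
    # safe_substitute ignores values with no matching placeholder,
    # which is exactly how the ARN got silently dropped before.
    return template.safe_substitute(**values)

arn = "arn:aws:sagemaker:us-east-1:111122223333:mlflow-tracking-server/demo"
old = render(LLMAJ_TEMPLATE_BASE_MODEL_ONLY_OLD,
             model_id="my-base-model", mlflow_resource_arn=arn)
new = render(LLMAJ_TEMPLATE_BASE_MODEL_ONLY,
             model_id="my-base-model", mlflow_resource_arn=arn)

assert arn not in old  # old template: ARN resolved but never rendered
assert arn in new      # fixed template: ARN reaches the pipeline spec
```

The updated unit tests assert the same property as the final checks here: the rendered base-model-only pipeline spec must contain the MlflowConfig block once an ARN is supplied.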

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@mollyheamazon mollyheamazon deployed to auto-approve April 11, 2026 00:03 — with GitHub Actions Active
@mollyheamazon mollyheamazon changed the title Add mlflowconfig to eval Fix: Add mlflowconfig to eval base model Apr 11, 2026
