Conversation

@MaxPastukhov15
@MaxPastukhov15 MaxPastukhov15 commented Nov 26, 2025

This PR implements and tests the LikelihoodBreakpointer, which stops the EM-like iterative pipeline when the change in log-likelihood between consecutive iterations falls below a specified threshold (|L_{t+1} - L_t| < threshold).

Key changes:

  • Added LikelihoodBreakpointer class with proper convergence logic and validation.

  • Wrote unit tests using mocked likelihood sequences to reliably verify convergence detection, first-call behavior, and instance reuse after reset.
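
The convergence logic described above can be sketched roughly as follows. This is a minimal illustration, assuming a `should_stop`/`reset` interface and a `_likelihood_old` attribute; apart from the class name, the constructor, and the criterion |L_{t+1} - L_t| < threshold, these names are assumptions and may differ from the merged implementation:

```python
from typing import Optional


class LikelihoodBreakpointer:
    """Stops EM-like iteration when |L_{t+1} - L_t| < threshold (sketch)."""

    def __init__(self, threshold: float):
        self._validate(threshold)
        self.threshold = threshold
        self._likelihood_old: Optional[float] = None

    @staticmethod
    def _validate(threshold: float) -> None:
        # Reject non-float or non-positive thresholds up front.
        if not isinstance(threshold, float) or threshold <= 0:
            raise ValueError("threshold must be a positive float")

    def should_stop(self, likelihood: float) -> bool:
        # First call: no previous value to compare against, so keep iterating.
        if self._likelihood_old is None:
            self._likelihood_old = likelihood
            return False
        converged = abs(likelihood - self._likelihood_old) < self.threshold
        self._likelihood_old = likelihood
        return converged

    def reset(self) -> None:
        # Clear stored state so the same instance can be reused for a new run.
        self._likelihood_old = None


# Usage on a mocked likelihood sequence: the loop stops at the last value,
# where the change (0.0005) falls below the threshold of 1e-3.
bp = LikelihoodBreakpointer(threshold=1e-3)
for value in [-120.0, -100.5, -100.05, -100.0495]:
    if bp.should_stop(value):
        break
```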

Closes #52

@MaxPastukhov15 MaxPastukhov15 self-assigned this Nov 26, 2025
@MaxPastukhov15 MaxPastukhov15 changed the title Likelihood breakpointer feature: likelihood breakpointer Nov 26, 2025
@MaxPastukhov15 MaxPastukhov15 changed the title feature: likelihood breakpointer feat: likelihood breakpointer Nov 26, 2025
@iraedeus iraedeus left a comment

LGTM

absolute difference between the current and previous log-likelihood values
falls below a specified threshold:

|L_{t+1} - L_t| < threshold
Put math expressions in docstrings with .. math::
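
For illustration, the suggested reStructuredText math directive inside a docstring could look like this (a sketch of the suggestion, not the merged code):

```python
class LikelihoodBreakpointer:
    r"""Stop iteration once the log-likelihood change is below a threshold.

    The convergence criterion is

    .. math::

        |L_{t+1} - L_t| < \text{threshold}
    """
```

A raw string (`r"""`) keeps the LaTeX backslashes intact so Sphinx can render the formula.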

def __init__(self, threshold: float):
    self._validate(threshold)
    self.threshold = threshold
    self._L_old: Optional[float] = None
Why two variables? A single _likelihood_old variable is enough.
Also, do not use capital letters in mutable variable names.

@MaxPastukhov15 MaxPastukhov15 deleted the MaxPastukhov15/likelihood_breakpointer branch December 6, 2025 11:14