feat(models): Add L1 regularization support to LogisticRegression #960
Open
vtewari2 wants to merge 1 commit into sunlabuiuc:master from
Conversation
Add optional l1_lambda parameter (default 0.0, fully backward-compatible)
that appends a sparsity-inducing L1 penalty on the final linear layer's
weights to the BCE loss during forward():
loss = BCE(logits, y_true) + l1_lambda * ||fc.weight||_1
This is equivalent to scikit-learn LogisticRegression(penalty='l1', C=C)
with l1_lambda = 1 / (C * n_train), and reproduces the regularisation used
in Boag et al. 2018 "Racial Disparities and Mistrust in End-of-Life Care"
(MLHC 2018, arXiv:1808.03827) to train interpersonal-feature mistrust
classifiers on MIMIC-III.
Co-Authored-By: Varun Tewari <vtewari2@illinois.edu>
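The C-to-l1_lambda mapping stated above can be sketched as a small helper (the function name is illustrative, not part of the patch):

```python
def l1_lambda_from_sklearn_C(C: float, n_train: int) -> float:
    """Convert scikit-learn's inverse regularization strength C into the
    l1_lambda used by this patch: l1_lambda = 1 / (C * n_train).

    scikit-learn's L1-penalized objective is equivalent to minimizing
    sum-of-losses + (1/C) * ||w||_1, while this patch adds
    l1_lambda * ||w||_1 to a *mean* (per-sample) BCE loss, hence the
    extra factor of n_train.
    """
    return 1.0 / (C * n_train)

# e.g. C=0.1 with 38,000 training samples gives roughly 2.6e-4:
print(l1_lambda_from_sklearn_C(0.1, 38_000))
```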
Summary
Adds an optional l1_lambda parameter to LogisticRegression that appends a sparsity-inducing L1 penalty on the final linear layer's weights to the training loss:
loss = BCE(logits, y_true) + l1_lambda * ‖fc.weight‖₁
The default is l1_lambda=0.0, making this fully backward-compatible: existing code that instantiates LogisticRegression without the parameter is unaffected.
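As a sanity check, the penalized loss can be reproduced outside the model (a NumPy sketch for illustration only; the patch itself computes this inside the model's forward() with PyTorch):

```python
import numpy as np

def bce_with_l1(logits, y_true, weight, l1_lambda=0.0):
    """Mean binary cross-entropy plus an optional L1 penalty on `weight`.

    Mirrors: loss = BCE(logits, y_true) + l1_lambda * ||weight||_1
    """
    p = 1.0 / (1.0 + np.exp(-logits))   # sigmoid
    eps = 1e-12                          # guard against log(0)
    bce = -np.mean(y_true * np.log(p + eps) + (1 - y_true) * np.log(1 - p + eps))
    return bce + l1_lambda * np.abs(weight).sum()

logits = np.array([0.0, 2.0, -1.0])
y = np.array([0.0, 1.0, 0.0])
w = np.array([0.5, -0.25])  # ||w||_1 = 0.75
# A positive l1_lambda shifts the loss up by exactly l1_lambda * ||w||_1:
print(bce_with_l1(logits, y, w, 0.0), bce_with_l1(logits, y, w, 0.1))
```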
Motivation
The current LogisticRegression model supports unregularised logistic regression only. Many clinical prediction tasks, particularly those with high-dimensional, sparse binary feature spaces, benefit from L1 regularisation to produce interpretable, sparse weight vectors. This is the standard approach in the clinical ML literature (e.g. LASSO logistic regression) and was specifically used in Boag et al. 2018, "Racial Disparities and Mistrust in End-of-Life Care" (MLHC 2018, arXiv:1808.03827), to train mistrust classifiers on MIMIC-III.
Parameter equivalence to scikit-learn
For users migrating from scikit-learn's LogisticRegression(penalty='l1', C=C):
l1_lambda = 1 / (C × n_train)
Example: C=0.1 on a dataset of 38,000 training samples → l1_lambda ≈ 2.6e-4
Changes
pyhealth/models/logistic_regression.py:
- Add l1_lambda: float = 0.0 to __init__
- forward(): add l1_lambda * self.fc.weight.abs().sum() to the loss when l1_lambda > 0
Usage
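A minimal self-contained sketch of the patched forward() (the class name and constructor below are illustrative stand-ins, not pyhealth's actual LogisticRegression API):

```python
import torch
import torch.nn as nn

class L1LogisticRegression(nn.Module):
    """Illustrative stand-in mirroring the patch: BCE loss plus an
    optional L1 penalty on the final linear layer's weights."""

    def __init__(self, n_features: int, l1_lambda: float = 0.0):
        super().__init__()
        self.fc = nn.Linear(n_features, 1)
        self.l1_lambda = l1_lambda
        self.criterion = nn.BCEWithLogitsLoss()

    def forward(self, x: torch.Tensor, y_true: torch.Tensor) -> torch.Tensor:
        logits = self.fc(x).squeeze(-1)
        loss = self.criterion(logits, y_true)
        if self.l1_lambda > 0:
            # Sparsity-inducing penalty: l1_lambda * ||fc.weight||_1
            loss = loss + self.l1_lambda * self.fc.weight.abs().sum()
        return loss
```

With l1_lambda=0.0 the added branch never fires, so existing behaviour is unchanged; a positive l1_lambda shifts the loss by exactly l1_lambda * ‖fc.weight‖₁.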