
Conversation

@tmigot (Member) commented Nov 21, 2025

A suggestion of changes to make 0.22 less breaking.
I tested NLPModelsModifiers, NLPModelsTest, QuadraticModels, and ADNLPModels.

Changes are very minor; for instance, for NLPModelsModifiers and NLPModelsTest:

  • Testing a poor `x` would no longer fail.
  • The `show` function of `nlp` has also changed.

QuadraticModels and ADNLPModels did not require any modification.

I commented out the warn messages in some places because they allocate.
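
As a minimal hedged sketch of the kind of change meant here (hypothetical function, not the actual NLPModels code): an `@warn` on a hot path allocates because of string interpolation and the logging machinery, so the simplest fix is to comment it out or guard it.

```julia
# Hypothetical illustration, not the actual patch in this PR.
function check_dimensions(nlp, x::AbstractVector)
  if length(x) != nlp.meta.nvar
    # The interpolation and logging below allocate on every call that hits this branch,
    # which is why such messages were commented out:
    # @warn "x has length $(length(x)) but the model has $(nlp.meta.nvar) variables"
    return false
  end
  return true
end
```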

@tmigot (Member, Author) commented Nov 21, 2025

@dpo @amontoison A suggested patch for 0.22.1.
After this, updating to 0.22 should reduce to merging the CompatHelper PR. For the core packages, we can still use the work done by klamike, and I will take care of the rest.

@tmigot requested review from @amontoison and @dpo on November 21, 2025 at 21:42
@github-actions (Contributor)

Package name (latest / stable badge columns not rendered):
ADNLPModels
AdaptiveRegularization
AmplNLReader
BundleAdjustmentModels
CUTEst
CaNNOLeS
DCISolver
FletcherPenaltySolver
FluxNLPModels
JSOSolvers
JSOSuite
LLSModels
ManualNLPModels
NLPModelsIpopt
NLPModelsJuMP
NLPModelsKnitro
NLPModelsModifiers
NLPModelsTest
NLSProblems
PDENLPModels
PartiallySeparableNLPModels
PartiallySeparableSolvers
Percival
QuadraticModels
RegularizedProblems
SolverBenchmark
SolverCore
SolverTest
SolverTools

@amontoison (Member) commented Nov 21, 2025

I posted a message on Zulip to explain the situation. This PR will not help the current issue.

@dpo (Member) commented Nov 21, 2025

@amontoison There is no need to alarm everyone. On Zulip, I'm quite sure nobody has any idea what you are talking about except for @tmigot and me. In addition, nobody there makes high-level decisions about JSO. Everybody (me first) appreciates your concern for keeping versions consistent and everything in sync. It looks like a mistake happened. Let's just try to fix it cleanly and constructively. After this, we will need to work together on procedures to make sure this doesn't happen again.

When you write

> This PR will not help the current issue.

you need to tell us why and be specific. I'm still trying to understand exactly what is causing problems and what isn't.

@amontoison (Member) commented Nov 21, 2025

@dpo It is just that I don't want the new release to land in the packages of the exanauts / MadNLP or optimal_control organizations.

I don't have full control over that.
It will be a mess if half of the packages update to 0.22.

I am currently in New Zealand with limited bandwidth, but I will do my best to explain the issues as soon as possible.

But it will give me a lot of work when I am back in Chicago. We still have time to revert this mistake.

@dpo (Member) commented Nov 22, 2025

Presumably, they test before merging. In the meantime, let's fix the issue. I'm also willing to help upgrade the various dependents here. I'm by no means asking you to do it all by yourself.

@tmigot says that he tested this PR against several packages. Now I'd like to understand exactly what is breaking and why.

@amontoison (Member) commented Nov 22, 2025

> Presumably, they test before merging. In the meantime, let's fix the issue. I'm also willing to help upgrade the various dependents here. I'm by no means asking you to do it all by yourself.

I will explain the problem in more detail in #523. The core issue is not really code breakage, but rather version incompatibilities and the way the package manager handles them.

If we switch to NLPModels.jl 0.22 and only support that version (which is the recommended workflow for breaking releases, since CI tests only the latest version), then all new releases of this package will depend on NLPModels.jl ≥ 0.22. The problem is that any package that has not yet migrated will still require 0.21, forcing users to install older versions. This makes it difficult to benefit from recent hotfixes or new features in packages that have upgraded, while maintaining compatibility with those that haven't.
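
To illustrate the conflict (package names are taken from the table above, but the compat values below are assumptions, not the actual Project.toml contents):

```julia
using Pkg

# A dependent that has migrated would declare, in the [compat] section of its Project.toml,
#   NLPModels = "0.22"
# while a dependent that has not migrated still declares
#   NLPModels = "0.21"
# Asking for both in one environment cannot be resolved against a single NLPModels version:
Pkg.add(["NLPModelsKnitro", "CUTEst"])
# Pkg will either report unsatisfiable requirements for NLPModels
# or hold every package back at versions compatible with 0.21.
```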

For example, for a solver that uses the linear API, such as NLPModelsKnitro.jl, we must drop support for NLPModels.jl 0.21 and move to 0.22. In the future we also need to bump the dependency on KNITRO.jl to version 1.0, because that is the version required for KNITRO 15, the only version currently delivered with KNITRO_jll.jl and supported.
(I have needed to do that for the past two or three weeks.)

Because of this, we won't be able to solve problems in OptimizationProblems.jl, ADNLPModels.jl, CUTEst.jl, or NLPModelsJuMP.jl, since all of them are still capped at 0.21 at the moment.

> @tmigot says that he tested this PR against several packages. Now I'd like to understand exactly what is breaking and why.

The issue is not caused by the code itself. In fact, only 4 or 5 packages using the linear API are affected at the code level. The actual breakage comes from the version bump: since 0.22 is a new minor release, the package manager prevents testing against it unless we update each Project.toml. Until we bump these dependencies one by one, CI cannot install / test them with NLPModels.jl 0.22.
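
A sketch of the per-package step this implies, using Pkg's compat helper (the widened bound "0.21, 0.22" is only an assumption about the transition strategy; a hard bump to "0.22" is the alternative):

```julia
using Pkg

Pkg.activate(".")                      # environment of the dependent package
Pkg.compat("NLPModels", "0.21, 0.22")  # widen (or bump) the [compat] entry in Project.toml
Pkg.update("NLPModels")                # let the resolver pick up 0.22.x
Pkg.test()                             # run the package's tests against the new release
```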

The number of direct and indirect dependent packages is quite large; see https://juliahub.com/ui/Packages/General/NLPModels/0.21.5#dependents.

Even if we make 0.22.1 non-breaking at the code level, it still creates incompatibilities for the package manager.
From the user's perspective, we also need to make new releases of these dependents so that users don't need to install the main branch on their own.
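
Until such releases are tagged, a user who wants the new version has to track the development branch manually, e.g. something like (assuming the default branch is named `main`):

```julia
using Pkg
# Without a tagged release, users must add the unreleased version themselves:
Pkg.add(name = "NLPModels", rev = "main")
```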
