
Define how to run tests#218

Open
furtib wants to merge 1 commit into Ericsson:main from furtib:test-script

Conversation

@furtib
Contributor

@furtib furtib commented Apr 8, 2026

Why:
The requirements for running tests are not well defined.
This patch aims to define which targets should be run for testing.
This patch does not define how the user should set up their environment, only how the tests should be run; that is a TODO for a follow-up patch.

What:

  • Added a script that runs all checks.
  • Modified CONTRIBUTING.md to specify which tests must be run.
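The two items above could boil down to something like the following sketch. This is not the actual script from the PR: the function name `run_all_checks`, the `//...` target pattern, and the `src/` lint path are all assumptions.

```shell
#!/bin/sh
# Hypothetical sketch of "a script that runs all checks" as described in
# the PR summary. Names and paths are assumptions, not the real run_tests.sh.
run_all_checks() {
    bazel test //... || return 1   # run every Bazel test target
    pylint src/ || return 1        # fail if pylint warns on Python sources
    echo "all checks passed"
}
```

A wrapper like this lets CI and contributors invoke one entry point instead of remembering individual Bazel and pylint commands.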

Addresses:
none?

@furtib furtib self-assigned this Apr 8, 2026
@furtib furtib marked this pull request as draft April 8, 2026 10:08
@furtib furtib changed the title Define how to run tests [WIP] Define how to run tests Apr 8, 2026
Collaborator

@nettle nettle left a comment


Good start! Thanks @furtib!

Comment thread CONTRIBUTING.md
-------

Before submitting any changes please make sure all tests and checks are passed, pylint doesn't show warnings on new code, and fill out the Pull Request template. If you are a new contributor, and the template is confusing, feel free to submit the PR and we will help you iron it out! On how to run or add a new test, please see [test/README.md](test/README.md).
Before submitting any changes please make sure all tests and checks are passed.
Collaborator


I'm sure that before running anything we must clearly describe how to initialize the environment.
See Development Environment above.

Comment thread run_tests.sh
Comment on lines +2 to +5
# This script currently assumes that users have set up their environment.
# This means users using bazelisk have set their bazel version.
# Either with the environment variable: USE_BAZEL_VERSION
# or with the .bazelversion file.
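The assumption stated in that header could be turned into an explicit guard at the top of the script. A minimal sketch — the function name `bazel_version_pinned` is mine, not from the PR:

```shell
#!/bin/sh
# Hypothetical guard matching the header comment above: succeed only when a
# Bazel version is pinned, either via USE_BAZEL_VERSION (bazelisk) or via a
# .bazelversion file in the working directory.
bazel_version_pinned() {
    [ -n "${USE_BAZEL_VERSION:-}" ] || [ -f .bazelversion ]
}

# Example: warn early with a clear message instead of a cryptic bazelisk error.
bazel_version_pinned || {
    echo "warning: set USE_BAZEL_VERSION or create .bazelversion" >&2
}
```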
Collaborator


Hm... debatable...
Maybe we should set up the correct environment in this script?

Contributor


Set up through program arguments/environment variables? I understand that our client base has its own Bazel and CodeChecker distributions, so we can't generalize too much here.

This sounds like a reasonable compromise:
USE_BAZEL_VERSION=8.5.0 CODECHECKER_BIN_PATH=/path/to/codechecker/25 run_tests.sh
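Inside the script, that compromise could look roughly like the sketch below. The function `resolve_bazel_version` and the fallback order (explicit variable, then `.bazelversion`, then unpinned) are my assumptions; only the variable names `USE_BAZEL_VERSION` and `CODECHECKER_BIN_PATH` come from the proposal above.

```shell
#!/bin/sh
# Hypothetical resolution order for the proposed override variables:
# an explicit USE_BAZEL_VERSION wins, then a .bazelversion file,
# else report "unpinned".
resolve_bazel_version() {
    if [ -n "${USE_BAZEL_VERSION:-}" ]; then
        echo "$USE_BAZEL_VERSION"
    elif [ -f .bazelversion ]; then
        cat .bazelversion
    else
        echo "unpinned"
    fi
}

# CODECHECKER_BIN_PATH overrides whatever CodeChecker is on PATH.
CODECHECKER_BIN="${CODECHECKER_BIN_PATH:-CodeChecker}"
```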

Collaborator


This is about the Development Environment, not the Execution Environment, since we run tests, right?
Here we have to find common ground: a common environment that works for all possible contributors, starting from you and me but extending to anyone. That already means at least different versions of Ubuntu and RedHat, different version sets of Bazel and Clang, etc.
That's exactly why I suggested Mise and/or Micromamba to solve this problem.
Both are still not ideal, but at least something. Maybe some alternatives are possible too.
(BTW, the native Bazel dependency manager may work, but we have not tried it.)

Contributor Author


We could very well set up the Python virtual environment for testing in this script, but I would be hesitant to also do that for the LLVM toolchain (maybe that's where I'm wrong?).

My first idea is: does a Bazel-provided LLVM toolchain exist, and can we use it?

My experience is that most of these crashes should be due to diagtool not being found by clang. (You can check the CodeChecker log for the failing analyzed file to confirm this; even though the plist files aren't getting created, the log still is.)
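If diagtool really is the culprit, a quick preflight check could confirm it before any test runs. This sketch assumes diagtool is expected next to the clang binary; the function name `diagtool_beside` is hypothetical:

```shell
#!/bin/sh
# Hypothetical preflight check: given a path to a clang binary, verify that
# an executable named diagtool sits in the same directory, since a missing
# diagtool is the suspected cause of the analyzer crashes discussed here.
diagtool_beside() {
    [ -x "$(dirname "$1")/diagtool" ]
}

clang_bin="$(command -v clang || true)"
if [ -n "$clang_bin" ] && ! diagtool_beside "$clang_bin"; then
    echo "warning: diagtool not found next to $clang_bin" >&2
fi
```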

Collaborator


We could very well set up the Python virtual environment for testing in this script, but I would be hesitant to also do that for the LLVM toolchain (maybe that's where I'm wrong?).

Right, unfortunately venv (a virtual environment) is not sufficient; that's exactly why I suggested Mise and/or Micromamba.

My first idea is: does a Bazel-provided LLVM toolchain exist, and can we use it?

Yes, it is theoretically possible!
Practically... I don't know; it might be difficult to find an LLVM package that we could use.

My experience is that most of these crashes should be due to diagtool not being found by clang. (You can check the CodeChecker log for the failing analyzed file to confirm this; even though the plist files aren't getting created, the log still is.)

There are so many different failures depending on versions and environments that I did not check them.
We need to fix the environment first, then the set of tests and the way we run them.
After that I can report and investigate failures.

@furtib
Contributor Author

furtib commented Apr 14, 2026

I have investigated the possibility of using the bazel module: https://github.com/bazel-contrib/toolchains_llvm
And I found it to be surprisingly doable. I don't even think the big download size is a huge blocker, since it won't redownload it even after a bazel clean.
With that said, it introduces the complexity of whether we require all users to use it or not. (How do we test both methods if we do not require it?)

So instead, I tried this different approach, setting up the environment "manually".
I try not to set any dependency that we need to vary, like Bazel; I'm still figuring out how to handle that.
I chose this because it produces an environment that I think is more easily reproducible to end users compared to what Micromamba provides.

@furtib furtib requested review from Szelethus and nettle April 14, 2026 11:08
@furtib furtib force-pushed the test-script branch 5 times, most recently from 3349517 to 39cd357 Compare April 28, 2026 08:27
Collaborator

@nettle nettle left a comment


This looks to me like the introduction of a venv + LLVM environment :)
It should go under .ci/ as yet another dev env, along with mise and micromamba.
And it should be a separate review, not part of "Define how to run tests".

@furtib
Contributor Author

furtib commented Apr 30, 2026

Fair!
I will strip this back to just the script.

That raises some questions about whether the file test/test.sh is still being used in CI workflows, or if I can remove that entirely.

@furtib furtib marked this pull request as ready for review April 30, 2026 17:51
@furtib furtib changed the title [WIP] Define how to run tests Define how to run tests Apr 30, 2026
@nettle
Collaborator

nettle commented Apr 30, 2026

Fair! I will strip this back to just the script.

Actually, I think venv or uv could be useful :)
But again, as separate PRs.

That raises some questions about whether the file test/test.sh is still being used in CI workflows, or if I can remove that entirely.

To be honest, just adding such a script does not bring much value without defining the environment, I think.
Without defining a reliable environment (micromamba, mise, venv, uv, or something else)
we cannot say we have an answer to the question "how to run tests?"

@nettle
Collaborator

nettle commented Apr 30, 2026

I have investigated the possibility of using the bazel module: https://github.com/bazel-contrib/toolchains_llvm And I found it to be surprisingly doable. I don't even think the big download size is a huge blocker, since it won't redownload it even after a bazel clean. With that said, it introduces the complexity of whether we require all users to use it or not. (How do we test both methods if we do not require it?)

This is actually interesting!
Can you please do a prototype? (as a separate PR)

So instead, I tried this different approach, setting up the environment "manually". I try not to set any dependency that we need to vary, like Bazel; I'm still figuring out how to handle that. I chose this because it produces an environment that I think is more easily reproducible to end users compared to what Micromamba provides.

Well, setting up the environment "manually" sounds extremely unreliable.
To make sure we have "common ground" for tests, the setup must be automated.
But of course it starts with describing the "manual" steps and conditions.
