RFE: allow test dependency packages to be installed in the test phase instead of during plan preparation
tmt collects all test dependencies and tries to install them before running the tests.
This causes a problem when a test plan contains a lot of tests: if the dependency of a single test fails to install, no test is executed at all.
It would be nice if we could pass a parameter to tmt to change this behavior and move the test dependency installation to the test level.
Ideally, testing-farm would allow this parameter to be passed when using the CLI.
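For context, a test declares its packages via the `require` (or `recommend`) key, and today tmt installs the union of all of them during the prepare step. A minimal fmf sketch (the test itself is made up):

```yaml
# example.fmf — hypothetical test; today its requires are installed
# together with every other test's requires during plan preparation
summary: Example test
test: ./run.sh
require:
  - gcc
  - make
```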
Agree with the idea, and I think it's not the first time this has come up.
One issue I can think of is how this would interact with provision.user. I did not check whether the prepare.install steps have special workarounds to make sure the installations are run with sudo.
I've looked at it a bit and did not see an issue around the user being overridden.
But I've found another issue. All tests are performed in the same environment, so if testA installs a package and testB runs after it, testB still sees the package installed by testA. This is particularly problematic when tests are allowed to run in parallel and the installation is locked because another one is in progress.
One possible approach is to install before and uninstall after each test, and to disallow tests from running in parallel whenever there are test-specific package installs. One caveat is that not all package managers may cleanly uninstall the dependencies (think `dnf remove` vs `pip uninstall`).
@LecrisUT I wonder if we really need to uninstall the packages? If we disallow parallel test runs when using test-specific package installs, the problem is solved already, no?
Note that running multiple tests at the same time on the same guest is not something that would work out of the box. tmt does not like to share guests with other tmt invocations in general; the fact that one test might spoil the environment by installing and not removing its dependencies is just a small piece of the ugly puzzle.
> @LecrisUT I wonder if we really need to uninstall the packages?
That would be to avoid unintentional side effects.
Here's an example: I write two tests that exercise a cmake build using either gcc or clang, but I assume there is only one compiler installed, so I don't bother specifying it and let cmake autodetect. Depending on the order of the tests, the clang test might not pick up clang as the compiler but use gcc instead, since that is the default for cc.
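A minimal fmf sketch of that scenario (node names and commands are made up for illustration):

```yaml
/gcc:
    summary: Build with gcc
    require: [gcc]
    test: cmake -B build && cmake --build build
/clang:
    summary: Build with clang
    require: [clang]
    # if gcc is still installed from the previous test, cmake may
    # autodetect gcc as `cc` instead of clang
    test: cmake -B build && cmake --build build
```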
@LecrisUT from my understanding, uninstalling the packages is a separate problem that is not related to this issue. Currently all test dependencies are installed as the very first thing, so in your example both gcc and clang would be installed with the default behavior...
Yes, indeed, with the current workflow that is also not possible. But it's partially about expectations: with the current approach it will always fail, and you see the installation happen much earlier on. If we move it to be test-specific, it becomes flaky.
My thought is that if we have test-specific installation, we might as well go the extra mile and make sure it is robust and covers most use cases. It would prevent having to announce behavior changes down the line or adding yet another variable to cover different behaviors.
+1 to @LecrisUT. IMO we should restore (read: `dnf history undo`) after the test in the case where the require/recommend installation happens just before the test is run.
But we can do that in a later PR; just moving the installation will already get us some test results (currently zero, as no test runs at all). The problem of a messed-up system (e.g. too many packages installed, or conflicting requires such as the mariadb vs mysql server rpms) will still be present.
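A hedged sketch of what that restore could look like with dnf, written as an ansible task (the transaction id variable is made up and assumed to be captured via `dnf history list` right before the test-specific install):

```yaml
- name: Roll back the transaction that installed the test requires
  # {{ install_transaction_id }} is a hypothetical variable recorded
  # just before the per-test install happened
  ansible.builtin.command: dnf history undo -y {{ install_transaction_id }}
```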
So how about the following ways to control dependency installation (a strawman config sketch follows the list):
- install everything in prepare (current way)
- install before the test
- install before the test and remove after the test
- install nothing
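Purely as a strawman to make the options concrete (the key and values below do not exist in tmt, they are invented for this discussion):

```yaml
# strawman sketch — not an existing tmt option
#   plan:         install everything in prepare (current behavior)
#   test:         install just before each test
#   test-restore: install before and remove after each test
#   never:        install nothing
install-requires: test
```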
+1 for restoring. `dnf history undo` sounds good, but if we are to support other package managers, this seems more like a job for ansible?
With a playbook:
- snapshot:

```yaml
- name: Gather the list of installed packages (snapshot)
  ansible.builtin.package_facts:
    manager: auto

- name: Store the initial package list
  ansible.builtin.set_fact:
    initial_packages: "{{ ansible_facts.packages }}"
```

- desired state:

```yaml
- name: Install packages for test $test
  ansible.builtin.package:
    name:
      - foo
      - bar
    state: present

- name: Uninstall a package for test $test
  ansible.builtin.package:
    name:
      - baz
    state: absent
```

- the annoying part, restoring it (is there a better way?):

```yaml
- name: Gather the current list of packages
  ansible.builtin.package_facts:
    manager: auto

- name: Calculate packages to reinstall
  ansible.builtin.set_fact:
    packages_to_reinstall: "{{ hostvars[inventory_hostname].initial_packages.keys() | difference(ansible_facts.packages.keys()) }}"

- name: Calculate packages to remove
  ansible.builtin.set_fact:
    packages_to_remove: "{{ ansible_facts.packages.keys() | difference(hostvars[inventory_hostname].initial_packages.keys()) }}"

- name: Re-install packages that were uninstalled
  ansible.builtin.package:
    name: "{{ item }}"
    state: present
  loop: "{{ packages_to_reinstall }}"
  when: packages_to_reinstall | length > 0

- name: Remove packages that were newly installed
  ansible.builtin.package:
    name: "{{ item }}"
    state: absent
  loop: "{{ packages_to_remove }}"
  when: packages_to_remove | length > 0
```
cough cough ansible_runner
From the discussion at the last triage meeting:
- keep this issue only for installation before running the test
- do not deal with rollback (restore/uninstall) for now (file as another issue if ever needed)
- NOTE: mvadkert believes that if a user needs an expected environment per test, we should rather tell them to use `--max 1` to run each test in a separate environment
Maybe we could emphasize the usage of `order` if a full rollback is not necessary for each test.
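For example, a sketch using the fmf `order` attribute (tests run sorted by `order`, default 50; node names and commands made up) so that a test which pollutes the environment runs last:

```yaml
/test-default-toolchain:
    order: 10    # runs first, before extra compilers get installed
    test: ./check-default-cc.sh
/test-with-clang:
    order: 90    # runs last, so its installed packages cannot affect earlier tests
    require: [clang]
    test: ./check-clang.sh
```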