cardano_node_tests.pytest_plugins package
Submodules
cardano_node_tests.pytest_plugins.xdist_scheduler module
- class cardano_node_tests.pytest_plugins.xdist_scheduler.OneLongScheduling(config: Any, log: Producer | None = None)[source]
Bases: LoadScopeScheduling

Scheduling plugin with long-test balancing and split-key dispersion.
- Scope:
An “xdist_group” marker value or full node id.
- Tests marked with @pytest.mark.long are tracked so that no more than one is scheduled per worker at a time.
- Tests marked with @pytest.mark.smoke get a fast lane when the run has at least SMOKE_DEDICATED_THRESHOLD workers AND there are smoke tests in the collection: SMOKE_DEDICATED_COUNT workers (the lowest by gateway id) prefer smoke tests over any other work. Other workers continue to pick up any work, including smoke tests. Once no smoke tests remain queued, smoke workers fall back to the regular scheduling path so they don’t sit idle for the rest of the run.
- Tests marked with @pytest.mark.xdist_split("<key>") are NOT grouped onto one worker (unlike xdist_group). Each gets its own per-test scope so the scheduler can reorder them independently, and the number of in-flight tests sharing a given split key is capped at the available cluster instance capacity (see self.clusters_count). A test that locks multiple shared resources can declare several keys at once as positional args (e.g. @pytest.mark.xdist_split("governance", "plutus")); the cap is then enforced independently for each key.
  Without this cap, when many tests with the same split key are collected together (e.g. governance tests in the same module), xdist hands them out at once to many workers; workers beyond the instance capacity then sit blocked in the cluster manager waiting for the shared resource (e.g. governance setup) to free up, instead of running unrelated non-conflicting tests that could share those same cluster instances in parallel. With the cap, only as many same-key tests are scheduled simultaneously as there are instances to host them, and the remaining workers pick non-conflicting work.
  Example: 9 cluster instances, 18 governance tests, 20 workers. Without the cap, 18 workers all try to start governance: 9 run, the other 9 wait for instance capacity while 2 unrelated tests sit unassigned. With the cap, 9 workers run governance and the remaining 11 run non-governance tests on those same instances in parallel.
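As an illustration, the markers above could be applied in a test module like this (test names and bodies are hypothetical; the marker names and their semantics are the ones documented above):

```python
import pytest


@pytest.mark.long
def test_rollback():
    """At most one "long" test is scheduled per worker at a time."""
    ...


@pytest.mark.smoke
def test_ping():
    """Preferred by the dedicated smoke workers on sufficiently large runs."""
    ...


@pytest.mark.xdist_split("governance")
def test_proposal():
    """In-flight "governance" tests are capped at cluster instance capacity."""
    ...


@pytest.mark.xdist_split("governance", "plutus")
def test_gov_plutus_script():
    """Multiple split keys: the cap is enforced independently for each key."""
    ...
```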
- Workqueue:
Ordered dictionary that maps each available scope to its associated tests (node ids). Node ids are in turn mapped to their completion status. One entry of the workqueue is called a work unit; a collection of work units is called a workload.
workqueue = {
    '<scope>': {
        '<full>/<path>/<to>/test_module.py::test_case1': False,
        '<full>/<path>/<to>/test_module.py::test_case2': False,
        (...)
    },
    (...)
}
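The shape above can be sketched as a plain Python mapping (paths and scope names are illustrative; the real scheduler keeps this as an ordered dictionary keyed by scope):

```python
# Minimal sketch of the workqueue shape. A per-test scope (from xdist_split)
# holds a single node id; an xdist_group scope can hold several.
workqueue = {
    "tests/test_module.py::test_case1": {  # per-test scope
        "tests/test_module.py::test_case1": False,
    },
    "group_a": {  # "xdist_group" scope shared by two tests
        "tests/test_module.py::test_case2": False,
        "tests/test_module.py::test_case3": False,
    },
}

# A work unit is one scope's mapping; a completed test flips its flag.
work_unit = workqueue["group_a"]
work_unit["tests/test_module.py::test_case2"] = True
```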
- Assigned_work:
Ordered dictionary that maps each worker node to its assigned work units.
assigned_work = {
    '<scope>': {
        '<full>/<path>/<to>/test_module.py': {
            '<full>/<path>/<to>/test_module.py::test_case1': False,
            '<full>/<path>/<to>/test_module.py::test_case2': False,
            (...)
        },
        (...)
    },
    (...)
}
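A hypothetical dispatch step, moving a pending work unit from the workqueue into a worker's assigned work (worker ids and scopes are illustrative, not the scheduler's actual API):

```python
# Two pending work units and two idle workers (names are made up).
workqueue = {
    "group_a": {"tests/test_module.py::test_case1": False},
    "group_b": {"tests/test_module.py::test_case2": False},
}
assigned_work = {"gw0": {}, "gw1": {}}

# Dispatch: pop the first pending scope and hand its whole work unit
# to one worker, so all tests in the scope run on the same node.
scope, unit = next(iter(workqueue.items()))
del workqueue[scope]
assigned_work["gw0"][scope] = unit
```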
- cardano_node_tests.pytest_plugins.xdist_scheduler.pytest_collection_modifyitems(items: list) → None[source]
- cardano_node_tests.pytest_plugins.xdist_scheduler.pytest_xdist_make_scheduler(config: Any, log: Any) → OneLongScheduling[source]