Testing¶
Charms should have tests to verify that they are functioning correctly. This page describes the types of testing that you should consider.
Unit testing¶
Unit tests isolate and validate individual code units (functions, methods, and so on) by mocking Juju APIs and workloads without external interactions. Unit tests are intended to be isolated and fast to complete. These are the tests you would run before committing any code changes.
Every unit test involves a mocked event context, as charms only execute in response to events: a charm does nothing unless it is run, and it is only run when an event occurs. So there is always an event context to mock, and the starting point of a unit test is typically an event.
A charm acts like a function, taking event context (always present), configuration, relation data, and stored state as inputs. It then performs operations affecting its workload or other charms:
System operations such as writing files.
Cloud operations such as launching virtual machines.
Workload operations, using Pebble in the case of a Kubernetes charm.
Juju operations such as sharing data with related charms.
Unit tests focus on mapping these inputs to expected outputs. For example, a unit test could verify a system call, the contents of a file, or the contents of a relation databag.
See also: How to write unit tests for a charm.
Coverage¶
Unit testing a charm should cover at least:
How relation data is modified as a result of an event.
Which Pebble services are running as a result of an event.
Which configuration files are written and their contents, as a result of an event.
Tools¶
ops.testing, the framework for state-transition testing in Ops
tox, for automating and standardizing tests
The ops.testing framework provides State, which mocks inputs and outputs; Context, which is used to simulate events; and Container, which offers a mock filesystem. Tests involve:
Setting up the charm, metadata, context, output mocks, and Juju state.
Simulating events using Context.run. For example, config_changed, relation_changed, storage_attached, pebble_ready, and so on.
Retrieving and asserting the output.
Context and State are instantiated before the charm. This enables you to prepare the state of config, relations, and storage before simulating an event.
Examples¶
Interface testing¶
Interface tests validate charm library behavior against mock Juju APIs, ensuring compliance with an interface specification without requiring individual charm code.
Interface specifications, stored in charm-relation-interfaces, are contract definitions that mandate how a charm should behave when integrated with another charm over a registered interface. For information about how to create an interface, see Register an interface.
See also: Write tests for an interface.
Coverage¶
Interface tests enable Charmhub to validate the relations of a charm and verify that your charm supports the registered interface. For example, if your charm supports an interface called “ingress”, interface tests enable Charmhub to verify that your charm supports the registered ingress interface.
Interface tests also:
Enable alternative implementations of an interface to validate themselves against the contractual specification stored in charm-relation-interfaces.
Help verify compliance with multiple versions of an interface.
An interface test has the following pattern:
Given - An initial state of the relation over the interface under test.
When - A specific relation event fires.
Then - The state of the databags is valid. For example, the state satisfies a pydantic schema.
In addition to checking that the databag state is valid, we could check for more elaborate conditions.
Tools¶
Interface tests are written and run with pytest-interface-tester, a pytest plugin.
Examples¶
Interface tests enable us to check whether our charm complies with the behavioural specification of the interface, independently from whichever charm is integrated with our charm.
Integration testing¶
Integration tests verify the interaction of multiple software components. In the context of a charm, they ensure the charm functions correctly when deployed in a test model in a real controller, checking for “blocked” or “error” states during typical operations. The goal of integration testing is to ensure the charm’s operational logic performs as expected under diverse conditions.
Integration tests should be focused on a single charm. Sometimes an integration test requires multiple charms to be deployed for adequate testing, but ideally integration tests should not become end-to-end tests.
Integration tests typically take significantly longer to run than unit tests.
See also: How to write integration tests for a charm.
Coverage¶
Packing and deploying the charm
Charm actions
Charm relations
Charm configurations
That the workload is up and running, and responsive
Upgrade sequence
Regression test: upgrade stable/candidate/beta/edge from Charmhub with the locally-built charm.
Caution
When writing an integration test, it is not sufficient to simply check that Juju reports that running the action was successful; rather, additional checks need to be executed to ensure that whatever the action was intended to achieve worked.
Tools¶
Integration tests and unit tests should run using the minor version of Python that is shipped with the OS specified in charmcraft.yaml (the base.run-on key). For example, if Ubuntu 22.04 is specified in charmcraft.yaml, you can use the following tox configuration:
[testenv]
basepython = python3.10
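A fuller configuration might separate unit and integration environments. The environment names, dependency lists, and test paths below are illustrative assumptions, not a prescribed layout:

```
[tox]
no_package = True
env_list = unit, integration

[testenv]
basepython = python3.10
deps = -r requirements.txt

[testenv:unit]
deps =
    {[testenv]deps}
    pytest
    ops[testing]
commands = pytest -v tests/unit

[testenv:integration]
deps =
    {[testenv]deps}
    pytest
    pytest-operator
commands = pytest -v tests/integration
```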
pytest-operator¶
pytest-operator is a Python library that provides Juju plugins for the generic Python library pytest to facilitate the integration testing of charms.
See more: pytest-operator.
It provides a fixture called ops_test that helps you interact with Juju through constructs that wrap around python-libjuju.
It also provides convenient markers and command-line parameters. For example, the @pytest.mark.skip_if_deployed marker, combined with the --no-deploy configuration option, helps you skip a deployment test when you already have a deployment.
Examples¶
Continuous integration¶
Typically, you want the tests to be run automatically against any PR into your repository’s main branch, and potentially trigger a new release whenever the tests succeed. Continuous deployment is out of scope for this page, but we will look at how to set up basic continuous integration.
Create a file called .github/workflows/ci.yaml. For example, to include a unit-test job that runs the tox unit environment:
name: Tests
on:
  push:
    branches:
      - main
  pull_request:

jobs:
  unit-test:
    name: Unit tests
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Install dependencies
        run: python -m pip install tox
      - name: Run tests
        run: tox -e unit
Integration tests are a bit more complex, because they require a Juju controller and a cloud in which to deploy the charm. The following example uses the actions-operator workflow provided by charmed-kubernetes to set up microk8s and Juju:
integration-test-microk8s:
  name: Integration tests (microk8s)
  needs:
    - lint
    - unit-test
  runs-on: ubuntu-latest
  steps:
    - name: Checkout
      uses: actions/checkout@v3
    - name: Setup operator environment
      uses: charmed-kubernetes/actions-operator@main
      with:
        provider: microk8s
    - name: Run integration tests
      # Set a predictable model name so it can be consumed by charm-logdump-action
      run: tox -e integration -- --model testing
    - name: Dump logs
      uses: canonical/charm-logdump-action@main
      if: failure()
      with:
        app: my-app-name
        model: testing
For more actions, documentation, and use cases, see charming-actions.