Workflows
Workflows are server-side logic that can schedule and combine server tasks and worker tasks to automate complex operations.
Workflows are created from a workflow template chosen from a set maintained by the server administrators, plus data coming from user input.
See Workflows for an overview, and below for technical details.
Workflow noop
This is a workflow that does nothing, and is mainly used in tests.
task_data: empty
Workflow sbuild
This workflow takes a source package and creates sbuild work requests (see Sbuild task) to build it for a set of architectures.
task_data:
- prefix (optional): prefix this string to the item names provided in the internal collection
- input (required): see Task PackageBuild
- target_distribution (required string): vendor:codename, specifying the environment to use for building. It will be used to determine distribution or environment, depending on backend.
- backend (optional string): see Task PackageBuild
- architectures (required list of strings): list of architectures to build. It can include all to build a binary for Architecture: all
- arch_all_host_architecture (string, defaults to amd64): concrete architecture on which to build Architecture: all packages
- environment_variant (optional string): variant of the environment we want to build on, e.g. buildd; appended during environment lookup for target_distribution above
- build_profiles (optional, default unset): select a build profile; see Task PackageBuild
- binnmu (optional, default unset): build a binNMU; see Task PackageBuild
- retry_delays (optional list): a list of delays to apply to each successive retry; each item is an integer suffixed with m for minutes, h for hours, d for days, or w for weeks
- signing_template_names (dictionary, optional): mapping from architecture to list of names of binary packages that should be used as templates by the ExtractForSigning task
The source package will be built on the intersection of the provided list of architectures and the architectures supported in the Architecture: field of the source package. Architecture: all packages are built on arch_all_host_architecture.
The workflow may also apply a denylist of architectures if it finds a
debian:suite
collection corresponding to the build
distribution/environment, and that suite provides one.
The workflow adds event reactions that cause the debian:upload
artifact
in the output for each architecture to be provided as
{prefix}build-{architecture}
in the workflow’s internal collection.
If the workspace has a debian:package-build-logs collection, then the workflow adds update-collection-with-data and update-collection-with-artifacts event reactions to each sbuild work request to record their build logs there.
If retry_delays is set, then the workflow adds a corresponding on_failure retry-with-delays action to each of the sbuild work requests it creates. This provides a simplistic way to retry dependency-wait failures. Note that this currently retries any failure, not just dependency-waits; this may change in future.
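Putting these fields together, an sbuild workflow might be created with task data like the following sketch (the artifact lookup, suite, and architecture list are illustrative examples, not defaults):

```yaml
# Hypothetical task data for an sbuild workflow; values are examples only.
prefix: "pipeline/"
input:
  source_artifact: 541              # lookup of a debian:source-package artifact
target_distribution: "debian:trixie"
backend: "unshare"
architectures: ["amd64", "all"]
arch_all_host_architecture: "amd64"
environment_variant: "buildd"
retry_delays: ["30m", "2h", "1d"]   # retry after 30 minutes, then 2 hours, then 1 day
```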
If signing_template_names
exists, then the workflow adds event reactions
that cause the corresponding debian:binary-package
artifacts in the
output for each architecture to be provided as
{prefix}signing-template-{architecture}-{binary_package_name}
in the
workflow’s internal collection.
Workflow update_environments
This workflow schedules work requests to build tarballs and images, and adds them to a debian:environments collection.
task_data:
- vendor (required): the name of the distribution vendor, used to look up the target debian:environments collection
- targets (required): a list of dictionaries as follows:
  - codenames (required): the codename of an environment to build, or a list of such codenames
  - codename_aliases (optional): a mapping from build codenames to lists of other codenames; if given, add the output to the target collection under the aliases in addition to the build codenames. For example, trixie: [testing]
  - variants (optional): an identifier to use as the variant name when adding the resulting artifacts to the target collection, or a list of such identifiers; if not given, the default is not to set a variant name
  - backends (optional): the name of the debusine backend to use when adding the resulting artifacts to the target collection, or a list of such names; if not given, the default is not to set a backend name
  - architectures (required): a list of architecture names of environments to build for this codename
  - mmdebstrap_template (optional): a template to use to construct data for the Mmdebstrap task
  - simplesystemimagebuild_template (optional): a template to use to construct data for the SimpleSystemImageBuild task
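As an illustrative sketch (the codenames, variants, backends, and template contents are hypothetical examples, not defaults), an update_environments workflow might be given:

```yaml
vendor: "debian"
targets:
- codenames: "trixie"
  codename_aliases:
    trixie: [testing]        # also register outputs under "testing"
  variants: ["buildd"]
  backends: ["unshare"]
  architectures: ["amd64", "arm64"]
  mmdebstrap_template:       # hypothetical template content for the Mmdebstrap task
    bootstrap_options:
      variant: "buildd"
```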
For each codename in each target, the workflow creates a group. Then, for each architecture in that target, it fills in whichever of mmdebstrap_template and simplesystemimagebuild_template are present and uses them to construct child work requests. In each one, bootstrap_options.architecture is set to the target architecture, and bootstrap_repositories[].suite is set to the codename if it is not already set.
The workflow adds one event reaction to each child work request, as follows, for each combination of the codename (including any matching entries from codename_aliases), variant (variants, or [null] if missing/empty), and backend (backends, or [null] if missing/empty). {vendor} is the vendor from the workflow's task data, and {category} is debian:system-tarball for mmdebstrap tasks and debian:system-image for simplesystemimagebuild tasks:
on_success:
- action: "update-collection-with-artifacts"
  artifact_filters:
    category: "{category}"
  collection: "{vendor}@debian:environments"
  variables:
    codename: {codename}
    variant: {variant}  # omit if null
    backend: {backend}  # omit if null
Workflow autopkgtest
This workflow schedules autopkgtests for a single source package on a set of architectures.
task_data:
- prefix (string, optional): prefix this string to the item names provided in the internal collection
- source_artifact (Single lookup, required): see Autopkgtest task
- binary_artifacts (Multiple lookup, required): see Autopkgtest task
- context_artifacts (Multiple lookup, optional): see Autopkgtest task
- vendor (string, required): the distribution vendor on which to run tests
- codename (string, required): the distribution codename on which to run tests
- backend (string, optional): see Autopkgtest task
- architectures (list of strings, optional): if set, only run on any of these architecture names
- arch_all_host_architecture (string, defaults to amd64): concrete architecture on which to run tasks for Architecture: all packages
- include_tests, exclude_tests, debug_level, extra_environment, needs_internet, fail_on, timeout: see Autopkgtest task
Tests will be run on the intersection of the provided list of architectures
(if any) and the architectures provided in binary_artifacts
. If only
Architecture: all
binary packages are provided in binary_artifacts
,
then tests are run on {arch_all_host_architecture}
.
The workflow creates an Autopkgtest task for each concrete architecture, with task data:
- input.source_artifact: {source_artifact}
- input.binary_artifacts: the subset of {binary_artifacts} that are for the concrete architecture or all
- input.context_artifacts: the subset of {context_artifacts} that are for the concrete architecture or all
- host_architecture: the concrete architecture
- environment: {vendor}/match:codename={codename}
- backend: {backend}
- include_tests, exclude_tests, debug_level, extra_environment, needs_internet, fail_on, timeout: copied from workflow task data parameters of the same names
Any of the lookups in input.source_artifact
, input.binary_artifacts
,
or input.context_artifacts
may result in promises, and in that case the workflow adds corresponding
dependencies. Binary promises must include an architecture
field in
their data.
Each work request provides its debian:autopkgtest
artifact as output in
the internal collection, using the item name
{prefix}autopkgtest-{architecture}
.
Todo
It would be useful to have a mechanism to control multiarch tests, such as testing i386 packages on an amd64 testbed.
Workflow package_upload
This workflow signs and uploads source and/or binary packages to an upload queue. It is normally expected to be used as a sub-workflow.
task_data:
- source_artifact (Single lookup, optional): a debian:source-package or debian:upload artifact representing the source package (the former is used when the workflow is started based on a .dsc rather than a .changes)
- binary_artifacts (Multiple lookup, optional): a list of debian:upload artifacts representing the binary packages
- merge_uploads (boolean, defaults to False): if True, merge the uploads and create a single PackageUpload task to upload them all together; if False, create a separate PackageUpload task for each upload
- since_version (string, optional): passed to the MakeSourcePackageUpload task if source_artifact is a debian:source-package
- target_distribution (string, optional): passed to the MakeSourcePackageUpload task if source_artifact is a debian:source-package
- key (Single lookup, optional): the debusine:signing-key artifact to sign the upload with, which must have purpose openpgp
- require_signature (boolean, defaults to True): whether the upload must be signed
- target (required): the upload queue, as an ftp:// or sftp:// URL
- vendor (string, optional): the distribution vendor to use for running the MakeSourcePackageUpload and MergeUploads tasks
- codename (string, optional): the distribution codename to use for running the MakeSourcePackageUpload and MergeUploads tasks
At least one of source_artifact
and binary_artifacts
must be set.
The workflow creates the following tasks, each of which has a dependency on the previous one in sequence, using event reactions to store output in the workflow’s internal collection for use by later tasks:
- if source_artifact is a debian:source-package artifact: a MakeSourcePackageUpload task (with since_version and target_distribution) to build a corresponding .changes file. Uses vendor and codename to construct the environment lookup.
- if merge_uploads is True and there is more than one source and/or binary artifact: a MergeUploads task to combine them into a single upload. Uses vendor and codename to construct the environment lookup.
- for each upload (or for the single merged upload, if merging):
  - if key is provided: a Debsign task to have debusine sign the upload with the given key
  - if key is not provided and require_signature is True: an ExternalDebsign task to wait until a user provides a signature, which debusine will then include with the upload
  - a PackageUpload task, to upload the result to the given upload queue
Workflow make_signed_source
This workflow produces a source package with signed contents from a template package and some binary packages.
task_data:
- binary_artifacts (Multiple lookup, required): the debian:binary-packages or debian:upload artifacts representing the binary packages whose contents are to be signed
- signing_template_artifacts (Multiple lookup, required): the debian:binary-package artifacts to use as signing templates
- vendor (string, required): the distribution vendor on which to sign
- codename (string, required): the distribution codename on which to sign
- architectures (list of strings): the list of architectures that this workflow is running on, plus all
- purpose (string, required): the purpose of the key to sign with; see Sign task
- key (Single lookup, required): the debusine:signing-key artifact to sign with; must match purpose
- sbuild_backend (string, optional): see Task PackageBuild
Any of the lookups in binary_artifacts
or signing_template_artifacts
may result in promises, and in that case the
workflow adds corresponding dependencies. Promises must include an
architecture
field in their data.
The list of architectures to run on is the list of architectures that are
present in both binary_artifacts
and signing_template_artifacts
,
intersected with architectures
.
For each architecture, the workflow creates sub-workflows and tasks as follows, with substitutions based on its own task data. Each one has a dependency on the previous one in sequence, using event reactions to store output in the workflow’s internal collection for use by later tasks:
- an ExtractForSigning task, with task data:
  - input.template_artifact: the subset of the lookup in this workflow's signing_template_artifacts for the concrete architecture in question
  - input.binary_artifacts: the subset of the lookup in this workflow's binary_artifacts for each of all and the concrete architecture in question that exist
  - environment: {vendor}/match:codename={codename}
- a Sign task, with task data:
  - purpose: {purpose}
  - unsigned: the output of the previous task, from the workflow's internal collection
  - key: {key}
- an AssembleSignedSource task, with task data:
  - environment: {vendor}/match:codename={codename}
  - template: the subset of the lookup in this workflow's signing_template_artifacts for the concrete architecture in question
  - signed: the output of the previous task, from the workflow's internal collection
- an sbuild sub-workflow, with task data:
  - prefix: signed-source/
  - input.source_artifact: the output of the previous task, from the workflow's internal collection
  - target_distribution: {vendor}:{codename}
  - backend: {sbuild_backend}
  - architectures: if {architectures} is set, then {architectures} plus all
Todo
We may need to use different keys for different architectures. For example, a UEFI signing key is only useful on architectures that use UEFI, and some architectures have other firmware signing arrangements.
Workflow lintian
This workflow schedules Lintian checks for a single source package and its binaries on a set of architectures.
task_data:
- source_artifact (Single lookup, required): see Lintian task
- binary_artifacts (Multiple lookup, required): see Lintian task
- vendor (string, required): the distribution vendor on which to run tests
- codename (string, required): the distribution codename on which to run tests
- backend (string, optional): see Lintian task
- architectures (list of strings, optional): if set, only run on any of these architecture names
- output, include_tags, exclude_tags, fail_on_severity: see Lintian task
Lintian will be run on the intersection of the provided list of
architectures (if any) and the architectures provided in
binary_artifacts
, in each case grouping source + arch-all + arch-any
together for the best test coverage. If only Architecture: all
binary
packages are provided in binary_artifacts
, then Lintian will be run once
for source + arch-all.
The workflow creates a Lintian task for each concrete architecture, with task data:
- input.source_artifact: {source_artifact}
- input.binary_artifacts: the subset of {binary_artifacts} that are for the concrete architecture or all
- environment: {vendor}/match:codename={codename}
- backend: {backend}
- output, include_tags, exclude_tags, fail_on_severity: copied from workflow task data parameters of the same names
Any of the lookups in input.source_artifact
or
input.binary_artifacts
may result in promises, and in that case the workflow adds corresponding
dependencies. Binary promises must include an architecture
field in
their data.
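For instance, a lintian workflow over build results in the internal collection might look like this sketch (the lookups and severity threshold are illustrative):

```yaml
source_artifact: 541
binary_artifacts: ["internal@collections/name:build-amd64"]
vendor: "debian"
codename: "trixie"
architectures: ["amd64"]
fail_on_severity: "error"    # hypothetical threshold; see the Lintian task for valid values
```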
Workflow piuparts
This workflow schedules piuparts
checks for binaries built by a single
source package on a set of architectures.
task_data:
- binary_artifacts (Multiple lookup, required): see Piuparts task
- vendor (string, required): the distribution vendor on which to run tests
- codename (string, required): the distribution codename on which to run tests
- backend (string, optional): see Piuparts task
- architectures (list of strings, optional): if set, only run on any of these architecture names
- arch_all_host_architecture (string, defaults to amd64): concrete architecture on which to run tasks for Architecture: all packages
piuparts
will be run on the intersection of the provided list of
architectures (if any) and the architectures provided in
binary_artifacts
, in each case grouping arch-all + arch-any together.
If only Architecture: all
binary packages are provided in
binary_artifacts
, then piuparts
will be run once for arch-all on
{arch_all_host_architecture}
.
The workflow creates a Piuparts task for each concrete architecture, with task data:
- input.binary_artifacts: the subset of {binary_artifacts} that are for the concrete architecture or all
- host_architecture: the concrete architecture, or {arch_all_host_architecture} if only Architecture: all binary packages are being checked by this task
- environment: {vendor}/match:codename={codename}
- base_tgz: {vendor}/match:codename={codename}
- backend: {backend}
Any of the lookups in input.binary_artifacts
may result in
promises, and in that case the workflow adds
corresponding dependencies. Binary promises must include an
architecture
field in their data.
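A minimal piuparts workflow sketch, again with illustrative lookups and values:

```yaml
binary_artifacts: ["internal@collections/name:build-amd64"]
vendor: "debian"
codename: "trixie"
backend: "unshare"
architectures: ["amd64"]
arch_all_host_architecture: "amd64"
```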
Todo
It would be useful to have a mechanism to control multiarch tests, such as testing i386 packages on an amd64 testbed.
Todo
It would be useful to be able to set base_tgz separately from environment.
Workflow qa
task_data:
- source_artifact (Single lookup, required): the debian:source-package or debian:upload artifact representing the source package to test
- binary_artifacts (Multiple lookup, required): the debian:binary-packages or debian:upload artifacts representing the binary packages to test
- vendor (string, required): the distribution vendor on which to run tests
- codename (string, required): the distribution codename on which to run tests
- architectures (list of strings, optional): if set, only run on any of these architecture names
- architectures_allowlist (list of strings, optional, either concrete architecture names or all): if set, only run on any of these architecture names; while architectures is intended to be supplied by users or passed down from a higher-level workflow, this field is intended to be provided via Task configuration
- architectures_denylist (list of strings, optional, either concrete architecture names or all): if set, do not run on any of these architecture names; this field is intended to be provided via Task configuration
- arch_all_host_architecture (string, defaults to amd64): concrete architecture on which to run tasks for Architecture: all packages
- enable_check_installability (boolean, defaults to True): whether to include installability-checking tasks
- check_installability_suite (Single lookup, required if enable_check_installability is True): the debian:suite collection to check installability against; once we have a good way to look up the primary suite for a vendor and codename, this could default to doing so
- enable_autopkgtest (boolean, defaults to True): whether to include autopkgtest tasks
- autopkgtest_backend (string, optional): see Autopkgtest task
- enable_reverse_dependencies_autopkgtest (boolean, defaults to True): whether to include autopkgtest tasks for reverse-dependencies
- reverse_dependencies_autopkgtest_suite (Single lookup, required if enable_reverse_dependencies_autopkgtest is True): the debian:suite collection to search for reverse-dependencies; once we have a good way to look up the primary suite for a vendor and codename, this could default to doing so
- enable_lintian (boolean, defaults to True): whether to include lintian tasks
- lintian_backend (string, optional): see Lintian task
- lintian_fail_on_severity (string, optional): see Lintian task
- enable_piuparts (boolean, defaults to True): whether to include piuparts tasks
- piuparts_backend (string, optional): see Piuparts task
Any of the lookups in source_artifact
or binary_artifacts
may result
in promises, and in that case the workflow adds
corresponding dependencies. Binary promises must include an
architecture
field in their data.
The list of architectures to run on is the list of architectures from
binary_artifacts
, intersecting {architectures}
if set, intersecting
{architectures_allowlist}
if set, and subtracting
{architectures_denylist}
if set.
The workflow creates sub-workflows and tasks as follows, with substitutions based on its own task data:
- if enable_check_installability is set, a single CheckInstallability task, with task data:
  - suite: {check_installability_suite}
  - binary_artifacts: the subset of the lookup in this workflow's binary_artifacts for each available architecture
- if enable_autopkgtest is set, an autopkgtest sub-workflow, with task data:
  - source_artifact: {source_artifact}
  - binary_artifacts: the subset of the lookup in this workflow's binary_artifacts for each of all and the concrete architecture in question that exist
  - vendor: {vendor}
  - codename: {codename}
  - backend: {autopkgtest_backend}
  - architectures: {architectures}
  - arch_all_host_architecture: {arch_all_host_architecture}
- if enable_reverse_dependencies_autopkgtest is set, a reverse_dependencies_autopkgtest sub-workflow, with task data:
  - binary_artifacts: the subset of the lookup in this workflow's binary_artifacts for each of all and the concrete architecture in question that exist
  - suite_collection: {reverse_dependencies_autopkgtest_suite}
  - vendor: {vendor}
  - codename: {codename}
  - backend: {autopkgtest_backend}
  - architectures: {architectures}
  - arch_all_host_architecture: {arch_all_host_architecture}
- if enable_lintian is set, a lintian sub-workflow, with task data:
  - source_artifact: {source_artifact}
  - binary_artifacts: the subset of the lookup in this workflow's binary_artifacts for each of all and the concrete architecture in question that exist
  - vendor: {vendor}
  - codename: {codename}
  - backend: {lintian_backend}
  - architectures: {architectures}
  - arch_all_host_architecture: {arch_all_host_architecture}
  - fail_on_severity: {lintian_fail_on_severity}
- if enable_piuparts is set, a piuparts sub-workflow, with task data:
  - binary_artifacts: the subset of the lookup in this workflow's binary_artifacts for each of all and the concrete architecture in question that exist
  - vendor: {vendor}
  - codename: {codename}
  - backend: {piuparts_backend}
  - architectures: {architectures}
  - arch_all_host_architecture: {arch_all_host_architecture}
Todo
Not implemented: enable_check_installability, check_installability_suite, enable_reverse_dependencies_autopkgtest and reverse_dependencies_autopkgtest_suite.
Workflow debian_pipeline
We want to provide a workflow coordinating all the steps that are typically run to build and test an upload to Debian, similar to the Salsa CI pipeline but (eventually) with more distribution-wide testing and the ability to handle the task of performing the upload.
This builds on the existing sbuild workflow.
task_data:
- source_artifact (Single lookup, required): the debian:source-package or debian:upload artifact representing the source package to test
- vendor (string, required): the distribution vendor on which to run tests
- codename (string, required): the distribution codename on which to run tests
- architectures (list of strings, optional): if set, only run on any of these architecture names
- architectures_allowlist (list of strings, optional, either concrete architecture names or all): if set, only run on any of these architecture names; while architectures is intended to be supplied by users, this field is intended to be provided via Task configuration
- architectures_denylist (list of strings, optional, either concrete architecture names or all): if set, do not run on any of these architecture names; this field is intended to be provided via Task configuration
- arch_all_host_architecture (string, defaults to amd64): concrete architecture on which to run tasks for Architecture: all packages
- signing_template_names (dictionary, optional): mapping from architecture to list of names of binary packages that should be used as signing templates by the make_signed_source sub-workflow
- sbuild_backend (string, optional): see Task PackageBuild
- sbuild_environment_variant (string, optional): variant of the environment to build on, e.g. buildd
- enable_check_installability (boolean, defaults to True): whether to include installability-checking tasks
- check_installability_suite (Single lookup, required if enable_check_installability is True): the debian:suite collection to check installability against; once we have a good way to look up the primary suite for a vendor and codename, this could default to doing so
- enable_autopkgtest (boolean, defaults to True): whether to include autopkgtest tasks
- autopkgtest_backend (string, optional): see Autopkgtest task
- enable_reverse_dependencies_autopkgtest (boolean, defaults to True): whether to include autopkgtest tasks for reverse-dependencies
- reverse_dependencies_autopkgtest_suite (Single lookup, required if enable_reverse_dependencies_autopkgtest is True): the debian:suite collection to search for reverse-dependencies; once we have a good way to look up the primary suite for a vendor and codename, this could default to doing so
- enable_lintian (boolean, defaults to True): whether to include lintian tasks
- lintian_backend (string, optional): see Lintian task
- lintian_fail_on_severity (string, optional): see Lintian task
- enable_piuparts (boolean, defaults to True): whether to include piuparts tasks
- piuparts_backend (string, optional): see Piuparts task
- enable_make_signed_source (boolean, defaults to False): whether to sign the contents of builds and make a signed source package
- make_signed_source_purpose (string, required only if enable_make_signed_source is True): the purpose of the key to sign with; see Sign task
- make_signed_source_key (Single lookup, required only if enable_make_signed_source is True): the debusine:signing-key artifact to sign with; must match purpose
- enable_confirmation (boolean, defaults to False): whether the generated workflow includes a confirmation step asking the user to double-check what was built before the upload
- enable_upload (boolean, defaults to False): whether to upload to an upload queue
- upload_key (Single lookup, optional): key used to sign the uploads; if not set and upload_require_signature is True, the user will have to remotely sign the files
- upload_require_signature (boolean, defaults to True): whether the uploads must be signed
- upload_include_source (boolean, defaults to True): include source with the upload
- upload_include_binaries (boolean, defaults to True): include binaries with the upload
- upload_merge_uploads (boolean, defaults to True): if True, merge the uploads and create a single PackageUpload task to upload them all together; if False, create a separate PackageUpload task for each upload
- upload_since_version (string, optional): if source_artifact is a debian:source-package, include changelog information from all versions strictly later than this version in the .changes file; the default is to include only the topmost changelog entry
- upload_target_distribution (string, optional): if source_artifact is a debian:source-package, override the target Distribution field in the .changes file to this value; the default is to use the distribution from the topmost changelog entry
- upload_target (string, defaults to ftp://anonymous@ftp.upload.debian.org/pub/UploadQueue/): the upload queue, as an ftp:// or sftp:// URL
The effective set of architectures is {architectures}
(defaulting to all
architectures supported by this debusine instance and the
{vendor}:{codename}
suite, plus all
), intersecting
{architectures_allowlist}
if set, and subtracting
{architectures_denylist}
if set.
The workflow creates sub-workflows and tasks as follows, with substitutions based on its own task data:
- an sbuild sub-workflow, with task data:
  - input.source_artifact: {source_artifact}
  - target_distribution: {vendor}:{codename}
  - backend: {sbuild_backend}
  - architectures: the effective set of architectures
  - arch_all_host_architecture: {arch_all_host_architecture}, if set
  - environment_variant: {sbuild_environment_variant}, if set
  - signing_template_names: {signing_template_names}, if set
- if any of enable_check_installability, enable_autopkgtest, enable_lintian, and enable_piuparts are True, a qa sub-workflow, with task data copied from the items of the same name in this workflow's task data, plus:
  - binary_artifacts: internal@collections/name:build-{architecture}, for each available architecture
  - architectures: the effective set of architectures
- if enable_confirmation is set, a Confirm task
- if enable_make_signed_source and signing_template_names are set, a make_signed_source sub-workflow, with task data:
  - binary_artifacts: internal@collections/name:build-{architecture}, for each available architecture
  - signing_template_artifacts: internal@collections/name:signing-template-{architecture}-{binary_package_name}, for each architecture and binary package name from signing_template_names
  - vendor: {vendor}
  - codename: {codename}
  - architectures: the effective set of architectures
  - purpose: {make_signed_source_purpose}
  - key: {make_signed_source_key}
  - sbuild_backend: {sbuild_backend}
- if enable_upload is set, a package_upload sub-workflow configured to require a signature from the developer, with task data:
  - source_artifact: {source_artifact} (left unset if upload_include_source is False)
  - binary_artifacts: internal@collections/name:build-{architecture}, for each available architecture
  - merge_uploads: {upload_merge_uploads}
  - since_version: {upload_since_version}
  - target_distribution: {upload_target_distribution}
  - key: {upload_key}
  - require_signature: {upload_require_signature}
  - target: {upload_target}
  - vendor: {vendor}
  - codename: {codename}
The first work request for each architecture in the make_signed_source
sub-workflow and the first work request in
the package_upload
sub-workflow depend on the Confirm task above.
Todo
Not implemented: enable_check_installability, check_installability_suite, enable_reverse_dependencies_autopkgtest, reverse_dependencies_autopkgtest_suite and enable_confirmation. See the relevant blueprints for task installability, reverse dependencies autopkgtest or enable confirmation.
Todo
There should also be an option to add the results to a debian:suite collection rather than uploading it to an external queue. However, this isn’t very useful until debusine has its own repository hosting, and once it does, we’ll need to be able to apply consistency checks to uploads rather than just adding them to suites in an unconstrained way. This will probably involve a new workflow yet to be designed.
Todo
The pipeline should also include the ability to schedule a debdiff against a baseline suite (either directly or in a sub-workflow).
Event reactions
The event_reactions field on a workflow is a dictionary mapping events to a list of actions. Each action is described with a dictionary where the action key defines the action to perform and where the remaining keys define the specifics of the action. See the sections below for details. The supported events are the following:
- on_creation: event triggered when the work request is created
- on_unblock: event triggered when the work request is unblocked
- on_success: event triggered when the work request completes successfully
- on_failure: event triggered when the work request fails or errors out
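For example, a work request could send a notification whenever it fails (the channel name here is hypothetical):

```yaml
event_reactions:
  on_failure:
  - action: "send-notification"
    channel: "admin-irc"     # hypothetical notification channel name
```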
Supported actions
send-notification
Sends a notification of the event using an existing notification channel.
- channel: name of the notification channel to use
- data: parameters for the notification method
update-collection-with-artifacts
Adds or replaces artifact-based collection items with artifacts generated by the current work request.
- collection (Single lookup, required): collection to update
- name_template (string, optional): template used to generate the name for the collection item associated with a given artifact. Uses the str.format templating syntax (with variables inside curly braces).
- variables (dict, optional): definition of variables to prepare in order to compute the name for the collection item. Keys and values in this dictionary are interpreted as follows:
  - Keys beginning with $ are handled using JSON paths. The part of the key after the $ is the name of the variable, and the value is a JSON path query to execute against the data dictionary of the target artifact in order to compute the value of the variable.
  - Keys that do not begin with $ simply set the variable named by the key to the value, which is a constant string.
  - It is an error to specify keys for the same variable name both with and without an initial $.
- artifact_filters (dict, required): identifies a subset of generated artifacts to add to the collection. Each key-value pair represents a Django ORM filter query against the Artifact model, so that one can run work_request.artifact_set.filter(**artifact_filters) to identify the desired set of artifacts.
Note
When the `name_template` key is not provided, it is expected that the collection will compute the name for the new artifact-based collection item. Some collection categories might not even allow you to override the name. In this case, after any JSON path expansion, the `variables` field is passed to the collection manager's `add_artifact`, so it may use those expanded variables to compute its own item names or per-item data.
As an example, you could register all the binary packages that have `Section: python` and a dependency on libpython3.12 from an sbuild task, with names like `$PACKAGE_$VERSION`, by using this action:
```yaml
action: 'update-collection-with-artifacts'
artifact_filters:
  category: 'debian:binary-package'
  data__deb_fields__Section: 'python'
  data__deb_fields__Depends__contains: 'libpython3.12'
collection: 'internal@collections'
name_template: '{package}_{version}'
variables:
  '$package': 'deb_fields.Package'
  '$version': 'deb_fields.Version'
```
update-collection-with-data
Adds or replaces a bare collection item based on the current work request. This is similar to update-collection-with-artifacts, except that it does not refer to artifacts. It can be used in situations where no artifact is available, such as in `on_creation` events.
- `collection` (Single lookup, required): collection to update
- `category` (string, required): the category of the item to add
- `name_template` (string, optional): template used to generate the name for the collection item. Uses the `str.format` templating syntax (with variables inside curly braces, referring to keys in `data`).
- `data` (dict, optional): data for the collection item. This may also be used to compute the name for the item, either via substitution into `name_template` or by rules defined by the collection manager.
Note
When the `name_template` key is not provided, it is expected that the collection will compute the name for the new bare collection item. Some collection categories might not even allow you to override the name.
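As a sketch, a workflow could use this action in an `on_creation` event reaction to record a bare item before any artifact exists; the category, name template and data below are illustrative, not fixed by the design:

```yaml
action: 'update-collection-with-data'
collection: 'internal@collections'
category: 'debusine:promise'        # illustrative category
name_template: 'build-{architecture}'
data:
  architecture: 'amd64'
```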
retry-with-delays
This action is used in `on_failure` event reactions. It causes the work request to be retried automatically with various parameters, adding a dependency on a newly-created Delay task.
The current delay scheme is limited and simplistic, but we expect that more complex schemes can be added as variations on the parameters to this action.
- `delays` (list, required): a list of delays to apply to each successive retry; each item is an integer suffixed with `m` for minutes, `h` for hours, `d` for days, or `w` for weeks.
The workflow data model for work requests gains a `retry_count` field, defaulting to 0 and incrementing on each successive retry. When this action runs, it creates a Delay task with its `delay_until` field set to the current time plus the item from `delays` corresponding to the current retry count, adds a dependency from its work request to that, and marks its work request as blocked on that dependency. If the retry count is greater than the number of items in `delays`, then the action does nothing.
Workflow implementation
On the Python side, a workflow is orchestrated by a subclass of `Workflow`, which derives from `BaseTask` and has its own subclass hierarchy.
When instantiating a workflow, a new `WorkRequest` is created with:
- `task_type` set to `"workflow"`
- `task_name` pointing to the `Workflow` subclass used to orchestrate it
- `task_data` set to the workflow parameters instantiated from the template (or from the parent workflow)
This `WorkRequest` acts as the root of the `WorkRequest` hierarchy for the running workflow.
The `Workflow` class runs on the server with full database access and is in charge of:
- on instantiation, laying out an execution plan in the form of a directed acyclic graph of newly created `WorkRequest` instances
- analyzing the results of any completed `WorkRequest` in the graph
- possibly extending/modifying the graph after this analysis
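To make the DAG layout concrete, here is a hypothetical sketch of the fan-in pattern an orchestrator might produce; `create_child` and `add_dependency` are illustrative stand-ins, not Debusine's actual API:

```python
# Hypothetical sketch: create_child/add_dependency are illustrative
# stand-ins for whatever the real orchestrator uses internally.

class FanInWorkflow:
    """Lay out one build per architecture, all feeding a final collect step."""

    def __init__(self, architectures):
        self.architectures = architectures

    def populate(self, create_child, add_dependency):
        # One child work request per architecture.
        builds = [create_child("sbuild", {"host_architecture": arch})
                  for arch in self.architectures]
        collect = create_child("collect-results", {})
        for build in builds:
            # collect stays blocked (deps strategy) until every build completes.
            add_dependency(collect, build)
        return collect
```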
`WorkRequest` elements in a workflow can only depend on each other and cannot have dependencies on `WorkRequest` elements outside the workflow; they may, however, depend on work requests in other sub-workflows that are part of the same root workflow.
All the child work requests start in the `blocked` status using the `deps` unblock strategy. When the workflow `WorkRequest` is ready to run, all the child `WorkRequest` elements that don't have any further dependencies can immediately start.
WorkflowTemplate
The `WorkflowTemplate` model has (at least) the following fields:
- `name`: a unique name given to the workflow within the workspace
- `workspace`: a foreign key to the workspace containing the workflow
- `task_name`: a name that refers back to the `Workflow` class to use to manage the execution of the workflow
- `task_data`: JSON dict field representing a subset of the parameters needed by the workflow that cannot be overridden when instantiating the root `WorkRequest`
The root `WorkRequest` of the workflow copies the following fields from `WorkflowTemplate`:
- `workspace`
- `task_name`
- `task_data` (combining the user-supplied data and the `WorkflowTemplate`-imposed data)
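The combination rule can be sketched as a dictionary merge in which `WorkflowTemplate`-imposed keys win over user-supplied ones (the function name is illustrative):

```python
def build_root_task_data(user_data: dict, template_data: dict) -> dict:
    """Combine user-supplied data with WorkflowTemplate-imposed data.

    Template-imposed parameters cannot be overridden, so they are
    applied last and replace any user-supplied values for the same keys.
    """
    merged = dict(user_data)
    merged.update(template_data)
    return merged
```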
Group of work requests
When a workflow generates a large number of related/similar work requests, it might want to hide all those work requests behind a group that appears as a single step in the visual representation of the workflow. This is implemented by a `group` key in the `workflow_data` dictionary of each task.
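For instance, every work request generated for a batch of autopkgtest runs might carry a fragment like this (the group name is illustrative):

```yaml
workflow_data:
  group: 'autopkgtests'   # requests sharing this key collapse into one step
```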
Advanced workflows / sub-workflows
Advanced workflows can be created by combining multiple limited-purpose workflows.
Sub-workflows are integrated in the general graph of their parent workflow as `WorkRequest`s of type `workflow`.
From a user interface perspective, sub-workflows are typically hidden as a single step in the visual representation of the parent’s workflow.
Cooperation between workflows is defined at the level of workflows. Individual work requests should not concern themselves with this; they are designed to take inputs using lookups and produce output artifacts that are linked to the work request.
Sub-workflow coordination takes place through the workflow’s internal collection (which is shared among all sub-workflows of the same root workflow), providing a mechanism for some work requests to declare that they will provide certain kinds of artifacts which may then be required by work requests in other sub-workflows.
On the providing side, workflows use the update-collection-with-artifacts event reaction to add relevant output artifacts from work requests to the internal collection, and create promises to indicate to other workflows that they have done so. Providing workflows choose item names in the internal collection; it is the responsibility of workflow designers to ensure that they do not clash, and workflows that provide output artifacts have an optional `prefix` field in their task data to allow multiple instances of the same workflow to cooperate under the same root workflow.
On the requiring side, workflows look up the names of artifacts they require in the internal collection; each of those lookups may return nothing, a promise including a work request ID, or an artifact that already exists, and workflows may use that result to determine which child work requests they create. They use lookups in their child work requests to refer to items in the internal collection (e.g. `internal@collections/name:build-amd64`), and add corresponding dependencies on work requests that promise to provide those items.
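For example, a child work request on the requiring side could refer to a promised build result with a lookup such as this (the surrounding task data is illustrative):

```yaml
task_data:
  input:
    artifact: 'internal@collections/name:build-amd64'
```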
Sub-workflows may depend on other steps within the root workflow while still being fully populated in advance of being able to run. A workflow that needs more information before being able to populate child work requests should use workflow callbacks to run the workflow orchestrator again when it is ready. (For example, a workflow that creates a source package and then builds it may not know which work requests it needs to create until it has created the source package and can look at its `Architecture` field.)