Contributing to the Task Library


We strive to keep as much of Prefect as possible regularly tested through automated tests on every commit, so adding tests for your new task is the next step! That said, testing is a skill of its own, so if you are a new Pythonista, feel free to skip automated tests and PR your task straight into the contrib section.
However, if you are PRing into the core task library, or want to add tests to your contrib PR anyway, this section will give you everything you need to know from a logistical standpoint.
Know that tests at Prefect serve two main purposes:
documentation of intent
insurance against regressions
Writing tests that are so hyper-specific that they don't communicate the first point, or so brittle that they don't protect against the second, would defeat the purpose. As a quick rule of thumb, treat testing as its own brainstorming session, not just a formality at the end of the PR, and we think you will come up with tests that meet those standards and help maintain the code for a long time to come!

Writing automated tests in Prefect

In general, we use pytest as our test runner and test framework; if you have never used it before, a quick browse of the pytest documentation will get you comfortable for the rest of this section.
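For orientation, a pytest test is just a function whose name starts with test_, living in a file named test_*.py, that makes plain assert statements. This toy example (the add function is purely illustrative) is the whole pattern:

```python
# test_example.py -- pytest collects any file named test_*.py
# and runs every function whose name starts with "test_".

def add(x, y):
    # A trivial function standing in for real task logic.
    return x + y

def test_add_returns_sum():
    # Plain asserts are all pytest needs -- no assertEqual boilerplate.
    assert add(2, 3) == 5
```

Running `pytest` from the repository root will discover and run this test automatically.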
For task library tests in particular, there are a couple of rules of thumb to follow when setting up new tests, especially if those tests depend on imports outside the core Prefect dependencies.
First off, all tests for tasks live in the directory /prefect/tests/tasks. If you are extending an existing task library subpackage, you can add your tests to the appropriate existing file; if you created a new subpackage in /prefect/src/prefect/tasks, you'll need to create the analogous directory here too.
If your tasks have extra dependencies, you'll need to indicate that your tests should be skipped if a user doesn't have those dependencies installed (for example, if they are running tests having installed only Prefect's core dependencies). To do so, add a call like the following to your subpackage's test file, subbing in the name of your dependent package as the argument:
import pytest

pytest.importorskip("your_dependency")

You can find examples of this in the existing test files if you want to take a look in the code. There is a GitHub check that enforces this, so it will be clear on your PR if this step got missed.
Once this setup is complete, create as many test files and test classes as you need to cover your tasks. In the common case of a single new task, one test file with a single test class spanning all the test case methods you want to cover may be enough, or it may make sense to split them out further; use your judgment or take some inspiration from the other test subdirectories for task library tasks.

Common test cases for task library additions

You may already have a couple of test cases in mind based on your task's run code. The more your tests cover potential regressions and document the intended usage of your code, the better! Start with tests that cover the expected uses, and then the potential misuses, of your task code as a whole.
However, below is a list of some common test cases to consider to make sure the hookups of your task to Prefect are correct. You may have tried some of these combinations in a scratch file during development; now is a good time to methodically cover everything you tested ad hoc.
does initialization work?
does initialization set attributes?
do additional kwargs get passed to the base task init method?
do certain attributes raise runtime errors if not provided?
can certain attributes be provided either at init or at runtime?
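The checklist above can be sketched as one test class. The GreetTask below is a hypothetical task, and the Task base class is a minimal stand-in so the sketch runs on its own; in a real test you would subclass Prefect's actual Task class instead:

```python
import pytest

# Stand-in for Prefect's Task base class, just so this sketch is
# self-contained; real tests would import the actual class.
class Task:
    def __init__(self, name=None):
        self.name = name

# Hypothetical task under test: `greeting` may be given at init or at runtime.
class GreetTask(Task):
    def __init__(self, greeting=None, **kwargs):
        self.greeting = greeting
        super().__init__(**kwargs)  # extra kwargs reach the base init

    def run(self, greeting=None):
        greeting = greeting or self.greeting
        if greeting is None:
            raise ValueError("greeting must be provided at init or runtime")
        return greeting

class TestGreetTaskInit:
    def test_initialization_works(self):
        assert GreetTask()  # bare init should not raise

    def test_initialization_sets_attributes(self):
        assert GreetTask(greeting="hi").greeting == "hi"

    def test_extra_kwargs_reach_base_init(self):
        assert GreetTask(name="greeter").name == "greeter"

    def test_missing_attribute_raises_at_runtime(self):
        with pytest.raises(ValueError):
            GreetTask().run()

    def test_attribute_settable_at_init_or_runtime(self):
        assert GreetTask(greeting="hi").run() == "hi"
        assert GreetTask().run(greeting="hi") == "hi"
```

Each method maps one-to-one onto a bullet in the list above, which keeps the intent of each test easy to read from its name.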

Running automated tests

Automated tests run on every pushed commit via CircleCI, a hosted test runner that we have configured to exercise the test suite under different conditions. The whole specification is written in YAML in /prefect/.circleci/config.yml, but in brief the test suite will be run on
each version of Python that we support: 3.6, 3.7, 3.8
with no extra dependencies installed (this is where that importorskip call from before becomes really important!)
with only the lowest allowed version of each dependency installed
separate tests for Prefect server and the Prefect server UI

What about integration testing?

At this time, integration testing is not explicitly required for Prefect task library tasks. That said, some tasks in the task library could clearly benefit from integration tests that confirm the task actually works against the supported versions of the third-party service. There are good reasons to include them: knowing explicitly which versions of a third-party service our tasks are demonstrated to support, catching regressions or intermittent errors our users might be experiencing, and demonstrating the expected usage with the 'full stack' for tasks that require dependent services.
In general, though, integration tests would require us to stand up a controlled, ephemeral version of the third-party service during test setup, for example in a Docker container. Not only is that extra setup a time cost during the test run, it usually involves infrastructure outside the base pytest setup, either via a pytest plugin or an entirely separate test suite run with its own Docker dependencies. Since this can be quite a big DevOps lift, we don't recommend it for a first-time task library contributor.
All that being said, if you have an integration test plan that you think would be valuable we are happy to talk to you about it and find a way to integrate it for your tasks in the task library!
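If you do want a lightweight starting point for such a plan, one common pytest pattern (not something Prefect prescribes, and the RUN_INTEGRATION_TESTS variable name here is just an assumption for illustration) is to gate integration tests behind an environment variable so they only run when the dependent service is actually available:

```python
import os
import pytest

# Skip integration tests unless the environment explicitly opts in;
# the variable name is illustrative, not a Prefect convention.
requires_service = pytest.mark.skipif(
    os.environ.get("RUN_INTEGRATION_TESTS") != "1",
    reason="integration tests require a running third-party service",
)

@requires_service
def test_talks_to_real_service():
    # Would exercise the real service here.
    ...
```

Developers and CI jobs that have the service running can then set RUN_INTEGRATION_TESTS=1, while everyone else's test runs skip these tests cleanly.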


While all this is going on, CircleCI also runs automated checks to make sure your code conforms to our style guide: both black and mypy static analysis run during automated tests. We recommend getting in the habit of running black before committing any code; it's easy, and then you don't have to make a separate commit for your style changes. But as long as your PR passes the static analysis checks in the end, it's fine, so if you prefer to wait and don't mind the noise of the linting checks whenever you check CircleCI, that's fine too!