Simon Willison’s Weblog

How to cheat at unit tests with pytest and Black

11th February 2020

I’ve been making a lot of progress on Datasette Cloud this week. Since it’s an application that provides private hosted Datasette instances (initially targeted at data journalists and newsrooms), the majority of the code I’ve written deals with permissions: allowing people to form teams, invite team members, promote and demote team administrators and suchlike.

The one thing I’ve learned about permissions code over the years is that it absolutely warrants comprehensive unit tests. This is not code that can afford to have dumb bugs, or regressions caused by future development!

I’ve become a big proponent of pytest over the past two years, but this is the first Django project that I’ve built using pytest from day one as opposed to relying on the Django test runner. It’s been a great opportunity to try out pytest-django, and I’m really impressed with it. It maintains my favourite things about Django’s test framework—smart usage of database transactions to reset the database and a handy test client object for sending fake HTTP requests—and adds all of that pytest magic that I’ve grown to love.
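To give a sense of the shape, here’s a minimal sketch of a pytest-django test. The Team model and URL are invented for illustration, not taken from Datasette Cloud: the django_db marker gives the test a transaction-wrapped database, and the client argument is the fixture-provided Django test client.

import pytest

from myapp.models import Team  # hypothetical model, purely for illustration


@pytest.mark.django_db
def test_create_team(client):
    # client is the pytest-django fixture wrapping django.test.Client
    response = client.post("/teams/create/", {"name": "Example Team"})
    assert response.status_code == 302
    assert Team.objects.filter(name="Example Team").exists()
    # The transaction is rolled back after the test, so this Team
    # never leaks into any other test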

It also means I get to use my favourite trick for productively writing unit tests: the combination of pytest and Black, the “uncompromising Python code formatter”.

Cheating at unit tests

In pure test-driven development you write the tests first, and don’t start on the implementation until you’ve watched them fail.

Most of the time I find that this is a net loss on productivity. I tend to prototype my way to solutions, so I often find myself with rough running code before I’ve developed enough of a concrete implementation plan to be able to write the tests.

So… I cheat. Once I’m happy with the implementation I write the tests to match it. Then once I have the tests in place and I know what needs to change I can switch to using changes to the tests to drive the implementation.

In particular, I like using a rough initial implementation to help generate the tests in the first place.

Here’s how I do that with pytest. I’ll write a test that looks something like this:

def test_some_api(client):
    response = client.get("/some/api/")
    assert False == response.json()

Note that I’m using the pytest-django client fixture here, which magically passes a fully configured Django test client object to my test function.
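If you need a variant of that client you can layer your own fixture on top of it in conftest.py. Here’s a rough sketch, with made-up user details, of a fixture that logs a user in before handing the client to the test:

# conftest.py
import pytest
from django.contrib.auth.models import User


@pytest.fixture
def logged_in_client(client, db):
    # Builds on the pytest-django client and db fixtures: create a user,
    # log them in, then pass the same client object through to the test
    user = User.objects.create_user("simon", password="not-a-real-password")
    client.force_login(user)
    return client

Any test that takes a logged_in_client argument gets an authenticated client injected in exactly the same way.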

I run this test, and it fails:

pytest -k test_some_api

(pytest -k blah runs just tests that contain blah in their name)

Now… I run the test again, but with the --pdb option to cause pytest to drop me into a debugger at the failure point:

$ pytest -k test_some_api --pdb
=== test session starts ===
platform darwin -- Python 3.7.5, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
django: settings: config.test_settings (from ini)
...
client = <django.test.client.Client object at 0x10cfdb510>

    def test_some_api(client):
        response = client.get("/some/api/")
>       assert False == response.json()
E       assert False == {'this': ['is', 'an', 'example', 'api']}
core/test_docs.py:27: AssertionError
>> entering PDB >>
>> PDB post_mortem (IO-capturing turned off) >>
> core/test_docs.py(27)test_some_api()
-> assert False == response.json()
(Pdb) response.json()
{'this': ['is', 'an', 'example', 'api'], 'that_outputs': 'JSON'}
(Pdb) 

Running response.json() in the debugger dumps out the actual value to the console.
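If the returned structure is really big, pdb’s pp command is useful at the same prompt: it runs the expression through pprint (which also sorts dictionary keys), making larger payloads easier to read before copying them:

(Pdb) pp response.json()
{'that_outputs': 'JSON', 'this': ['is', 'an', 'example', 'api']}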

Then I copy that output—in this case {'this': ['is', 'an', 'example', 'api'], 'that_outputs': 'JSON'}—and paste it into the test:

def test_some_api(client):
    response = client.get("/some/api/")
    assert {'this': ['is', 'an', 'example', 'api'], 'that_outputs': 'JSON'} == response.json()

Finally, I run black . in my project root to reformat the test:

def test_some_api(client):
    response = client.get("/some/api/")
    assert {
        "this": ["is", "an", "example", "api"],
        "that_outputs": "JSON",
    } == response.json()

This last step means that no matter how giant and ugly the test comparison has become I’ll always get a neatly formatted test out of it.

I always eyeball the generated test to make sure that it’s what I would have written by hand if I wasn’t so lazy—then I commit it along with the implementation and move on to the next task.

I’ve used this technique to write many of the tests in both Datasette and sqlite-utils, and those are by far the best tested pieces of software I’ve ever released.

I started doing this around two years ago, and I’ve held off writing about it until I was confident I understood the downsides. I haven’t found any yet: I end up with a robust, comprehensive test suite, and writing the tests takes me less than half the time it would if I were hand-crafting all of those comparisons from scratch.

Also this week

Working on Datasette Cloud has required a few minor releases to some of my open source projects.

Unrelated to Datasette Cloud, I also shipped twitter-to-sqlite 0.16 with a new command for importing your Twitter friends (previously it only had a command for importing your followers).

In bad personal motivation news… I missed my weekly update to Niche Museums and lost my streak!


