Objc.io
The monthly software journal Objc.io has established itself as a key voice in contemporary software development. The new issue, on testing, is terrific.
You might think that testing is a way to locate defects before delivery, and for many years that was in fact its role. More recently, however, prominent software developers have advocated a style of work in which tests play a central role in design. For each component of the system, you begin by sketching out a series of tests that define its behavior. You write a test, then write just enough of the component’s functionality to pass that specific test. Then you write the next test. If the original test plan was right, then when every test passes, you know you’ve built the entire component and that it works.
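In code, the rhythm looks something like this – a minimal sketch in Swift with XCTest, using a made-up Counter class rather than anything from the article:

```swift
import XCTest

// Step 1: write a test that pins down one piece of behavior.
final class CounterTests: XCTestCase {
    func testNewCounterStartsAtZero() {
        XCTAssertEqual(Counter().value, 0)
    }

    func testIncrementAddsOne() {
        let counter = Counter()
        counter.increment()
        XCTAssertEqual(counter.value, 1)
    }
}

// Step 2: write just enough of the component to pass that test,
// then go back to step 1 and write the next test.
final class Counter {
    private(set) var value = 0
    func increment() { value += 1 }
}
```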
Arne Schroppe and Daniel Eggert’s article on Real World Testing doesn’t break much new ground, but it’s a solid and readable example that doesn’t gloss over the difficult parts. It describes a real system, not a toy: too much of the literature describes either toy examples or incomprehensibly large and ill-described enterprise systems. They’ve got 190 files, 18,000 lines. That works out to about 200 lines per object, since about half the files will be headers and protocols. That’s pretty impressive: small objects and tiny methods with a vengeance. (Tinderbox Six has 1066 files today.)
Testing asynchronous tasks is discussed here, though not in anything like the detail that’s needed. The documentation on XCTestExpectation must be somewhere, right? Probably the same place as the documentation on time zones before November 1883. But that’s not the only problem with asynchronous testing: I’ve read fairly deeply in the test-driven design literature – in book form, at any rate – and there’s really very little on pragmatic testing of asynchronous systems.
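The pattern itself, once you find it, is simple enough. A sketch, with a made-up fetchTitle standing in for any completion-handler API:

```swift
import XCTest

// A made-up asynchronous call, standing in for any completion-handler API.
func fetchTitle(completion: @escaping (String) -> Void) {
    DispatchQueue.global().async { completion("objc.io") }
}

final class FetchTests: XCTestCase {
    func testFetchCompletes() {
        let done = expectation(description: "fetch completes")
        fetchTitle { title in
            XCTAssertEqual(title, "objc.io")
            done.fulfill()
        }
        // Returns as soon as the expectation is fulfilled;
        // fails the test if a second passes first.
        waitForExpectations(timeout: 1.0)
    }
}
```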
And nowadays, everything is asynchronous! Every network fetch, of course, has to be asynchronous because we don’t want the user to wait. Likewise scaling and adjusting images. Every user interface transition is animated, so all of that needs to be asynchronous too. Every processing-intensive step (Tinderbox agents, I’m looking at you!) needs to be asynchronous.
So there’s lots of stuff being shuttled off to the extra cores of your processor, and that means lots of opportunities for contention, starvation, and unexpected deadlocks. Sure, good design will save you from the worst of this, but not every design starts out good. That’s the whole point of test-driven design: we evolve and grow toward correct behavior.
In particular, suppose you discover and fix a deadlock. How do you design a test that protects against a regression of that bug?
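The best answer I know is to let a timeout turn the hang into a failure. A sketch, with a made-up DocumentStore standing in for whatever pair of operations used to deadlock:

```swift
import XCTest

// A made-up component standing in for whatever once deadlocked.
final class DocumentStore {
    private let queue = DispatchQueue(label: "store")
    func save()   { queue.sync { /* write state */ } }
    func reload() { queue.sync { /* read state */ } }
}

final class DeadlockRegressionTests: XCTestCase {
    func testConcurrentSaveAndReloadBothFinish() {
        let store = DocumentStore()
        let saved = expectation(description: "save finished")
        let reloaded = expectation(description: "reload finished")

        DispatchQueue.global().async { store.save(); saved.fulfill() }
        DispatchQueue.global().async { store.reload(); reloaded.fulfill() }

        // If the deadlock ever comes back, neither expectation is fulfilled
        // and the test fails after two seconds instead of hanging the suite.
        waitForExpectations(timeout: 2.0)
    }
}
```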
For that matter, suppose I’ve got an asynchronous task. It typically takes 5ms, occasionally 50ms; a naive test might say, “wait 100ms, see if it’s finished, and if it isn’t, something’s gone wrong.” And that’s fine, except you really don’t want to pause for a whole tenth of a second when, most of the time, you’d be wasting 95% of it. A tenth of a second doesn’t sound too bad, but when you’ve got 843 tests, that’s 84 seconds every time you turn around. (The Tinderbox test suite currently runs in about 12 seconds, which is bearable but too slow.)

And you don’t want to cut it too close, either, because sometimes your system is busy and sometimes it’s slow, and you definitely don’t want tests that work most of the time and fail occasionally. (If your tests have a false failure rate of 0.1% and you’ve got 843 tests, your suite is going to fail most of the time for no good reason, and you’ll never notice the real failures.)
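The way out, as far as I can tell, is to stop guessing: have the task itself signal its completion, so the test waits 5ms when the task takes 5ms, and treat the timeout as a generous ceiling rather than an estimate. A sketch, with a made-up scaleImage standing in for the task:

```swift
import XCTest

// A made-up asynchronous task that usually finishes in a few milliseconds.
func scaleImage(completion: @escaping (Data) -> Void) {
    DispatchQueue.global().async { completion(Data()) }
}

final class TimingTests: XCTestCase {
    func testScalingFinishes() {
        let finished = expectation(description: "image scaled")
        scaleImage { _ in finished.fulfill() }
        // The test returns the moment the task completes – about 5ms in the
        // common case. The generous ceiling costs nothing when the task is
        // fast, and a busy machine won't produce a false failure.
        waitForExpectations(timeout: 1.0)
    }
}
```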
If there’s much written about making asynchronous tests fast without also making them brittle, I can’t find it. But this article is a great place to start.