A couple of days ago, I wrote about writing tests for tickets. (Didn’t read that post? Go on and read it. All done? Good.) I figured I should say more about the theory behind the practice.
Theory
The tests I write for tickets are based on the concept of “acceptance tests” from agile development. Acceptance tests are usually written by customers and developers together to determine when a given feature is done. A feature cannot be considered finished (and should not be released to the customer) until all of the acceptance tests for the feature pass.
Acceptance tests are written using the language of the business domain and are understandable by both the customers and the developers. They often follow the pattern of “Given/When/Then”, i.e. “Given something is true, When I do something, Then I should see something.” Implementation-specific language is not used in these tests.
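For example, a ticket asking for a new forwarding alias might produce an acceptance test like this (a hypothetical sketch in Cucumber’s Gherkin syntax; the alias and addresses are invented):

    Feature: Forwarding alias for the helpdesk
      Scenario: Mail sent to the alias reaches the support team
        Given the alias "helpdesk@example.com" exists
        When I send a message to "helpdesk@example.com"
        Then the message should arrive in the support team's mailbox

Note that every step is phrased in the customer’s terms; nothing in it mentions Postfix, Exchange, or anything else about how mail is actually delivered.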
My testing process is based on the practice of “acceptance test-driven development” (ATDD). Under ATDD, acceptance tests are written before any code is written. Once the test is verified to fail, the developer writes the code. (Verifying that the test fails first ensures that the code you then write is what makes the test pass.) When the developer believes they are done or wants to check their work so far, they run the tests. When, and only when, the tests pass, they can consider their work done.
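In Cucumber, for instance, the verify-it-fails-first part of the cycle can be as simple as writing the step definitions as pending stubs (a minimal sketch in Cucumber’s Ruby DSL; the step wording and message are hypothetical):

    # features/step_definitions/mail_steps.rb
    When('I send a message to {string}') do |address|
      # No implementation yet: running the suite now reports this step
      # as pending, so a later pass must come from real work.
      pending("mail delivery check for #{address} not written yet")
    end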
Sysadmin reality
As a system administrator, I do not have the luxury of working out these tests directly with my customers. I have to rely on what they have written in their ticket (or said over the phone) to figure out what they want. Sometimes, I have to ask for clarification to make sure I understand their request well enough to write the tests.
Since I have to work out on my own what my customer wants, the tests I write are not guaranteed to be as effective as customer-written tests at establishing when I am truly done. After all, my understanding of what they want may be incomplete; my tests may say I am done with the request when I am not really done, because I have not done everything my customer wanted.
One benefit of having to devise these tests myself is that I can use implementation-specific language. This lets me simplify the test implementations, although perhaps not the tests themselves. For example, I can write implementation-specific steps for getting the list of email accounts from mail servers (e.g. “When I get the list of accounts in Postfix”, “When I get the list of accounts in Exchange”, etc.) without having to write conditional and abstraction logic within the test implementation.
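As a sketch of what that looks like (Cucumber’s Ruby DSL assumed; the hostnames and commands are hypothetical and site-specific), each mail server gets its own step definition, so neither one needs any branching:

    When('I get the list of accounts in Postfix') do
      # Hypothetical: dump the virtual mailbox map on the Postfix host.
      @accounts = `ssh mail1 postmap -s hash:/etc/postfix/vmailbox`
                  .lines.map { |line| line.split.first }
    end

    When('I get the list of accounts in Exchange') do
      # Hypothetical: list mailbox aliases via remote PowerShell.
      @accounts = `ssh winhost powershell -Command "Get-Mailbox | Select-Object -ExpandProperty Alias"`
                  .lines.map(&:strip)
    end

    Then('the list of accounts should include {string}') do |account|
      raise "missing account: #{account}" unless @accounts.include?(account)
    end

The conditional logic that would otherwise live inside one generic “When I get the list of accounts” step simply disappears; choosing the right implementation is done by choosing the right step in the feature file.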
To learn more
ATDD is covered in detail in ATDD by Example by Markus Gärtner. In addition to discussing the ideas behind ATDD, he provides some coverage of two tools, Cucumber and FitNesse. The Cucumber Book by Matt Wynne and Aslak Hellesøy covers Cucumber in more detail.
The second part of The RSpec Book by David Chelimsky et al. covers behavior-driven development (BDD), which is similar to and incorporates aspects of ATDD. (One of the foundations of BDD is “acceptance test-driven planning.”) I do not believe my testing methodology follows BDD, since I am not always testing behavior. (See my comments about indirect tests in my last post.)
Finally, in chapter seven of The Clean Coder, Robert Martin discusses acceptance testing and the importance of establishing a correct definition of done.