Metatest - a new and simple framework for writing tests
Jonathan Fine (The Open University)
J.Fine at open.ac.uk
We describe metatest, a work-in-progress framework for writing tests. Tests written in metatest can be run using any test-running framework, such as unittest, py.test and nose. Metatest will also provide an elegant and simple language for writing tests.
The initial release of metatest is (will be) available from the Sourceforge project site http://sourceforge.net/projects/metatest
Warning: Metatest is work-in-progress, and this talk anticipates some work intended to take place between the submission of the talk and its delivery.
Our focus is on writing tests, so we'll use the really simple function plus as our running example.
# Source for mymod.py.
def plus(a, b):
    return a + b
The function plus is a pure function. In other words, the value plus returns depends only on its arguments, and calling it has no side effects.
Testing procedures that change state is harder.
In real life, we might want to test a function that parses string data into objects, say by using regular expressions. But in this talk we have:
def plus(a, b): return a + b
Here's the simple way to test that 2 + 2 = 4.
from mymod import plus

assert plus(2, 2) == 4
# And more examples to follow.
This is easy to write, and works fine until a test fails! Then we get something like:
Traceback (most recent call last):
  File "test_mymod", line 57, in ?
    assert plus(-7, 12) == 7
AssertionError
We'd like to see something like
TestError (file "test_mymod", line 57):
    assert plus(-7, 12) == 7
    Expecting 7, getting 5
And we'd probably like to run the remaining tests, even after this test fails.
Assertion tests are easy to write but report and run poorly.
We can't, in Python, add strings to integers.
Here's how to test, bare hands, for plus raising an exception.
try:
    plus(2, '')
    raise TestException
except TypeError:
    pass
Do you see how it works? The try code block is sure to raise an exception, and only if it is a TypeError is the exception caught.
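Assembled into a runnable form, assuming a `TestException` class (not defined in the snippet above), the whole test reads:

```python
class TestException(Exception):
    """Raised when a test fails; assumed, not defined in the snippet above."""

def plus(a, b):
    return a + b

try:
    plus(2, '')          # adding a str to an int should raise TypeError
    raise TestException  # reached only if no TypeError was raised
except TypeError:
    pass                 # the expected exception: the test passes
```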
Should the test fail, we'll get something like
Traceback (most recent call last):
  File "test_mymod.py", line 63, in ?
TestException
Bare-hands testing for exceptions is clumsy to write and, as with testing for return values, reports and runs poorly.
unittest is the Python version of Kent Beck's unit test framework for Smalltalk. Here's how to use it to write our tests.
import unittest
from mymod import plus

class ModTest(unittest.TestCase):

    def test_two_plus_two(self):
        self.assertEqual(plus(2, 2), 4)

    def test_think_of_a_name(self):
        self.assertEqual(plus(-7, 12), 7)

    def test_number_plus_string(self):
        self.assertRaises(TypeError, plus, 2, '')

if __name__ == '__main__':
    unittest.main()
Later, we'll compare this to other ways of writing the same tests.
With unittest, to test (-7) + 12 = 7, we wrote:
def test_think_of_a_name(self):
    self.assertEqual(plus(-7, 12), 7)
This is what unittest gives back to us:
FAIL: test_think_of_a_name (__main__.ModTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "unittest_mymod.py", line 12, in test_think_of_a_name
    self.assertEqual(plus(-7, 12), 7)
AssertionError: 5 != 7
This gives us the line of code
self.assertEqual(plus(-7, 12), 7)
that contains the test that failed. This is a very nice feature.
It also tells us, indirectly, that we got 5 when we were expecting 7.
py.test was created by the PyPy project, because they needed it. It's rather more Pythonic than unittest, in my opinion. Here's how to use it to write our tests:
# File pytest_mymod.py
import py.test
from mymod import plus

def test_two_plus_two():
    assert plus(2, 2) == 4

def test_think_of_a_name():
    assert plus(-7, 12) == 7

def test_number_plus_string():
    py.test.raises(TypeError, plus, 2, '')
Later, we'll compare this to other ways of writing the same tests. To run the test, one issues the command:
$ py.test pytest_mymod.py
With py.test, to test (-7) + 12 = 7, we wrote:
def test_think_of_a_name():
    assert plus(-7, 12) == 7
This is what py.test gives back to us:
    def test_think_of_a_name():
E       assert plus(-7, 12) == 7
>       assert 5 == 7
+       where 5 = plus(-7, 12)
As with unittest, this gives us the line of code
assert plus(-7, 12) == 7
that contains the test that failed. This is a very nice feature.
It also tells us, explicitly, why we got the 5 == 7 assertion error.
Metatest's design goals for test files are that they be easy to write, report failures clearly, and keep running after a failure.
Here's how to use metatest to write our tests:
from metatest.py.mymod import plus

plus(2, 2, _eq=4)
plus(-7, 12, _eq=7)
plus(2, '', _ex=TypeError)

if __name__ == '__main__':
    import metatest
    metatest.run()
We think the above meets the design goals. Later, we'll compare it to other ways of writing the same tests.
With metatest, to test (-7) + 12 = 7, we wrote:
plus(-7, 12, _eq=7)
This is what metatest (fingers crossed) gives back to us:
TestError (file "test_mymod", line 4):
    plus(-7, 12, _eq=7)
    Expecting 7, getting 5
As with unittest and py.test, we get back the line of code
plus(-7, 12, _eq=7)
that contains the test that failed. This is a very nice feature.
It also tells us, explicitly, why the test failed.
For the technically minded, here's how metatest works, line by line.
from metatest.py.mymod import plus
The module metatest.py uses sys.meta_path (see PEP 302) to provide stub objects as the module metatest.py.mymod and as the attribute plus of this module.
In particular, the module mymod does not have to exist, for this import to succeed.
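The sys.meta_path mechanism can be illustrated with a minimal sketch. This is not metatest's actual code: the package name `fakepkg` and the classes `StubFinder` and `StubLoader` are invented for illustration, and the sketch uses the modern importlib spelling of the PEP 302 hooks (plus PEP 562 module `__getattr__` for the attribute stubs):

```python
import sys
import importlib.abc
import importlib.util

class StubLoader(importlib.abc.Loader):
    def create_module(self, spec):
        return None  # use the default module creation

    def exec_module(self, module):
        # PEP 562: any missing attribute yields a named placeholder,
        # so `from fakepkg.mymod import plus` also succeeds.
        module.__getattr__ = lambda name: ('stub', module.__name__, name)

class StubFinder(importlib.abc.MetaPathFinder):
    prefix = 'fakepkg'  # invented; metatest hooks its own package names

    def find_spec(self, fullname, path=None, target=None):
        if fullname == self.prefix or fullname.startswith(self.prefix + '.'):
            # is_package=True lets submodules like fakepkg.mymod resolve too
            return importlib.util.spec_from_loader(
                fullname, StubLoader(), is_package=True)
        return None

sys.meta_path.insert(0, StubFinder())

import fakepkg.mymod            # succeeds although no such module exists
from fakepkg.mymod import plus  # 'plus' is a fabricated placeholder
```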
plus(2, 2, _eq=4)
The stub objects are somewhat special. Each stub object logs the operations performed on it. In addition, each such stub object returns another stub object in response to attribute access and to being called.
Each metatest stub object logs its events. Here is an example:
from metatest.py.mymod import plus
The above line of code produces two events:
# File + lineno, event, src+tgt, args, kwargs
('test_mymod', 1, 'import', None, 0, ('mymod',), {})
('test_mymod', 1, 'getattr', 0, 1, ('plus',), {})
Next we look at the first test in the file:
plus(2, 2, _eq=4)
This also produces two events:
# File + lineno, event, src+tgt, args, kwargs
('test_mymod', 3, 'call', 1, 2, (2, 2), {'_eq': 4})
('test_mymod', 3, 'del', 2, None, (), {})
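The logging behaviour of the stubs can be sketched in a few lines. This is a simplification, not metatest's real implementation: the events here omit the file and line number and the 'import' and 'del' events, and the class name `Stub` is invented:

```python
class Stub(object):
    """Sketch of a logging stub: records every operation performed on it
    and hands back a fresh stub so operations can be chained."""
    log = []    # shared event log, one tuple per operation
    count = 0   # id of the next stub to be created

    def __init__(self):
        self.id = Stub.count
        Stub.count += 1

    def _event(self, kind, args, kwargs):
        target = Stub()  # the result is itself a stub, so chaining works
        Stub.log.append((kind, self.id, target.id, args, kwargs))
        return target

    def __getattr__(self, name):
        return self._event('getattr', (name,), {})

    def __call__(self, *args, **kwargs):
        return self._event('call', args, kwargs)

# Mimic: from metatest.py.mymod import plus; plus(2, 2, _eq=4)
mymod = Stub()         # stub 0, standing in for the module
plus = mymod.plus      # logs a 'getattr' event, yields stub 1
plus(2, 2, _eq=4)      # logs a 'call' event, yields stub 2
```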
Finally, the command metatest.run() replays these events through a special player.
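A player along these lines might replay each logged call against the real function and check the `_eq` and `_ex` claims. The function `replay` and its simplified event tuples are an illustration of the idea, not metatest's actual player:

```python
def plus(a, b):
    return a + b

def replay(events, real):
    """Hypothetical player: re-run each logged call against the real
    function and collect the failures."""
    failures = []
    for kind, args, kwargs in events:
        if kind != 'call':
            continue
        opts = dict(kwargs)
        expected = opts.pop('_eq', None)   # expected return value, if any
        exc = opts.pop('_ex', None)        # expected exception, if any
        try:
            got = real(*args, **opts)
        except Exception as e:
            if exc is None or not isinstance(e, exc):
                failures.append((args, 'unexpected %r' % e))
            continue
        if exc is not None:
            failures.append((args, '%s not raised' % exc.__name__))
        elif expected is not None and got != expected:
            failures.append((args, 'Expecting %r, getting %r'
                             % (expected, got)))
    return failures

# Simplified 'call' events for the three tests in the metatest file
events = [
    ('call', (2, 2), {'_eq': 4}),
    ('call', (-7, 12), {'_eq': 7}),
    ('call', (2, ''), {'_ex': TypeError}),
]
print(replay(events, plus))  # only the (-7, 12) test fails
```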
unittest
class ModTest(unittest.TestCase):

    def test_two_plus_two(self):
        self.assertEqual(plus(2, 2), 4)

    def test_two_plus_string(self):
        self.assertRaises(TypeError, plus, 2, '')
py.test
def test_two_plus_two():
    assert plus(2, 2) == 4

def test_two_plus_string():
    py.test.raises(TypeError, plus, 2, '')
metatest
plus(2, 2, _eq=4)
plus(2, '', _ex=TypeError)
unittest
FAIL: test_think_of_a_name (__main__.ModTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "unittest_mymod.py", line 12, in test_think_of_a_name
    self.assertEqual(plus(-7, 12), 7)
AssertionError: 5 != 7
py.test
    def test_think_of_a_name():
E       assert plus(-7, 12) == 7
>       assert 5 == 7
+       where 5 = plus(-7, 12)
[/home/jfine/domslides/pytest_mymod.py:11]
metatest
TestError (file "test_mymod", line 4):
    plus(-7, 12, _eq=7)
    Expecting 7, getting 5
Recall that with metatest, the test file merely logs events as it is read, and metatest.run() replays those events through a player. Or in brief: metatest does its magic when it reads the tests, not when it runs them.
So why not write a special player for testing modules written in a different language?
from metatest.php.mymod import wibble  # or Perl, Ruby, C, ...
# code to test the PHP function wibble
I'd like to see something like this happen, if possible. But it's not something I need right now. So perhaps someone else would like to have a go?
Perhaps Javascript?