<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Handbook Troggle - Automated Testing</title>
<link rel="stylesheet" type="text/css" href="../../css/main2.css" />
</head>
<body><style>body { background: #fff url(/images/style/bg-system.png) repeat-x 0 0 }</style>
<h2 id="tophead">CUCC Expedition Handbook</h2>
<h1>Handbook Troggle - Automated Testing</h1>
<h2>Troggle Automated Testing</h2>
<p>We have a suite of more than 100 <a href="https://en.wikipedia.org/wiki/Smoke_testing_(software)">smoke tests</a>.
<p>These are 'end to end' tests which very quickly show whether something is badly broken. The tests are for two purposes only:
<ul>
<li>To check whether anything has broken when we try a new version of Python, Django or a Django plugin
<li>To check that the troggle system has been installed correctly on a new machine
</ul>
<p>This is surprisingly effective. Django produces excellently detailed tracebacks when a fault occurs,
which allow us to home in on the precise part of the code which has been broken by a version upgrade.
<p>We do also have a handful of unit tests which just poke data into the database and check that it can be read out again.
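<p>As an illustration only, such a 'poke it in, read it back' test might look like the sketch below. It uses Django's built-in User model rather than a real troggle model, so nothing here is taken from the troggle code:
<pre><code>
from django.contrib.auth.models import User
from django.test import TestCase

class RoundTripTest(TestCase):
    """Sketch of a unit test: write a row, then read it back and compare."""

    def test_user_round_trip(self):
        # Create a row in the in-memory test database...
        User.objects.create(username="testuser")
        # ...and check that it can be read out again.
        self.assertEqual(User.objects.get(username="testuser").username, "testuser")
</code></pre>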
<p>
The test code is all in <a href="http://expo.survex.com/repositories/troggle/.git/tree/core/TESTS/"><var>troggle/core/TESTS/</var></a>.
<h4>Running the tests</h4>
The tests are run manually by troggle programmers like this:
<pre><code> troggle$ python3 manage.py test --parallel auto -v 1</code></pre>
or, if someone has made a mistake and the tests interfere with each other:
<pre><code> troggle$ python3 manage.py test -v 1</code></pre>
<p>Running the tests in parallel should also work on the server (without the 'auto' keyword on Django 3.2),
but at present they fail there with the message
<pre>
(1044, "Access denied for user 'expo'@'localhost' to database 'test_troggle_1'")
</pre>
<p> On the server, running them sequentially (not parallel) is still quite quick:
<pre>
Ran 104 tests in 21.944s
</pre>
<h4>Example test</h4>
<p>The test 'test_page_expofile' checks that a particular PDF is being served correctly by the web server
and that the file returned is the expected length of 2,299,270 bytes:
<pre><code>
def test_page_expofile(self):
    # Flat file tests.
    response = self.client.get('/expofiles/documents/surveying/tunnel-loefflerCP35-only.pdf')
    self.assertEqual(response.status_code, 200)
    self.assertEqual(len(response.content), 2299270)
</code></pre>
<h3>Django test system</h3>
<p>This test suite uses the <a href="https://docs.djangoproject.com/en/3.2/topics/testing/">Django
test system</a>. One of the things this does
is to ensure that all the settings are imported correctly, and it makes it easy to specify a test as an input URL and the expected
HTML output, using the Django <a href="https://docs.djangoproject.com/en/3.2/topics/testing/tools/">test client</a> object.
It sets up a very fast in-memory sqlite database purely for tests.
No tests are run with the real expo database.
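<p>For example, checking a page against expected output with the test client can be as short as the sketch below (the URL and the expected text are invented here, not real troggle paths):
<pre><code>
from django.test import TestCase

class HtmlCheckTest(TestCase):
    def test_page_contains_heading(self):
        # The test client routes the request through urls.py as usual,
        # but against the in-memory test database.
        response = self.client.get('/some/handbook/page/')   # invented URL
        # assertContains checks the status code and that the text appears in the body.
        self.assertContains(response, 'Expected heading')
</code></pre>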
<h3>Troggle tests</h3>
<p>The tests can be run at a more verbose level by setting the <var>-v 3</var> flag.
<p>As yet we have no test database set up, so the in-memory database starts entirely empty. However, we have 'fixtures' in
<var>troggle/core/fixtures/</var>
which are JSON files containing dummy data that is read in before a few of the tests.
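<p>A fixture is used by naming it in the test class. A minimal sketch (the fixture filename and the model import below are made up, not the real troggle names):
<pre><code>
from django.test import TestCase

class FixtureTest(TestCase):
    # Hypothetical filename: substitute a real JSON file from troggle/core/fixtures/
    fixtures = ['expo_caves.json']

    def test_fixture_was_loaded(self):
        # The dummy data is loaded into the in-memory database
        # before each test in this class runs.
        from troggle.core.models.caves import Cave   # hypothetical import path
        self.assertTrue(Cave.objects.exists())
</code></pre>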
<p>Current wisdom is that <a href="https://lukeplant.me.uk/blog/posts/test-factory-functions-in-django/">factory methods in the test suite</a> are a superior way of managing tests for very long-term projects like ours. We have one of these, <var>make_person()</var>, in <var>core/TESTS/test_parsers.py</var>, which we use to create 4 people, who are then used when testing the import parser on a fragment of an invented logbook in <var>test_logbook_parse()</var>.
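<p>A factory function of this kind is just an ordinary helper that builds the objects one test needs. A sketch in the same spirit as <var>make_person()</var> (the import path and field names here are guesses, not the real troggle models):
<pre><code>
from troggle.core.models.troggle import Person   # hypothetical import path

def make_person(first_name, last_name, nickname=""):
    # Sketch only: the field names are assumptions, not necessarily
    # the real troggle Person fields.
    return Person.objects.create(first_name=first_name,
                                 last_name=last_name,
                                 nickname=nickname)

# Inside a test, build just the people that the test needs:
fred = make_person("Fred", "Smith")
</code></pre>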
<h4>How you can help</h4>
<p>We could do with a lot more unit tests which test small, specific things. If we have a lot of these it will make future re-engineering of troggle easier, as we can more confidently tackle big re-writes and still be sure that nothing is broken.
<p>We have only one test which checks that the <a href="trogimport.html">input parsers</a> work. We need tests for parsing survex files and for reading the JSON files for the wallets. We could also do with a directory browser/parser test for the survey scan files and for the HTML fragment files which make up the cave descriptions.
<p>Have a look at Wikipedia's <a href="https://en.wikipedia.org/wiki/Software_testing">review of types of software testing</a> for ideas.
<p>If you want to write some tests and are having trouble finding something which is untested, have a look at the list of
URL paths in the routing system in <var>troggle/urls.py</var>
and look for types of URL which do not appear in the test suite checks.
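<p>One quick way to get that list is to print the registered URL patterns from a <var>manage.py shell</var> session. This is only a sketch using standard Django calls, not part of troggle itself:
<pre><code>
# Print every URL pattern registered in the root URLconf, so the output
# can be compared against the paths requested in the test suite.
from django.urls import get_resolver

for entry in get_resolver().url_patterns:
    print(entry.pattern)
</code></pre>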
<hr />
Go on to: <a href="trogarch.html">Troggle architecture</a><br />
Return to: <a href="trogintro.html">Troggle intro</a><br />
Troggle index:
<a href="trogindex.html">Index of all troggle documents</a><br /><hr />
</body>
</html>