<p>Troggle manages all cave and expo data in a logical and maintainable way
and publishes it on the web.
<p>
The troggle software is written and maintained by expo members.
<p>Examples of pages that troggle generates from the data:
<ul>
<li><ahref="/caves">expo.survex.com/caves</a> - list of caves surveyed and links to guidebook descriptions
<li><ahref="/pubs.htm">expo.survex.com/pubs.htm</a> - reports, accounts and logbooks
<li><ahref="/expedition/2018">expo.survex.com/expedition/2018</a> - Members on expo 2018: . Scroll down for a list of all the data typed in from survey trips.
<li><ahref="/survexfile/caves/">expo.survex.com/survexfile/caves/</a> - List of caves with all the surveys done for each.
<li><ahref="/survexfile/caves-1623/115/cucc/futility.svx">expo.survex.com/survexfile/caves-1623/115/cucc/futility.svx</a> - Cave survey data from 1983 in Schnellzughohle.
<li><ahref="/survey_scans/">expo.survex.com/survey_scans/</a> - List of all scanned original survey notes.
<li><ahref="/survey_scans/2018%252343/">expo.survex.com/survey_scans/2018%252343/</a> - list of links to scanned notes for wallet #43 during the 2018 expo.
<h3id="where">Troggle - where it gets the data</a></h3>
<p>
All the data, of every kind, is stored in files. When troggle starts up it imports that data from the files. The other scripts that do useful jobs (folk, wallets) also get their data from files. This makes troggle quite unlike a typical Django installation: it has a database, but the database is entirely rebuilt from the files.
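<p>As a minimal sketch of that pattern (not troggle's actual importer), a Django management command that wipes one table and rebuilds it from files on disk might look like this. The app, model and directory names here (caves, Cave, cave_data) are illustrative assumptions, not troggle's real identifiers:
<pre>
# A minimal sketch of the file-to-database import pattern, NOT troggle's
# actual importer. The app, model and directory names (caves.models.Cave,
# cave_data/) are illustrative assumptions.
from pathlib import Path

from django.core.management.base import BaseCommand

from caves.models import Cave  # assumed model with slug and description fields


class Command(BaseCommand):
    help = "Rebuild the cave table from the files on disk"

    def handle(self, *args, **options):
        data_dir = Path("cave_data")  # assumed location of the source files
        # The database is disposable, so wipe the table and rebuild it
        Cave.objects.all().delete()
        caves = []
        for f in sorted(data_dir.glob("*.html")):
            # one file per cave; a real importer would parse fields out of it
            caves.append(Cave(slug=f.stem, description=f.read_text()))
        Cave.objects.bulk_create(caves)
        self.stdout.write(f"Imported {len(caves)} caves")
</pre>
<p>With the import wrapped up like that, a full rebuild is a single command run (here the hypothetical <code>python manage.py rebuild_caves</code>) rather than a database restore.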
<p>There is never any need to back up or archive the database, as it can always be rebuilt from the files. Rebuilding troggle and re-importing all the data takes about half an hour.