The core of troggle is its data architecture: the set of tables into which all the cave survey and expo data is poured and stored. These tables are what enable us to produce a large number of different but consistent reports and views.
<figure>
<a href="../i/troggle-tables.jpg">
<img src="../i/troggle-tables-small.jpg" /></a>
<figcaption>Troggle database tables (click for full-size image)</figcaption>
</figure>
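<p>Troggle is a Django application, so each of the tables shown above is defined in code as a Django model class. The sketch below is purely illustrative, to show the flavour of this: the field names are hypothetical, not troggle's actual schema.
<pre><code>from django.db import models

# Hypothetical sketch only: illustrative field names, not troggle's real schema.
class Cave(models.Model):
    kataster_number = models.CharField(max_length=10, blank=True)
    official_name = models.CharField(max_length=160)
    notes = models.TextField(blank=True)

class Entrance(models.Model):
    # A cave may have several entrances, hence the foreign key
    cave = models.ForeignKey(Cave, on_delete=models.CASCADE)
    name = models.CharField(max_length=100, blank=True)
    easting = models.FloatField(null=True, blank=True)   # surveyed position,
    northing = models.FloatField(null=True, blank=True)  # if known
</code></pre>
<p>Every report and view is then just a query over these model classes, which is what keeps the many different reports consistent with one another.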
<h3>Architecture description</h3>
<p>Read the proposal: "<a href="../../documents/troggle_paper.pdf" download>Troggle: a novel system for cave exploration information management</a>", by <em>Aaron Curtis</em>. But remember that this paper was an over-ambitious proposal: only the core data-management features have been built. We have none of the person-management features and only two forms, one for entering cave data and one for cave entrance data.
<h3>Troggle parsers and input files</h3>
[describe which files they read and which tables they write to. Also say what error messages are likely on import and what to do about them.]
<ul>
<li>logbooks
<li>surveyscans
<li>survex files (caves)
<li>folk (people)
<li>QMs
<li>subcaves
<li>entrances
<li>drawings (tunnel)
</ul>
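<p>Although each parser reads a different file format, they all follow the same general pattern: read a text file from the expo data tree, turn each record into a database row, and report (rather than die on) anything unparseable so that the source file can be fixed. A hypothetical sketch of that pattern, using an invented one-record-per-line QM format:
<pre><code>from pathlib import Path

def parse_qms(filename: str) -> list[dict]:
    """Hypothetical sketch: parse an invented semicolon-separated QM list.
    The real parsers create Django model instances rather than dicts."""
    records, errors = [], []
    for lineno, line in enumerate(Path(filename).read_text().splitlines(), 1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        parts = line.split(";")
        if len(parts) < 3:
            # Malformed line: record the error and carry on, so one bad
            # line does not abort the whole import
            errors.append(f"{filename}:{lineno}: cannot parse {line!r}")
            continue
        records.append({"number": parts[0], "grade": parts[1],
                        "description": parts[2]})
    for e in errors:
        print("IMPORT ERROR:", e)
    return records
</code></pre>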
<h3>Files generated by troggle</h3>
<p>There are only two places where this happens: the online forms used to create cave records and cave entrance records. The records are created in the database but are also exported as files, so that when troggle is rebuilt and the data reimported, the new cave data is still there.
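<p>The essential point is the dual write: a successful form submission saves the database record <em>and</em> serialises it back out as a file. A hypothetical sketch (the filename scheme, the <code>slug</code> attribute and the <code>render_as_html()</code> serialiser are invented for illustration):
<pre><code>from pathlib import Path

def save_cave_record(cave, cave_data_dir: str = "cave_data") -> None:
    """Hypothetical sketch, not troggle's actual form-handling code."""
    cave.save()  # write the record to the database (Django model save)
    # ...and also export it as a file, so that a later rebuild of troggle
    # followed by a data reimport still sees this cave
    outfile = Path(cave_data_dir) / f"{cave.slug}.html"
    outfile.parent.mkdir(parents=True, exist_ok=True)
    outfile.write_text(cave.render_as_html())  # invented serialiser
</code></pre>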
<h3>Helpful tools and scripts</h3>
[ALSO talk about useful tools, such as those which interrogate MySQL or sqlite databases directly so that one can see the internals change as data is imported]
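<p>For example, a small throwaway script like the one below (a hypothetical sketch assuming the default sqlite backend; the database filename is illustrative) can be run before and after an import to watch the row counts change:
<pre><code>import sqlite3

def table_counts(db_path: str = "troggle.sqlite") -> None:
    """Print the row count of every table in the database."""
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
        for (table,) in cur.fetchall():
            # table names come straight from sqlite_master, so the
            # f-string interpolation here is safe
            (n,) = con.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
            print(f"{table}: {n} rows")
    finally:
        con.close()
</code></pre>
<p>The same inspection can be done interactively with the <code>sqlite3</code> command-line shell, or with the <code>mysql</code> client if troggle is running against MySQL.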