The stand-alone Python script <var>databaseReset.py</var> imports data from files into the troggle database (SQLite or MariaDB). It is separate from the process that runs troggle and serves the data as webpages (via Apache), but it is plugged into the same hierarchy of Django Python files.
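<p>Because it is a stand-alone script, it has to configure Django itself before it can touch the project's models. A minimal sketch of that pattern follows; the settings module name below is an assumption for illustration, not necessarily troggle's actual path:
<code><pre># Minimal sketch, not troggle's actual code: plugging a stand-alone script
# into the same Django project that the web server uses.
import os
import django

# The settings module name here is an assumption for illustration.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")
django.setup()

# Only after django.setup() may the project's models be imported and used.
from django.apps import apps
for app in apps.get_app_configs():
    print(app.label)</pre></code>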
<p>In the <var>troggle</var> directory:
<code><pre>$ python databaseReset.py
Usage is 'python databaseReset.py <command> [runlabel]'
where command is:
reset - normal usage: clear database and reread everything from files - time-consuming
caves - read in the caves
logbooks - read in the logbooks
people - read in the people from folk.csv
QMs - read in the QM csv files (older caves only)
reinit - clear database (delete everything) and make empty tables. Import nothing.
scans - the survey scans in all the wallets
survex - read in the survex files - all the survex blocks but not the x/y/z positions
survexpos - just the x/y/z Pos out of the survex files
tunnel - read in the Tunnel files - which scans the survey scans too
profile - print the profile from previous runs. Import nothing.
test - testing...
and [runlabel] is an optional string identifying this run of the script
in the stored profiling data 'import-profile.json'
caves and logbooks must be run on an empty db before the others as they
</pre></code>
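<p>For example, a complete rebuild, tagged with an arbitrary label of your choosing so that its timings can be picked out later in the stored profiling data, might be run as:
<code><pre>$ python databaseReset.py reset full-rebuild-2020</pre></code>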
<p>[This timing data is from May 2020, immediately after troggle had been ported from Python 2 to Python 3 but before the survex import was re-engineered. A full import now takes ~600 s in total.]
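<p>As a rough sketch, the stored profiling data ('import-profile.json', named in the usage text above) can be inspected from Python; nothing is assumed here about its internal layout beyond it being valid JSON:
<code><pre># Illustrative only: dump whatever is in the stored profiling data.
# The file name comes from the usage text above; its structure is not assumed.
import json
from pprint import pprint

with open("import-profile.json") as f:
    profile = json.load(f)

pprint(profile)</pre></code>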
<p>The 'survexblks' option loaded all the survex files recursively, following the <var>*include</var>
statements. It took a long time if memory was low and the operating system had to page heavily. That code has now been rewritten.
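<p>For illustration only, the recursive pattern looks roughly like the sketch below. It is not troggle's actual importer: it ignores survex details such as <var>*begin</var>/<var>*end</var> blocks and search paths, and the top-level filename is hypothetical.
<code><pre># Illustrative sketch of following *include statements recursively.
# Not troggle's parser: no *begin/*end handling and no survex search-path logic.
from pathlib import Path

def read_survex(path: Path, seen=None):
    """Yield (file, line) pairs for a .svx file and everything it *includes."""
    seen = set() if seen is None else seen
    path = path.resolve()
    if path in seen:                      # guard against include loops
        return
    seen.add(path)
    for line in path.read_text(errors="replace").splitlines():
        parts = line.strip().split(None, 1)
        if len(parts) == 2 and parts[0].lower() == "*include":
            target = parts[1].strip().strip('"')
            child = (path.parent / target).with_suffix(".svx")
            yield from read_survex(child, seen)
        else:
            yield path, line

# e.g. count how many lines are reachable from a (hypothetical) top-level file:
# total = sum(1 for _ in read_survex(Path("all.svx")))</pre></code>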