Apart from these scripts, troggle in full deployment also needs a running MySQL database, a running Apache webserver, and cgit to display the git repositories.
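Troggle is a Django application, so in a full deployment the MySQL connection is configured in the Django settings file. The snippet below is only a minimal sketch of what that database section looks like; the database name, user and password are placeholders, not the real expo values.

    # Sketch of the DATABASES section of a Django settings file for a
    # MySQL-backed troggle deployment. All values are placeholders.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.mysql",
            "NAME": "troggle",      # hypothetical database name
            "USER": "expo",         # hypothetical database user
            "PASSWORD": "secret",   # placeholder, never commit real credentials
            "HOST": "localhost",
            "PORT": "3306",
        }
    }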
There are also cron jobs on the server which run scripts to fix file permissions and to periodically tidy the repositories, together with example rsync and scp scripts to help synchronise the expofiles directories, which are not under version control.
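As an illustration only, such a synchronisation script boils down to a single rsync invocation. The sketch below wraps it in Python rather than shell (the real expo examples are shell scripts), and the local path and remote target are invented placeholders, not the real expo locations.

    # Sketch (not the real expo script) of pushing a local expofiles copy
    # to the server with rsync; paths and host are assumptions.
    import subprocess

    EXPOFILES = "/home/expo/expofiles/"            # hypothetical local path
    REMOTE = "expo@example.org:expofiles/"         # hypothetical remote target

    # -a preserves permissions and timestamps, -v is verbose,
    # --delete removes remote files that no longer exist locally.
    subprocess.run(
        ["rsync", "-av", "--delete", EXPOFILES, REMOTE],
        check=True,
    )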
Online wallets are maintained using the wallets.py script, but troggle also directly imports all the expofiles/surveyscans/ directories of scanned survey notes.
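A minimal sketch, not troggle's actual import code, of what walking the expofiles/surveyscans/ tree involves; the local path and the year/wallet directory layout assumed here are for illustration only and should be checked against the real tree.

    # Sketch of enumerating scanned-survey wallets under expofiles/surveyscans/.
    # Assumes one directory per year, each containing one directory per wallet.
    from pathlib import Path

    SURVEYSCANS = Path("/home/expo/expofiles/surveyscans")  # hypothetical path

    for year_dir in sorted(SURVEYSCANS.iterdir()):
        if not year_dir.is_dir():
            continue
        for wallet_dir in sorted(year_dir.iterdir()):
            if wallet_dir.is_dir():
                scans = [f.name for f in wallet_dir.iterdir() if f.is_file()]
                print(f"wallet {wallet_dir.name}: {len(scans)} scanned files")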
Each survex file contains a reference to the wallet holding the original survey notes for that surveyed passage. These references sometimes contain errors, and they also go out of date as caves are renamed when they are issued a kataster number. Each online survey wallet in turn has a reference to the survex file(s) which have been typed up from that data.
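A minimal sketch, not troggle's parser, of pulling the wallet reference out of a survex file. It assumes the reference appears on a "*ref" line such as "*ref 2019#07"; that convention, the regular expression and the function name are illustrative assumptions, not the actual troggle code.

    # Sketch: extract the wallet reference from a .svx file, if present.
    import re
    from pathlib import Path

    REF_RE = re.compile(r"^\s*\*ref\s+(\S+)", re.IGNORECASE | re.MULTILINE)

    def wallet_ref(svx_path):
        """Return the wallet reference found in a .svx file, or None."""
        text = Path(svx_path).read_text(errors="replace")
        match = REF_RE.search(text)
        return match.group(1) if match else None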
The folk update process produces a webpage listing all expo participants, but it also runs some validation checks on the input file /folk/folk.csv. Troggle also directly imports folk.csv so that it knows who everyone is, but errors during import are not as easy to see as the validation errors printed when running the make-folklist.py script.
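The sketch below shows the kind of check such a validation pass might make; it is not make-folklist.py itself, and the assumed column layout (a name in the first column, one column per expo year) is an illustrative assumption.

    # Sketch of simple consistency checks over folk.csv.
    import csv

    with open("folk/folk.csv", newline="") as f:
        rows = list(csv.reader(f))

    header = rows[0]
    for lineno, row in enumerate(rows[1:], start=2):
        # Every data row should have the same number of fields as the header.
        if len(row) != len(header):
            print(f"folk.csv line {lineno}: expected {len(header)} fields, got {len(row)}")
        # The first column is assumed to hold the person's name.
        if not row or not row[0].strip():
            print(f"folk.csv line {lineno}: missing name in first column")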
updatephotos uses the BINS package to generate the webpages. BINS is no longer maintained by its author, so expo has taken on responsibility for keeping it running. (Wookey is in the process of packaging it as a proper Debian package.)