<p>See the index to the full list of these scripts at <a href="scriptsother.html">Other Scripts</a>. This page documents only the subset that is not more fully documented elsewhere.
<p>The <a href="../computing/folkupdate.html">folk update</a> process produces a webpage listing all expo participants, and it also runs some validation checks on the input file /folk/folk.csv. Troggle also imports folk.csv directly so that it knows who everyone is, but errors during that import are not as easy to see as the validation errors printed when running the <a href="../computing/folkupdate.html">make-folklist.py</a> script.
<h4 id="photos">Photos</h4>
<p><a href="">updatephotos</a> (in the :loser: repo) uses the BINS package to generate photo albums. BINS uses the EXIF data (date, and location if available) in the uploaded image files to produce a page showing the information available about each picture. All image metadata are stored in XML files.
<p>BINS is no longer maintained by its author, so expo has taken on the responsibility for keeping it running. (Wookey is in the process of packaging it as a proper Debian package.)
<p>How these are used once produced is <a href="../bierbook.html">documented in the handbook</a>.
<p>These are LaTeX files, and the instructions for how to process them are in each .tex file. The protractors do not change, but the others need a configuration file listing all the cavers expected to attend expo.
<p>The .tex files are in :expoweb:/documents/, along with the style file bierbook-style.sty, which is used by both the bierbook and the seshbook. Read the readme.txt file there, which explains which LaTeX packages you need. Build like this:
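<p>The exact commands are given in readme.txt; a typical sequence looks something like this (the .tex file names here are assumptions &mdash; check what is actually in the documents/ folder):

```shell
# Build the bierbook and seshbook with pdflatex.
# Run each document more than once so that LaTeX can settle
# the table column widths (see note below).
cd documents
pdflatex bierbook.tex
pdflatex bierbook.tex
pdflatex seshbook.tex
pdflatex seshbook.tex
```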
Due to the way LaTeX works out table column widths, these commands may need to be run several times until a stable output is produced.
The design of these files is intended to confine all year-to-year changes to the names.txt and dates.txt files, thanks to LaTeX's ability to read an external file and iterate through it line by line, performing the same action for each name.
<p>which are indexes to passage names and locations in the very extensive cave descriptions for Kaninchenhohle and Steinbruckenhohle. We may need this again for Tunnocks/Balkonhohle.
<p>Writes out legs and entrances in JSON format. It lives in :loser:/fixedpts/ (along with <em>make-fb-map.pl</em>, which does Austrian coordinate transformations).
Also in the :loser:/fixedpts/scripts/convert_shx/ folder is a short 135-line script, convert_shx.ml, written in
<a href="https://en.wikipedia.org/wiki/OCaml">OCaml</a>, which constructs input to the
<a href="https://gdal.org/programs/ogr2ogr.html">ogr2ogr</a> GIS feature file format conversion utility.
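<p>For illustration, this is the kind of conversion ogr2ogr performs (the file names and the source EPSG code here are placeholders, not taken from the script itself):

```shell
# Hypothetical example: convert an ESRI shapefile to GeoJSON,
# reprojecting from an Austrian Gauss-Krueger CRS (placeholder
# EPSG:31255) to WGS84 latitude/longitude (EPSG:4326).
ogr2ogr -f GeoJSON -s_srs EPSG:31255 -t_srs EPSG:4326 \
    fixedpts.geojson fixedpts.shp
```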
<p>The documented workflow today does not appear to need this, but that might be a documentation fault. It might do a key job. <span style="color:red">[to be investigated]</span>
<p>This runs "cavern" (a command-line tool installed as part of Survex) to produce a text (or HTML)
report of the key statistics from the master svx file for a cave
(the one that includes all the svx files for the individual passages).
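<p>As a sketch of what the script wraps (the file name below is a hypothetical example, not the actual master file):

```shell
# Process a master svx file with cavern: it prints summary statistics
# (total survey leg length, depth range, station/leg counts) and writes
# a .3d file that viewers such as aven can open.
cavern --output=204.3d 204.svx
```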
It is unclear who uses this or for what. It may be the script that generates the input data used by <a href="#tabular">caves-tabular.html</a>.
<h4 id="tabular">caves-tabular.html</h4>
<p>This webpage, <a href="../../scripts/caves-tabular.html">caves-tabular.html</a>, uses a page-specific JavaScript file, TableSort.js, which allows the user to re-sort the table of all the cave data by any of the columns by clicking on it [by Radost]. The exact source of the data in the table is undocumented, but it is presumably a cavern .3d file export made at an unknown date, possibly the one generated by <a href="#summ">summarizecave.sh</a>.
<h4 id="prosp">make-prospectingguide-new.py and prospecting_guide_short.py</h4>
Uebersicht_2011.svx, doing conversions on a dataset generated from CaveRenderer. The documented workflow today does not appear to need this, but that might be a documentation fault. It might do a key job.
<span style="color:red">[to be investigated]</span>
<p>Obsolete. It traced all the svx file dependencies via the *include statements and produced a file used by the <a href="#makefile">makefile</a> above.
The troggle <var>parsers/survex.py</var> code now (since 2020) produces an indented list of the current *include tree in a file in the /troggle/ folder whenever the survex files are imported by <var>databaseReset.py</var>.