See the index to the full list of these scripts at Other Scripts. This page documents only the subset that is not more fully documented elsewhere.
[This page should be split so that the obsolete stuff is recorded but doesn't get in the way.]
See the Expo server page for the JavaScript packages used by some troggle pages (CaveView, jQuery etc.).
In :expoweb:/noinfo/
This may be obsolete. It used to coordinate running essential updates, but it definitely also includes redundant material and needs some serious attention.
It coordinates producing the 3d surveys used in the cave description pages, updates the area pages, runs the folk script, runs the QM list generation within each of the cave pages that needs it, runs svxtrace, and reports on everything using "bigbro", to which we have no other reference.
Today, troggle generates the .3d and .pos files, parses and loads the QM lists, and parses the include tree of the survex files.
Online wallets are initially maintained using the wallets.py script, but troggle also directly imports all the expofiles/surveyscans/ directories of scanned survey notes and produces reports on them. Several bash and python scripts in the surveyscans directory create the wallets for the coming year, and re-run the wallet processing on all past years (for when we improve the script). For 2021 we converted wallets.py to Python 3, so be careful of older versions, which are Python 2.
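As an illustration of the "create wallets for the coming year" step, here is a hedged sketch in Python. The wallet naming scheme (YEAR#NN directories) is taken from how wallets are referred to elsewhere; the contents.json placeholder and the function name are assumptions, not the real script:

```python
# Hypothetical sketch of a "create next year's wallets" script:
# make numbered wallet directories YEAR#NN, each with a minimal
# placeholder metadata file. Layout details are assumptions.
import json
from pathlib import Path

def create_wallets(base, year, count):
    """Create wallet directories year#01 .. year#NN under base."""
    made = []
    for n in range(1, count + 1):
        wallet = Path(base) / f"{year}#{n:02d}"
        wallet.mkdir(parents=True, exist_ok=True)
        # placeholder metadata; the real wallets hold scan lists etc.
        (wallet / "contents.json").write_text(json.dumps({"year": year}))
        made.append(wallet.name)
    return made
```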
The folk update process produces a webpage listing all expo participants, and also runs some validation checks on the input file /folk/folk.csv. Troggle also imports folk.csv directly so that it knows who everyone is, but errors during that import are not as easy to see as the validation errors printed when running the make-folklist.py script.
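To give a flavour of the kind of validation involved, here is a toy sketch. The real folk.csv column layout is richer; this assumes only a header row and a name in the first column, and flags duplicates and malformed rows:

```python
# Hedged sketch of folk.csv-style validation: the real make-folklist.py
# checks more than this, and the column layout here is an assumption.
import csv
from io import StringIO

def validate_folk(csv_text):
    """Return a list of human-readable validation errors."""
    errors = []
    rows = list(csv.reader(StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    seen = set()
    for i, row in enumerate(body, start=2):  # line 1 is the header
        if len(row) != len(header):
            errors.append(f"line {i}: expected {len(header)} fields, got {len(row)}")
            continue
        name = row[0].strip()
        if not name:
            errors.append(f"line {i}: empty name")
        elif name in seen:
            errors.append(f"line {i}: duplicate name {name!r}")
        seen.add(name)
    return errors
```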
updatephotos (in the :loser: repo) uses the BINS package to generate photo albums. BINS uses the EXIF data (date, and location if available) in the uploaded image files to produce a page showing the information available about each picture. All image metadata are stored in XML files.
BINS is no longer maintained by its author, so expo has taken on the responsibility for keeping it running. (Wookey is in the process of packaging it as a proper Debian package.)
How these are used once produced is documented in the handbook.
These are LaTeX files, and the instructions for processing them are in each .tex file. The protractors do not change, but the others need a configuration file listing all the cavers expected to attend expo.
The .tex files are in :expoweb:/documents/. A style file, bierbook-style.sty, is also there and is used by both the bierbook and the seshbook. Read the readme.txt file, which explains which LaTeX packages you need. Build like this:
pdflatex.exe -synctex=1 -interaction=nonstopmode -shell-escape bierbook.tex
pdflatex.exe -synctex=1 -interaction=nonstopmode -shell-escape seshbook.tex
Due to the way LaTeX works out table column widths, these commands may need to be run several times until a stable output is produced.
The design of these files is intended to confine all year-to-year changes to the names.txt and dates.txt files, thanks to LaTeX's ability to read an external file and iterate through it line by line, performing the same action for each name.
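The line-by-line iteration can be done with TeX's file-reading primitives. This is an illustrative sketch only, not the actual bierbook source; the macro names are invented:

```latex
% Illustrative sketch: read names.txt and emit one line per caver.
% \namesfile and \currentname are invented names, not from bierbook.tex.
\newread\namesfile
\openin\namesfile=names.txt
\loop
  \read\namesfile to \currentname
  \unless\ifeof\namesfile
    \noindent\currentname\par  % emit one tally row per person
\repeat
\closein\namesfile
```

The real style file presumably does something more elaborate with each name (table rows, per-person pages), but the read/loop mechanism is the same.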
In :loser:/gpx/
Makes essentials.gpx - see GPS on expo. This requires the gpx2survex program; get it from GitHub: https://github.com/mshinwell/gps2survex
Read the README file in :loser:/gpx/y.
Someone needs to document this and make_svx.sh properly.
In :loser:/gpx/
Regenerates the surface tracks as survex files from GPS .gpx files.
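The core of any such gpx-to-survex conversion is pulling trackpoints out of the .gpx file (which is XML) and emitting survex fixes. A hedged sketch, with the caveat that the real scripts also transform lat/long into the Austrian grid, which this deliberately omits:

```python
# Sketch of a gpx-to-survex conversion: one survex *fix per trackpoint.
# Coordinate transformation to the Austrian grid is NOT done here.
import xml.etree.ElementTree as ET

GPX_NS = "{http://www.topografix.com/GPX/1/1}"

def gpx_to_svx(gpx_text, prefix="track"):
    """Return survex source text with one *fix per trackpoint."""
    root = ET.fromstring(gpx_text)
    lines = []
    for i, pt in enumerate(root.iter(f"{GPX_NS}trkpt")):
        ele = pt.findtext(f"{GPX_NS}ele", default="0")
        lines.append(f"*fix {prefix}{i} {pt.get('lon')} {pt.get('lat')} {ele}")
    return "\n".join(lines)
```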
[to be documented]
Unusually, this is in the :loser: repository, in :loser:/fixedpoints/scripts/convert_shx/
It runs ogr2ogr -f csv -lco GEOMETRY=AS_WKT outputfile inputfile and then extensively post-processes the output. It is written in OCaml; therefore it must be Mark Shinwell's responsibility.
ogr2ogr is a GIS file-conversion utility; the -f csv option sets CSV as the output format, so it appears to be converting shapefiles into CSV with the geometry rendered as WKT. The "shx" part of the name refers to the shapefile index format.
We suspect this is part of the production process for making essentials.gpx.
Traces all the svx file dependencies via the *include statements. In :expoweb:/. The documented workflow today does not need this, but that might be a documentation fault; it might do a key job. It is used by the makefile above. Its job has almost certainly been superseded by the survex file parser in troggle.
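The include-tracing job itself is simple to sketch: start from a top-level .svx file and follow every *include recursively. This is a toy version (the quoting and path handling in the real survex data is richer, and the function names here are invented):

```python
# Minimal sketch of an *include tracer: follow *include statements
# recursively and return the full dependency list of .svx files.
import re
from pathlib import Path

INCLUDE_RE = re.compile(r'^\s*\*include\s+"?([^";]+?)"?\s*(?:;.*)?$',
                        re.IGNORECASE | re.MULTILINE)

def trace_includes(svx_path, seen=None):
    """Return all .svx files reachable from svx_path via *include."""
    seen = seen if seen is not None else []
    path = Path(svx_path)
    if not path.suffix:
        path = path.with_suffix(".svx")  # *include often omits .svx
    if path in seen:
        return seen
    seen.append(path)
    for name in INCLUDE_RE.findall(path.read_text()):
        trace_includes(path.parent / name, seen)
    return seen
```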
Survex files contain a reference to the wallet which contains the original survey notes for that surveyed passage. These sometimes have errors and also get out of date as caves get renamed when they get a kataster number issued. Each online survey wallet also has a reference to the survex file(s) which have been typed up from that data.
The references are validated by the scripts check-svx.sh, check-refs.sh and check-refs.awk in the :loser: repository, which produce svxvalid.html listing mismatches between the svx files and the survey scan wallets.
This is a horrible proof-of-concept hack that needs replacing with a proper python script instead of an assemblage of awk, bash and sed.
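A proper python replacement would reduce to something like the following toy check: extract the wallet reference from each survex file and compare it against the known wallet directories. The "*ref 2019#21" syntax used here is an assumption; the real check-refs scripts handle several historical formats:

```python
# Toy version of the svxvalid check: report refs in a survex file
# that match no known wallet. The "*ref" syntax is an assumption.
import re

REF_RE = re.compile(r'^\s*\*ref\s+(\S+)', re.IGNORECASE | re.MULTILINE)

def check_refs(svx_text, wallet_names):
    """Return the refs in svx_text that match no known wallet."""
    wallets = set(wallet_names)
    return [ref for ref in REF_RE.findall(svx_text) if ref not in wallets]
```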
Tunnel files contain references to the wallet which contained the original survey notes for that surveyed and drawn passage.
The results of the validation checks are in xmlvalid.html, generated by the script check-xml.sh in the :drawings: repository.
(Therion files would contain such references too, if people inserted "#Ref" comments; in that case the script would need improving.)
Currently the intermediate data it works from has to be hand-generated, so a proper parsing script needs to be written.
"cavestats" is compiled by noinfo/cavestats.build source code and is used by "create_dplong_table.py".
:loser:/docs/smklengths is a brief bash script that runs cavern on all the top-level cave svx files and extracts the total lengths.
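The length-extraction part of such a script is a text scrape of cavern's summary report. A hedged sketch of that step, noting that the exact wording comes from survex's human-readable output and could change between versions:

```python
# Sketch of scraping a total length from cavern's report output, e.g.
#   "Total length of survey legs =  1234.56m ( 1250.00m adjusted)"
# The report wording is survex's and may vary between versions.
import re

LENGTH_RE = re.compile(r"Total length of survey legs\s*=\s*([\d.]+)\s*m")

def total_length(cavern_output):
    """Return the total surveyed length in metres, or None if absent."""
    m = LENGTH_RE.search(cavern_output)
    return float(m.group(1)) if m else None
```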
In :expoweb:/1623/204/ and /1623/161/. It reads a cave-specific glossary.csv and produces the HTML files for caves 161 and 204:
which are indexes to passage names and locations in the very extensive cave descriptions for Kaninchenhohle and Steinbruckenhohle. We may need this again for Tunnocks/Balkonhohle.
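The core transformation is small: read the glossary CSV and emit an alphabetical HTML index. A toy sketch, with the caveat that the real glossary.csv columns are not documented here and this assumed two-column (name, URL) layout is an invention:

```python
# Hedged sketch of a glossary-to-index conversion: CSV rows of
# (passage name, page URL) become a sorted HTML list. The two-column
# layout is an assumption about glossary.csv.
import csv
import html
from io import StringIO

def make_index(csv_text):
    """Return an HTML <ul> indexing passage names to their pages."""
    rows = sorted(csv.reader(StringIO(csv_text)), key=lambda r: r[0].lower())
    items = [
        f'<li><a href="{html.escape(url, quote=True)}">{html.escape(name)}</a></li>'
        for name, url in rows
    ]
    return "<ul>\n" + "\n".join(items) + "\n</ul>"
```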
Writes out legs and entrances in JSON format. In :loser:/fixedpts/ (along with make-fb-map.pl, which does Austrian coordinate transformations). Also in the :loser:/fixedpts/scripts/convert_shx/ folder is a short 135-line script, convert_shx.ml, written in OCaml, which constructs input to the ogr2ogr GIS feature file format conversion utility.
The documented workflow today does not appear to need this, but that might be a documentation fault. It might do a key job. [to be investigated]
This runs "cavern" (a commandline tool installed as part of survex) to produce a text (or HTML) report of the key statistics from the master svx file for a cave (the one that includes all the svx files for the individual passages). It is unclear who uses this or for what. It may be the script that generates the input data used by caves-tabular.html
This webpage caves-tabular.html uses a page-specifc JavaScript file TableSort.js which allows the user to resort the table of all the cave data by any of the columns by clicking on it [by Radost]. The exact source of the data in the table is undocumented, but presumably from cavern .3d file export at an unknown date. This may be that generated by summarizecave.sh .
In :expoweb:/noinfo/prospecting_guide_scripts/
These are now obsolete, replaced by the troggle code (troggle/core/views/prospect.py) that generates prospecting_guide on the fly (taking a couple of minutes each time).
In /scripts/noinfo/scripts/loser-caves1624-raw-data/ there are convert.py and split.sh, which operate on Uebersicht_2011.svx, doing conversions on a dataset generated from CaveRenderer. The documented workflow today does not appear to need this, but that might be a documentation fault. It might do a key job. [to be investigated]
Obsolete.
This function is now done by the troggle input parsers.
[for historic interest only]
Obsolete.
See the history document, which refers to CAVETAB2.CSV and make-indxal4.pl during the "script and spreadsheet" phase of the system's development from the mid-1990s to the mid-2000s: website history