CUCC Expedition Handbook

Current scripts

See the index to the full list of these scripts at Other Scripts. This page documents only the subset that is not more fully documented elsewhere.

[This page should be split so that the obsolete stuff is recorded but doesn't get in the way.]

Javascript

See the Expo server page for the JavaScript packages used by some troggle pages (CaveView, jQuery etc.).

Makefiles

In :expoweb:/noinfo/

This may be obsolete. It used to coordinate running essential updates, but it also definitely includes redundant material. It needs some serious attention.

It coordinates producing the 3d surveys used in the cave description pages, updating the area pages, running the folk script, running the QM list generation within each of the cave pages that needs it, and running svxtrace. It reports on everything using "bigbro", to which we have no other reference. (Generation of the .3d files as required is now done by troggle.)

Wallets

Online wallets are initially maintained using the wallets.py script, but troggle also directly imports all the expofiles/surveyscans/ directories of scanned survey notes and produces reports on them. There are several bash and python scripts in the surveyscans directory to create wallets for the coming year and to re-run the wallet processing on all past years (for when we improve the script). For 2021 we converted wallets.py to python3, so be careful of older versions, which are python2.
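
As a rough illustration, creating the empty wallet directories for a new year amounts to something like this minimal python3 sketch (the "YYYY#NN" naming convention and the wallet count here are assumptions, not the real script):

# Sketch only: pre-create empty wallet directories for a new expo year.
# Naming convention and count are assumptions, not the actual scripts.
import os

YEAR = 2021
for n in range(1, 31):                      # assume 30 wallets pre-created
    wallet = os.path.join("surveyscans", str(YEAR), f"{YEAR}#{n:02d}")
    os.makedirs(wallet, exist_ok=True)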

Folk

The folk update process produces a webpage listing all expo participants, and it also runs some validation checks on the input file /folk/folk.csv. Troggle also directly imports folk.csv so that it knows who everyone is, but errors during that import are not as easy to see as the validation errors printed when running the make-folklist.py script.
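
The validation is of the flavour sketched below (illustrative only; the actual checks in make-folklist.py may differ):

# Sketch of the kind of validation run on folk.csv; the specific
# check shown (consistent field counts) is an illustrative assumption.
import csv

with open("folk/folk.csv", newline="") as f:
    rows = list(csv.reader(f))

header, body = rows[0], rows[1:]
for lineno, row in enumerate(body, start=2):
    if len(row) != len(header):
        print(f"folk.csv line {lineno}: expected {len(header)} fields, got {len(row)}")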

Photos

updatephotos (in the :loser: repo) uses the BINS package to generate photo albums. BINS uses the EXIF data (date, and location if available) in the uploaded image files to produce a page showing the information available about each picture. All image metadata are stored in XML files.
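
For illustration, this is roughly the kind of EXIF metadata involved, sketched in python with the Pillow library (BINS itself is not Python and works differently):

# Illustration of the EXIF metadata BINS works from; not BINS code.
from PIL import Image, ExifTags

exif = Image.open("photo.jpg").getexif()
for tag_id, value in exif.items():
    tag = ExifTags.TAGS.get(tag_id, tag_id)
    if tag in ("DateTime", "GPSInfo"):
        print(tag, value)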

BINS is no longer maintained by its author, so expo has taken on the responsibility for keeping it running. (Wookey is in the process of packaging it as a proper Debian package.)

svx2qm.py, tablize-qms.pl, find-dead-qms.py, qmreader.pl

See the entire page devoted to the various QM scripts.

svxtrace.py

In :expoweb:/. Traces all the svx file dependencies via the *include statements. The documented workflow today does not appear to need this, but that might be a documentation fault. It might do a key job. [to be investigated]
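
The core idea is simple recursive include-tracing, something like this sketch (not the actual svxtrace.py; real svx path resolution is more involved):

# Sketch of recursive *include tracing for survex files; path and
# quoting handling are simplified, so treat this as an outline only.
import re
from pathlib import Path

INCLUDE = re.compile(r'^\s*\*include\s+"?([^"\s]+)', re.IGNORECASE)

def trace(svx, seen=None):
    seen = set() if seen is None else seen
    svx = Path(svx)
    if svx in seen or not svx.exists():
        return seen
    seen.add(svx)
    for line in svx.read_text(errors="replace").splitlines():
        m = INCLUDE.match(line)
        if m:
            child = svx.parent / m.group(1)
            if child.suffix != ".svx":
                child = child.with_suffix(".svx")
            trace(child, seen)
    return seen

for f in sorted(trace("1623.svx")):          # assumed top-level file name
    print(f)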

Survex files - reference checking

Survex files contain a reference to the wallet which contains the original survey notes for that surveyed passage. These references sometimes have errors, and they also go out of date as caves are renamed when a kataster number is issued. Each online survey wallet also has a reference to the survex file(s) which have been typed up from that data.

The references are validated by the scripts check-svx.sh, check-refs.sh and check-refs.awk in the :loser: repository, which produce svxvalid.html listing mismatches between the svx files and the survey scan wallets.

This is a horrible proof-of-concept hack that needs replacing with a proper python script instead of an assemblage of awk, bash and sed.
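
A proper python replacement might start from something like this sketch (the ";ref" comment format and the wallet-id pattern are assumptions, not what check-refs.awk actually matches):

# Sketch: collect wallet references out of svx files so they can be
# cross-checked against the wallet index. Comment syntax is assumed.
import re
from pathlib import Path

REF = re.compile(r";\s*ref\.?\s*:?\s*(\d{4}#\d+)", re.IGNORECASE)

refs = {}                                    # wallet id -> svx files citing it
for svx in Path(".").rglob("*.svx"):
    for line in svx.read_text(errors="replace").splitlines():
        m = REF.search(line)
        if m:
            refs.setdefault(m.group(1), []).append(str(svx))

for wallet in sorted(refs):
    print(wallet, "<-", ", ".join(refs[wallet]))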

Drawings files - reference checking

Tunnel files contain references to the wallet which contained the original survey notes for that surveyed and drawn passage.

The results of the validation checks are in xmlvalid.html, generated by the script check-xml.sh in the :drawings: repository.

(Therion files would too, if people inserted "#Ref" comments, in which case the script would need improving.)

Currently the intermediate data it works from has to be hand-generated, so a proper parsing script needs to be written.

caves-tabular.html

The webpage caves-tabular.html uses a page-specific JavaScript file, TableSort.js, which allows the user to re-sort the table of all the cave data by any of the columns by clicking on it [by Radost]. The exact source of the data in the table is undocumented, but it is presumably from a cavern .3d file export at an unknown date. It may be the data generated by summarizecave.sh.

create_dplong_table.py, cavestats and smklengths

"cavestats" is compiled by noinfo/cavestats.build source code and is used by "create_dplong_table.py".

:loser:/docs/smklengths is a brief bash script that runs cavern on all the top-level cave svx files and extracts the total lengths.
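
In outline it does something like this, sketched here in python rather than bash (the cave file list is illustrative, not the real one):

# Python sketch of the smklengths idea: run cavern on each top-level
# cave svx file and report the "Total length" line it prints.
import re
import subprocess

for cave in ["1623/204/all.svx", "1623/258/258.svx"]:   # assumed paths
    result = subprocess.run(["cavern", "--output=/tmp/smk", cave],
                            capture_output=True, text=True)
    m = re.search(r"Total length.*", result.stdout)
    print(cave, "->", m.group(0) if m else "no length found")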

make_svx.sh

[to be documented]

make-prospectingguide-new.py and prospecting_guide_short.py

In :expoweb:/noinfo/prospecting_guide_scripts/

These are now obsolete, replaced by the troggle code (troggle/core/views/prospect.py) that generates prospecting_guide on the fly (taking a couple of minutes each time).

seshbook, bierbook & protractors

How these are used once produced is documented in the handbook.

These are LaTeX files and the instructions for how to process them are in each .tex file. The protractors do not change but the others need a configuration file for all the cavers expected to attend expo.

The .tex files are in :expoweb:/documents/. There is also a style file there, bierbook-style.sty, which is used by both the bierbook and seshbook. Read the readme.txt file, which explains which LaTeX packages you need. Build like this:

pdflatex.exe -synctex=1 -interaction=nonstopmode -shell-escape bierbook.tex
pdflatex.exe -synctex=1 -interaction=nonstopmode -shell-escape seshbook.tex
Due to the way LaTeX works out table column widths, these commands may need to be run several times until a stable output is produced. The design of these files is intended to confine all year-to-year changes to the names.txt and dates.txt files, thanks to LaTeX's capability to read an external file and iterate through it line by line, performing the same action for each name.

summarizecave.sh

This runs "cavern" (a command-line tool installed as part of survex) to produce a text (or HTML) report of the key statistics from the master svx file for a cave (the one that includes all the svx files for the individual passages). It is unclear who uses this, or for what. It may be the script that generates the input data used by caves-tabular.html.

make_essentials.sh GPS

In :expoweb:/noinfo/

Makes essentials.gpx - see GPS on expo.

make-glossary.pl

In :expoweb:/1623/204/ and /1623/161/. It reads a cave-specific glossary.csv and produces the HTML files for caves 161 and 204, which are indexes to passage names and locations in the very extensive cave descriptions for Kaninchenhohle and Steinbruckenhohle. We may need this again for Tunnocks/Balkonhohle.
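
The conversion is of this general shape, sketched in python (make-glossary.pl is perl, and the two-column CSV layout assumed here is a guess):

# Sketch of a glossary.csv to HTML index conversion; columns assumed
# to be (passage name, location), which may not match the real file.
import csv
import html

with open("glossary.csv", newline="") as f:
    entries = sorted(row for row in csv.reader(f) if len(row) >= 2)

with open("glossary.html", "w") as out:
    out.write("<ul>\n")
    for row in entries:
        out.write(f"<li>{html.escape(row[0])}: {html.escape(row[1])}</li>\n")
    out.write("</ul>\n")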

make-alljs.py

Writes out legs and entrances in json format. In :loser:/fixedpts/ (along with make-fb-map.pl, which does Austrian coordinate transformations). Also in the :loser:/fixedpts/scripts/convert_shx/ folder is a short 135-line script, convert_shx.ml, written in OCaml, which constructs input to the ogr2ogr GIS feature-file format conversion utility.
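
For illustration, the JSON-writing step alone looks something like this (the field names and coordinate values are made up, not the actual make-alljs.py output format):

# Sketch of JSON output only; fields and values are illustrative.
import json

entrances = [{"name": "204a", "x": 36670.0, "y": 83317.0, "alt": 1910.0}]
with open("entrances.json", "w") as f:
    json.dump(entrances, f, indent=2)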

The documented workflow today does not appear to need this, but that might be a documentation fault. It might do a key job. [to be investigated]

Old and possibly obsolete scripts

Loser-1624 scripts

In /scripts/noinfo/scripts/loser-caves1624-raw-data/ there are convert.py and split.sh, which operate on Uebersicht_2011.svx, doing conversions on a dataset generated from CaveRenderer. The documented workflow today does not appear to need this, but that might be a documentation fault. It might do a key job. [to be investigated]

logbk.pl

Obsolete.

This function is now done by the troggle input parsers.

[for historic interest only]

make-indxal4.pl

Obsolete.

See the history document, which refers to CAVETAB2.CSV and make-indxal4.pl during the "script and spreadsheet" phase of the system development from the mid-1990s to the mid-2000s: website history


Return to: Other scripts
Return to: Troggle intro
Troggle index: Index of all troggle documents