<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Other scripts supporting troggle</title>
<link rel="stylesheet" type="text/css" href="/css/main2.css" />
</head>
<body>
<style>body { background: #fff url(/images/style/bg-system.png) repeat-x 0 0 }</style>
<style>
h4 {
    margin-top: 1.5em;
    margin-bottom: 0;
}
p {
    margin-top: 0;
    margin-bottom: 0.5em;
}
</style>
<h2 id="tophead">CUCC Expedition Handbook</h2>
<h1>Current scripts</h1>


<p>See the index to the full list of these scripts at <a href="scriptsother.html">Other Scripts</a>. This page documents only the subset that is not more fully documented elsewhere.
<p>[This page should be split so that the obsolete stuff is recorded but doesn't get in the way.]

<h4 id="js">Javascript</h4>
<p>See <a href="serverconfig.html#js">the Expo server page</a> for the JavaScript packages (CaveView, jQuery etc.) used by some troggle pages.

<h4 id="makefile">Makefiles</h4>
<p>In :expoweb:/noinfo/
<p>This may be obsolete. It used to coordinate running essential updates, but it also definitely includes redundant stuff. It <span 
style="color:red">needs some serious attention</span>.
<p>It coordinates producing the 3d surveys used in the cave description pages, updates the area pages, runs the folk script, 
runs the QM list generation within each of the cave pages that needs it, runs <a href="#svxtrace">svxtrace</a>, 
and reports on everything using "bigbro", to which we have no other reference.
<p>Today, troggle generates the .3d and .pos files, parses and loads the QM list and parses the include tree of the survex files.

<h4 id="folk">Folk</a></h4>
<p><a href="../computing/folkupdate.html">Folk update</a> process produces a webpage listing all expo participants and it also runs some validation checks on the input file /folk/folk.csv . Troggle also directly imports folk.csv so that it knows who everyone is, but errors during the importing are not as easy to see as validation errors printed when running the <a href="../computing/folkupdate.html">make-folklist.py</a> script.

<h4 id="photos">Photos</a></h4>
<p><a href="">updatephotos</a> (in the :loser: repo) uses the BINS package to generate photo albums. BINS uses the EXIF data (date, location if available) in the uploaded image files to  produce a page showing the information available about the picture. All image meta-data are stored in XML files.
<p>BINS is no longer maintained by its author so expo has taken on the responsibility for keeping it running. (Wookey is in the process of packaging it as a proper debian package).

 <h4 id="geophotos">Geolocated Photos</a></h4>
<p>In Autumn 2023 we searched the EXIF data of our entire photo archive looking for geolocated photos. This found a few entrances which had been lost.
<p>The script is <code><a href="">/troggle/photomap/pmap.py</a></code>,
which currently generates a single file <var>photos_jpg.gpx</var> that can be imported into mapping software (such as GPSprune) but does not yet hot-link to the cave description pages or to the photos themselves.
<p>Each waypoint in the gpx file is of this form:
      <code>&lt;wpt lat="47.616123" lon="13.812214"&gt;<br />&lt;name&gt;[img_20170801_143731431]&lt;/name&gt;<br />&lt;type&gt;photo&lt;/type&gt;<br />&lt;desc&gt;/2017/PhilipSargent/img_20170801_143731431.jpg&lt;/desc&gt;<br />&lt;/wpt&gt;
</code>
<p>We would want to add &lt;ele&gt; for elevation, and we could use GPX extensions to insert the URL info we need to make this clickable and more useful; see e.g. <a href="https://hikingguy.com/how-to-hike/what-is-a-gpx-file/">What is a GPX file</a> and <a href="https://www.mapsmarker.com/kb/user-guide/how-to-use-gpx-extensions-to-customize-tracks">GPX extensions</a>.
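<p>As an illustration of where this could go, here is a minimal sketch (not the actual pmap.py) that reads the GPS EXIF from one photo with the Python Pillow library and prints such an enhanced waypoint; the URL prefix and the use of the standard GPX 1.1 &lt;link&gt; element are assumptions, not current behaviour:
<pre><code># Sketch only: print a GPX 1.1 waypoint with &lt;ele&gt; and a clickable
# &lt;link&gt; from a JPEG's GPS EXIF data. URL prefix is illustrative.
from PIL import Image

GPSINFO_IFD = 0x8825  # tag of the EXIF GPS sub-IFD

def to_degrees(dms, ref):
    # EXIF stores (degrees, minutes, seconds) rationals plus N/S/E/W
    d = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -d if ref in ("S", "W") else d

def wpt_from_jpeg(relpath, url_prefix="https://expo.survex.com/photos"):
    gps = Image.open(relpath).getexif().get_ifd(GPSINFO_IFD)
    if not gps:
        return None  # this photo carries no geolocation
    lat = to_degrees(gps[2], gps[1])  # tags 1/2: latitude ref and value
    lon = to_degrees(gps[4], gps[3])  # tags 3/4: longitude ref and value
    ele = float(gps[6]) if 6 in gps else 0.0  # tag 6: altitude
    name = relpath.rsplit("/", 1)[-1].rsplit(".", 1)[0]
    return (f'&lt;wpt lat="{lat:.6f}" lon="{lon:.6f}"&gt;\n'
            f'  &lt;ele&gt;{ele:.0f}&lt;/ele&gt;\n'
            f'  &lt;name&gt;{name}&lt;/name&gt;\n'
            f'  &lt;link href="{url_prefix}/{relpath}"&gt;&lt;text&gt;{name}&lt;/text&gt;&lt;/link&gt;\n'
            f'  &lt;type&gt;photo&lt;/type&gt;\n&lt;/wpt&gt;')

print(wpt_from_jpeg("2017/PhilipSargent/img_20170801_143731431.jpg"))
</code></pre>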
  
<h4 id="latex">seshbook, bierbook & protractors</h4>
<p>How these are used once produced is <a href="../bierbook.html">documented in the handbook</a>.
<p>These are LaTeX files, and the instructions for how to process them are in each .tex file. The protractors do not change from year to year, but the bierbook and seshbook need a configuration file listing all the cavers expected to attend expo.
<p>The .tex files are in :expoweb:/documents/. A style file, bierbook-style.sty, is also there and is used by both the bierbook and the seshbook. Read the readme.txt file, which explains which LaTeX packages you need. Build like this:
<p><pre><code>pdflatex.exe -synctex=1 -interaction=nonstopmode -shell-escape bierbook.tex
pdflatex.exe -synctex=1 -interaction=nonstopmode -shell-escape seshbook.tex
</code></pre>
<p>
Due to the way LaTeX works out table column widths, these commands may need to be run several times until a stable output is produced.
<p>
The design of these files is intended to confine all year-to-year changes to the names.txt and dates.txt files, thanks to LaTeX's ability to read an external file and iterate through it line by line, performing the same action for each name.

<p>Packages needed (LaTeX) are:
<pre>
geometry
fancyhdr
tikz
booktabs
longtable
multirow
tocloft
yfonts
anyfontsize
ifthen
</pre>
On Debian/Ubuntu install:
<pre><code>sudo apt install texlive-latex-extra</code></pre>
and run
<pre><code>make</code></pre>

<p>Links to: 
<ul>
<li><a href="/documents/bierbook/Makefile">Makefile</a>
<li><a href="/documents/bierbook/bierbook.tex">bierbook.tex</a>
<li><a href="/documents/bierbook/seshbook.tex">seshbook.tex</a>
<li><a href="/documents/bierbook/bierbook-style.sty">bierbook-style.sty</a>
<li><a href="/documents/bierbook/dates.txt">dates.txt</a>
<li><a href="/documents/bierbook/names.txt">names.txt</a>
<li><a href="/documents/bierbook/readme.txt">readme.txt</a>
</ul>

<h4 id="gps">make_essentials.sh</h4>
<p>In :loser:/gpx/
<p>Makes essentials.gpx - see <a href="../essentials.html">GPS on expo</a>. 
This used to require the gpx2survex program (written in OCaml) but has not done so since 2023. 
If you still want it, the OCaml source is on GitHub at <a href="https://github.com/mshinwell/gps2survex">https://github.com/mshinwell/gps2survex</a>.
<p>Read the <a href="make-essentialsREADME.txt">README</a> file in :loser:/gpx/.
<p>Someone needs to document this and make_svx.sh properly.

<h4 id="surface">gpx2survex and make_svx.sh</h4>
<p>In :loser:/gpx/
<p>Regenerates the surface tracks as survex files from GPS .gpx files.
<p>We used to use the OCaml program gpx2survex, but we now also have a Python equivalent, gpx2survex.py, which is used by make_svx2.sh. This is part of the make_essentials generation process.
<p>gpx2survex simplifies a track so that it is less voluminous.
<p>For the reverse process we don't need a script. For svx-to-gpx we can use <var>survexport</var>. Olly says [2022]: "you shouldn't need to mess around with undocumented scripts - since 2018, you can just do: 
<var>survexport --entrances all.3d essentials.gpx</var>".
<p>But that does rather rely on <var>all.3d</var> being properly generated, which troggle does not currently do reliably and automatically.
<p>survexport is documented by <var>man survexport</var> and <var>survexport --help</var>, and in the Survex manual under <a href="https://survex.com/docs/manual/survexport.htm">survexport</a>.
        
        

<h4 id="ocaml">convert_shx.ml</h4>
<p>Not quite obsolete, but nearly.
<div style="margin-left: 5%">
<p>Unusually, this is in the <var>:loser:</var> repository, in :loser:/fixedpoints/scripts/convert_shx/
<p>We think this turns a shapefile holding the coordinates of the 1623 and 1624 area boundaries into GPX. But we have mislaid the shapefile containing this vital data.
<p>It runs <var>ogr2ogr -f csv -lco GEOMETRY=AS_WKT outputfile inputfile</var> and then extensively post-processes the output.
It is written in OCaml; therefore it must be Mark Shinwell's responsibility.
<p><a href="https://gdal.org/programs/ogr2ogr.html">ogr2ogr</a> is a file conversion utility; the <var>-f csv</var> option sets the output format, so here it appears to convert a shapefile into CSV with WKT geometry. The "shx" part of the name implies a 
<a href="https://docs.fileformat.com/gis/shx/">shapefile index format</a>.
<p>We suspect this was part of the production process for originally making essentials.gpx, but we don't need it as we now have the boundary data in other formats.
  </div>

<h4>svx2qm.py, tablize-qms.pl, find-dead-qms.py, qmreader.pl</h4>
<p>See the entire page devoted to the various 
<a href="scriptsqms.html">QM scripts</a>.



<h4 id="dplong">create_dplong_table.py, <span id="cavestats">cavestats</span>
 and smklengths</h4>
 
<p>"cavestats" is compiled by noinfo/cavestats.build source code and is used by 
"create_dplong_table.py".
<h4 id="smklengths">smklengths</h4>

<p><em>:loser:/docs/smklengths</em> is a brief bash script that runs cavern on all the top-level cave svx files and extracts the total lengths:
<pre>
cave length depth
32:  1973m   161m
40:  7154m   262m
41:  9925m   387m
78:  7847m   328m
87:   520m   289m
88:  1849m   201m
115:  6407m   741m
142:   645m    53m
143:  3382m   309m
144:  3263m   366m
158:  3061m   345m
216:   105m    27m
83:   493m    62m
107:  3788m   254m
136:  3461m   438m
161: 26284m   526m
204: 18593m   622m
258: 20736m   912m
264: 18265m   591m
290:  5968m   456m
291:   868m   137m
359:  3442m   376m
Sat Dec 2 21:00:40 GMT 2023</pre>
<pre><code>#!/bin/sh

echo "cave length depth"
for cave in 32 40 41 78 87 88 115 142 143 144 158 216 83 107 136 161 204 258 264 290 
do 
  echo -n "$cave:"
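  # keep cavern's "(...m adjusted)" total length and "Vertical range = ...m"
  # lines, then reduce them to bare numbers and format them with gawk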
  cavern -o /tmp ../caves-1623/${cave}/${cave}.svx | grep -o "(.*m adjusted)\|Vertical range = [.[:digit:]]*m " | grep -o [.[:digit:]]*m | gawk '{ gsub(/m/,""); printf "%6.0fm", $1 }'
  echo
done
for cave in  359
do 
  echo -n "$cave:"
  cavern -o /tmp ../caves-1626/${cave}/${cave}.svx | grep -o "(.*m adjusted)\|Vertical range = [.[:digit:]]*m " | grep -o [.[:digit:]]*m | gawk '{ gsub(/m/,""); printf "%6.0fm", $1 }'
  echo
done
echo `date`</code></pre>

<h4 id="glossary">make-glossary.pl</h4>
<p>In :expoweb:/1623/204/ and /1623/161/. It reads a cave-specific glossary.csv and produces the HTML index files for caves 161 and 204:
<ul>
<li><a href="/1623/204/atoz.html">/1623/204/atoz.html</a>
<li><a href="/1623/161/a-z.htm">/1623/161/a-z.htm</a>
</ul>
<p>which are indexes to passage names and locations in the very extensive cave descriptions for Kaninchenhöhle and Steinbrückenhöhle. We may need this again for Tunnocks/Balkonhöhle.
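<p>The underlying idea is simple; here is a hypothetical sketch, assuming (purely for illustration) that glossary.csv holds "passage-name,link" pairs, since the real column layout is defined by make-glossary.pl itself:
<pre><code># Sketch only: build an A-Z HTML index from glossary.csv, assumed
# here to hold "passage-name,link" pairs.
import csv
from itertools import groupby

with open("glossary.csv", newline="") as f:
    entries = sorted(csv.reader(f), key=lambda row: row[0].lower())

for letter, group in groupby(entries, key=lambda row: row[0][:1].upper()):
    print(f"&lt;h3&gt;{letter}&lt;/h3&gt;\n&lt;ul&gt;")
    for name, link in group:
        print(f'&lt;li&gt;&lt;a href="{link}"&gt;{name}&lt;/a&gt;')
    print("&lt;/ul&gt;")
</code></pre>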

<h4 id="alljs">make-alljs.py</h4>
<p>Writes out legs and entrances in JSON format. In :loser:/fixedpts/ (along with <em>make-fb-map.pl</em>, which does Austrian coordinate transformations). 
Also in the :loser:/fixedpts/scripts/convert_shx/ folder is the 135-line script convert_shx.ml (see <a href="#ocaml">above</a>), written in 
<a href="https://en.wikipedia.org/wiki/OCaml">OCaml</a>, which constructs input to the 
<a href="https://gdal.org/programs/ogr2ogr.html">ogr2ogr</a> GIS feature-file-format conversion utility.
<p>The documented workflow today does not appear to need this, but that might be a documentation fault. It might do a key job. <span style="color:red">[to be investigated]</span>


<h3 id="inscripts">Old and possibly obsolete scripts</a></h3>

<h4 id="summ">summarizecave.sh</h4>
<p>This runs "cavern" (a commandline tool installed as part of survex) to produce a text (or HTML) 
    report of the key statistics from the master svx file for a cave 
    (the one that includes all the svx files for the individual passages).
    It is unclear who uses this or for what. It may be the script that generates the input data used by <a href="#tabular">caves-tabular.html</a>.
    
<h4 id="tabular">caves-tabular.html</h4>
<p>This webpage, <a href="../../scripts/caves-tabular.html">caves-tabular.html</a>, uses a page-specific JavaScript file, TableSort.js, which allows the user to re-sort the table of all the cave data by any of the columns by clicking on it [by Radost]. The exact source of the data in the table is undocumented, but it presumably came from a cavern .3d file export at an unknown date. This may be the output generated by <a href="#summ">summarizecave.sh</a>.

<h4 id="prosp">make-prospectingguide-new.py and prospecting_guide_short.py</h4>
<p>In :expoweb:/noinfo/prospecting_guide_scripts/
<p>These are now obsolete, replaced by the troggle code (
  <var>troggle/core/views/prospect.py</var>) that generates 
<a href="http://expo.survex.com/prospecting_guide">prospecting_guide</a> on the fly (taking a couple of minutes each time).
<br>
<span style="color:red">[Disabled: bad links, incompatible image package use, and very, very out of date.]</span>
<p>The relevant lines in troggle's Django URL configuration:
<pre><code>#   Prospecting Guide document
re_path(r'^prospecting_guide/$', prospecting),</code></pre>
  
<h4 id="loser1624">Loser-1624 scripts</h4>
<p>
   In /scripts/noinfo/scripts/loser-caves1624-raw-data/ there are convert.py and split.sh, which operate on 
   Uebersicht_2011.svx, doing conversions on a dataset generated from CaveRenderer. The documented workflow today does not appear to need this, but that might be a documentation fault. It might do a key job. 
  <span style="color:red">[to be investigated]</span>
</p>

<h4 id="svxtrace">svxtrace.py</h4>
<p>Obsolete. This traced all the svx file dependencies via the *include statements, 
and produced a file used by the <a href="#makefile">Makefile</a> above. 
The troggle <var>parsers/survex.py</var> code now (since 2020) produces an indented list of the current *include tree in a file in the /troggle/ folder whenever the survex files are imported by <var>databaseReset.py</var>.
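<p>The idea is simple enough to sketch (this is not the troggle code; quoting, case and path edge-cases are glossed over):
<pre><code># Sketch only: print an indented *include tree for a survex dataset.
import re
from pathlib import Path

INCLUDE = re.compile(r'^\s*\*include\s+"?([^";]+)"?', re.IGNORECASE)

def trace(svx, depth=0):
    svx = Path(svx)
    if svx.suffix != ".svx":           # *include may omit the extension
        svx = svx.with_suffix(".svx")
    print("  " * depth + svx.as_posix())
    if not svx.exists():
        print("  " * (depth + 1) + "MISSING")
        return
    for line in svx.read_text(errors="replace").splitlines():
        m = INCLUDE.match(line)
        if m:
            trace(svx.parent / m.group(1).strip(), depth + 1)

trace("all.svx")
</code></pre>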

<h4 id="wallets">Wallets</h4>
<p>Obsolete. Functions in <var>wallets.py</var> were integrated into troggle in July/August 2022.

<h4 id="logbk">logbk.pl</h4>
<p>Obsolete.
<p>This function is now done by the troggle input parsers.

<h4 id="indxl4">make-indxal4.pl</h4>
<p>Obsolete.
<p>See the <a href="../website-history.html">website history</a> document, which refers to CAVETAB2.CSV and make-indxal4.pl during the 
"script and spreadsheet" phase of the system development, from the mid-1990s to the mid-2000s.
<hr />

Return to: <a href="scriptsother.html">Other scripts</a><br />
Return to: <a href="trogintro.html">Troggle intro</a><br />
Troggle index: 
<a href="trogindex.html">Index of all troggle documents</a><br />
<hr /></body>
</html>