diff --git a/handbook/computing/todo-data.html b/handbook/computing/todo-data.html
index 538c20bb6..ea3571bdf 100644
--- a/handbook/computing/todo-data.html
+++ b/handbook/computing/todo-data.html
@@ -372,8 +372,142 @@ A short 2m climb through a boulder choke leads to break down chamber directly be
 ----Little boy bolt climbs----
+Jan 2015
+Arge dataset merge:
+32/links.svx says
+ "*export 27_5 ; neu2013.0" in ARGE dataset
+ "*export 27_5 ; forever.5" in ours.
+are both right?
+There is also an "Anschluss Schnellzug" exported in our dataset, but not theirs.
+32/forever has p1-p22 in it, which is confusing given our 'p' convention for entrances.
+
+outstanding issues:
+named instead of numbered caves:
+loutotihoele, blaubeer, gassischacht etc
+*fix in each cave (but commented)
+File headers outside surveys
+All single-line exports
+
+Named caves should go to
+
+Dec 2013
+p87 moved 46m between the ARGE git dataset and the CUCC dataset. Check!
+flusstunnel was missing from the CUCC dataset. Added.
+
+lengths:
+
+ 944 99680m CUCC: 59666m Arge: 40027m All fixed (50m difference in 40)
+ 936 98786m CUCC: 59666m Arge: 39120m 142 missing, and Data missing in 40
+ 880 97915m CUCC: 59175m Arge: 38741m
+
+Arge in git repo 2012 39017m
+
+Comparing the ARGE per-cave length spreadsheet shows that 40 and 41 are
+500m and 600m shorter in the CUCC dataset than in the ARGE one. 41 is due to 142
+being separated out in our dataset. 40 is due to more missing
+flusstunnel data.
+alice-umgehang (bypass) file from 1998 exists in the RobertW dataset,
+unconnected.
+
+
+March/April 2013 stuff
+BS17 data added.
+
+There are two 233s. How shall we fix this? 'blaubeer' has no entrance
+fix so is not connected.
+
+144 merged. Should me2 be better called meander2.svx?
+
+88 merged. sophy survey moved out of lerche1.
+
+Merged all Olly's date additions back into the merged set.
+Merged Olaf's GPS recalcs into the merged set.
+Merged Olly's changes in the plateau area (2004_03 GPS point replaced with
+surface survey). Other surface survey connection to p107 fixed.
+
+In 143/canyon.svx this line was added:
+; anpassung zu vermessung
+3_6_7 6_7 0 0 0
+to the 2010 survey. Was this in fact done in 2011 or 2012, when the
+; Messteam: Andreas Scheurer, Schnitzel, Lothar Midden
+; Zeichner: Lothar Midden
+*date 2011.08.07
+survey was done?
+("anpassung zu vermessung" = "adjustment to the survey"; "Messteam" = survey team; "Zeichner" = drawn by.)
+
+If so it should have the relevant metadata, or in fact just be put down at
+the bottom with a note about where it refers to.
+You've commented it out in the latest dataset:
+"; wozu? auskommentiert 2011-11-19 (thomas)"
+(i.e. "what for? commented out 2011-11-19 (thomas)")
+
+Where did 143/krone.svx go? Just superseded?
+
+
+41 merged:
+germanrt split out of e41 survey
+
+
+115: b9 duplicate survey - keep or remove?
+ CUCC surveys all moved into cucc subdir. Old SU conversions replaced
+ with newer ones. Akte surveys renamed. Stream split in surveys.
+
+Juttahoehle: 'jutta.svx'. This is 1984 data from Franz Lindenmayr. Has
+been under '40' since 2000. Now moved to the Juttahoehle dir. Is it really
+a separate cave?
+
+We don't have entrance locations for: E08, E16, E18, Nachbarschacht
+(in 233 dir), gruenstein. Does data exist?
+
+------------
+
+-- 233. Robert Seebacher's kataster spreadsheet has 233 = Betthupferle, and the length and depth match the svx file betthupferle.svx. I have thus renamed betthupferle.svx to 233/233.svx and copied the ent coords out of RS's file. But Blaubeerschacht also claims to be no. 233. I have put blaubeerschacht in but not linked it, as we don't have an entrance fix.
+
+-- Which points of Griesskogelschacht are entrances?
+
+-- File "neu.svx". What is the deal with this? Does the cave have a name or a number?
+   Example data file for 'new survey'?
+
+-- 41/entlink.svx -- what does this do?
+
+-- The 1987 extension in 87. This doesn't match anything in the ARGE data,
+whereas the original 1980 survey data looks like the ARGE data rotated
+somewhat. I have left this unlinked. Perhaps best to ask Robert Winkler.
+
+-- 87 location fix and the 115 connection. The entrance fix for 87 in ARGE's file is over 50m different from the entrance fix from our surface survey. Bizarrely, CUCC's ent fix gives a smaller misclosure than ARGE's when you tie into 115.
+
+-- 113. ARGE's data and CUCC's data cover different bits of the cave and don't really match very well.
+
+-- 145.
I have combined CUCC's data and ARGE's for the upper level resurvey. Any comments?
+
+-- 41-144 connection. Is the line
+*equate 144.144verb.58 41.entlueft.9_10
+correct? It appears in some of the ARGE index files and not others!
+
+113 cucc replaced by ARGE
+
+
+2012:
+Why is the e142 survey inside 41? 142 is a separate cave. We have an
+entrance location for it. Is it in fact used in any of the surveys?
+
+152 (bananahoehle) is connected to 113 (sonnetrahlhoehle). But p152=Q3
+on stogersteig. No GPS or fixed point in the ents file. Why not?
+
+136 cannot be processed on its own due to 136d not being connected.
+
+Need to get better info on errors with/without surface and GPS and old
+and new. And decide what to do about caves that can't be processed
+alone. Put fixes into all cave files? How do we do updates then?
+
+
diff --git a/handbook/computing/todo-styles.css b/handbook/computing/todo-styles.css
index 06215e6ea..3db90766c 100644
--- a/handbook/computing/todo-styles.css
+++ b/handbook/computing/todo-styles.css
@@ -99,7 +99,11 @@ input[type=checkbox]:checked ~ dl dd {
 b { font-size: 10pt; }
- dt, p {
+ p {
+ font-size: 10pt;
+
+ }
+ dt {
 font: 1.0em Calibri, sanserif;
 font-weight: bold;
 }
diff --git a/handbook/computing/todo.html b/handbook/computing/todo.html
index 6c1e019c3..651a8359e 100644
--- a/handbook/computing/todo.html
+++ b/handbook/computing/todo.html
@@ -38,11 +38,6 @@ If a heading is in italics, then there are hidden items.
updatephotos (in the :loser: repo) uses the BINS package to generate photo albums. BINS uses the EXIF data (date, location if available) in the uploaded image files to produce a page showing the information available about the picture. All image meta-data are stored in XML files.
BINS is no longer maintained by its author so expo has taken on the responsibility for keeping it running. (Wookey is in the process of packaging it as a proper Debian package.) -
Philip Withnall's QM extractor (in :loser:/qms/). It generates a list of all the QMs in all the svx files in either text or CSV format. This will produce a CSV listing of all the QMs: -
cd loser
-find -name '*.svx' | xargs ./svx2qm.py --format csv
-
-
-Takes a CSV file name as the program's argument (e.g. qm.csv as generated by svx2qm.py) and generates an HTML page listing all the QMs. -
In :expoweb:/1623/204/ and several other subdirectories of /1623/ -
This finds references to completed qms in the cave descriptions. -
Also -qmreader.pl which reads and parses qm.html (why?) - -
In :expoweb:/1623/204/ - Nial Peters (2011) - +
Traces all the svx file dependencies via the *include statements. In :expoweb:/ . +
Traces all the svx file dependencies via the *include statements. In :expoweb:/ . The documented workflow today does not appear to need this, but that might be a documentation fault. It might do a key job. [to be investigated]
Survex files contain a reference to the wallet which contains the original survey notes for that surveyed passage. These sometimes have errors and also get out of date as caves get renamed when they get a kataster number issued. Each online survey wallet also has a reference to the survex file(s) which have been typed up from that data.
The references are validated by the scripts check-svx.sh, check-refs.sh and check-refs.awk in the :loser: repository, which produce svxvalid.html listing the mismatches between the svx files and the survey scan wallets. +
This is a horrible proof-of-concept hack that needs replacing with a proper Python script instead of an assemblage of awk, bash and sed.
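A minimal sketch of what such a Python replacement could look like. The ";ref:" comment format and the (year, wallet-number) naming used here are assumptions for illustration; the real conventions are whatever check-refs.awk currently matches:

```python
import re

# Assumed wallet-reference convention, e.g. ";ref: 2015#27" -- the real
# format is defined by check-refs.awk; treat this regex as a sketch.
REF_RE = re.compile(r";\s*ref\.?:?\s*(\d{4})\s*#\s*(\d+)", re.IGNORECASE)

def wallet_refs(svx_text):
    """Return the set of (year, wallet-number) pairs referenced in one svx file."""
    return {(m.group(1), int(m.group(2))) for m in REF_RE.finditer(svx_text)}

def mismatches(refs_by_file, wallet_index):
    """refs_by_file: svx path -> set of refs; wallet_index: set of wallets
    that actually exist. Returns only the files with dangling references."""
    return {path: refs - wallet_index
            for path, refs in refs_by_file.items()
            if refs - wallet_index}
```

The two-way check (wallets pointing back at svx files) would be the mirror image of mismatches() run over the wallet metadata.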
Tunnel files contain references to the wallet which contained the original survey notes for that surveyed and drawn passage.
The results of validation checks are in xmlvalid.html and generated by script check-xml.sh in the :drawings: repository.
(Therion files would too, if people inserted "#Ref" comments. In which case the script would need improving.) +
Currently the intermediate data it works from has to be hand-generated so a proper parsing script needs to be written. + +
This webpage caves-tabular.html uses a page-specific JavaScript file TableSort.js which allows the user to re-sort the table of all the cave data by any of the columns by clicking on it [by Radost]. The exact source (i.e. the report or script) of the data in the table is unknown. -
In :expoweb:/noinfo/ -
Makes essentials.gpx - see GPS on expo. -
In :expoweb:/noinfo/ +
"cavestats" is compiled by noinfo/cavestats.build source code and is used by +"create_dplong_table.py". +
:loser:/docs/smklengths is a brief bash script that runs cavern on all the top-level cave svx files and extracts the total lengths. + +
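If smklengths is ever rewritten, the same extraction is straightforward in Python. A sketch, assuming the "Total length of survey legs" wording printed by recent cavern versions; the invocation in cave_length() (flags, output dir) is a guess:

```python
import re
import subprocess

# Wording assumed from recent survex/cavern reports; older versions may differ.
LEN_RE = re.compile(r"Total length of survey legs\s*=\s*([\d.]+)\s*m")

def total_length(report):
    """Return the total leg length in metres from a cavern text report, or None."""
    m = LEN_RE.search(report)
    return float(m.group(1)) if m else None

def cave_length(svx_path):
    """Run cavern on one top-level svx file and parse its report (hypothetical flags)."""
    proc = subprocess.run(["cavern", "-o", "/tmp", svx_path],
                          capture_output=True, text=True)
    return total_length(proc.stdout)
```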
[to be documented] + +
In :expoweb:/noinfo/ +
In :expoweb:/noinfo/ - a key part (?) of generating the prospecting guides ? See below.
[to be documented] +
In :expoweb:/noinfo/prospecting_guide_scripts/ -
[to be documented] +
[to be documented] - We need a whole webpage on how to construct the various prospecting guides. +
How these are used once produced is documented in the handbook
These are LaTeX files and the instructions for how to process them are in each .tex file. The protractors do not change but the others need a configuration file for all the cavers expected to attend expo. @@ -94,15 +93,14 @@ Due to the way LaTeX works out table column widths, these commands may need to The design of these files is intended to confine all changes year to year to the names.txt and dates.txt files, thanks to LaTeX's capability to read an external file and iterate through it line by line, performing the same action for each name. -
[to be documented] -
[to be documented] +
This runs "cavern" (a command-line tool installed as part of survex) to produce a text (or HTML) report of the key statistics from the master svx file for a cave (the one that includes all the svx files for the individual passages). + +
In :expoweb:/noinfo/ +
Makes essentials.gpx - see GPS on expo. -
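essentials.gpx is plain GPX, so a tiny sketch shows the shape of the output. The waypoint name and coordinates below are illustrative only, not taken from the real script, which adds far more metadata:

```python
from xml.sax.saxutils import escape

def waypoint_gpx(waypoints):
    """waypoints: iterable of (name, lat, lon) in WGS84 decimal degrees.
    Returns a minimal GPX 1.1 document of the general kind essentials.gpx holds."""
    wpts = "\n".join(
        f'  <wpt lat="{lat:.6f}" lon="{lon:.6f}"><name>{escape(name)}</name></wpt>'
        for name, lat, lon in waypoints)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<gpx version="1.1" creator="sketch" '
            'xmlns="http://www.topografix.com/GPX/1/1">\n'
            + wpts + '\n</gpx>\n')
```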
Writes out legs and entrances in json format. In :loser:/fixedpts/
In :expoweb:/1623/204/ and /1623/161/. It reads a cave-specific glossary.csv and produces the HTML files for caves 161 and 204: @@ -110,25 +108,34 @@ The design of these files is intended to confine all changes year to year to the
which are indexes to passage names and locations in the very extensive vcave descriptions for Kaninchenhohle and Steinbruckenhohle. +
which are indexes to passage names and locations in the very extensive cave descriptions for Kaninchenhohle and Steinbruckenhohle. We may need this again for Tunnocks/Balkonhohle. + +
Writes out legs and entrances in json format. In :loser:/fixedpts/ (along with make-fb-map.pl which does Austrian coordinate transformations). +Also in the :loser:/fixedpts/scripts/convert_shx/ folder is a short 135-line script, convert_shx.ml, written in +OCaml, which constructs input to the +ogr2ogr GIS feature file format conversion utility. +
The documented workflow today does not appear to need this, but that might be a documentation fault. It might do a key job. [to be investigated] -
Obsolete. -
See history documents which refer to CAVESTATS.CSV -
[to be documented - for historic interest only] -
Obsolete. -
This function is now done by the troggle input parsers. -
[to be documented - for historic interest only] +
In /scripts/noinfo/scripts/loser-caves1624-raw-data/ there are convert.py and split.sh which operate on - Uebersicht_2011.svx doing coinversions on a dataset generated from dataset generated from CaveRenderer. - + Uebersicht_2011.svx doing conversions on a dataset generated from CaveRenderer. The documented workflow today does not appear to need this, but that might be a documentation fault. It might do a key job. [to be investigated]
+Obsolete. +
This function is now done by the troggle input parsers. +
[for historic interest only] + +
Obsolete. +
See the history document which refers to CAVETAB2.CSV and make-indxal4.pl during the +"script and spreadsheet" phase of the system development from the mid 1990s to the mid 2000s: +website history
text here
+Note that " - " delimits the "cave" (which could also be "basecamp" or "plateau") + from the rest of the title. The "cave" is parsed by the troggle logbook importer.
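A sketch of that split in Python. The function name is made up for illustration; the real parsing lives in the troggle logbook importer:

```python
def split_title(title):
    """Split a logbook entry title on the first " - " into (cave, rest),
    following the delimiter convention described above. The "cave" part
    may be a cave number or a word like "basecamp" or "plateau"."""
    cave, sep, rest = title.partition(" - ")
    if not sep:
        return None, title  # no delimiter, so no cave part
    return cave.strip(), rest.strip()
```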
text here