Troggle runs much of the cave survey data management, presents the data on the website, and manages the Expo Handbook.
This part of the handbook is intended for people maintaining the troggle software. Day to day cave recording and surveying tasks are documented in the expo "survey handbook"
This troggle manual describes these:
This page is mostly an index to other records of what troggle is and of the plans that have been made - but never implemented - to improve it.
[Note that /survey_scans/ is generated by troggle and is not the same thing as /expofiles/surveyscans/ at all.]
Only a small part of troggle's original plan was fully implemented and deployed. Many of the things it was intended to replace are still operating as a motley collection of scripts written by many different people in several languages (mostly Perl and Python; we won't talk about the person who likes to use OCaml). Today troggle is used for only three things:
The first thing to do is to read: "Troggle: a novel system for cave exploration information management", by Aaron Curtis, CUCC.
Two things to remember are
Yes, you can log in to the troggle control panel: expo.survex.com/troggle.
It has this menu of commands:
All Survex | Scans | Tunneldata | 107 | 161 | 204 | 258 | 264 | Expo2016 | Expo2017 | Expo2018 | Django admin
Assumptions (points to necessarily agree upon)
Two page preliminary design document for 'caca' (Cave Catalogue) rev.2 2013-07-26 by Wookey (copied from http://wookware.org/software/cavearchive/caca_arch2.pdf)
At one time Martin Green attempted to reimplement troggle as "stroggle", using Flask instead of Django, at git@gitorious.org:stroggle/stroggle.git (but gitorious has since been deleted).
A copy of this project is archived by Wookey on wookware.org/software/cavearchive/stroggle/.
There is also a copy of stroggle on the backed-up, read-only copy of gitorious on "gitorious valhalla"
stroggle code
stroggle-gitorious-wiki
but note that this domain has an expired certificate, so https:// complains.
This section is entirely out of date (June 2014) and has been moved here for historical interest.
The way things normally work, Python or Perl scripts turn CSV input into HTML for the data management system. Note that:
The CSV files are actually tab-separated, not comma-separated, despite the extension.
The scripts can be very picky, and editing the CSVs with Microsoft Excel has broken them in the past - not sure if this is still the case.
Overview of the automagical scripts on the expo data management system
[Clearly very out of date, as it assumes the version control is svn, whereas we changed to mercurial years ago.]

Script: /svn/trunk/expoweb/noinfo/make-indxal4.pl
  Input: /svn/trunk/expoweb/noinfo/CAVETAB2.CSV
  Output: many
  Purpose: produces all cave description pages

Script: /svn/trunk/expoweb/scripts/make-folklist.py
  Input: /svn/trunk/expoweb/folk/folk.csv
  Output: http://expo.survex.com/folk/index.htm
  Purpose: table of all expo members

Scripts: /svn/trunk/surveys/tablize-csv.pl, /svn/trunk/surveys/tablizebyname-csv.pl
  Input: /svn/trunk/surveys/Surveys.csv
  Output: http://expo.survex.com/expo/surveys/surveytable.html, http://expo.survex.com/surveys/surtabnam.html
  Purpose: survey status page: "wall of shame" to keep track of who still needs to draw which surveys
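As a rough illustration of what these CSV-to-HTML scripts do (this is a hedged sketch, not the actual make-folklist.py code), a minimal Python version of the HTML-table step might look like:

```python
import html

def rows_to_html_table(rows):
    """Render parsed CSV rows (first row = header) as a bare HTML
    table, in the spirit of make-folklist.py. Illustrative only:
    the real scripts add styling and site-specific markup."""
    header, body = rows[0], rows[1:]
    parts = ["<table>"]
    # Escape every cell so stray < or & in the data cannot break the page.
    parts.append("<tr>" + "".join(f"<th>{html.escape(c)}</th>" for c in header) + "</tr>")
    for row in body:
        parts.append("<tr>" + "".join(f"<td>{html.escape(c)}</td>" for c in row) + "</tr>")
    parts.append("</table>")
    return "\n".join(parts)
```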
Since 2008 we have been keeping detailed records of all data management system updates in the version control system. Before then we manually maintained a list of updates which are now only of historical interest.
A history of the expo website and software was published in Cambridge Underground 1996. A copy of this article Taking Expo Bullshit into the 21st Century is archived here.
This is likely to change with structural change to the site, with style changes which we expect to implement and with the method by which the info is actually stored and served up.
... and it's not written yet, either :-)
CUCC still has an archive list of things that at one time were live tasks: from camcaving.uk/Documents/Expo/Legacy/Misc/... and that page is reproduced in the table below (so don't worry if the URL link goes dark when CUCC reorganise their legacy pages).
Troggle is a system under development for keeping track of all expo data in a logical and accessible way, and displaying it on the web. At the moment, it is [no longer] under development at http://troggle.cavingexpedition.com/ But note that this is Aaron's version of troggle, forked from the version of troggle we use. Aaron uses this for the Erebus expedition.
Note that the information there is incomplete and editing is not yet enabled.
Feature | Old expo website | Troggle: planned | Troggle: progress so far
---|---|---|---
Logbook | Yes; manually formatted each year | Yes; wiki-style | Start at the front page, troggle.cavingexpedition.com/ [1] and click through to the logbook for each year. The logbooks have been parsed back to 1997.
Cave index and stats generated from survex file | Yes | Yes |
Survey workflow helper | Yes; minimal. surveys.csv produced an html table of whose surveys were not marked "finished" | Yes. Makes a table of surveys per expo which shows exactly what needs doing. Displays scans. Integrated with survex, scanner software, and tunnel. | See it at troggle.cavingexpedition.com/survey . Be sure to try a recent year when we should have data. Survex, scanner, and tunnel integration still needs doing.
QM lists generated automatically | Depends on the cave. Each cave had a different system. | Yes; unified system. | Done, but only 204 and 234 QMs have been imported from the old system so far. No view yet.
Automatic calendar for each year of who will be on expo when | No, manually produced some years | Yes | Done; see troggle.cavingexpedition.com/calendar/2007 (replace 2007 with the year in question)
Web browser used to enter data | No | Yes | Everything can be edited through the admin, at troggle.cavingexpedition.com/admin . Ask Aaron, Martin, or Julian for the password if you want to have a look / play around with the admin site. Any changes you make will be overwritten. Eventually, data entry will probably be done using custom forms.
Cave and passage descriptions | Yes, manually html coded. | Yes, wiki-style. | Not done yet.
Expo handbook | Yes, manually html coded. | | Not done yet.
Table of who was on which expo | Yes | Yes | Data has been parsed, but this view hasn't been written yet.
Signup form, system for keeping contact, medical and next of kin info | No | Yes | Signup form should be ready by 20 Jan.
Automated photo upload and gallery | No; some manual photo galleries put together with lots of effort | Yes | Photo upload done, gallery needs writing.
Search | No | Yes |
- ckan is something like this - could we use it?
- esri online
- CUCC (troggle): http://cucc.survex.com/ - this site
- Virginia caves database (Access + ArcGIS) (Futrell)
- each country database: Austria (spelix): www.spelix.at/
- UK cave registry
- Mendip cave registry (Access): www.mcra.org.uk/wiki/doku.php
- White Mountains database (GPX + Google Earth)
- Matienzo (?)
- Fisher Ridge (Stephen Cladiux)
- Hong Meigui (Erin) (ask Erin later)
- Wikicaves: www.grottocenter.org/ - multilingual, slippy map, wiki data entry; includes coordinate-free caves; focus on sport-caving type info (access, basic gear list, overall description, bibliography); e.g. Australians only publish coordinates to the nearest 10km
- Turkey: www.tayproject.org
- www.uisic.uis-speleo.org/contacts.html - change link; no-one looks for a list of databases under 'contacts'
- Graziano Ferrari, northern Italy list (Access + Google Earth)
Generally I'd like to find some people (geeks) that share these technical ideas: (1) store things in a file system, (2) use XML, (3) do not aim too high (do not try designing a general system for handling all caving-related data for the whole world).

If I could find some people that agree with this, then we could try to reach a compromise on: (1) how we store our data in a file system, (2) how we use this XML (let's do a common spec, but keep it simple), (3) how we aim not too high and do not end up dead like CaveXML :)

After we do that, everyone goes away to do their own projects and write their own code. Or maybe we have some degree of co-operation in actually writing the code. Normal life. But the idea is that all geeks working on "cave inventory" and systems making extensive use of cave inventories try to adhere to this framework as much as possible, so that we can then exchange our tools.

I think things like "which revision system do we use" or "do we use web or Python" are really secondary. Everyone has their own views, habits, backgrounds. My idea is to work on this in a small group (no more than a few persons) - to get things going fast, even if they are not perfect from the beginning. If it works, we try to convince others to use it and maybe push it through UIS.
forms
-----
1) members: read/write folk.csv and year/members
2) cave: read/write cave_data, entrance_data, surveys/pics
3) trips -> logbook, QMs, or surveys (more than one survey or location possible)
4) logbook: reads/writes year/logbook
5) survey
6) prospecting app

Forms show who is logged in.

databases
---------
trips: read from logbook entry folder, year#index, .svx files, description, QMs
members (cache from form)
caves: caves_data, entrance_data

storage: expoweb
  data/
    cave_entrances
    caves
    descriptions
  loser
    foo.svx
frontpage
---------
Quick to load.
Links: caves (number, name, location), years, handbook, data entry, main index.
Slippy map: indexes to cave pages.

Cave page: access, description, photos, QMs, survey.
Years: logbooks / survey notes / survex data / people matrix.
Documents.
Data entry: logbook entry, survey data, survey notes, cave description, QMs, photos, new cave.

Backend datafiles:
  caves/
    cave_entrance
    cave_data
    directory of info
  years/
    year/
      logbook
  pubs/
    reports
  admin/
    lists: who_and_when, travel, jobs
  surveyscans/
    year/
      index #num
  handbook/ (all static info)

Storage: non-html files or those > 280K go in 'files' (PDF, PNG, JPEG, DOC, ODF, SVG); convert a small 1024x768 version into the website by default. (matching structure?)
Radost Waszkiewicz (CUCC member 2016-2019) proposed a plan for superseding troggle:
Hey, on the design sesh we've looked a bit at the way data is organised in the loser repo and how to access it via troggle. A proposal arose that all this database shenanigans is essentially unnecessary - we have about 200 caves, about 250 entrances, about 200 people and a couple of dozen expos. We don't need efficient lookups at all. We can write something which will be 'slow' and make only the things we actually care about. Similarly I see little gain from the html-python chimera template pages. These contain mainly nested for loops which could just as well be written in e.g. python. I'd advocate the following solution:
- give up on fixing things in troggle
- have a small number of .py scripts which generate static webpages with the content we want

Why do this:
- we don't have multiple intermediate layers, all of which are difficult to maintain
- anyone can run this on their machine without needing local vm's
- to change something there is one thing you change - the script that generates pages belonging to a particular pattern; currently you need to take care of parsers, view script, url script, and template
- it runs offline (just like that, as pages are just blocks of literal text + some javascript)
- we never end up with no working copy during expo - run the scripts once before the expedition, put the results in /backup and just leave them there till next year; if something goes wrong on the daily updated version we have a very easy fallback
- just one (or very close to it) programming language - change to python3 to parse silly chars
- wayyyy fewer lines of code and a substantially better code-to-comment ratio (if this is us writing it now)
- compatible with all the already existing static pages
- we can host this on a potato hosting as we're not running anything fancy anymore
- gets rid of the horrifying url rewrites that correspond to no files that exist

How much work would this actually take:
- most likely one script per page type; the page types that are obviously needed so far are the cave index, individual cave descriptions, and logbooks
- more than half of the parsers are already rewritten by me and can be changed to do this, instead of modifying the SQL database, with minimal effort
- the html/css side of things already exists, but it would be nice to go for a more modern look there as well

Things this solution doesn't solve:
- no webpage-based 'wizard' for creating caves and such (did people use it much? do we care?) -> maybe 'send an email to this address' is the ultimate solution to this
- uploading photos is still difficult (in the sense that they don't get minified automatically)

Rad