CUCC Expedition Handbook

Troggle - what you may need to know

Troggle runs much of the cave survey data management, presents the data on the website, and manages the Expo Handbook.

You may have arrived here by accident when where you really need to be is the website history.

This page needs to be restructured and rewritten so that it describes these things:


Everything here should be updated or replaced - this page just records a lot of unfinished ideas. Most people will not want to read it at all; it is for speleo-software-archaeologists only.

This page is mostly an index to other records of what troggle is and what plans have been made - but never implemented - to improve it.

Troggle - what it is

Troggle is the software collection (not really a "package") based on Django originally intended to manage all expo data in a logical and accessible way and publish it on the web. It was first used on the 2009 expo - see 2009 logbook.

Only a small part of troggle's original plan was fully implemented and deployed. Many of the things it was intended to replace are still in operation: a motley collection of programs written by many different people in several languages (mostly perl and python; we won't talk about the person who likes to use OCaml).

Examples of troggle-generated pages from data:

Today troggle is used for only three things:
  1. Reformatting all the visible webpages so that they have a coherent style and a contents list at the top left-hand corner. This is particularly true of the handbook you are reading now and of the historic records of past expeditions.
  2. Publishing the "guidebook descriptions" of caves. The user who is creating a new guidebook description can do this by filling in some online forms. (And managing all the cave survey data needed to produce this.)
  3. Providing a secondary way of editing individual pages of the handbook and the historic records pages for very quick and urgent changes. This is the "Edit this page" capability; see elsewhere in this handbook for how to use it and how to tidy up afterwards.

The first thing to do

The first thing to do is to read: "Troggle: a novel system for cave exploration information management", by Aaron Curtis, CUCC.

Two things to remember are

Troggle Login

Yes, you can log in to the troggle control panel: expo.survex.com/troggle.

It has this menu of commands:

All Survex | Scans | Tunneldata | 107 | 161 | 204 | 258 | 264 | Expo2016 | Expo2017 | Expo2018 | Django admin 

Future Developments: Preamble

Assumptions (points to necessarily agree upon)

  1. Let's NOT try to design a generic catalogue for storing all kinds of data about caves of the whole world, intended for every kind of user (sport, exploration, science). Let's just settle for a generic framework. Let geeks in individual countries or individual communities write their own tools operating within this framework.
  2. Let's try to make it usable by the layman, but still powerful for the geeks.
  3. Let's rely on already existing, popular technologies. Let's keep it open source and multiplatform. Let's try not to reinvent the wheel.
  4. Let's not assume everyone has an Internet connection while working with their data.
  5. Let's version-control as much as possible.
  6. Let's support i18n - let's use UTF-8 everywhere and cater for data in many languages (entrance names, cave descriptions, location descriptions, etc.)

Two page preliminary design document for 'caca' (Cave Catalogue) rev.2 2013-07-26 by Wookey (copied from http://wookware.org/software/cavearchive/caca_arch2.pdf)

stroggle

At one time Martin Green attempted to reimplement troggle as "stroggle" using flask instead of Django at git@gitorious.org:stroggle/stroggle.git (but gitorious has been deleted).

A copy of this project is archived by Wookey on wookware.org/software/cavearchive/stroggle/.

There is also a copy of stroggle on the backed-up, read-only copy of gitorious on "gitorious valhalla":
 stroggle code
 stroggle-gitorious-wiki

CUCC wiki on troggle

CUCC still has an archived list of things that were at one time live tasks, reproduced here from camcaving.uk/Documents/Expo/Legacy/Misc/...

Troggle is a system under development for keeping track of all expo data in a logical and accessible way, and displaying it on the web. At the moment it is [no longer] under development at http://troggle.cavingexpedition.com/ - but note that this is Aaron's version of troggle, forked from the version of troggle we use. Aaron uses this for the Erebus expedition.

Note that the information there is incomplete and editing is not yet enabled.

Each feature below is compared across three columns: the old expo website, what troggle planned, and troggle's progress so far.

Logbook
  Old expo website: Yes; manually formatted each year.
  Troggle planned: Yes; wiki-style.
  Progress so far: Start at the front page, troggle.cavingexpedition.com/ [1], and click through to the logbook for each year. The logbooks have been parsed back to 1997.

Cave index and stats generated from survex file
  Old expo website: Yes.
  Troggle planned: Yes.
  Progress so far: Done; see troggle.cavingexpedition.com/survey/caves/264 [2].

Survey workflow helper
  Old expo website: Yes; minimal. surveys.csv produced an html table showing whose surveys were not marked "finished".
  Troggle planned: Yes. Makes a table of surveys per expo which shows exactly what needs doing. Displays scans. Integrated with survex, scanner software, and tunnel.
  Progress so far: See it at troggle.cavingexpedition.com/survey - be sure to try a recent year when we should have data. Survex, scanner, and tunnel integration still needs doing.

QM lists generated automatically
  Old expo website: Depends on the cave; each cave had a different system.
  Troggle planned: Yes; unified system.
  Progress so far: Done, but only 204 and 234 QMs have been imported from the old system so far. No view yet.

Automatic calendar for each year of who will be on expo when
  Old expo website: No; manually produced some years.
  Troggle planned: Yes.
  Progress so far: Done; see troggle.cavingexpedition.com/calendar/2007 (replace 2007 with the year in question).

Web browser used to enter data
  Old expo website: No.
  Troggle planned: Yes.
  Progress so far: Everything can be edited through the admin interface at troggle.cavingexpedition.com/admin . Ask Aaron, Martin, or Julian for the password if you want to have a look / play around with the admin site. Any changes you make will be overwritten. Eventually, data entry will probably be done using custom forms.

Cave and passage descriptions
  Old expo website: Yes; manually html coded.
  Troggle planned: Yes; wiki-style.
  Progress so far: Not done yet.

Expo handbook
  Old expo website: Yes; manually html coded.
  Troggle planned: Maybe; needs to be discussed further.
  Progress so far: Not done yet.

Table of who was on which expo
  Old expo website: Yes.
  Troggle planned: Yes.
  Progress so far: Data has been parsed; this view hasn't been written yet.

Signup form; system for keeping contact, medical and next-of-kin info
  Old expo website: No.
  Troggle planned: Yes.
  Progress so far: Signup form should be ready by 20 Jan.

Automated photo upload and gallery
  Old expo website: No; some manual photo galleries put together with lots of effort.
  Troggle planned: Yes.
  Progress so far: Photo upload done; gallery needs writing.

Search
  Old expo website: No.
  Troggle planned: Yes.

List of cave database software

from wookware.org/software/cavearchive/databasesoftwarelist
(ckan is something like this - could we use it? esri online likewise?)

 CUCC (troggle): http://cucc.survex.com/ - this site.
 Virginia caves database (Access + ArcGIS) (Futrell).
 Per-country databases:
  Austria (Spelix): www.spelix.at/
  UK cave registry.
  Mendip cave registry (Access): www.mcra.org.uk/wiki/doku.php
 White Mountains database (GPX + Google Earth).
 Matienzo (?).
 Fisher Ridge (Stephen Cladiux).
 Hong Meigui (Erin) - ask Erin later.
 Wikicaves: www.grottocenter.org/ - multilingual, slippy map, wiki data entry; includes coordinate-free caves. Focus is on sport-caving type info (access, basic gear list, overall description, bibliography); e.g. Australians only publish coordinates to the nearest 10 km.
 Turkey: www.tayproject.org
 UIS list: www.uisic.uis-speleo.org/contacts.html (change this link: no-one looks for a list of databases under 'contacts').
 Graziano Ferrari's northern Italy list (Access + Google Earth).

Wookey's notes on things to do

from wookware.org/software/cavearchive/goliczmail
Generally I'd like to find some people (geeks) that share these technical
ideas: (1) store things in a file system, (2) use XML, (3) do not aim too high
(do not try designing a general system for handling all caving-related data
for the whole world).

If I could find some people that agree with this, then we could try to reach a
compromise on:
(1) how do we store our data in a file system,
(2) how do we use this XML (let's do a common spec, but keep it simple)
(3) how do we aim not too high and not end up dead like CaveXML :)

After we do that, everyone goes away to do their own projects and write their
own code. Or maybe we have some degree of co-operation in actually writing the
code. Normal life. But the idea is that all geeks working on "cave inventory"
and systems making extensive use of cave inventories try to adhere to this
framework as much as possible. So that we can then exchange our tools.

I think things like "which revision system do we use" or "do we use web or    
Python" are really secondary. Everyone has their own views, habits,
backgrounds.

My idea is to work on this in a small group (no more than a few persons) - to
get things going fast, even if they are not perfect from the beginning. If it
works, we try to convince others to use it and maybe push it through UIS. 
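The "keep the XML simple" idea above can be illustrated with a minimal per-cave file read using Python's standard library. Every element and attribute name here is hypothetical (no such spec was ever agreed), and the coordinates are invented for the example:

```python
# Minimal sketch of a per-cave XML record stored in the file system.
# All element/attribute names are hypothetical - no spec was agreed -
# and the coordinates are invented for the example.
import xml.etree.ElementTree as ET

RECORD = """<?xml version="1.0" encoding="UTF-8"?>
<cave id="1623-264">
  <name>Balkonhöhle</name>
  <entrance lat="47.69" lon="13.82"/>
  <description lang="en">Entrance on the Balkon ledge.</description>
</cave>"""

cave = ET.fromstring(RECORD)
print(cave.get("id"))          # the cave identifier attribute
print(cave.find("name").text)  # UTF-8 name, per assumption 6 above
```

The point is only that a flat directory of small files like this is trivially version-controlled (assumption 5) and readable offline (assumption 4), whatever tools the individual communities then build on top.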

Wookey's other notes on things to do

from wookware.org/software/cavearchive/troggle2design
forms
-----
1) members read/write folk.csv and year/members
2) cave read/write cave_data, entrance_data, surveys/pics 
3) trips -> logbook , QMs, or surveys (more than one survey or location possible)
4) logbook reads/write year/logbook
5) survey 
6) prospecting app

forms show who is logged in.
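Item 1 above (the members form reading folk.csv) could be sketched as follows. The column layout assumed here - a name column followed by one column per expo year - is purely for illustration, not the file's real schema:

```python
# Hedged sketch of a "members" form backend reading folk.csv.
# The column layout (name, then one flag column per year) is an
# assumption for illustration; the real file may differ.
import csv
import io

FOLK_CSV = """name,2016,2017,2018
Fred Smith,1,1,
Anne Jones,,1,1
"""

def members_for_year(csv_text, year):
    """Return the names of everyone marked present in the given year."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["name"] for row in reader if row.get(year)]

print(members_for_year(FOLK_CSV, "2017"))  # both were on expo in 2017
```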

databases
---------
trips, read from 
 logbook entry
 folder year#index
 .svx files
 description
 QMs

members (cache from form)

caves
 caves_data
 entrance_data

storage:
 expoweb
 data/
 cave_entrances
 caves
 descriptions

 loser
 foo.svx

Yet more of Wookey's notes on things to do

from wookware.org/software/cavearchive/expoweb-design
frontpage
---------
quick to load:
Links:
 Caves number, name, location
 Years 
 Handbook
 Data Entry
 Main Index

Slippy map:
 Indexes to cave page

Cave page:
 Access, description, photos, QMs, Survey
 
Years:
 Logbooks/surveynotes/survexdata/people matrix
 Documents

Data Entry:
 Logbook entry
 Survey data
 Survey Notes
 Cave description
 QMs
 Photos
 New cave

Backend datafiles:
 caves/
  cave_entrance
  cave_data
  directory of info

 years/
  year/
   logbook
   pubs/
    reports
   admin/
    lists
     who_and_when
     travel
     jobs

 surveyscans/
  year/
   index
   #num

 handbook/
  (all static info)
Storage:
 non-HTML files, or files > 200K, go in 'files' (PDF, PNG, JPEG, DOC, ODF, SVG)
 convert a small 800x600 version into the website by default (matching structure?)
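The storage rule above (non-html, or over 200K, goes into 'files') can be sketched as a small classifier. The function name and the extension list are assumptions for illustration; only the 200K threshold and the 'files' destination come from the note:

```python
# Sketch of the proposed storage rule: anything that is not HTML,
# or is larger than 200K, lives under 'files/' rather than in the
# website tree itself. Helper name and extension list are hypothetical.
HTML_EXTS = {".html", ".htm"}
SIZE_LIMIT = 200 * 1024  # "> 200K go in 'files'"

def belongs_in_files(filename, size_bytes):
    """True if this item should be stored under files/."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext not in HTML_EXTS or size_bytes > SIZE_LIMIT

print(belongs_in_files("survey.pdf", 50_000))     # non-HTML -> files/
print(belongs_in_files("index.html", 10_000))     # small HTML -> website
print(belongs_in_files("bigpage.html", 300_000))  # large HTML -> files/
```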