Troggle runs much of the cave survey data management, presents the data on the website and manages the Expo Handbook.
This part of the handbook is intended for people
maintaining the troggle software.
Day-to-day cave recording and surveying tasks are documented
in the expo "survey handbook".
Radost Waszkiewicz (CUCC member 2016-2019) proposed a plan for superseding troggle:
Hey,
on the design sesh we've looked a bit at the way data is organised in the
loser repo and how to access it via troggle.
A proposal arose that all this database shenanigans is essentially
unnecessary - we have about 200 caves, about 250 entrances, about 200
people and a couple of dozen expos. We don't need efficient lookups at all. We
can write something which will be 'slow' and do only the things we actually
care about.
[What Rad has misunderstood here is that the database is not for speed. We use it mostly so that we can maintain
'referential integrity', i.e. have all the different bits of information match up correctly (a concrete sketch follows this note).
While the total size of the data is small, the interrelationships and the complexity are considerable.
From the justification for troggle:
"A less obvious but more deeply rooted problem was the lack of relational information. One table named folk.csv stored
names of all expedition members, the years in which they were present, and a link to a biography page. This was great
for displaying a table of members by expedition year, but what if you wanted to display a list of people who wrote in the
logbook about a certain cave in a certain expedition year? Theoretically, all of the necessary information to produce that
list has been recorded in the logbook, but there is no way to access it because there is no connection between the
person's name in folk.csv and the entries he wrote in the logbook".
[Aaron Curtis]
And to ensure survey data does not get lost we need to coordinate people, trips, survex blocks,
survex files, drawing files (several formats), QMs, wallet-progress pages, rigging guides, entrance photos, GPS tracks, kataster boundaries, scans of sketches, scans of underground notes, and dates for all those - Philip Sargent]
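To make the 'referential integrity' point concrete, here is a minimal sketch of the kind of cross-referenced structure the database gives us. The model and field names below are simplified illustrations, not the actual troggle models:

    # Simplified, hypothetical models: the real troggle models are larger,
    # but the cross-references between them are the point.
    from django.db import models

    class Person(models.Model):
        name = models.CharField(max_length=100)

    class Expedition(models.Model):
        year = models.CharField(max_length=4)

    class Cave(models.Model):
        kataster_number = models.CharField(max_length=20)

    class LogbookEntry(models.Model):
        date = models.DateField()
        expedition = models.ForeignKey(Expedition, on_delete=models.CASCADE)
        cave = models.ForeignKey(Cave, null=True, on_delete=models.SET_NULL)
        authors = models.ManyToManyField(Person)

    # With these links in place, Aaron's example query ("people who wrote in
    # the logbook about a certain cave in a certain expedition year") is one
    # line instead of a hand-rolled search across flat files:
    #   Person.objects.filter(logbookentry__cave=some_cave,
    #                         logbookentry__expedition=some_expedition)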
Similarly I see little gain from the html - python chimera
template pages. These contain mainly nested for loops which could just as
well be written in e.g. python.
[He could indeed. But for most people, producing HTML while writing in python is just unnecessarily difficult.
That said, the django
HTML templating mechanism is sufficiently powerful that it almost
amounts to an additional language to learn.
Troggle has 66 different url recognisers, and there are 71 HTML django
template files which the recognisers direct to.
Not all page templates are currently used, but some kind of templating system still
seems necessary.
The django system is sufficiently well-thought-of
that it forms the basis for the framework-independent templating engine
Jinja, and that site has a good discussion
on whether templating is a good thing or not. There are about
20 different python template engines.]
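For comparison, here is roughly what one of those nested-for-loop template pages looks like rewritten as plain python string-building, as Rad suggests (the page and field names here are hypothetical, not actual troggle identifiers). It works, but interleaving markup with code like this is exactly what templates try to avoid:

    # A nested-loop page written as plain Python instead of a template
    # (hypothetical page and field names).
    def members_page(expeditions):
        parts = ["<h1>Expo members</h1>", "<ul>"]
        for expedition in expeditions:
            parts.append(f"<li>{expedition.year}<ul>")
            for person in expedition.members:
                parts.append(f"<li>{person.name}</li>")
            parts.append("</ul></li>")
        parts.append("</ul>")
        return "\n".join(parts)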
I'd advocate the following solution:
- give up on fixing things in troggle
- have a small number of .py scripts which generate static webpages with
content we want
[A reasonable proposal, but it needs quantifying against all the things troggle does
which Rad was unaware of. This will not be a "small number" of scripts, but it needs estimating. We don't need
everything troggle does for us of course, but that doesn't mean that removing django/troggle
will reduce the total amount of code. The input data parsers will be nearly the same size obviously. A sketch of the sort of generator script Rad means follows.]
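As an indication of what "a small number of .py scripts" means in practice, a generator script of the kind Rad proposes might look like this. The file names, CSV columns and output paths are all hypothetical:

    # One script per page type: read the source data, write a static page.
    # No database and no framework, but also no cross-checking of the data.
    import csv
    from pathlib import Path

    def generate_cave_index(src="caves.csv", dest="site/caveindex.html"):
        with open(src, newline="", encoding="utf-8") as f:
            caves = list(csv.DictReader(f))
        items = "\n".join(
            f'<li><a href="{c["slug"]}.html">{c["kataster"]} {c["name"]}</a></li>'
            for c in caves
        )
        Path(dest).parent.mkdir(parents=True, exist_ok=True)
        Path(dest).write_text(
            "<html><body><h1>Cave index</h1>\n<ul>\n"
            + items + "\n</ul></body></html>",
            encoding="utf-8",
        )

    if __name__ == "__main__":
        generate_cave_index()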
Why do this:
- we don't have multiple intermediate layers all of which are difficult to
maintain
- anyone can run this on their machine without need for local VMs
- to change something there is one thing to change: the script that generates
pages belonging to a particular pattern. Currently you need to take care of
parsers, view script, url script, and template
- it runs offline (just like that as pages are just blocks of literal text
+ some javascript)
- we never end up with no working copy during expo - run the scripts once
before the expedition, put the results in /backup and just leave them there till
next year. If something goes wrong on the daily updated version, we have a
very easy fallback
- just one (or very close to it) programming language
- change to python3 to parse silly chars
- wayyyy fewer lines of code and substantially better code to comment ratio
(if this is us writing this now)
- compatible with all the already existing static pages
- we can host this on a potato hosting as we're not running anything fancy
anymore
- gets rid of the horrifying url rewrites that correspond to no files that
exist
[This vastly underestimates the number of things that troggle does for us.
See "Troggle: a revised system for cave data management". And a VM is not required to run and debug troggle.
Sam has produced a docker variant which he uses extensively.
Troggle today has 6,400 non-comment lines of python and 2,500 non-comment lines of django HTML template code. Plus there is the integration with the in-browser HTML editor in JavaScript. Half of the python is in the parsers, which will not change whatever we do. Django itself is much, much bigger and includes all the security middleware necessary on the web today.
But maintaining the code with the regular Django updates is a heavy job.]
How much work would this actually take:
- most likely one script per page type; so far the page types that are
obviously needed are:
- cave index
- individual cave descriptions
- logbooks
- more than half of the parsers are already rewritten by me and can be
changed to do this, instead of modifying the SQL database, with minimal effort
- html/css side of things already exists, but it would be nice to go for a
more modern look there as well
[The effort estimate is similarly a gross underestimate because (a) he assumes one script per page of output, forgetting all the core work to create a central consistent dataset, and (b) he is missing most
of the functionality we use without realizing it because it is built into
django's SQL system, such as multi-user operations.
We will have to migrate from django of course, as it will eventually
fail to keep up with the rest of the world. Right now we need to get ourselves onto python3
so that we can use an LTS release which has current security updates. This is
more urgent for django than for Linux. In Ubuntu terms we are on 18.04 LTS (Debian 10) which has no free maintenance updates from 2023. We should plan to migrate troggle from django to another framework in about 2025. See stroggle below. The parser change Rad describes is sketched below.]
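The parser change Rad mentions in his list is straightforward in outline: instead of writing rows into the SQL database, a parser returns plain python data structures for the page-generating scripts to consume. A hypothetical sketch (the real folk.csv columns differ):

    import csv

    def parse_folk(src="folk.csv"):
        # Return plain dicts instead of writing rows to the SQL database.
        # Column names here are illustrative, not folk.csv's real headers.
        with open(src, newline="", encoding="utf-8") as f:
            return [
                {
                    "name": row["Name"],
                    "years": row["Years"].split(),
                    "bio": row["Biography"],
                }
                for row in csv.DictReader(f)
            ]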
Things this [Rad's] solution doesn't solve:
- no webpage-based 'wizard' for creating caves and such (did people use it
much? do we care?) -> maybe 'send an email to this address' is the ultimate
solution to this
- uploading photos is still difficult (in the sense that they don't get
minified automatically)
Rad
[Creating a cave description for a new cave, and especially linking in images,
is currently so difficult that only a couple of people can do it. Fixing this is
a matter of urgency. No one should have to guess what the path to a file will be before the file exists.
We need a file uploading system to put things in the right place; this would help with photos too. A sketch of the photo resizing follows.]
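On the photo point: the automatic 'minifying' Rad mentions is a small job in itself with the Pillow imaging library. A sketch (the paths and sizes are illustrative) of what an upload handler could do:

    from PIL import Image

    def make_web_copy(src, dest, max_px=1600):
        # Write a web-sized copy of an uploaded photo.
        im = Image.open(src)
        im.thumbnail((max_px, max_px))  # shrinks preserving aspect ratio; never enlarges
        im.save(dest, quality=85)       # quality applies when saving as JPEG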