CUCC Expedition Handbook

Logbooks Import

Importing the logbook into troggle

This is usually done after expo, but it is an excellent idea to have a nerd do this a couple of times during expo, to discover problems while the people are still around to ask.

The nerd needs to log in to the expo server using their own userid, not the 'expo' userid. The nerd also needs to be in the group that is allowed to run 'sudo'.

Ideal situation

Ideally this would all be done on a stand-alone laptop to get the bugs in the logbook parsing sorted out before we upload the corrected file to the server. Unfortunately this requires a full troggle software development laptop as the parser is built into troggle. The expo laptop in the potato hut is not set up to do this (yet - 2022).

However, the expo laptop (or any 'bulk update' laptop) is configured to allow an authorized user to log in to the server itself and to run the import process directly on the server.

Importing the Blog

During expo lots of people post text and photos to the UK Caving (rope competition) website. During the winter after expo, an extra nerd task is to fold all those entries into the main logbook so that the trips are indexed and we can see who was doing what, where.

This is sufficiently complicated that it is documented in another page. But read this page first.

Current situation

The nerd needs to do this:

  1. Look at the list of pre-existing old import errors at Data Issues
  2. You need to get the list of people on expo sorted out first.
    This is documented in the Folk Update process.
  3. Log in to the expo server and run the update script (see below for details)
  4. Watch the error messages scroll by; they are more detailed than the messages archived in the old import errors list
  5. Edit the logbook.html file to fix the errors. These are usually typos, non-unique tripdate ids or unrecognised people. Some unrecognised people will mean that you have to fix them using the Folk Update process first.
  6. Re-run the import script until you have got rid of all the import errors.
  7. Pat self on back. Future data managers and people trying to find missing surveys will worship you.

The procedure is like this. It will be familiar to you because you will have already done most of this for the Folk Update process.

ssh <your-userid>@expo.survex.com
cd troggle
python databaseReset.py logbooks

It will produce a list of errors like those below, starting with the most recent logbook, which will be the one for the expo you are working on. You can abort the script (Ctrl-C) once you have the errors for the current expo that you are going to fix.
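The error list scrolls past quickly, so it is worth capturing it to a file as well as to the screen. A minimal sketch (the log file path here is just a suggestion):

```shell
# Run the import, showing the errors on screen AND saving them to a file.
# 'tee' duplicates the output; '2>&1' folds stderr into the capture.
python databaseReset.py logbooks 2>&1 | tee /tmp/logbook-import.log
```

You can then work through /tmp/logbook-import.log at leisure after aborting the script.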

Loading Logbook for: 2017
 - Parsing logbook: 2017/logbook.html
 - Using parser: Parseloghtmltxt
Calculating GetPersonExpeditionNameLookup for 2017
   - No name match for: 'Phil'
   - No name match for: 'everyone'
   - No name match for: 'et al.'
("can't parse: ", u'\n\n<img src="logbkimg5.jpg" alt="New Topo" />\n\n')
   - No name match for: 'Goulash Regurgitation'
   - Skipping logentry: Via Ferata: Intersport - Klettersteig - no author for entry
   - No name match for: 'mike'
   - No name match for: 'Mike'

Errors are usually: misplaced or duplicated <hr /> tags; names which are not specific enough to be recognised by the parser (though it tries hard), such as "everyone" or "et al.", or which are simply missing; or a bit of description which has been put into the names section, such as "Goulash Regurgitation".
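Non-unique tripdate ids can often be found mechanically before re-running the import. A minimal sketch (the id format below is made up for illustration; adjust the pattern to match whatever the real logbook.html uses, and run it against the real file rather than this sample):

```shell
# Build a small sample file standing in for a logbook.html with a
# duplicated id attribute.
cat > /tmp/sample-logbook.html <<'EOF'
<hr /><div id="t2017-07-01a">entry one</div>
<hr /><div id="t2017-07-01a">entry two, duplicate id</div>
<hr /><div id="t2017-07-02b">entry three</div>
EOF

# Extract every id="..." attribute, then print only the duplicated ones.
grep -o 'id="[^"]*"' /tmp/sample-logbook.html | sort | uniq -d
# → id="t2017-07-01a"
```

The same grep/sort/uniq pipeline works on the real file; `uniq -d` prints each id that occurs more than once.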

When you have sorted out the logbook formatting and the importer no longer complains, you will need to do a full database reset, as the import will have trashed the online database and none of the troggle webpages will be working:

ssh <your-userid>@expo.survex.com
cd troggle
python databaseReset.py reset
which takes between 5 and 15 minutes on the server.

The logbooks format

This is documented on the logbook user-documentation page as even expoers who can do nothing else technical can at least write up their logbook entries.

Historical logbooks format

Older logbooks (prior to 2007) were stored as logbook.txt with just a bit of consistent markup to allow troggle parsing.

The formatting was largely freeform, with a bit of markup ('===' around the header line, with bars separating the date, the place and trip title, and who went) which allows the troggle import script to read it correctly. The underlines show who wrote the entry. There is also a format for time-underground info so it can be automagically tabulated.
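A made-up entry illustrating the markup just described (the field order, the trip title, and the time-underground line are illustrative only; consult a real pre-2007 logbook.txt before editing one):

```
===2004-07-28|161 - Rigging trip|Phil, Mike===
Rigged the entrance series down to the second pitch.
T/U: 5 hrs
```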

There were also several previous (different) styles of using HTML. The one we are using now is the 5th variant. These older variants are steadily being reformatted into the current HTML format so that we only need to maintain the code for one parser.


Back to Logbooks for Cavers documentation.
Forward to Importing the UK Caving Blog.