CUCC Expedition Handbook - Online systems

Expo data management systems manual

Editing the expo data management system is an adventure. Until 2007, there was no guide which explained the whole thing as a functioning system. Learning it by trial and error is non-trivial. There are lots of things we could improve about the system, and anyone with some computer nous is very welcome to muck in. It is slowly getting better organised.

This manual is organized in a how-to sort of style. The categories, rather than referring to specific elements of the data management system, refer to processes that a maintainer would want to do.

Note that to display the survey data you will need a copy of the survex software.

Contents

  1. Getting a username and password
  2. The repositories
  3. How the data management system works
  4. Quick start
  5. Editing the data management system
  6. Using version control software in Windows
  7. The expoweb-update script
  8. Updating cave pages
  9. Updating expo year pages
  10. Adding typed logbooks
  11. Uploading photos
  12. Ticking off QMs
  13. Maintaining the survey status table
  14. Automation
  15. Archived updates
Appendices:

Getting a username and password

Use these credentials for access to the site. The username is 'expo', with a cavey:beery password. Ask someone if this isn't enough of a clue for you. This password is important for security: the whole site will get hacked by spammers or worse if you are not careful with it. Use a secure method for passing it on to others who need to know (i.e. not unencrypted email), don't publish it anywhere, and don't check it in to the data management system by accident. A lot of people use it and changing it is a pain for everyone, so do take a bit of care.

Note that you don't need a password to view most things, but you will need one to change them.

The repositories

All the expo data is contained in 4 "repositories" at expo.survex.com. This is currently hosted on a server at the university. We use a distributed version control system (DVCS) to manage these repositories because this allows simultaneous collaborative editing and keeps track of all changes so we can roll back and have branches if needed.

The site has been split into four parts: expoweb (the website and cave data), troggle (the backend), loser (the survey data) and tunneldata (the Tunnel drawings).

All the scans, photos, presentations, fat documents and videos are stored just as files (not under version control) in 'expofiles'. See below for details on that.

How the data management system works

Part of the data management system is static HTML, but quite a lot is generated by scripts. So anything you check in which affects cave data or descriptions won't appear on the site until the data management system update scripts are run. This happens automatically every 15 minutes, but you can also kick off a manual update. See 'The expoweb-update script' below for details.

Also note that the data management system you see is its own Mercurial checkout (just like your local one), so changes have to be 'pulled' on the server before they are reflected.

Using 'Edit This Page'

This edits the file served by the webserver on the expo server in Cambridge, but it does not update the copy of the file in the repository on expo.survex.com. To properly finish the job you need to commit the change into the repository as well.

Quick start

If you know what you are doing here is the basic info on what's where:
(if you don't know what you're doing, skip to Editing the data management system below.)

expoweb (The data management system)
hg clone ssh://expo@expo.survex.com/expoweb (read/write)
hg clone http://expo.survex.com/repositories/home/expo/expoweb/ (read-only checkout)
troggle (The data management system backend)
hg clone ssh://expo@expo.survex.com/troggle (read/write)
hg clone http://expo.survex.com/repositories/home/expo/troggle/ (read-only checkout)
loser (The survey data)
hg clone ssh://expo@expo.survex.com/loser (read/write)
hg clone http://expo.survex.com/repositories/home/expo/loser/ (read-only)
tunneldata (The Tunnel drawings)
hg clone ssh://expo@expo.survex.com/tunneldata (read/write)
hg clone http://expo.survex.com/repositories/home/expo/tunneldata/ (read-only)
expofiles (all the big files and documents)

Photos, scans (logbooks, drawn-up cave segments) (This was about 60GB of stuff in 2017 which you probably don't actually need locally).

If you don't need an entire copy of all 60GB, then it is probably best to use Filezilla to copy just a small part of the filesystem to your own machine and to upload the bits you add to or edit. Instructions for installing and using Filezilla are found in the expo user instructions for uploading photographs: uploading.html.

To sync all the files from the server to local expofiles directory:

rsync -av expo@expo.survex.com:expofiles /home/expo

To sync the local expofiles directory back to the server (but only if your machine runs Linux):

rsync --dry-run --delete-after -a /home/expo/expofiles expo@expo.survex.com:

then CHECK that the list of files it produces matches the ones you absolutely intend to delete forever! ONLY THEN do:

rsync -av --delete-after /home/expo/expofiles expo@expo.survex.com:

(do be incredibly careful not to delete piles of stuff and then rsync back, or to get the directory level of the command wrong - it'll all get deleted on the server too, and we may not have backups!). It's absolutely vital to use rsync --dry-run --delete-after -a first to check what would be deleted.

If you use rsync from a Windows machine you will not get all the files, as some filenames are incompatible with Windows. What happens is that rsync silently changes the names as it downloads them from the Linux expo server to your Windows machine, but it then forgets what it has done and tries to re-upload all the renamed files to the server even if you have touched none of them. There won't be any problems with simple filenames using only lowercase letters and no funny characters, but we have nothing in place to stop anyone creating an incompatible filename somewhere in that 60GB, or to detect the problem at the time. So don't do it: if you have a Windows machine, use Filezilla, not rsync.
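There is no existing expo tool for detecting such names, but a one-line find over a local copy can at least flag filenames containing characters that Windows forbids. This is an illustrative sketch run against a made-up scratch directory, not part of the expo toolchain:

```shell
# Sketch: spot filenames Windows cannot represent (the reserved
# characters < > : " \ | ? *). The scratch directory stands in for
# a local expofiles copy; this is not an existing expo script.
set -eu
d=$(mktemp -d)
touch "$d/photo.jpg" "$d/trip:notes.txt"   # the second name is Windows-hostile
find "$d" -name '*[<>:"\|?*]*' -print      # lists only the bad name
```

Running something like this over your local expofiles copy before uploading would catch the problem at the time it is created rather than when a Windows user next syncs.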

(We may also have an issue with rsync not using the appropriate user:group attributes for files pushed back to the server. This may not cause any problems, but watch out for it.)

Editing the data management system

To edit the data management system fully, you need to use the distributed version control system (DVCS) software, which is currently Mercurial/TortoiseHg. Some (static text) pages can be edited directly on-line using the 'edit this page' link which you'll see if you are logged into troggle. In general the dynamically-generated pages, such as those describing caves (which are generated from the cave survey data), cannot be edited in this way, but forms are provided for some of them, such as 'caves'.

What follows is for Linux. If you are running Windows then see below Using Mercurial/TortoiseHg in Windows.

Mercurial can be used from the command line, but if you prefer a GUI, TortoiseHg is highly recommended on all OSes.

Linux: install mercurial and tortoisehg-nautilus from synaptic, then restart nautilus with nautilus -q. If it works, you'll see the TortoiseHg menus within your Nautilus windows.

Once you've downloaded and installed a client, the first step is to create what is called a checkout of the data management system. This creates a copy on your machine which you can edit to your heart's content. The command to initially check out ('clone') the entire expo data management system is:

hg clone ssh://expo@expo.survex.com/expoweb

for subsequent updates

hg pull -u

(which pulls the latest changes from the server and updates your working copy) will generally do the trick.

In TortoiseHg, merely right-click on a folder you want to check out to, choose "Mercurial checkout," and enter

ssh://expo@expo.survex.com/expoweb

After you've made a change, commit it to your local copy with:

hg commit (you can specify filenames to be specific)

or by right-clicking on the folder and choosing commit in TortoiseHg. Mercurial can't always work out who you are: if you see a message like "abort: no username supplied", it was probably unable to deduce your identity from your environment. The easiest fix is to give it the info in a config file at ~/.hgrc (create it if it doesn't exist, or add these lines if it already does) containing something like

[ui]
username = Firstname Lastname <myemail@example.com>

The commit has stored the changes in your local Mercurial DVCS, but it has not sent anything back to the server. To do that you need to:

hg push

Before pushing, you should do an hg pull to sync with upstream. If someone else has edited the same files you may also need to do:

hg merge

(and sort out any conflicts if you've both edited the same file) before pushing again.

Simple changes to static files will take effect immediately, but changes to dynamically-generated files (cave descriptions, QM lists etc.) will not take effect until the server runs the expoweb-update script.

Using Mercurial/TortoiseHg in Windows

Read the instructions for setting up TortoiseHG in Tortoise-on-Windows.

In Windows: install Mercurial and TortoiseHg of the relevant flavour from https://tortoisehg.bitbucket.io/ (ignoring antivirus/Windows warnings). This will install a submenu in your Programs menu.

To start cloning a repository: first create the folders you need for the repositories you are going to use, e.g. D:\CUCC-Expo\loser and D:\CUCC-Expo\expoweb. Then start TortoiseHg Workbench from your Programs menu, click File -> Clone repository, a dialogue box will appear. In the Source box type

ssh://expo@expo.survex.com/expoweb

for expoweb (or similar for the other repositories). In the Destination box type whatever destination you want your local copies to live in on your laptop e.g. D:\CUCC-Expo\expoweb. Hit Clone, and it should hopefully prompt you for the usual beery password.

The first time you do this it will probably not work, as your machine does not recognise the server. Fix this by running putty (download it from https://www.chiark.greenend.org.uk/~sgtatham/putty/) and connecting to the server 'expo@expo.survex.com' (on port 22). Confirm that this is the right server. If you succeed in getting a shell prompt then ssh connections are working and TortoiseHg should be able to clone the repo and send changes back.

The expoweb-update script

The script at the heart of the data management system update mechanism is a makefile that runs the various generation scripts. It is run every 15 minutes as a cron job (at 0, 15, 30 and 45 minutes past the hour), but if you want to force an update more quickly you can run it manually on the server.
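For reference, the schedule described corresponds to a crontab entry along these lines (the path and the bare make invocation are assumptions for illustration, not the server's actual configuration):

```
0,15,30,45 * * * *  cd /home/expo/expoweb && make
```

Running the same make command by hand from the expoweb directory on the server is what "forcing an update" amounts to.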

The scripts generally live under the 'noinfo' section of the site just because that has (had) some access control. This will get changed to something more sensible at some point.

Updating cave pages

Cave description pages are automatically generated from a set of cave files in noinfo/cave_data/ and noinfo/entrance_data/. These files are named <area>-<cavenumber>.html (where area is 1623 or 1626). They are processed by troggle. Run python databaseReset.py caves in /expofiles/troggle/ to update the site/database after editing these files.

Clicking on 'New cave' (at the bottom of the cave index) lets you enter a new cave. Info on how to enter new caves has been split into its own page.

(If you remember something about CAVETAB2.CSV for editing caves, that was superseded in 2012).

This may be a useful reminder of what is in a survex file and how to create one.

Updating expo year pages

Each year's expo has a documentation index in the folder

/expoweb/years

so the 2011 page, for example, lives at years/2011 inside an expoweb checkout:

hg clone ssh://expo@expo.survex.com/expoweb

(Mercurial cannot clone just a subfolder, so you clone the whole expoweb repository and edit the files under years/2011.)

Once you have pushed your changes to the repository you need to update the server's local copy, by ssh-ing into the server and running hg update in the expoweb folder.

Adding a new year

Edit noinfo/folk.csv, adding the new year to the end of the header line, a new column, with just a comma (blank cell) for people who weren't there, a 1 for people who were there, and a -1 for people who were there but didn't go caving. Add new lines for new people, with the right number of columns.

This process is tedious, error-prone and ripe for improvement. Adding a list of people from the bier book, and their aliases, would be a lot better, but some way to make sure that names match with previous years would be good.
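A quick consistency check after editing helps catch the wrong-number-of-columns mistake described above. This awk sketch runs on a made-up miniature folk.csv (the real file's columns will differ; note the file is comma-separated, as the description above implies):

```shell
# Sketch: verify every row of a folk.csv-style file has as many
# columns as the header row. The sample file here is invented.
set -eu
f=$(mktemp)
cat > "$f" <<'EOF'
Name,1999,2000,2001
Alice Example,1,,1
Bob Example,,1,-1
EOF
awk -F',' 'NR==1 {n=NF; next}
           NF!=n {printf "line %d: %d columns, expected %d\n", NR, NF, n; bad=1}
           END   {exit bad}' "$f" && echo "column counts OK"
```

Run against the real noinfo/folk.csv, any line it flags has too few or too many commas for the header.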

Ticking off QMs

To be written.

Maintaining the survey status table

There is a table in the survey book which has a list of all the surveys and whether or not they have been drawn up, and some other info.

This is generated by the script tablizebyname-csv.pl from the input file Surveys.csv

Automation on expo.survex.com

This section is entirely out of date (June 2014) and is awaiting deletion or rewriting.

The way things normally work, python or perl scripts turn CSV input into HTML for the data management system. Note that:

The CSV files are actually tab-separated, not comma-separated despite the extension.

The scripts can be very picky, and editing the CSVs with Microsoft Excel has broken them in the past - not sure if this is still the case.

Overview of the automagical scripts on the expo data management system

[Clearly very out of date, as it assumes the version control is svn, whereas we changed to hg years ago.]
Script: /svn/trunk/expoweb/noinfo/make-indxal4.pl
  Input: /svn/trunk/expoweb/noinfo/CAVETAB2.CSV
  Output: many files
  Purpose: produces all cave description pages

Script: /svn/trunk/expoweb/noinfo/make-folklist.py
  Input: /svn/trunk/expoweb/noinfo/folk.csv
  Output: http://expo.survex.com/folk/index.htm
  Purpose: table of all expo members

Scripts: /svn/trunk/surveys/tablize-csv.pl and /svn/trunk/surveys/tablizebyname-csv.pl
  Input: /svn/trunk/surveys/Surveys.csv
  Output: http://expo.survex.com/expo/surveys/surveytable.html and http://expo.survex.com/surveys/surtabnam.html
  Purpose: survey status page: "wall of shame" to keep track of who still needs to draw which surveys

Archived updates

Since 2008 we have been keeping detailed records of all data management system updates in the version control system. Before then we manually maintained a list of updates which are now only of historical interest.

A history of the expo website and software was published in Cambridge Underground 1996. A copy of this article Taking Expo Bullshit into the 21st Century is archived here.

The data management system conventions bit

This is likely to change with structural change to the site, with style changes which we expect to implement and with the method by which the info is actually stored and served up.

... and it's not written yet, either :-)