From 48f6d099807622a2f154257b5b980567b50b6b47 Mon Sep 17 00:00:00 2001
From: Wookey

The pages which make up this handbook were originally based on the paper documents you might find lying around the Potato Hut or Top Camp. These web pages are now the master documents. They should tell you everything you need to know about Expo. Please update them and add information as required.

After many years of using radio systems of varying degrees of complication and reliability, we have finally settled on a foolproof method for communicating callouts from top camp to base camp: mobile phones. Cheap Austrian pay-as-you-go mobiles have sufficiently good reception on the plateau for sending SMS messages, and even occasionally for conversation. We are using the "B-Free" mobile scheme. (In 2011 we tried another provider, which picked up the T-Mobile network, but the reception was not as good as B-Free.)

B-Free has an annual renewal of the SIM which gets you the phone number and connection (plus some credit). More credit comes in the form of a card with a scratch-off secret number. Renewal has to be done within 13 months, otherwise it costs a great deal extra (equivalent to starting from scratch). The phone cannot be used in the last month, but renewal is much cheaper than starting from scratch.
If you need to buy more credit for a phone, you should be given an Aufladecode (this may require scratching off a panel at the lower right on the back of the card).

Simple instructions for updating the website: please refer to the latest web updating guide on the CUCC website. Expo data is kept in a number of different locations. The website is now large and complicated, with a lot (too many!) of moving parts. This handbook section contains information at several levels: simple 'how to add stuff' instructions for the typical expoer, more detailed information on cloning the site onto your own machine for more significant edits, and structural information on how it is all put together, for people who want or need to change things.
Simple instructions for updating the website

You can update the site via the troggle pages, by editing pages online via a browser, by editing them locally on disk, or by checking out the relevant part to your computer and editing it there. Which is best depends on your knowledge and what you want to do. For simple addition of cave or survey data, troggle is recommended. For other edits it is best if you can edit the files directly rather than using the 'edit this page' button, but that means you either need to be on expo with the expo computer, or be able to check out a local copy. If neither of these applies, then using the 'edit this page' button is fine.

It is important to understand that everything on the site is stored in a distributed version control system (DVCS) called Mercurial, which means that every edited file needs to be 'checked in' at some point. The Expo website manual, below, goes into more detail about this. Version control stops us losing data and makes it very hard for you to screw anything up permanently, so don't worry about making changes - they can always be reverted if there is a problem. It also means that several people can work on the site on different computers at once and normally merge their changes easily.

Increasing amounts of the site are autogenerated rather than static files, so you have to edit the base data, not the generated file. All autogenerated files say 'This file is autogenerated - do not edit' at the top, so check for that before wasting time on changes that will just be overwritten.

Editing the expo website is an adventure. Until now there was no guide which explained the whole thing as a functioning system, and learning it by trial and error is non-trivial. There are lots of things we could improve about the system, and anyone with some computer nous is very welcome to muck in. It is slowly getting better organised. This manual is organised in a how-to sort of style.
The categories, rather than referring to specific elements of the website, refer to processes that a maintainer would want to carry out.

Use these credentials for access to the site: the user is 'expo', with a beery password. Ask someone if this isn't enough of a clue for you.

All the expo data is contained in four Mercurial repositories at expo.survex.com, currently hosted on Julian Todd's server, 'seagrass'. Mercurial is a distributed version control system which allows collaborative editing and keeps track of all changes, so we can roll back and have branches if needed. The site has been split into four parts. All the scans, photos, presentations, fat documents and videos have been removed from version control and are just files; see below for details on that.

Part of the website is static HTML, but quite a lot is generated by scripts. So anything you check in which affects cave data or descriptions won't appear on the site until the website update scripts are run. This happens automatically every 30 minutes, but you can also kick off a manual update. See 'The expoweb-update script' below for details. Also note that the website you see is its own Mercurial checkout (just like your local one), so that has to be 'pulled' from the server before your changes are reflected.

If you know what you are doing, here is the basic info on what's where. Photos and scans (logbooks, drawn-up cave segments) come to about 16GB of stuff which you probably don't actually need locally.

To sync the files from seagrass to a local expoimages directory:

rsync -av expo@seagrass.goatchurch.org.uk:expoimages /home/expo/fromserver

To sync the local expoimages directory back to seagrass:

rsync -av /home/expo/fromserver/expoimages expo@seagrass.goatchurch.org.uk:

(Do be careful not to delete piles of stuff and then rsync back - it will all get deleted on the server too, and we may not have backups!)

To edit the website, you need a Mercurial client. If you are using Windows, [1] is highly recommended.
Lots of tools for Linux and Mac exist too [2], both GUI and command-line. For Ubuntu dummies and GUI lovers, see the guide on how to install the latest Mercurial version, which is not in the usual repositories. In Ubuntu 11.04 you can just install mercurial and tortoisehg from Synaptic, then restart Nautilus with nautilus -q. If it works, you'll be able to see the Tortoise menus within your Nautilus windows.

Once you've downloaded and installed a client, the first step is to create what is called a checkout of the website, or of the section of the website which you want to work on. This creates a copy on your machine which you can edit to your heart's content. The command to initially check out ('clone') the entire expo website is:

hg clone ssh://expo@seagrass.goatchurch.org.uk/expoweb

For subsequent updates, hg update will generally do the trick. In TortoiseHg, merely right-click on the folder you want to check out to, choose "Mercurial checkout", and enter ssh://expo@seagrass.goatchurch.org.uk/expoweb

After you've made a change, commit it to your local copy with:

hg commit (you can specify filenames to be specific)

or by right-clicking on the folder and choosing Commit in TortoiseHg. That stores the changes in your local Mercurial DVCS, but it does not send anything back to the server. To do that you need to:

hg push

If someone else has been editing the same bit at the same time, you may also need to:

hg merge

None of your changes will take effect, however, until the server checks out your changes and runs the expoweb-update script.

In Windows: install Mercurial and TortoiseHg of the relevant flavour from http://mercurial.selenic.com/downloads/ (ignoring antivirus/Windows warnings). To start cloning a repository: start TortoiseHg Workbench, click File -> Clone repository, and a dialogue box will appear. In the Source box type ssh://expo@seagrass.goatchurch.org.uk/expoweb or similar for the other repositories.
In the Destination box type whatever destination you want your local copies to live in. Hit Clone, and it should prompt you for the usual beery password. (To be continued.) --Zucca 14:25, 25 January 2012 (UTC)

The script at the heart of the website update mechanism is a makefile that runs the various generation scripts. It (along with an update from the repository) is run every 15 minutes as a cron job (at 0, 15, 30 and 45 minutes past the hour), but if you want to force an update more quickly you can run it here. [Wooknote - this is not actually happening right now - FIXME!] The scripts are generally under the 'noinfo' section of the site, just because that has some access control. This will get changed to something more sensible at some point.

Cave description pages are automatically generated from a comma-separated values (CSV) table named CAVETAB2.CSV by a perl script called make-indxal4.pl, which is called automatically. The first step is to check out, edit, and check in CAVETAB2.CSV.
-
+Cambridge University Caving Club Expedition Handbook
+
+
diff --git a/handbook/phone.htm b/handbook/phone.htm
index 5bbf96304..51ec21e5d 100644
--- a/handbook/phone.htm
+++ b/handbook/phone.htm
@@ -2,28 +2,29 @@
CUCC Expedition Handbook
-Mobile Phone Use Guide
-Annual renewal
-Adding credit
-
-
-
-
+CUCC Expedition Handbook
+Mobile Phone Use Guide
+Annual renewal
+Adding credit
+
+
+
+
diff --git a/handbook/update.htm b/handbook/update.htm
index fd106913b..91fc6b789 100644
--- a/handbook/update.htm
+++ b/handbook/update.htm
@@ -1,31 +1,266 @@
-CUCC Expedition Handbook
-Updating the website - HOWTO
-
-
+Expo Website
+Updating the website - HOWTO
+
+Expo website manual
+
+Contents
+
+
+
+
+Getting a username and password
+
+The repositories
+
+
+
+
+
+How the website works
+
+Quick start
+
+
+
+
+Editing the website
+
+Using Mercurial/TortoiseHg in Windows
+
+The expoweb-update script
+
+Updating cave pages
+
+
You need to be somewhat careful with the formatting; each cell needs to be only one line long (i.e. no newlines) or the script will get confused.
+And then run expoweb-update as above.
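Before running expoweb-update, a quick check that no cell spans multiple lines can save a confusing failure. This is a hypothetical helper, not part of the expo toolchain; the function name is illustrative, and it assumes the tab-separated layout the data files actually use despite the .CSV extension:

```python
import csv

def find_multiline_cells(path):
    """Return 1-based row numbers whose cells contain embedded newlines.

    Hypothetical helper, not part of the real expo scripts. Assumes
    tab-separated fields (the .CSV files are tab-separated in practice).
    """
    bad = []
    with open(path, newline="") as f:
        for num, row in enumerate(csv.reader(f, delimiter="\t"), start=1):
            if any("\n" in cell or "\r" in cell for cell in row):
                bad.append(num)
    return bad
```

An empty result means every cell sits on a single line, which is what the perl script expects.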
+Each year's expo has a documentation index which is in the folder /expoweb/years, so to check out the 2011 page, for example, you would use:
+hg clone ssh://expo@seagrass.goatchurch.org.uk/expoweb/years/2011
+Logbooks are typed up and put under the years/nnnn/ directory as 'logbook.html'.
+Do whatever you like to try to represent the logbook in HTML. The only rigid structure is the markup that allows troggle to parse the files into 'trips':
+<div class="tripdate" id="t2007-07-12B">2007-07-12</div>
+<div class="trippeople"><u>Jenny Black</u>, Olly Betts</div>
+<div class="triptitle">Top Camp - Setting up 76 bivi</div>
+<div class="timeug">T/U 10 mins</div>
+Note that the IDs must be unique, so they are generated from 't' plus the trip date, plus a, b, c etc. when there is more than one trip on a day.
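Troggle's actual parser is more elaborate, but the idea behind the trip markup can be sketched with a regular expression over the div sequence. The function name and regex here are illustrative only, not troggle's real code:

```python
import re

# Illustrative sketch only: troggle's real logbook parser is more thorough.
TRIP_RE = re.compile(
    r'<div class="tripdate" id="(?P<id>[^"]*)">(?P<date>[^<]*)</div>\s*'
    r'<div class="trippeople">(?P<people>.*?)</div>\s*'
    r'<div class="triptitle">(?P<title>[^<]*)</div>',
    re.DOTALL,
)

def parse_trips(html):
    """Extract (id, date, people-html, title) for each trip block."""
    return [
        (m["id"], m["date"], m["people"], m["title"])
        for m in TRIP_RE.finditer(html)
    ]
```

Because the parser keys on these exact class names and the id scheme, sticking to the markup shown above is what keeps a logbook machine-readable.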
+Older logbooks (prior to 2007) were stored as logbook.txt with just a bit of consistent markup to allow troggle parsing.
+The formatting was largely freeform, with a bit of markup ('===' around the header, bars separating the date, title and people). So the format should be:
+===2009-07-21|204 - Rigging entrance series| Becka Lawson, Emma Wilson, Jess Stirrups, Tony Rooke===
+<Text of logbook entry>
+T/U: Jess 1 hr, Emma 0.5 hr
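The old header line is simple enough to split mechanically. This is a hypothetical sketch of such a parser, not troggle's actual pre-2007 code:

```python
def parse_old_header(line):
    """Split an old-style '===date|title|people===' logbook header.

    Illustrative sketch only; troggle's real pre-2007 parser differs.
    Returns (date, title, [people]) or None for a non-header line.
    """
    line = line.strip()
    if not (line.startswith("===") and line.endswith("===")):
        return None
    date, title, people = (f.strip() for f in line[3:-3].split("|", 2))
    return date, title, [p.strip() for p in people.split(",")]
```

Anything not wrapped in '===' markers is treated as entry text, which is why the bars and the '===' fence were the only rigid parts of the old format.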
+To be written.
+At [3] there is a table which lists all the surveys, whether or not they have been drawn up, and some other information.
+This is generated by the script tablizebyname-csv.pl from the input file Surveys.csv.
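The perl scripts are not reproduced here, but the basic transformation is straightforward: read a tab-separated file and emit an HTML table. A minimal Python sketch of that step (hypothetical; the real tablizebyname-csv.pl adds styling, links and status logic):

```python
import csv
import html

def tsv_to_html_table(path):
    """Render a tab-separated file as a bare HTML table.

    Hypothetical sketch of the CSV-to-HTML step only; the real expo
    scripts are perl and do considerably more.
    """
    with open(path, newline="") as f:
        rows = list(csv.reader(f, delimiter="\t"))
    body = "\n".join(
        "<tr>" + "".join(f"<td>{html.escape(cell)}</td>" for cell in row) + "</tr>"
        for row in rows
    )
    return f"<table>\n{body}\n</table>"
```

Escaping each cell matters because survey names and notes can contain characters like '&' and '<' that would otherwise break the generated page.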
+The CUCC Website was originally created by Andy Waddington in the early 1990s and was hosted by Wookey. The VCS was CVS. The whole site was just static HTML, carefully designed to be RISCOS-compatible (hence the short 10-character filenames), as both Wadders and Wookey were RISCOS people then. Wadders wrote a huge amount of info collecting expo history, photos, cave data etc.
+Martin Green added the SURVTAB.CSV file to contain tabulated data for many caves around 1999, and a script to generate the index pages from it. Dave Loeffler added scripts and programs to generate the prospecting maps in 2004. The server moved to Mark Shinwell's machine in the early 2000s, and the VCS was updated to subversion.
+After expo 2009 the VCS was updated to hg, because a DVCS makes a great deal of sense for expo (where it goes offline for a month or two and nearly all the year's edits happen).
+The site was moved to Julian Todd's seagrass server, but the change from 32-bit to 64-bit machines broke the website autogeneration code, which was only fixed in early 2011, allowing the move to complete. The data has been split into four separate repositories: the website, troggle, the survey data, and the tunnel data.
+The way things normally work is that python or perl scripts turn CSV input into HTML for the website. Note that:
+The CSV files are actually tab-separated, not comma-separated, despite the extension.
+The scripts can be very picky, and editing the CSVs with Microsoft Excel has broken them in the past - not sure if this is still the case.
+
+Overview of the automagical scripts on the expo website:
+
+Script location | Input file | Output file | Purpose
+/svn/trunk/expoweb/noinfo/make-indxal4.pl | /svn/trunk/expoweb/noinfo/CAVETAB2.CSV | many | produces all cave description pages
+/svn/trunk/expoweb/noinfo/make-folklist.py | /svn/trunk/expoweb/noinfo/folk.csv | http://cucc.survex.com/expo/folk/index.htm | table of all expo members
+/svn/trunk/surveys/tablize-csv.pl and /svn/trunk/surveys/tablizebyname-csv.pl | /svn/trunk/surveys/Surveys.csv | http://cucc.survex.com/expo/surveys/surveytable.html and http://cucc.survex.com/expo/surveys/surtabnam.html | survey status page: "wall of shame" to keep track of who still needs to draw which surveys
+Prospecting guide
+
+Mercurial is a distributed revision control system. On expo this means that many people can edit and then merge their changes with each other when they can access the internet. Mercurial is over the top for scanned survey notes, which do not get modified, so they are kept as a plain directory of files.
If you run Windows, you are recommended to install TortoiseHg, which integrates nicely with Windows Explorer.
hg clone RepositoryURL
-InstallTortoise Hg. In windows explorer right click, select Tortoise Hg .. and click Clone repository.
Set the source path to RepositoryURL
Set the destination to somewhere on your local harddisk.
Press clone.
rsync -av expoimages expo@seagrass.goatchurch.org.uk:
-Not sure yet
+rsync -av expoimages expo@seagrass.goatchurch.org.uk:
+Not sure yet
This is likely to change with structural change to the site, with style changes which we expect to implement and with the method by which the info is actually stored and served up.
+This is likely to change with structural change to the site, with style changes which we expect to implement and with the method by which the info is actually stored and served up.
... and it's not written yet, either :-)