mirror of https://expo.survex.com/repositories/expoweb/.git/

Update the website manual to reflect the merging with Troggle.

parent 9ea39a6adf
commit 3aaab8d043
@@ -17,7 +17,8 @@

<h2><a id="update">Updating the website - HOWTO</a></h2>

-<p>Simple <a href="checkin.htm">instructions</a> for updating website</p>
+<p>Simple <a href="checkin.htm">instructions</a> for updating the website
+(on the expo machine).</p>

<p>You can update the site via the troggle pages, by editing pages online via a browser, by editing them locally on disk, or by checking out the relevant part to your computer and editing it there. Which is best depends on your knowledge and what you want to do. For simple addition of cave or survey data, troggle is recommended. For other edits it is best to edit the files directly rather than using the 'edit this page' button, but that means you either need to be on expo using the expo computer, or be able to check out a local copy. If neither of these applies, using the 'edit this page' button is fine.</p>
@@ -52,7 +53,8 @@

<h3><a id="usernamepassword">Getting a username and password</a></h3>

-<p>Use these credentials for access to the site. The user is 'expo', with a beery password. Ask someone if this isn't enough clue for you.</p>
+<p>Use these credentials for access to the site. The user is 'expo',
+with a cavey:beery password. Ask someone if this isn't enough clue for you.</p>

<h3><a id="repositories">The repositories</a></h3>
@@ -68,7 +70,8 @@

</ul>

-<p>All the scans, photos, presentation, fat documents and videos have been removed from version-control and are just files. See below for details on that.</p>
+<p>All the scans, photos, presentations, fat documents and videos are
+stored just as files (not in version control). See below for details on that.</p>

<h3><a id="howitworks">How the website works</a></h3>
@@ -108,8 +111,9 @@

<p>To edit the website, you need a mercurial client. If you are using Windows, [1] is highly recommended. Lots of tools for Linux and Mac exist too [2], both GUI and command-line:</p>

-<p>For Ubuntu dummies and GUI lovers, in Debian 6 or Ubuntu 11.04
-onwards you can just install mercurial and tortoisehg from synaptic, then restart nautilus (<tt>nautilus -q</tt>). If it works, you'll be able to see the menus of Tortoise within your Nautilus windows.</p>
+<p>For Ubuntu dummies and GUI lovers, from Debian 6 or Ubuntu 11.04
+onwards you can just install mercurial and tortoisehg from synaptic,
+then restart nautilus (<tt>nautilus -q</tt>). If it works, you'll be able to see the menus of Tortoise within your Nautilus windows.</p>

<p>Once you've downloaded and installed a client, the first step is to create what is called a checkout of the website or the section of the website which you want to work on. This creates a copy on your machine which you can edit to your heart's content. The command to initially check out ('clone') the entire expo website is:</p>
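As a minimal command-line sketch of that first checkout (the repository URL here is an assumption for illustration; the manual gives the real one immediately after this paragraph):

    hg clone https://expo.survex.com/hg/expoweb expoweb   # hypothetical URL
    cd expoweb
    # ... edit files ...
    hg commit -m "describe your change"
    hg push    # prompts for the usual beery password

The clone only needs doing once; after that, <tt>hg pull -u</tt> brings your local copy up to date before you start editing.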
@@ -151,23 +155,25 @@ onwards you can just install mercurial and tortoisehg from synaptic, then restar

<p>or similar for the other repositories. In the Destination box type whatever destination you want your local copies to live in. Hit Clone, and it should hopefully prompt you for the usual beery password. (to be continued) --Zucca 14:25, 25 January 2012 (UTC)</p>

<h3><a id="expowebupdate">The expoweb-update script</a></h3>

-<p>The script at the heart of the website update mechanism is a makefile that runs the various generation scripts. It (along with an update from the repository) is run every 15 minutes as a cron job (at 0, 15, 30 and 45 minutes past the hour), but if you want to force an update more quickly you can run it here: [Wooknote - this is not actually happening right now - FIXME!]</p>
+<p>The script at the heart of the website update mechanism is a makefile that runs the various generation scripts. It (along with an update from the repository) is run every 15 minutes as a cron job (at 0, 15, 30 and 45 minutes past the hour), but if you want to force an update more quickly you can run it here: [Wooknote - this is
+not actually happening right now - FIXME!]</p>
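For the curious, the cron entry behind this would look something like the following sketch (the checkout path and make target are assumptions; the schedule matches the 0, 15, 30, 45 timing described above):

    # m h dom mon dow  command
    0,15,30,45 * * * *  cd /home/expo/expoweb && hg pull -u && make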
<p>The scripts are generally under the 'noinfo' section of the site just because that has some access control. This will get changed to something more sensible at some point.</p>
<h3><a id="cavepages">Updating cave pages</a></h3>
|
||||
|
||||
<p>Cave description pages are automatically generated from a comma separated values (CSV) table named CAVETAB2.CSV by a perl script called make-indxal4.pl . make-indxal4.pl is called automatically.</p>
|
||||
<p>Cave description pages are automatically generated from a set of
|
||||
cave files in noinfo/cave_data/ and noinfo/entrance_data/. These files
|
||||
are named <area>-<cavenumber>.html (where area is 1623 or 1626). These
|
||||
files are processed by troggle. Use <tt>python databaseReset.py
|
||||
cavesnew</tt> in /expofiles/troggle/ to update the site/database after
|
||||
editing these files.</p>
|
||||
|
||||
<p>The first step is to check out, edit, and check in CAVETAB2.CSV, which is at</p>
|
||||
|
||||
/expoweb/noinfo/CAVETAB2.CSV</tt></p>
|
||||
|
||||
<p>You need to be somewhat careful with the formatting; each cell needs to be only one line long (i.e. no newlines) or the script will get confused.</p>
|
||||
|
||||
<p>And then run expoweb-update as above.</p>
|
||||
<p>(If you remember something about CAVETAB2.CSV for editing caves, that was
|
||||
superseded in 2012).</p>
|
||||
|
||||
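Putting that together, editing a cave description on the expo machine might look like this sketch (the cave number 204 is just an illustration of the <area>-<cavenumber> naming scheme above; the commands and paths are the ones the manual states):

    cd /expoweb/noinfo/cave_data
    # edit (or create) the cave description file
    nano 1623-204.html
    # then rebuild the site/database entries from the cave files
    cd /expofiles/troggle
    python databaseReset.py cavesnew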
<h3><a id="updatingyears">Updating expo year pages</a></h3>
@@ -190,6 +196,7 @@ onwards you can just install mercurial and tortoisehg from synaptic, then restar

<div class="timeug">T/U 10 mins</div>

<p>Note that the IDs must be unique, so they are generated from 't' plus the trip date, plus a, b, c, etc. when there is more than one trip on a day (giving something like t2007-07-12b).</p>

<hr>

<p>Older logbooks (prior to 2007) were stored as logbook.txt with just a bit of consistent markup to allow troggle parsing.</p>

<p>The formatting was largely freeform, with a bit of markup ('===' around the header, bars separating date, <place> - <description>, and who) which allows the troggle import script to read it correctly. The underlines show who wrote the entry. There is also a format for time-underground info so it can be automagically tabulated.</p>
@@ -201,7 +208,7 @@ onwards you can just install mercurial and tortoisehg from synaptic, then restar

<p><Text of logbook entry></p>

<p>T/U: Jess 1 hr, Emma 0.5 hr</p>

<hr>
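Putting those fragments together, a complete old-style logbook.txt entry presumably looked something like this sketch (the header line is reconstructed from the markup description above, so the exact field order and date format are assumptions):

    ===2006-07-15|204 - Rigging the entrance series|Jess, Emma===
    <Text of logbook entry>
    T/U: Jess 1 hr, Emma 0.5 hr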
<h3><a id="tickingoff">Ticking off QMs</a></h3>
@@ -220,9 +227,12 @@ onwards you can just install mercurial and tortoisehg from synaptic, then restar

<p>Martin Green added the SURVTAB.CSV file to contain tabulated data for many caves around 1999, and a script to generate the index pages from it. Dave Loeffler added scripts and programs to generate the prospecting maps in 2004. The server moved to Mark Shinwell's machine in the early 2000s, and the VCS was updated to subversion.</p>

<p>In 2006 Aaron Curtis decided that a more modern set of generated, database-based pages made sense, and so wrote Troggle. This uses Django to generate pages. It reads in all the logbooks and surveys and provides a nice way to access them and to enter new data. It was separate for a while, until Martin Green added code to merge the old static pages and the new troggle dynamic pages into the same site. Work on Troggle still continues sporadically.</p>

<p>After expo 2009 the VCS was updated to hg, because a DVCS makes a great deal of sense for expo (where it goes offline for a month or two and nearly all the year's edits happen).</p>

-<p>The site was moved to Julian Todd's seagrass server, but the change from 32-bit to 64-bit machines broke the website autogeneration code, which was only fixed in early 2011, allowing the move to complete. The data has been split into 3 separate repositories: the website, troggle, the survey data, the tunnel data.</p>
+<p>The site was moved to Julian Todd's seagrass server, but the change from 32-bit to 64-bit machines broke the website autogeneration code, which was only fixed in early 2011, allowing the move to complete. The data has been split into four separate repositories: the website, troggle, the survey data, and the tunnel data. Seagrass was turned off at the end of 2013, and the site is now hosted by Sam Wenham at the university.</p>

<h3 id="automation">Automation on cucc.survex.com/expo</h3>