<h3 id="import">Importing the logbook into troggle</h3>
<p>This is usually done after expo, but it is an excellent idea to have a nerd do this a couple of times during expo to discover problems while the people are still around to ask.
<p>The nerd needs to log in to the expo server using <em>their own userid</em>, not the 'expo' userid. The nerd also needs to be in the group that is allowed to run 'sudo'.
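<p>A quick way to check the sudo requirement before going further (a sketch, assuming the server uses a group named 'sudo'; some systems use 'wheel' or 'adm' instead):
<pre><code># list the groups your userid belongs to; look for 'sudo'
id -nG
</code></pre>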
<p>The nerd needs to do this:
<ol>
<li>Look at the list of pre-existing old import errors at <br /><a href="http://expo.survex.com/admin/core/dataissue/">http://expo.survex.com/admin/core/dataissue/</a><br />
The nerd will have to log in to the troggle management console to do this, not just the usual troggle login.
<li>You need to get the list of people on expo sorted out first.<br />
This is documented in the <a href="folkupdate.html">Folk Update</a> process.
<li>Log in to the expo server and run the update script (see below for details)
<li>Watch the error messages scroll by; they are more detailed than the messages archived in the old import errors list.
<li>Edit the logbook.html file to fix the errors. These are usually typos, non-unique tripdate ids (see the duplicate-id check after this list) or unrecognised people. Unrecognised people may mean that you first have to fix the folk list using the <a href="folkupdate.html">Folk Update</a> process.
<li>Re-run the import script until you have got rid of all the import errors.
<li>Pat self on back. Future data managers and people trying to find missing surveys will worship you.
</ol>
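<p>A rough way to spot non-unique tripdate ids before running the import (a sketch, assuming the entry ids appear as HTML id attributes in the year's logbook.html):
<pre><code># print any id attribute value that occurs more than once
grep -o 'id="[^"]*"' logbook.html | sort | uniq -d
</code></pre>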
<p>The procedure is like this. It will be familiar to you because
you will have already done most of this for the <a href="folkupdate.html">Folk Update</a> process.
<pre><code>ssh {youruserid}@expo.survex.com
cd ~expo
cd troggle
sudo python databaseReset.py logbooks
</code></pre>
<p>It will produce a list of errors like those below, starting with the most recent logbook, which will be the one for the expo you are working on.
You can abort the script (Ctrl-C) once you have seen the errors for the current expo that you are going to fix.
<pre><code>Loading Logbook for: 2017
- Parsing logbook: 2017/logbook.html
- Using parser: Parseloghtmltxt
Calculating GetPersonExpeditionNameLookup for 2017
- Skipping logentry: Via Ferata: Intersport - Klettersteig - no author for entry
- No name match for: 'mike'
- No name match for: 'Mike'</code></pre>
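<p>The messages scroll past quickly, so it can be handy to keep a copy to study afterwards. This is a convenience using standard tools, not part of the official procedure:
<pre><code>sudo python databaseReset.py logbooks 2&gt;&amp;1 | tee logbook-import.log
</code></pre>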
<p>Errors are usually misplaced or duplicated &lt;hr /&gt; tags; names which are not specific enough for the parser to recognise (though it tries hard), such as "everyone" or "et al.", or which are simply missing; or a bit of description which has been put into the names section, such as "Goulash Regurgitation".
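<p>For the older text-format logbooks (described in the next section) a typical fix is to replace a vague attribution with names the parser can match against the folk list; the same idea applies to the HTML markup of recent logbooks. An illustrative before-and-after, with placeholder names:
<pre><code>Before:  ===2009-07-21|204 - Rigging entrance series|everyone===
After:   ===2009-07-21|204 - Rigging entrance series|Becka Lawson, Emma Wilson===
</code></pre>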
<h3 id="history">The logbooks format</h3>
<p>This is documented on the <a href="../logbooks.html#format">logbook user-documentation page</a>, as even expoers who can do nothing else technical can at least write up their logbook entries.
<p>Older logbooks (prior to 2007) were stored as logbook.txt with just a bit of consistent markup to allow troggle parsing.</p>
<p>The formatting was largely freeform, with a bit of markup ('===' around the header, and bars separating the date, &lt;place&gt; - &lt;description&gt;, and who went) which allows the troggle import script to read it correctly. Underlining shows who wrote the entry. There is also a format for time-underground info so it can be automagically tabulated.</p>
<p>So the format should be:</p>
<pre><code>===2009-07-21|204 - Rigging entrance series| Becka Lawson, Emma Wilson ===

{Text of logbook entry}

T/U: Jess 1 hr, Emma 0.5 hr
</code></pre>
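<p>For an old-format logbook.txt, a quick sanity check is to list the header lines and eyeball them for missing bars or mismatched '===' markers (a sketch, assuming each header starts at the beginning of a line):
<pre><code># show each entry header with its line number
grep -n '^===' logbook.txt
</code></pre>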
<p>
<a href="../logbooks.html">Back to Logbooks for Cavers</a> documentation.