diff --git a/handbook/troggle/namesredesign.html b/handbook/troggle/namesredesign.html
index d63de7d83..b726e15bb 100644
--- a/handbook/troggle/namesredesign.html
+++ b/handbook/troggle/namesredesign.html
@@ -19,30 +19,43 @@

-Names: Why we need a change

+Names: Why it is a problem

-The current system completely fails with names which are in any way "non standard".
-Troggle can't cope with a name not structured as

+The former system completely failed with names which are in any way "non standard".
+Troggle couldn't cope with a name not structured as

"Forename Surname": where it is only two words and each begins with a capital letter (with no other punctuation, capital letters or other names or initials).
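To make the constraint concrete: a minimal sketch (illustrative only, not troggle's actual code) of the only name shape the old parsers coped with, and the kinds of real names that fall outside it. The names used are illustrative.

```python
import re

# Illustrative sketch of the old assumption: exactly two words, each a
# capital letter followed by lower-case letters, nothing else.
TWO_WORD_NAME = re.compile(r"^[A-Z][a-z]+ [A-Z][a-z]+$")

print(bool(TWO_WORD_NAME.match("Jane Smith")))           # True: fits the mould
print(bool(TWO_WORD_NAME.match("Ruairidh MacLeod")))     # False: internal capital
print(bool(TWO_WORD_NAME.match("Lydia-Clare Leather")))  # False: hyphen
print(bool(TWO_WORD_NAME.match("Wookey")))               # False: one word only
```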

-There are 19 people for which the troggle name parsing and the separate folklist script parsing
-are different. Reconciling these (find easily using a link checker scanner on the
-folk/.index.htm file) is a job that needs to be done. Every name in the generated
-index.htm now has a hyperlink which goes to the troggle page about that person. Except
-for those 19 people.

+There were 19 people for which the troggle name parsing and the separate folklist script parsing
+were different.

-This has to be fixed as it affects ~5% of our expoers.

[This document originally written 31 August 2022]

Names: Maintenance constraints

We have special code scattered across troggle to cope with "Wookey", "Wiggy" and "Mike the Animal". This is a pain to maintain.

-Names: How it works now

-Fundamentally we have regexes detecting whether something is a name or not - in several places. These should all be replaced by properly delimited strings.

+Names: How it works

+Fundamentally we have regexes detecting whether something is a name or not - in several places in the different types of raw data. However we do now use unique 'slugs' for the references between pages (since Sept. 2023).

Four different bits

+Frankly it's amazing it even appears to work at all.

+In urls.py we used to have
+
+re_path(r'^person/(?P<first_name>[A-Z]*[a-z\-\'&;]*)[^a-zA-Z]*(?P<last_name>[a-z\-\']*[^a-zA-Z]*[\-]*[A-Z]*[a-zA-Z\-&;]*)/?', person, name="person"),
@@ -50,23 +63,19 @@ This has to be fixed as it affects ~5% of our expoers.
 re_path('wallets/person/(?P<first_name>[A-Z]*[a-z\-\'&;]*)[^a-zA-Z]*(?P<last_name>[a-z\-\']*[^a-zA-Z]*[\-]*[A-Z]*[a-zA-Z\-&;]*)/?', walletslistperson, name="walletslistperson"),

-where the transmission noise is attmpting to recognise a name and split it into <first_name> and <last_name>.
-Naturally this fails horribly even for relatively straightforward names such as Ruairidh MacLeod.

  • We have the folklist script holding "Forename Surname (nickname)" and "Surname" as the first two columns in the CSV file. These are used by the standalone script to produce the /folk/index.html which is run manually, and which is also parsed by troggle (by a regex in parsers/people.py) only when a full data import is done. Which it gets wrong for people like Lydia-Clare Leather and various 'von' and 'de' middle 'names', McLean, MacLeod and McAdam.
  • We have the *team notes Becka Lawson lines in all our survex files which are parsed (by regexes in parsers/survex.py) only when a full data import is done.
  • We have the <div class="trippeople"><u>Luke</u>, Hannah</div> trip people line in each logbook entry. These are recognised by a regex in parsers/logbooks.py only when a full data import is done.
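As a sketch of what that last recogniser has to do (simplified; the real regex in parsers/logbooks.py is more involved), assuming exactly the div format quoted above:

```python
import re

# Simplified sketch of trip-people extraction from a logbook entry:
# grab the div content, strip the <u>...</u> markup, split on commas.
entry = '<div class="trippeople"><u>Luke</u>, Hannah</div>'

inner = re.search(r'<div class="trippeople">(.*?)</div>', entry).group(1)
people = [p.strip() for p in re.sub(r"</?u>", "", inner).split(",")]
print(people)  # ['Luke', 'Hannah']
```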

-Frankly it's amazing it even appears to work at all.
+where the 'transmission noise' is attempting to recognise a name and split it into <first_name> and <last_name>.
+Naturally this failed horribly even for relatively straightforward names such as Ruairidh MacLeod.
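The failure is easy to reproduce. The fragment below runs the old urls.py pattern as a plain Python regex (with the named groups <first_name> and <last_name> referenced in the text); the space-separated URL strings are purely illustrative.

```python
import re

# The old person-URL recogniser from urls.py, as a plain regex.
OLD_PERSON = re.compile(
    r"^person/(?P<first_name>[A-Z]*[a-z\-'&;]*)[^a-zA-Z]*"
    r"(?P<last_name>[a-z\-']*[^a-zA-Z]*[\-]*[A-Z]*[a-zA-Z\-&;]*)/?"
)

m = OLD_PERSON.match("person/Jane Smith")
print(m.group("first_name"), m.group("last_name"))  # Jane Smith

# A hyphenated forename defeats it: the split lands in the wrong place
# and the surname is silently dropped.
m = OLD_PERSON.match("person/Lydia-Clare Leather")
print(m.group("first_name"), m.group("last_name"))  # Lydia- Clare
```

Note that the second match still "succeeds", so the breakage is silent rather than an error.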

+We now [October 2023] have
+
+    path('person/<slug:slug>', person, name="person"),
+
+    path('personexpedition/<slug:slug>/<int:year>', personexpedition, name="personexpedition"),
+
+    path('wallets/person/<slug:slug>', walletslistperson, name="walletslistperson"),
+
+which is a lot easier to maintain.
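The scheme only works if every importer derives the identical slug from the same name. Troggle's real slug function is not shown here; `name_to_slug` below is a hypothetical sketch of the idea:

```python
import re
import unicodedata

def name_to_slug(name: str) -> str:
    """Hypothetical sketch: lower-case, drop accents, and collapse
    everything that isn't a letter or digit into single hyphens."""
    ascii_name = (unicodedata.normalize("NFKD", name)
                  .encode("ascii", "ignore").decode("ascii"))
    return re.sub(r"[^a-z0-9]+", "-", ascii_name.lower()).strip("-")

print(name_to_slug("Ruairidh MacLeod"))     # ruairidh-macleod
print(name_to_slug("Lydia-Clare Leather"))  # lydia-clare-leather
print(name_to_slug("Mike the Animal"))      # mike-the-animal
```

A slug like this then matches path('person/<slug:slug>', ...) directly, with no name-splitting at URL-parse time.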

    Troggle folk data importing

@@ -91,11 +100,6 @@ and trying to fix this breaks something else (weirdly, not fully investigated).

There seems to be a problem with importing blurbs with more than one image file, even though the code in people.py only looks for the first image file but then fails to use it.]

-Proposal

-I would start by replacing the recognisers in urls.py with a slug for an arbitrary text string, and interpreting it in the python code handling the page.
-This would entail replacing all the database parsing bits to produce the same slug in the same way.

-At that point we should get the 19 people into the system even if all the other crumdph is still there.
-Then we take a deep breath and look at it all again.

    Folk: pending possible improvements