CUCC Expedition Handbook

Troggle Data Import

Troggle - Reset and import data

The stand-alone Python script databaseReset.py imports data from files into the troggle database (sqlite or MariaDB). It is separate from the process which runs troggle and serves the data as webpages (via apache), but it is plugged into the same hierarchy of django Python files.

In the troggle directory:

$ python databaseReset.py

Usage is 'python databaseReset.py <command> [runlabel]'
             where command is:
             reset     - normal usage: clear database and reread everything from files - time-consuming
             caves     - read in the caves
             logbooks  - read in the logbooks
             people    - read in the people from folk.csv
             QMs       - read in the QM csv files (older caves only)
             reinit    - clear database (delete everything) and make empty tables. Import nothing.
             scans     - the survey scans in all the wallets
             survex    - read in the survex files - all the survex blocks but not the x/y/z positions
             survexpos - just the x/y/z Pos out of the survex files

             tunnel    - read in the Tunnel files - which scans the survey scans too
             profile   - print the profile from previous runs. Import nothing.

             test         - testing...

             and [runlabel] is an optional string identifying this run of the script
             in the stored profiling data 'import-profile.json'

             caves and logbooks must be run on an empty db before the others as they
             set up db tables used by the others.
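The command-name dispatch described above could be sketched like this. This is a minimal illustration only, not the actual troggle code; the function names and the command table are hypothetical stand-ins:

```python
import sys

# Hypothetical import jobs -- in databaseReset.py the real jobs are
# queued and profiled, and there are more of them (people, QMs, scans,
# survex, tunnel, reset, reinit, ...).
def import_caves():
    print("reading caves")

def import_logbooks():
    print("reading logbooks")

COMMANDS = {
    "caves": import_caves,
    "logbooks": import_logbooks,
}

def main(argv):
    """Parse 'databaseReset.py <command> [runlabel]' and run the job."""
    if len(argv) < 2 or argv[1] not in COMMANDS:
        print("Usage: python databaseReset.py <command> [runlabel]")
        return 1
    # The optional runlabel tags this run in the stored profiling data.
    runlabel = argv[2] if len(argv) > 2 else None
    COMMANDS[argv[1]]()
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv))
```

Note the dependency constraint: on an empty database, caves and logbooks must run first because they create tables the other jobs use.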

On a clean computer with 16GB of memory and using sqlite, a complete import now takes about 10 minutes if nothing else is running. On the shared expo server it could take a couple of hours when the server is in use (we have only a share of it).

Here is an example of the output after it runs, showing which options were used recently and how long each option took (in seconds).

--   troggle.sqlite django.db.backends.sqlite3
** Running job  Profile
** Ended job Profile -  0.0 seconds total.
     days ago     -4.28    -4.13    -4.10   -3.03    -3.00
  runlabel (s)      svx      NULL   RESET    svx2    RESET2
    reinit (s)       -       1.9      1.9      -       1.8
     caves (s)       -        -      39.1      -      32.2
    people (s)       -        -      35.0      -      24.4
  logbooks (s)       -        -      86.5      -      67.3
       QMs (s)       -        -      19.3      -       0.0
survexblks (s)   1153.1       -    3917.0  1464.1   1252.9
 survexpos (s)    397.3       -     491.9   453.6    455.0
    tunnel (s)       -        -      25.5      -      23.1
     scans (s)       -        -      52.5      -      45.9
[This data is from May 2020 immediately after troggle had been ported from python2 to python3 but before the survex import was re-engineered. It now takes ~600s in total.]

The 'survexblks' option loaded all the survex files recursively, following the *include statements. It took a long time if memory was low and the operating system had to page a lot. This import has since been rewritten.

(That value of 0.0 seconds for QMs looks suspicious.)

The file import_profile.json holds these historic times. Delete it to get a clean slate.
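The stored timings can also be inspected directly. The sketch below assumes a layout of {runlabel: {job: seconds}} -- that layout is a guess, not verified against the actual file, so inspect it before relying on this:

```python
import json

def load_profile(path="import_profile.json"):
    """Load the stored per-run timing data (assumed JSON layout)."""
    with open(path) as f:
        return json.load(f)

def summarise(runs):
    """Return one formatted line per job across all stored runs,
    similar to the table printed by the 'profile' command."""
    jobs = sorted({job for timings in runs.values() for job in timings})
    lines = []
    for job in jobs:
        cells = "  ".join(f"{runs[r].get(job, '-'):>8}" for r in runs)
        lines.append(f"{job:>12} (s) {cells}")
    return lines
```

Deleting the JSON file, as noted above, simply discards this history and starts a clean slate.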


Return to: Troggle data model in python code