forked from expo/troggle

Compare commits: Faster-sur...old-master (92 commits)
| SHA1 |
|---|
| 37553da556 |
| 8861e2e240 |
| 09e9932711 |
| 7fe34bedb8 |
| d134a58931 |
| 90a5524036 |
| 69f72184a6 |
| e0d8df0a79 |
| 15d4defe0e |
| 9052982089 |
| 0a35824b9c |
| bc5c0b9e53 |
| e873dedcf2 |
| a0c5a34b3f |
| 6c3c70a02c |
| 43394facdf |
| d5b4a0b1d9 |
| 8feb1774bb |
| d55a58bfc8 |
| fffb083aee |
| b9aa447cac |
| 932b1a2ae3 |
| 367854c9a6 |
| c76aed3bf6 |
| 079f528963 |
| 972e6f3a95 |
| 7af6c3cb9c |
| 501a5122d8 |
| 35f85c55f1 |
| b69bdcd126 |
| 49d5857b36 |
| 40ad04b79f |
| a3e564855a |
| 15d0d05185 |
| 819eca5dea |
| edbe793c68 |
| e017c6effc |
| d4ac28af18 |
| 931aa4e3cb |
| cc4017e481 |
| 38adb9a52f |
| ccc5813b3f |
| 314d0e8b71 |
| 0338889905 |
| 876cd8909f |
| ac7cb45f61 |
| f326bf9148 |
| b1596c0ac4 |
| 13d3f37f05 |
| e4290c4ab0 |
| 2918b4b92c |
| 39c622d5bf |
| 76a6b501f3 |
| ecf92e2079 |
| b4c0c4d219 |
| 4be8c81291 |
| a8460065a4 |
| 2b39dec560 |
| 0b85a9d330 |
| b123f6ada7 |
| e5c288c764 |
| 9db7d8e589 |
| 5e48687347 |
| 09bbf81915 |
| 78f8ea2b5b |
| e08b4275a9 |
| ac9f3cf061 |
| 98fd314a62 |
| 79a31a41f9 |
| 6aae9083c3 |
| d71e31417b |
| fbe6c0c859 |
| 53b797fb53 |
| 98eb9173ee |
| ecfa95310d |
| 0e75a9163b |
| 59633d94f5 |
| 53206ad1d7 |
| 9aa91bf3e2 |
| 867479e05d |
| bb1f69dd90 |
| d219f7b966 |
| 3f812e5275 |
| cdef395f89 |
| 66f6a9ce90 |
| b07c888c7a |
| d170a3c36e |
| 429c21a8e9 |
| 8c10908353 |
| e0963a1c39 |
| 9a7a1728a4 |
| 240c7eff10 |
.gitignore (vendored): 20 changed lines
@@ -14,3 +14,23 @@ media/images/*
.vscode/*
.swp
imagekit-off/
localsettings-expo-live.py
.gitignore
desktop.ini
troggle-reset.log
troggle-reset0.log
troggle-surveys.log
troggle.log
troggle.sqlite
troggle.sqlite.0
troggle.sqlite.1
my_project.dot
memdump.sql
troggle-sqlite.sql
import_profile.json
import_times.json
ignored-files.log
tunnel-import.log
posnotfound
troggle.sqlite-journal
loadsurvexblks.log
README.txt: 102 changed lines
@@ -8,13 +8,71 @@ Troggle setup

Python, Django, and Database setup
-----------------------------------
Troggle requires Django 1.4 or greater, and any version of Python that works with it.
It is currently (Feb. 2020) on Django 1.7.11 (1.7.11-1+deb8u5).
Install Django with the following command:

apt-get install python-django (on debian/ubuntu)
sudo apt install python-django (on debian/ubuntu) -- does not work now, as we need a specific version

If you want to use MySQL or PostgreSQL, download and install them. However, you can also use Django with SQLite3, which is included in Python and thus requires no extra installation.

requirements.txt:
Django==1.7.11
django-registration==2.1.2
mysql
#imagekit
#django-imagekit
Image
django-tinymce==2.7.0
smartencoding
unidecode

Install like this:
sudo apt install pip # does not work on Ubuntu 20.04 for Python 2.7. Have to install from source. Use 18.04.
pip install django==1.7
pip install django-tinymce==2.0.1
sudo apt install libfreetype6-dev
pip install django-registration==2.0
pip install unidecode
pip install --no-cache-dir pillow==2.7.0 # fails horribly on Ubuntu 20.04
pip install --no-cache-dir pillow # installs on Ubuntu 20.04, don't know if it works though
If you want to use MySQL or PostgreSQL, download and install them.
However, you can also use Django with SQLite3, which is included in Python and thus requires no extra installation.
pip install pygraphviz
apt install survex

pip install django-extensions
pip install pygraphviz # fails to install
pip install pyparsing pydot # installs fine
django extension graph_models # https://django-extensions.readthedocs.io/en/latest/graph_models.html

Or use a Python 3 virtual environment (python3.5, not later):
$ cd troggle
$ cd ..
$ python3.5 -m venv pyth35d2
(creates a folder with the virtual env)
cd pyth35d2
bin/activate
(now install everything - not working yet..)
$ pip install -r requirements.txt
MariaDB database
----------------
Start it up with
$ sudo mysql -u -p
which will prompt you to type in the password. Get this by reading the settings.py file in use on the server.
then
> CREATE DATABASE troggle;
> use troggle;
> exit;

Note the semicolons.

You can check the status of the db service:
$ sudo systemctl status mysql

You can start and stop the db service with
$ sudo systemctl restart mysql.service
$ sudo systemctl stop mysql.service
$ sudo systemctl start mysql.service
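For orientation, the database created above is what localsettings.py must point Django at. A minimal sketch of the corresponding entry (the user name and password here are placeholders, not the real server values):

```python
# Hypothetical localsettings.py fragment matching the MariaDB setup above.
# Real credentials live in the settings.py in use on the server.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',  # Django's standard MySQL/MariaDB backend
        'NAME': 'troggle',                     # the database created above
        'USER': 'expo',                        # placeholder
        'PASSWORD': '<from-server-settings>',  # placeholder
        'HOST': 'localhost',
        'PORT': '',
    }
}
```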
Troggle itself
--------------
@@ -29,10 +87,15 @@ If you want to work on the source code and be able to commit, your account will

Next, you need to fill in your local settings. Copy either localsettingsubuntu.py or localsettingsserver.py to a new file called localsettings.py. Follow the instructions contained in the file to fill out your settings.

Setting up survex
-----------------
You need to have survex installed, as the command line 'cavern' is used as part of the survex import process.

Setting up tables and importing legacy data
-------------------------------------------
Run "python databaseReset.py reset" from the troggle directory.
Run "sudo python databaseReset.py reset" from the troggle directory.

Once troggle is running, you can also log in and then go to "Import / export" data under "admin" on the menu.
@@ -42,7 +105,38 @@ folk/folk.csv table - a year doesn't exist until that is done.

Running a Troggle server
------------------------
For high volume use, Troggle should be run using a web server like apache. However, a quick way to get started is to use the development server built into Django.
For high volume use, Troggle should be run using a web server like apache. However, a quick way to get started is to use the development server built into Django. This is limited though: directory redirection needs apache.

To do this, run "python manage.py runserver" from the troggle directory.
Running a Troggle server with Apache
------------------------------------
Troggle also needs these aliases to be configured. These are set in
/home/expo/config/apache/expo.conf
on the expo server.

At least these need setting:
DocumentRoot /home/expo/expoweb
WSGIScriptAlias / /home/expo/troggle/wsgi.py
<Directory /home/expo/troggle>
    <Files wsgi.py>
        Require all granted
    </Files>
</Directory>

Alias /expofiles /home/expo/expofiles
Alias /photos /home/expo/webphotos
Alias /map /home/expo/expoweb/map
Alias /javascript /usr/share/javascript
Alias /static/ /home/expo/static/
ScriptAlias /repositories /home/expo/config/apache/services/hgweb/hgweb.cgi

(The last is just for mercurial, which will be removed during 2020.)

Unlike the "runserver" method, apache requires a restart before it will use
any changed files:

apache2ctl stop
apache2ctl start
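The WSGIScriptAlias line above hands all requests to troggle's wsgi.py. For reference, a minimal Django WSGI entry point of the kind expected there (a sketch, not the actual file on the server):

```python
# Minimal Django WSGI entry point of the sort WSGIScriptAlias expects.
# The real file is /home/expo/troggle/wsgi.py; this is only an illustration.
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings')
application = get_wsgi_application()  # mod_wsgi looks up this callable
```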
README/index.html (Normal file): 27 lines
@@ -0,0 +1,27 @@
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Troggle - Coding Documentation</title>
<link rel="stylesheet" type="text/css" href="../media/css/main2.css" />
</head>
<body>
<h1>Troggle Code - README</h1>
<h2>Contents of README.txt file</h2>

<iframe name="erriframe" width="90%" height="45%"
src="../readme.txt" frameborder="1" ></iframe>

<h2>Troggle documentation in the Expo Handbook</h2>
<ul>
<li><a href="http://expo.survex.com/handbook/troggle/trogintro.html">Intro</a>
<li><a href="http://expo.survex.com/handbook/troggle/trogmanual.html">Troggle manual</a>
<li><a href="http://expo.survex.com/handbook/troggle/trogarch.html">Troggle data model</a>
<li><a href="http://expo.survex.com/handbook/troggle/trogimport.html">Troggle importing data</a>
<li><a href="http://expo.survex.com/handbook/troggle/trogdesign.html">Troggle design decisions</a>
<li><a href="http://expo.survex.com/handbook/troggle/trogdesignx.html">Troggle future architectures</a>
<li><a href="http://expo.survex.com/handbook/troggle/trogsimpler.html">a kinder simpler Troggle?</a>
</ul>
<hr />
</body></html>
core/admin.py
@@ -50,10 +50,10 @@ class QMsFoundInline(admin.TabularInline):
    extra=1


class PhotoInline(admin.TabularInline):
    model = DPhoto
    exclude = ['is_mugshot' ]
    extra = 1
# class PhotoInline(admin.TabularInline):
#     model = DPhoto
#     exclude = ['is_mugshot' ]
#     extra = 1


class PersonTripInline(admin.TabularInline):

@@ -67,7 +67,8 @@ class LogbookEntryAdmin(TroggleModelAdmin):
    prepopulated_fields = {'slug':("title",)}
    search_fields = ('title','expedition__year')
    date_heirarchy = ('date')
    inlines = (PersonTripInline, PhotoInline, QMsFoundInline)
    # inlines = (PersonTripInline, PhotoInline, QMsFoundInline)
    inlines = (PersonTripInline, QMsFoundInline)
    class Media:
        css = {
            "all": ("css/troggleadmin.css",)

@@ -116,7 +117,7 @@ class EntranceAdmin(TroggleModelAdmin):
    search_fields = ('caveandentrance__cave__kataster_number',)


admin.site.register(DPhoto)
#admin.site.register(DPhoto)
admin.site.register(Cave, CaveAdmin)
admin.site.register(Area)
#admin.site.register(OtherCaveName)
core/management/commands/reset_db.py
@@ -1,183 +1,33 @@
from django.core.management.base import BaseCommand, CommandError
from optparse import make_option
from troggle.core.models import Cave
import settings
import os
from optparse import make_option

from django.db import connection
from django.core import management
from django.contrib.auth.models import User
from django.core.urlresolvers import reverse
from django.core.management.base import BaseCommand, CommandError
from django.contrib.auth.models import User

from troggle.core.models import Cave, Entrance
import troggle.flatpages.models

databasename=settings.DATABASES['default']['NAME']
expouser=settings.EXPOUSER
expouserpass=settings.EXPOUSERPASS
expouseremail=settings.EXPOUSER_EMAIL
import settings

"""Pretty much all of this is now replaced by databaseReset.py
I don't know why this still exists. Needs testing to see if
removing it makes django misbehave.
"""

class Command(BaseCommand):
    help = 'This is normal usage, clear database and reread everything'
    help = 'Removed as redundant - use databaseReset.py'

    option_list = BaseCommand.option_list + (
        make_option('--reset',
            action='store_true',
            dest='reset',
            default=False,
            help='Reset the entire DB from files'),
            help='Removed as redundant'),
        )

    def handle(self, *args, **options):
        print(args)
        print(options)
        if "desc" in args:
            self.resetdesc()
        elif "scans" in args:
            self.import_surveyscans()
        elif "caves" in args:
            self.reload_db()
            self.make_dirs()
            self.pageredirects()
            self.import_caves()
        elif "people" in args:
            self.import_people()
        elif "QMs" in args:
            self.import_QMs()
        elif "tunnel" in args:
            self.import_tunnelfiles()
        elif options['reset']:
            self.reset(self)
        elif "survex" in args:
            self.import_survex()
        elif "survexpos" in args:
            import parsers.survex
            parsers.survex.LoadPos()
        elif "logbooks" in args:
            self.import_logbooks()
        elif "autologbooks" in args:
            self.import_auto_logbooks()
        elif "dumplogbooks" in args:
            self.dumplogbooks()
        elif "writeCaves" in args:
            self.writeCaves()
        elif options['foo']:
            self.stdout.write(self.style.WARNING('Testing....'))
        else:
            #self.stdout.write("%s not recognised" % args)
            #self.usage(options)
            self.stdout.write("poo")
        #print(args)

    def reload_db(obj):
        if settings.DATABASES['default']['ENGINE'] == 'django.db.backends.sqlite3':
            try:
                os.remove(databasename)
            except OSError:
                pass
        else:
            cursor = connection.cursor()
            cursor.execute("DROP DATABASE %s" % databasename)
            cursor.execute("CREATE DATABASE %s" % databasename)
            cursor.execute("ALTER DATABASE %s CHARACTER SET=utf8" % databasename)
            cursor.execute("USE %s" % databasename)
        management.call_command('migrate', interactive=False)
        # management.call_command('syncdb', interactive=False)
        user = User.objects.create_user(expouser, expouseremail, expouserpass)
        user.is_staff = True
        user.is_superuser = True
        user.save()

    def make_dirs(obj):
        """Make directories that troggle requires"""
        # should also deal with permissions here.
        if not os.path.isdir(settings.PHOTOS_ROOT):
            os.mkdir(settings.PHOTOS_ROOT)

    def import_caves(obj):
        import parsers.caves
        print("Importing Caves")
        parsers.caves.readcaves()

    def import_people(obj):
        import parsers.people
        parsers.people.LoadPersonsExpos()

    def import_logbooks(obj):
        # The below line was causing errors I didn't understand (it said LOGFILE was a string), and I couldn't be bothered to figure
        # what was going on so I just catch the error with a try. - AC 21 May
        try:
            settings.LOGFILE.write('\nBegun importing logbooks at ' + time.asctime() + '\n' + '-' * 60)
        except:
            pass

        import parsers.logbooks
        parsers.logbooks.LoadLogbooks()

    def import_survex(obj):
        import parsers.survex
        parsers.survex.LoadAllSurvexBlocks()
        parsers.survex.LoadPos()

    def import_QMs(obj):
        import parsers.QMs

    def import_surveys(obj):
        import parsers.surveys
        parsers.surveys.parseSurveys(logfile=settings.LOGFILE)

    def import_surveyscans(obj):
        import parsers.surveys
        parsers.surveys.LoadListScans()

    def import_tunnelfiles(obj):
        import parsers.surveys
        parsers.surveys.LoadTunnelFiles()

    def reset(self, mgmt_obj):
        """ Wipe the troggle database and import everything from legacy data
        """
        self.reload_db()
        self.make_dirs()
        self.pageredirects()
        self.import_caves()
        self.import_people()
        self.import_surveyscans()
        self.import_survex()
        self.import_logbooks()
        self.import_QMs()
        try:
            self.import_tunnelfiles()
        except:
            print("Tunnel files parser broken.")

        self.import_surveys()

    def pageredirects(obj):
        for oldURL, newURL in [("indxal.htm", reverse("caveindex"))]:
            f = troggle.flatpages.models.Redirect(originalURL=oldURL, newURL=newURL)
            f.save()

    def writeCaves(obj):
        for cave in Cave.objects.all():
            cave.writeDataFile()
        for entrance in Entrance.objects.all():
            entrance.writeDataFile()

    def troggle_usage(obj):
        print("""Usage is 'manage.py reset_db <command>'
              where command is:
              reset - this is normal usage, clear database and reread everything
              desc
              caves - read in the caves
              logbooks - read in the logbooks
              autologbooks
              dumplogbooks
              people
              QMs - read in the QM files
              resetend
              scans - read in the scanned surveynotes
              survex - read in the survex files
              survexpos
              tunnel - read in the Tunnel files
              writeCaves
              """)
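Since the class above is a Django management command named reset_db, it was driven as `python manage.py reset_db <command>`; programmatically the equivalent uses Django's call_command API (an illustration, not code from the diff):

```python
# Illustrative programmatic invocation of the now-deprecated command,
# equivalent to running `python manage.py reset_db caves` at the shell.
from django.core import management

management.call_command('reset_db', 'caves')
```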
core/models.py
@@ -39,10 +39,8 @@ try:
                        filename=settings.LOGFILE,
                        filemode='w')
except:
    subprocess.call(settings.FIX_PERMISSIONS)
    logging.basicConfig(level=logging.DEBUG,
                        filename=settings.LOGFILE,
                        filemode='w')
    # Opening of file for writing is going to fail currently, so decide it doesn't matter for now
    pass

#This class is for adding fields and methods which all of our models will have.
class TroggleModel(models.Model):

@@ -457,7 +455,7 @@ class Cave(TroggleModel):
        return urlparse.urljoin(settings.URL_ROOT, reverse('cave',kwargs={'cave_id':href,}))

    def __unicode__(self, sep = u": "):
        return unicode(self.slug())
        return unicode("slug:"+self.slug())

    def get_QMs(self):
        return QM.objects.filter(found_by__cave_slug=self.caveslug_set.all())

@@ -780,31 +778,32 @@ class QM(TroggleModel):
    def wiki_link(self):
        return u"%s%s%s" % ('[[QM:',self.code(),']]')

photoFileStorage = FileSystemStorage(location=settings.PHOTOS_ROOT, base_url=settings.PHOTOS_URL)
class DPhoto(TroggleImageModel):
    caption = models.CharField(max_length=1000,blank=True,null=True)
    contains_logbookentry = models.ForeignKey(LogbookEntry,blank=True,null=True)
    contains_person = models.ManyToManyField(Person,blank=True,null=True)
    file = models.ImageField(storage=photoFileStorage, upload_to='.',)
    is_mugshot = models.BooleanField(default=False)
    contains_cave = models.ForeignKey(Cave,blank=True,null=True)
    contains_entrance = models.ForeignKey(Entrance, related_name="photo_file",blank=True,null=True)
#photoFileStorage = FileSystemStorage(location=settings.PHOTOS_ROOT, base_url=settings.PHOTOS_URL)
#class DPhoto(TroggleImageModel):
    #caption = models.CharField(max_length=1000,blank=True,null=True)
    #contains_logbookentry = models.ForeignKey(LogbookEntry,blank=True,null=True)
    #contains_person = models.ManyToManyField(Person,blank=True,null=True)
    # replace link to copied file with link to original file location
    #file = models.ImageField(storage=photoFileStorage, upload_to='.',)
    #is_mugshot = models.BooleanField(default=False)
    #contains_cave = models.ForeignKey(Cave,blank=True,null=True)
    #contains_entrance = models.ForeignKey(Entrance, related_name="photo_file",blank=True,null=True)
    #nearest_survey_point = models.ForeignKey(SurveyStation,blank=True,null=True)
    nearest_QM = models.ForeignKey(QM,blank=True,null=True)
    lon_utm = models.FloatField(blank=True,null=True)
    lat_utm = models.FloatField(blank=True,null=True)
    #nearest_QM = models.ForeignKey(QM,blank=True,null=True)
    #lon_utm = models.FloatField(blank=True,null=True)
    #lat_utm = models.FloatField(blank=True,null=True)

    class IKOptions:
        spec_module = 'core.imagekit_specs'
        cache_dir = 'thumbs'
        image_field = 'file'
    # class IKOptions:
    #     spec_module = 'core.imagekit_specs'
    #     cache_dir = 'thumbs'
    #     image_field = 'file'

    #content_type = models.ForeignKey(ContentType)
    #object_id = models.PositiveIntegerField()
    #location = generic.GenericForeignKey('content_type', 'object_id')

    def __unicode__(self):
        return self.caption
    # def __unicode__(self):
    #     return self.caption

scansFileStorage = FileSystemStorage(location=settings.SURVEY_SCANS, base_url=settings.SURVEYS_URL)
def get_scan_path(instance, filename):

core/models_survex.py
@@ -147,7 +147,7 @@ class SurvexBlock(models.Model):
            return ssl[0]
        #print name
        ss = SurvexStation(name=name, block=self)
        ss.save()
        #ss.save()
        return ss

    def DayIndex(self):

@@ -197,6 +197,9 @@ class SurvexScansFolder(models.Model):

    def get_absolute_url(self):
        return urlparse.urljoin(settings.URL_ROOT, reverse('surveyscansfolder', kwargs={"path":re.sub("#", "%23", self.walletname)}))

    def __unicode__(self):
        return unicode(self.walletname) + " (Survey Scans Folder)"

class SurvexScanSingle(models.Model):
    ffile = models.CharField(max_length=200)

@@ -208,6 +211,9 @@ class SurvexScanSingle(models.Model):

    def get_absolute_url(self):
        return urlparse.urljoin(settings.URL_ROOT, reverse('surveyscansingle', kwargs={"path":re.sub("#", "%23", self.survexscansfolder.walletname), "file":self.name}))

    def __unicode__(self):
        return "Survey Scan Image: " + unicode(self.name) + " in " + unicode(self.survexscansfolder)


class TunnelFile(models.Model):
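The scansFileStorage/get_scan_path pair above is the standard Django pattern of an explicit FileSystemStorage plus an upload_to callable; a minimal self-contained sketch of the same pattern (paths here are placeholders, not the real troggle settings):

```python
# Minimal sketch of the storage pattern used by scansFileStorage above.
from django.core.files.storage import FileSystemStorage

scans_storage = FileSystemStorage(location='/tmp/scans', base_url='/scans/')
# A model field would then use it as:
#   ffile = models.FileField(storage=scans_storage, upload_to=get_scan_path)
```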
@@ -3,7 +3,7 @@ from django.utils.html import conditional_escape
from django.template.defaultfilters import stringfilter
from django.utils.safestring import mark_safe
from django.conf import settings
from troggle.core.models import QM, DPhoto, LogbookEntry, Cave
from troggle.core.models import QM, LogbookEntry, Cave
import re, urlparse

register = template.Library()

@@ -120,13 +120,13 @@ def wiki_to_html_short(value, autoescape=None):
        except KeyError:
            linkText=None

        try:
            photo=DPhoto.objects.get(file=matchdict['photoName'])
            if not linkText:
                linkText=str(photo)
            res=r'<a href=' + photo.get_admin_url() +'>' + linkText + '</a>'
        except Photo.DoesNotExist:
            res = r'<a class="redtext" href="">make new photo</a>'
        # try:
        #     photo=DPhoto.objects.get(file=matchdict['photoName'])
        #     if not linkText:
        #         linkText=str(photo)
        #     res=r'<a href=' + photo.get_admin_url() +'>' + linkText + '</a>'
        # except Photo.DoesNotExist:
        #     res = r'<a class="redtext" href="">make new photo</a>'
        return res

    def photoSrcRepl(matchobj):
core/views_caves.py (Normal file → Executable file): 139 changed lines
@@ -1,5 +1,7 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import (absolute_import, division,
                        print_function, unicode_literals)

from troggle.core.models import CaveSlug, Cave, CaveAndEntrance, Survey, Expedition, QM, CaveDescription, EntranceSlug, Entrance, Area, SurvexStation
from troggle.core.forms import CaveForm, CaveAndEntranceFormSet, VersionControlCommentForm, EntranceForm, EntranceLetterForm

@@ -7,18 +9,44 @@ import troggle.core.models as models
import troggle.settings as settings
from troggle.helper import login_required_if_public

from PIL import Image, ImageDraw, ImageFont
from django.forms.models import modelformset_factory
from django import forms
from django.core.urlresolvers import reverse
from django.http import HttpResponse, HttpResponseRedirect
from django.conf import settings
import re, urlparse
import re
import os
import urlparse
#import urllib.parse
from django.shortcuts import get_object_or_404, render
import settings


from PIL import Image, ImageDraw, ImageFont
import string, os, sys, subprocess

class MapLocations(object):
    p = [
        ("laser.0_7", "BNase", "Reference", "Bräuning Nase laser point"),
        ("226-96", "BZkn", "Reference", "Bräuning Zinken trig point"),
        ("vd1","VD1","Reference", "VD1 survey point"),
        ("laser.kt114_96","HSK","Reference", "Hinterer Schwarzmooskogel trig point"),
        ("2000","Nipple","Reference", "Nipple (Weiße Warze)"),
        ("3000","VSK","Reference", "Vorderer Schwarzmooskogel summit"),
        ("topcamp", "OTC", "Reference", "Old Top Camp"),
        ("laser.0", "LSR0", "Reference", "Laser Point 0"),
        ("laser.0_1", "LSR1", "Reference", "Laser Point 0/1"),
        ("laser.0_3", "LSR3", "Reference", "Laser Point 0/3"),
        ("laser.0_5", "LSR5", "Reference", "Laser Point 0/5"),
        ("225-96", "BAlm", "Reference", "Bräuning Alm trig point")
    ]
    def points(self):
        for ent in Entrance.objects.all():
            if ent.best_station():
                areaName = ent.caveandentrance_set.all()[0].cave.getArea().short_name
                self.p.append((ent.best_station(), "%s-%s" % (areaName, str(ent)[5:]), ent.needs_surface_work(), str(ent)))
        return self.p

    def __str__(self):
        return "{} map locations".format(len(self.p))

def getCave(cave_id):
    """Returns a cave object when given a cave name or number. It is used by views including cavehref, ent, and qm."""

@@ -217,7 +245,7 @@ def qm(request,cave_id,qm_id,year,grade=None):
        return render(request,'qm.html',locals())

    except QM.DoesNotExist:
        url=urlparse.urljoin(settings.URL_ROOT, r'/admin/core/qm/add/'+'?'+ r'number=' + qm_id)
        url=urllib.parse.urljoin(settings.URL_ROOT, r'/admin/core/qm/add/'+'?'+ r'number=' + qm_id)
        if grade:
            url += r'&grade=' + grade
        return HttpResponseRedirect(url)

@@ -240,7 +268,7 @@ def entranceSlug(request, slug):

def survexblock(request, survexpath):
    survexpath = re.sub("/", ".", survexpath)
    print "jjjjjj", survexpath
    print("jjjjjj", survexpath)
    survexblock = models.SurvexBlock.objects.get(survexpath=survexpath)
    #ftext = survexblock.filecontents()
    ftext = survexblock.text

@@ -335,7 +363,7 @@ maps = {
            "Grießkogel Area"],
}

for n in maps.keys():
for n in list(maps.keys()):
    L, T, R, B, S, name = maps[n]
    W = (R-L)/2
    H = (T-B)/2

@@ -371,6 +399,7 @@ areacolours = {
for FONT in [
    "/usr/share/fonts/truetype/freefont/FreeSans.ttf",
    "/usr/X11R6/lib/X11/fonts/truetype/arial.ttf",
    "/mnt/c/windows/fonts/arial.ttf",
    "C:\WINNT\Fonts\ARIAL.TTF"
]:
    if os.path.isfile(FONT): break

@@ -406,7 +435,7 @@ def plot(surveypoint, number, point_type, label, mapcode, draw, img):
    ss = SurvexStation.objects.lookup(surveypoint)
    E, N = ss.x, ss.y
    shortnumber = number.replace("—","")
    (x,y) = map(int, mungecoord(E, N, mapcode, img))
    (x,y) = list(map(int, mungecoord(E, N, mapcode, img)))
    #imgmaps[maparea].append( [x-4, y-SIZE/2, x+4+draw.textsize(shortnumber)[0], y+SIZE/2, shortnumber, label] )
    draw.rectangle([(x+CIRCLESIZE, y-TEXTSIZE/2), (x+CIRCLESIZE*2+draw.textsize(shortnumber)[0], y+TEXTSIZE/2)], fill="#ffffff")
    draw.text((x+CIRCLESIZE * 1.5,y-TEXTSIZE/2), shortnumber, fill="#000000")

@@ -418,44 +447,44 @@ def prospecting_image(request, name):

    mainImage = Image.open(os.path.join(settings.SURVEY_SCANS, "location_maps", "pguidemap.jpg"))
    if settings.PUBLIC_SITE and not request.user.is_authenticated():
        mainImage = Image.new("RGB", mainImage.size, '#ffffff')
        mainImage = Image.new("RGB", mainImage.size, '#ffffff')
    m = maps[name]
    #imgmaps = []
    if name == "all":
        img = mainImage
        img = mainImage
    else:
        M = maps['all']
        W, H = mainImage.size
        l = int((m[L] - M[L]) / (M[R] - M[L]) * W)
        t = int((m[T] - M[T]) / (M[B] - M[T]) * H)
        r = int((m[R] - M[L]) / (M[R] - M[L]) * W)
        b = int((m[B] - M[T]) / (M[B] - M[T]) * H)
        img = mainImage.crop((l, t, r, b))
        w = int(round(m[ZOOM] * (m[R] - m[L]) / (M[R] - M[L]) * W))
        h = int(round(m[ZOOM] * (m[B] - m[T]) / (M[B] - M[T]) * H))
        img = img.resize((w, h), Image.BICUBIC)
        M = maps['all']
        W, H = mainImage.size
        l = int((m[L] - M[L]) / (M[R] - M[L]) * W)
        t = int((m[T] - M[T]) / (M[B] - M[T]) * H)
        r = int((m[R] - M[L]) / (M[R] - M[L]) * W)
        b = int((m[B] - M[T]) / (M[B] - M[T]) * H)
        img = mainImage.crop((l, t, r, b))
        w = int(round(m[ZOOM] * (m[R] - m[L]) / (M[R] - M[L]) * W))
        h = int(round(m[ZOOM] * (m[B] - m[T]) / (M[B] - M[T]) * H))
        img = img.resize((w, h), Image.BICUBIC)
    draw = ImageDraw.Draw(img)
    draw.setfont(myFont)
    if name == "all":
        for maparea in maps.keys():
            if maparea == "all":
                continue
            localm = maps[maparea]
            l,t = mungecoord(localm[L], localm[T], "all", img)
            r,b = mungecoord(localm[R], localm[B], "all", img)
            text = maparea + " map"
            textlen = draw.textsize(text)[0] + 3
            draw.rectangle([l, t, l+textlen, t+TEXTSIZE+2], fill='#ffffff')
            draw.text((l+2, t+1), text, fill="#000000")
            #imgmaps.append( [l, t, l+textlen, t+SIZE+2, "submap" + maparea, maparea + " subarea map"] )
            draw.line([l, t, r, t], fill='#777777', width=LINEWIDTH)
            draw.line([l, b, r, b], fill='#777777', width=LINEWIDTH)
            draw.line([l, t, l, b], fill='#777777', width=LINEWIDTH)
            draw.line([r, t, r, b], fill='#777777', width=LINEWIDTH)
            draw.line([l, t, l+textlen, t], fill='#777777', width=LINEWIDTH)
            draw.line([l, t+TEXTSIZE+2, l+textlen, t+TEXTSIZE+2], fill='#777777', width=LINEWIDTH)
            draw.line([l, t, l, t+TEXTSIZE+2], fill='#777777', width=LINEWIDTH)
            draw.line([l+textlen, t, l+textlen, t+TEXTSIZE+2], fill='#777777', width=LINEWIDTH)
        for maparea in list(maps.keys()):
            if maparea == "all":
                continue
            localm = maps[maparea]
            l,t = mungecoord(localm[L], localm[T], "all", img)
            r,b = mungecoord(localm[R], localm[B], "all", img)
            text = maparea + " map"
            textlen = draw.textsize(text)[0] + 3
            draw.rectangle([l, t, l+textlen, t+TEXTSIZE+2], fill='#ffffff')
            draw.text((l+2, t+1), text, fill="#000000")
            #imgmaps.append( [l, t, l+textlen, t+SIZE+2, "submap" + maparea, maparea + " subarea map"] )
            draw.line([l, t, r, t], fill='#777777', width=LINEWIDTH)
            draw.line([l, b, r, b], fill='#777777', width=LINEWIDTH)
            draw.line([l, t, l, b], fill='#777777', width=LINEWIDTH)
            draw.line([r, t, r, b], fill='#777777', width=LINEWIDTH)
            draw.line([l, t, l+textlen, t], fill='#777777', width=LINEWIDTH)
            draw.line([l, t+TEXTSIZE+2, l+textlen, t+TEXTSIZE+2], fill='#777777', width=LINEWIDTH)
            draw.line([l, t, l, t+TEXTSIZE+2], fill='#777777', width=LINEWIDTH)
            draw.line([l+textlen, t, l+textlen, t+TEXTSIZE+2], fill='#777777', width=LINEWIDTH)
        #imgmaps[maparea] = []
    # Draw scale bar
    m100 = int(100 / (m[R] - m[L]) * img.size[0])

@@ -477,24 +506,24 @@ def prospecting_image(request, name):
    plot("laser.0_5", "LSR5", "Reference", "Laser Point 0/5", name, draw, img)
    plot("225-96", "BAlm", "Reference", "Bräuning Alm trig point", name, draw, img)
    for entrance in Entrance.objects.all():
        station = entrance.best_station()
        if station:
            #try:
            areaName = entrance.caveandentrance_set.all()[0].cave.getArea().short_name
            plot(station, "%s-%s" % (areaName, str(entrance)[5:]), entrance.needs_surface_work(), str(entrance), name, draw, img)
            #except:
            #    pass

    for (N, E, D, num) in [(35975.37, 83018.21, 100,"177"), # Calculated from bearings
                           (35350.00, 81630.00, 50, "71"), # From Auer map
                           (36025.00, 82475.00, 50, "146"), # From mystery map
                           (35600.00, 82050.00, 50, "35"), # From Auer map
                           (35650.00, 82025.00, 50, "44"), # From Auer map
                           (36200.00, 82925.00, 50, "178"), # Calculated from bearings
                           (35232.64, 82910.37, 25, "181"), # Calculated from bearings
                           (35323.60, 81357.83, 50, "74") # From Auer map
        station = entrance.best_station()
        if station:
            #try:
            areaName = entrance.caveandentrance_set.all()[0].cave.getArea().short_name
            plot(station, "%s-%s" % (areaName, str(entrance)[5:]), entrance.needs_surface_work(), str(entrance), name, draw, img)
            #except:
            #    pass

    for (N, E, D, num) in [(35975.37, 83018.21, 100,"177"), # Calculated from bearings
                           (35350.00, 81630.00, 50, "71"), # From Auer map
                           (36025.00, 82475.00, 50, "146"), # From mystery map
                           (35600.00, 82050.00, 50, "35"), # From Auer map
                           (35650.00, 82025.00, 50, "44"), # From Auer map
                           (36200.00, 82925.00, 50, "178"), # Calculated from bearings
                           (35232.64, 82910.37, 25, "181"), # Calculated from bearings
                           (35323.60, 81357.83, 50, "74") # From Auer map
                           ]:
        (N,E,D) = map(float, (N, E, D))
        (N,E,D) = list(map(float, (N, E, D)))
        maparea = Cave.objects.get(kataster_number = num).getArea().short_name
        lo = mungecoord(N-D, E+D, name, img)
        hi = mungecoord(N+D, E-D, name, img)
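The crop-and-rescale arithmetic in prospecting_image above is linear interpolation of survey coordinates against the 'all' map extents; restated standalone for clarity (an illustration, not code from the diff):

```python
# Standalone restatement of the crop arithmetic in prospecting_image:
# a coordinate v is placed by its fractional position between the overall
# map extents (lo, hi), then scaled by the image dimension.
def frac(v, lo, hi):
    return (v - lo) / float(hi - lo)

# e.g. the left pixel column of a submap within the full image:
#   l = int(frac(m_left, M_left, M_right) * image_width)
```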
core/views_logbooks.py (Normal file → Executable file): 89 changed lines
@@ -164,22 +164,95 @@ def personForm(request,pk):
    form=PersonForm(instance=person)
    return render(request,'personform.html', {'form':form,})

from settings import *
def pathsreport(request):
    pathsdict={
        "ADMIN_MEDIA_PREFIX" : ADMIN_MEDIA_PREFIX,
        "CAVEDESCRIPTIONSX" : CAVEDESCRIPTIONS,
        "DIR_ROOT" : DIR_ROOT,
        "ENTRANCEDESCRIPTIONS" : ENTRANCEDESCRIPTIONS,
        "EXPOUSER_EMAIL" : EXPOUSER_EMAIL,
        "EXPOUSERPASS" :"<redacted>",
        "EXPOUSER" : EXPOUSER,
        "EXPOWEB" : EXPOWEB,
        "EXPOWEB_URL" : EXPOWEB_URL,
        "FILES" : FILES,
        "JSLIB_URL" : JSLIB_URL,
        "LOGFILE" : LOGFILE,
        "LOGIN_REDIRECT_URL" : LOGIN_REDIRECT_URL,
        "MEDIA_ADMIN_DIR" : MEDIA_ADMIN_DIR,
        "MEDIA_ROOT" : MEDIA_ROOT,
        "MEDIA_URL" : MEDIA_URL,
        #"PHOTOS_ROOT" : PHOTOS_ROOT,
        "PHOTOS_URL" : PHOTOS_URL,
        "PYTHON_PATH" : PYTHON_PATH,
        "REPOS_ROOT_PATH" : REPOS_ROOT_PATH,
        "ROOT_URLCONF" : ROOT_URLCONF,
        "STATIC_ROOT" : STATIC_ROOT,
        "STATIC_URL" : STATIC_URL,
        "SURVEX_DATA" : SURVEX_DATA,
        "SURVEY_SCANS" : SURVEY_SCANS,
        "SURVEYS" : SURVEYS,
        "SURVEYS_URL" : SURVEYS_URL,
        "SVX_URL" : SVX_URL,
        "TEMPLATE_DIRS" : TEMPLATE_DIRS,
        "THREEDCACHEDIR" : THREEDCACHEDIR,
        "TINY_MCE_MEDIA_ROOT" : TINY_MCE_MEDIA_ROOT,
        "TINY_MCE_MEDIA_URL" : TINY_MCE_MEDIA_URL,
        "TUNNEL_DATA" : TUNNEL_DATA,
        "URL_ROOT" : URL_ROOT
    }

    ncodes = len(pathsdict)

    bycodeslist = sorted(pathsdict.iteritems())
    bypathslist = sorted(pathsdict.iteritems(), key=lambda x: x[1])

    return render(request, 'pathsreport.html', {
        "pathsdict":pathsdict,
        "bycodeslist":bycodeslist,
        "bypathslist":bypathslist,
        "ncodes":ncodes})
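Note that dict.iteritems() above is Python 2 only; under the Python 3 migration visible elsewhere in this changeset, the same two sorts would read (a sketch with a stand-in dict):

```python
# Python 3 form of the two sorts above (dict.iteritems() was removed).
pathsdict = {"MEDIA_URL": "/media/", "STATIC_URL": "/static/"}  # stand-in
bycodeslist = sorted(pathsdict.items())
bypathslist = sorted(pathsdict.items(), key=lambda x: x[1])
```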

def experimental(request):
    blockroots = models.SurvexBlock.objects.filter(name="root")
    if len(blockroots)>1:
        print(" ! more than one root survexblock {}".format(len(blockroots)))
        for sbr in blockroots:
            print("{} {} {} {}".format(sbr.id, sbr.name, sbr.text, sbr.date))
    sbr = blockroots[0]
    totalsurvexlength = sbr.totalleglength
    try:
        nimportlegs = int(sbr.text)
    except:
        print("{} {} {} {}".format(sbr.id, sbr.name, sbr.text, sbr.date))
        nimportlegs = -1

    legsbyexpo = [ ]
    addupsurvexlength = 0
    for expedition in Expedition.objects.all():
        survexblocks = expedition.survexblock_set.all()
        survexlegs = [ ]
        #survexlegs = [ ]
        legsyear=0
        survexleglength = 0.0
        for survexblock in survexblocks:
            survexlegs.extend(survexblock.survexleg_set.all())
            #survexlegs.extend(survexblock.survexleg_set.all())
            survexleglength += survexblock.totalleglength
        legsbyexpo.append((expedition, {"nsurvexlegs":len(survexlegs), "survexleglength":survexleglength}))
    legsbyexpo.reverse()

    survexlegs = models.SurvexLeg.objects.all()
    totalsurvexlength = sum([survexleg.tape for survexleg in survexlegs])
    return render(request, 'experimental.html', { "nsurvexlegs":len(survexlegs), "totalsurvexlength":totalsurvexlength, "legsbyexpo":legsbyexpo })
            try:
                legsyear += int(survexblock.text)
            except:
                pass
        addupsurvexlength += survexleglength
        legsbyexpo.append((expedition, {"nsurvexlegs":legsyear, "survexleglength":survexleglength}))
    legsbyexpo.reverse()

    #removing survexleg objects completely
    #survexlegs = models.SurvexLeg.objects.all()
    #totalsurvexlength = sum([survexleg.tape for survexleg in survexlegs])
    return render(request, 'experimental.html', { "nsurvexlegs":nimportlegs, "totalsurvexlength":totalsurvexlength, "addupsurvexlength":addupsurvexlength, "legsbyexpo":legsbyexpo })

@login_required_if_public
def newLogbookEntry(request, expeditionyear, pdate = None, pslug = None):
core/views_other.py
@@ -1,5 +1,5 @@
from troggle.core.models import Cave, Expedition, Person, LogbookEntry, PersonExpedition, PersonTrip, DPhoto, QM
#from troggle.core.forms import UploadFileForm
from troggle.core.models import Cave, Expedition, Person, LogbookEntry, PersonExpedition, PersonTrip, QM
#from troggle.core.forms import UploadFileForm, DPhoto
from django.conf import settings
from django import forms
from django.template import loader, Context

@@ -30,7 +30,7 @@ def frontpage(request):
    expeditions = Expedition.objects.order_by("-year")
    logbookentry = LogbookEntry
    cave = Cave
    photo = DPhoto
    #photo = DPhoto
    from django.contrib.admin.templatetags import log
    return render(request,'frontpage.html', locals())

@@ -55,8 +55,9 @@ def controlPanel(request):

    #importlist is mostly here so that things happen in the correct order.
    #http post data seems to come in an unpredictable order, so we do it this way.
    importlist=['reload_db', 'import_people', 'import_cavetab', 'import_logbooks', 'import_surveys', 'import_QMs']
    databaseReset.make_dirs()
    importlist=['reinit_db', 'import_people', 'import_caves', 'import_logbooks',
        'import_survexblks', 'import_QMs', 'import_survexpos', 'import_surveyscans', 'import_tunnelfiles']
    databaseReset.dirsredirect()
    for item in importlist:
        if item in request.POST:
            print("running"+ " databaseReset."+item+"()")

@@ -70,20 +71,6 @@ def controlPanel(request):

    return render(request,'controlPanel.html', {'caves':Cave.objects.all(),'expeditions':Expedition.objects.all(),'jobs_completed':jobs_completed})

def downloadCavetab(request):
    from export import tocavetab
    response = HttpResponse(content_type='text/csv')
    response['Content-Disposition'] = 'attachment; filename=CAVETAB2.CSV'
    tocavetab.writeCaveTab(response)
    return response

def downloadSurveys(request):
    from export import tosurveys
    response = HttpResponse(content_type='text/csv')
    response['Content-Disposition'] = 'attachment; filename=Surveys.csv'
    tosurveys.writeCaveTab(response)
    return response

def downloadLogbook(request,year=None,extension=None,queryset=None):

    if year:
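The print() call in the controlPanel loop above implies that each posted job name is called on the databaseReset module; the body is not in this hunk, so the following dispatch is a hypothetical reconstruction of that pattern, self-contained with stand-ins:

```python
# Hypothetical reconstruction of the dispatch implied by the print() above:
# look each job name up on a module-like object and call it.
class _FakeDatabaseReset(object):
    def import_caves(self):
        print("Importing Caves")

databaseReset = _FakeDatabaseReset()   # stand-in for the real module
for item in ['import_caves']:          # stand-in for the POSTed job names
    print("running databaseReset." + item + "()")
    getattr(databaseReset, item)()
```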
core/views_survex.py (Normal file → Executable file): 136 changed lines
@@ -15,47 +15,76 @@ from parsers.people import GetPersonExpeditionNameLookup
import troggle.settings as settings
import parsers.survex

survextemplatefile = """; Locn: Totes Gebirge, Austria - Loser/Augst-Eck Plateau (kataster group 1623)
; Cave:
survextemplatefile = """; *** THIS IS A TEMPLATE FILE NOT WHAT YOU MIGHT BE EXPECTING ***

*** DO NOT SAVE THIS FILE WITHOUT RENAMING IT !! ***
;[Stuff in square brackets is example text to be replaced with real data,
; removing the square brackets]

*begin [surveyname]

*export [connecting stations]
; stations linked into other surveys (or likely to)
*export [1 8 12 34]

*title "area title"
*date 2999.99.99
*team Insts [Caver]
*team Insts [Caver]
*team Notes [Caver]
*instrument [set number]
; Cave:
; Area in cave/QM:
*title ""
*date [2040.07.04] ; <-- CHANGE THIS DATE
*team Insts [Fred Fossa]
*team Notes [Brenda Badger]
*team Pics [Luke Lynx]
*team Tape [Albert Aadvark]
*instrument [SAP #+Laser Tape/DistoX/Compass # ; Clino #]
; Calibration: [Where, readings]
*ref [2040#00] ; <-- CHANGE THIS TOO
; the #number is on the clear pocket containing the original notes

;ref.: 2009#NN
; if using a tape:
*calibrate tape +0.0 ; +ve if tape was too short, -ve if too long

*calibrate tape +0.0 ; +ve if tape was too short, -ve if too long
; Centreline data
*data normal from to length bearing gradient ignoreall
[ 1 2 5.57 034.5 -12.8 ]

*data normal from to tape compass clino
1 2 3.90 298 -20
;-----------
;recorded station details (leave commented out)
;(NP=Nail Polish, LHW/RHW=Left/Right Hand Wall)
;Station Left Right Up Down Description
;[Red] nail varnish markings
[;1 0.8 0 5.3 1.6 ; NP on boulder. pt 23 on foo survey ]
[;2 0.3 1.2 6 1.2 ; NP '2' LHW ]
[;3 1.3 0 3.4 0.2 ; Rock on floor - not refindable ]

*data passage station left right up down ignoreall
1 [L] [R] [U] [D] comment

*end [surveyname]"""


def ReplaceTabs(stext):
    res = [ ]
    nsl = 0
    for s in re.split("(\t|\n)", stext):
        if s == "\t":
            res.append(" " * (4 - (nsl % 4)))
            nsl = 0
            continue
        if s == "\n":
            nsl = 0
        else:
            nsl += len(s)
        res.append(s)
    return "".join(res)
;LRUDs arranged into passage tubes
;new *data command for each 'passage',
;repeat stations and adjust numbers as needed
*data passage station left right up down
;[ 1 0.8 0 5.3 1.6 ]
;[ 2 0.3 1.2 6 1.2 ]
*data passage station left right up down
;[ 1 1.3 1.5 5.3 1.6 ]
;[ 3 2.4 0 3.4 0.2 ]


;-----------
;Question Mark List ;(leave commented-out)
; The nearest-station is the name of the survey and station which are nearest to
; the QM. The resolution-station is either '-' to indicate that the QM hasn't
; been checked; or the name of the survey and station which push that QM. If a
; QM doesn't go anywhere, set the resolution-station to be the same as the
; nearest-station. Include any relevant details of how to find or push the QM in
; the textual description.
;Serial number grade(A/B/C/X) nearest-station resolution-station description
;[ QM1 A surveyname.3 - description of QM ]
;[ QM2 B surveyname.5 - description of QM ]

;------------
;Cave description ;(leave commented-out)
;freeform text describing this section of the cave

*end [surveyname]
"""

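The removed ReplaceTabs helper above expanded tabs to 4-column stops, resetting the column count at each newline; a quick illustrative check of that behaviour (assuming the definition above, which needs `import re`, is in scope):

```python
# Illustrative check of the removed ReplaceTabs helper: each tab advances
# to the next 4-column stop, and the count resets at every newline.
print(ReplaceTabs("ab\tcd\nx\ty"))
# -> "ab  cd\nx   y"
```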
class SvxForm(forms.Form):

@@ -63,15 +92,14 @@ class SvxForm(forms.Form):
    filename = forms.CharField(widget=forms.TextInput(attrs={"readonly":True}))
    datetime = forms.DateTimeField(widget=forms.TextInput(attrs={"readonly":True}))
    outputtype = forms.CharField(widget=forms.TextInput(attrs={"readonly":True}))
    code = forms.CharField(widget=forms.Textarea(attrs={"cols":150, "rows":18}))
    code = forms.CharField(widget=forms.Textarea(attrs={"cols":150, "rows":36}))

    def GetDiscCode(self):
        fname = settings.SURVEX_DATA + self.data['filename'] + ".svx"
        if not os.path.isfile(fname):
            return survextemplatefile
        fin = open(fname, "rb")
        svxtext = fin.read().decode("latin1")  # unicode(a, "latin1")
        svxtext = ReplaceTabs(svxtext).strip()
        fin = open(fname, "rt")
        svxtext = fin.read().encode("utf8")
        fin.close()
        return svxtext
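A side note on GetDiscCode: the new version reads in text mode and then calls .encode("utf8"), which under Python 2 round-trips only ASCII cleanly (the implicit decode step raises on other bytes). The Python 3 reading of the same intent is simply (a sketch; the path is a stand-in):

```python
# Python 3 equivalent of the file read above: text mode with an explicit
# encoding; no manual encode step is needed.
fname = "example.svx"  # stand-in path
with open(fname, "rt", encoding="utf8") as fin:
    svxtext = fin.read()
```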
@@ -84,19 +112,28 @@ class SvxForm(forms.Form):
    def SaveCode(self, rcode):
        fname = settings.SURVEX_DATA + self.data['filename'] + ".svx"
        if not os.path.isfile(fname):
            # only save if appears valid
            if re.search(r"\[|\]", rcode):
                return "Error: clean up all []s from the text"
                return "Error: remove all []s from the text. They are only template guidance."
            mbeginend = re.search(r"(?s)\*begin\s+(\w+).*?\*end\s+(\w+)", rcode)
            if not mbeginend:
                return "Error: no begin/end block here"
            if mbeginend.group(1) != mbeginend.group(2):
                return "Error: mismatching beginend"

        fout = open(fname, "w")
        res = fout.write(rcode.encode("latin1"))
                return "Error: mismatching begin/end labels"

        # Make this create new survex folders if needed
        try:
            fout = open(fname, "wb")
        except IOError:
            pth = os.path.dirname(self.data['filename'])
            newpath = os.path.join(settings.SURVEX_DATA, pth)
            if not os.path.exists(newpath):
                os.makedirs(newpath)
            fout = open(fname, "wb")

        # javascript seems to insert CRLF on WSL1 whatever you say. So fix that:
        res = fout.write(rcode.replace("\r",""))
        fout.close()
        return "SAVED"
        return "SAVED ."
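The begin/end validation above hinges on one regex; exercised standalone it behaves like this (an illustrative check, not part of the diff):

```python
import re

# The same begin/end check, standalone: the (?s) flag lets .*? span
# newlines, and the two captured labels must match for a save to proceed.
svx = "*begin example\n1 2 3.90 298 -20\n*end example"
m = re.search(r"(?s)\*begin\s+(\w+).*?\*end\s+(\w+)", svx)
assert m and m.group(1) == m.group(2)
```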
    def Process(self):
        print("....\n\n\n....Processing\n\n\n")

@@ -104,7 +141,7 @@ class SvxForm(forms.Form):
        os.chdir(os.path.split(settings.SURVEX_DATA + self.data['filename'])[0])
        os.system(settings.CAVERN + " --log " + settings.SURVEX_DATA + self.data['filename'] + ".svx")
        os.chdir(cwd)
        fin = open(settings.SURVEX_DATA + self.data['filename'] + ".log", "rb")
        fin = open(settings.SURVEX_DATA + self.data['filename'] + ".log", "rt")
        log = fin.read()
        fin.close()
        log = re.sub("(?s).*?(Survey contains)", "\\1", log)

@@ -144,7 +181,6 @@ def svx(request, survex_file):
    form.data['code'] = rcode
    if "save" in rform.data:
        if request.user.is_authenticated():
            #print("sssavvving")
            message = form.SaveCode(rcode)
        else:
            message = "You do not have authority to save this file"

@@ -179,7 +215,7 @@ def svx(request, survex_file):
    return render_to_response('svxfile.html', vmap)

def svxraw(request, survex_file):
    svx = open(os.path.join(settings.SURVEX_DATA, survex_file+".svx"), "rb")
    svx = open(os.path.join(settings.SURVEX_DATA, survex_file+".svx"), "rt",encoding='utf8')
    return HttpResponse(svx, content_type="text")


@@ -194,20 +230,20 @@ def process(survex_file):
def threed(request, survex_file):
    process(survex_file)
    try:
        threed = open(settings.SURVEX_DATA + survex_file + ".3d", "rb")
        threed = open(settings.SURVEX_DATA + survex_file + ".3d", "rt",encoding='utf8')
        return HttpResponse(threed, content_type="model/3d")
    except:
        log = open(settings.SURVEX_DATA + survex_file + ".log", "rb")
        log = open(settings.SURVEX_DATA + survex_file + ".log", "rt",encoding='utf8')
        return HttpResponse(log, content_type="text")

def log(request, survex_file):
    process(survex_file)
    log = open(settings.SURVEX_DATA + survex_file + ".log", "rb")
    log = open(settings.SURVEX_DATA + survex_file + ".log", "rt",encoding='utf8')
    return HttpResponse(log, content_type="text")

def err(request, survex_file):
    process(survex_file)
    err = open(settings.SURVEX_DATA + survex_file + ".err", "rb")
    err = open(settings.SURVEX_DATA + survex_file + ".err", "rt",encoding='utf8')
    return HttpResponse(err, content_type="text")
databaseReset.py (Normal file → Executable file): 584 changed lines
@@ -1,191 +1,403 @@
from __future__ import (absolute_import, division,
                        print_function)
import os
import time
import timeit
import json

import settings
if os.geteuid() == 0:
    print("This script should be run as expo not root - quitting")
    exit()

os.environ['PYTHONPATH'] = settings.PYTHON_PATH
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings')

from django.core import management
from django.db import connection
from django.db import connection, close_old_connections
from django.contrib.auth.models import User
from django.http import HttpResponse
from django.core.urlresolvers import reverse
from troggle.core.models import Cave, Entrance
import troggle.flatpages.models

databasename=settings.DATABASES['default']['NAME']
from troggle.core.models import Cave, Entrance
import troggle.settings
import troggle.flatpages.models
import troggle.logbooksdump

# NOTE databaseReset.py is *imported* by views_other.py as it is used in the control panel
# presented there.

expouser=settings.EXPOUSER
expouserpass=settings.EXPOUSERPASS
expouseremail=settings.EXPOUSER_EMAIL

def reload_db():
def reinit_db():
    """Rebuild database from scratch. Deletes the file first if sqlite is used,
    otherwise it drops the database and creates it.
    """
    currentdbname = settings.DATABASES['default']['NAME']
    if settings.DATABASES['default']['ENGINE'] == 'django.db.backends.sqlite3':
        try:
            os.remove(databasename)
            os.remove(currentdbname)
        except OSError:
            pass
    else:
        cursor = connection.cursor()
        cursor.execute("DROP DATABASE %s" % databasename)
        cursor.execute("CREATE DATABASE %s" % databasename)
        cursor.execute("ALTER DATABASE %s CHARACTER SET=utf8" % databasename)
        cursor.execute("USE %s" % databasename)
        management.call_command('syncdb', interactive=False)
        cursor.execute("DROP DATABASE %s" % currentdbname)
        cursor.execute("CREATE DATABASE %s" % currentdbname)
        cursor.execute("ALTER DATABASE %s CHARACTER SET=utf8" % currentdbname)
        cursor.execute("USE %s" % currentdbname)
    syncuser()

def syncuser():
    """Sync user - needed after reload
    """
    print("Synchronizing user")
    management.call_command('migrate', interactive=False)
    user = User.objects.create_user(expouser, expouseremail, expouserpass)
    user.is_staff = True
    user.is_superuser = True
    user.save()

def make_dirs():
    """Make directories that troggle requires"""
def dirsredirect():
    """Make directories that troggle requires and sets up page redirects
    """
    #should also deal with permissions here.
    if not os.path.isdir(settings.PHOTOS_ROOT):
        os.mkdir(settings.PHOTOS_ROOT)
    #if not os.path.isdir(settings.PHOTOS_ROOT):
    #    os.mkdir(settings.PHOTOS_ROOT)
    # for oldURL, newURL in [("indxal.htm", reverse("caveindex"))]:
    #     f = troggle.flatpages.models.Redirect(originalURL = oldURL, newURL = newURL)
    #     f.save()

def import_caves():
    import parsers.caves
    import troggle.parsers.caves
    print("Importing Caves")
    parsers.caves.readcaves()
    troggle.parsers.caves.readcaves()

def import_people():
    import parsers.people
    parsers.people.LoadPersonsExpos()
    import troggle.parsers.people
    print("Importing People (folk.csv)")
    troggle.parsers.people.LoadPersonsExpos()

def import_logbooks():
    # The below line was causing errors I didn't understand (it said LOGFILE was a string), and I couldn't be bothered to figure
    # what was going on so I just catch the error with a try. - AC 21 May
    try:
        settings.LOGFILE.write('\nBegun importing logbooks at ' + time.asctime() +'\n'+'-'*60)
    except:
        pass

    import parsers.logbooks
    parsers.logbooks.LoadLogbooks()

def import_survex():
    import parsers.survex
    parsers.survex.LoadAllSurvexBlocks()
    parsers.survex.LoadPos()
    import troggle.parsers.logbooks
    print("Importing Logbooks")
    troggle.parsers.logbooks.LoadLogbooks()

def import_QMs():
    import parsers.QMs
    print("Importing QMs (old caves)")
    import troggle.parsers.QMs
    # import process itself runs on qm.csv in only 3 old caves, not the modern ones!

def import_survexblks():
    import troggle.parsers.survex
    print("Importing Survex Blocks")
    troggle.parsers.survex.LoadAllSurvexBlocks()

def import_surveys():
    import parsers.surveys
    parsers.surveys.parseSurveys(logfile=settings.LOGFILE)
def import_survexpos():
    import troggle.parsers.survex
    print("Importing Survex x/y/z Positions")
    troggle.parsers.survex.LoadPos()

def import_surveyimgs():
    """This appears to store data in unused objects. The code is kept
    for future re-working to manage progress against notes, plans and elevs.
    """
    #import troggle.parsers.surveys
    print("NOT Importing survey images")
    #troggle.parsers.surveys.parseSurveys(logfile=settings.LOGFILE)

def import_surveyscans():
    import parsers.surveys
    parsers.surveys.LoadListScans()
    import troggle.parsers.surveys
    print("Importing Survey Scans")
    troggle.parsers.surveys.LoadListScans()

def import_tunnelfiles():
    import parsers.surveys
    parsers.surveys.LoadTunnelFiles()
    import troggle.parsers.surveys
    print("Importing Tunnel files")
    troggle.parsers.surveys.LoadTunnelFiles()
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|
||||
# These functions moved to a different file - not used currently.
|
||||
#import logbooksdump
|
||||
#def import_auto_logbooks():
|
||||
#def dumplogbooks():
|
||||
|
||||
def rebuild():
    """ Wipe the troggle database and set up its structure, but import nothing
    """
    reload_db()
    make_dirs()
    pageredirects()

#def writeCaves():
#    Writes out all cave and entrance HTML files to
#    the folder specified in settings.CAVEDESCRIPTIONS
#    for cave in Cave.objects.all():
#        cave.writeDataFile()
#    for entrance in Entrance.objects.all():
#        entrance.writeDataFile()
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

class JobQueue():
    """A list of import operations to run. Always reports profile times
    in the same order.
    """

    def __init__(self,run):
        self.runlabel = run
        self.queue = [] # tuples of (jobname, jobfunction)
        self.results = {}
        self.results_order=[
            "date","runlabel","reinit", "caves", "people",
            "logbooks", "QMs", "scans", "survexblks", "survexpos",
            "tunnel", "surveyimgs", "test", "dirsredirect", "syncuser" ]
        for k in self.results_order:
            self.results[k]=[]
        self.tfile = "import_profile.json"
        self.htmlfile = "profile.html" # for the HTML results table. Not yet done.

    # Adding elements to the queue - enqueue
    def enq(self,label,func):
        self.queue.append((label,func))
        return True

    # Removing the last element from the queue - dequeue
    # def deq(self):
    #     if len(self.queue)>0:
    #         return self.queue.pop()
    #     return ("Queue Empty!")

def reset():
    """ Wipe the troggle database and import everything from legacy data
    """
    reload_db()
    make_dirs()
    pageredirects()
    import_caves()
    import_people()
    import_surveyscans()
    import_logbooks()
    import_QMs()
    import_survex()
    try:
        import_tunnelfiles()
    except:
        print("Tunnel files parser broken.")

    import_surveys()


def import_auto_logbooks():
    import parsers.logbooks
    import os
    for pt in troggle.core.models.PersonTrip.objects.all():
        pt.delete()
    for lbe in troggle.core.models.LogbookEntry.objects.all():
        lbe.delete()
    for expedition in troggle.core.models.Expedition.objects.all():
        directory = os.path.join(settings.EXPOWEB,
                                 "years",
                                 expedition.year,
                                 "autologbook")
        for root, dirs, filenames in os.walk(directory):
            for filename in filenames:
                print(os.path.join(root, filename))
                parsers.logbooks.parseAutoLogBookEntry(os.path.join(root, filename))

# Temporary function until the definitive source of data is transferred.
from django.template.defaultfilters import slugify
from django.template import Context, loader
def dumplogbooks():
    def get_name(pe):
        if pe.nickname:
            return pe.nickname
        else:
            return pe.person.first_name
    for lbe in troggle.core.models.LogbookEntry.objects.all():
        dateStr = lbe.date.strftime("%Y-%m-%d")
        directory = os.path.join(settings.EXPOWEB,
                                 "years",
                                 lbe.expedition.year,
                                 "autologbook")
        if not os.path.isdir(directory):
            os.mkdir(directory)
        filename = os.path.join(directory,
                                dateStr + "." + slugify(lbe.title)[:50] + ".html")
        if lbe.cave:
            print(lbe.cave.reference())
            trip = {"title": lbe.title, "html":lbe.text, "cave": lbe.cave.reference(), "caveOrLocation": "cave"}
        else:
            trip = {"title": lbe.title, "html":lbe.text, "location":lbe.place, "caveOrLocation": "location"}
        pts = [pt for pt in lbe.persontrip_set.all() if pt.personexpedition]
        persons = [{"name": get_name(pt.personexpedition), "TU": pt.time_underground, "author": pt.is_logbook_entry_author} for pt in pts]
        f = open(filename, "wb")
        template = loader.get_template('dataformat/logbookentry.html')
        context = Context({'trip': trip,
                           'persons': persons,
                           'date': dateStr,
                           'expeditionyear': lbe.expedition.year})
        output = template.render(context)
        f.write(unicode(output).encode( "utf-8" ))
    def loadprofiles(self):
        """Load timings for previous runs from file
        """
        if os.path.isfile(self.tfile):
            try:
                f = open(self.tfile, "r")
                data = json.load(f)
                for j in data:
                    self.results[j] = data[j]
            except:
                print("FAILURE parsing JSON file %s" % (self.tfile))
                # Python bug: https://github.com/ShinNoNoir/twitterwebsearch/issues/12
            f.close()
        for j in self.results_order:
            self.results[j].append(None) # append a placeholder
        return True

    def saveprofiles(self):
        with open(self.tfile, 'w') as f:
            json.dump(self.results, f)
        return True
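
    # The profile file is plain JSON: one key per results_order entry, each
    # holding a list with one value (seconds, or None) per recorded run.
    # A hypothetical two-run file might look like:
    #   {"date": [1581250000.0, 1581340000.0], "runlabel": ["F-test", "full"],
    #    "caves": [2.1, 2.3], "people": [1.0, 1.1], ...}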

    def memdumpsql(self):
        djconn = django.db.connection
        from dump import _iterdump
        with open('memdump.sql', 'w') as f:
            for line in _iterdump(djconn):
                f.write('%s\n' % line.encode("utf8"))
        return True

    def runqonce(self):
        """Run all the jobs in the queue provided - once
        """
        print("** Running job ", self.runlabel)
        jobstart = time.time()
        self.results["date"].pop()
        self.results["date"].append(jobstart)
        self.results["runlabel"].pop()
        self.results["runlabel"].append(self.runlabel)

        for i in self.queue:
            start = time.time()
            i[1]() # looks ugly, but invokes the function passed as the second item of the tuple
            duration = time.time()-start
            print("\n*- Ended \"", i[0], "\" %.1f seconds" % duration)
            self.results[i[0]].pop() # the null item
            self.results[i[0]].append(duration)

        jobend = time.time()
        jobduration = jobend-jobstart
        print("** Ended job %s - %.1f seconds total." % (self.runlabel,jobduration))

        return True

    def run(self):
        """First runs all the jobs in the queue against a scratch in-memory db,
        then re-runs the import against the db specified in settings.py.
        Default behaviour is to skip the in-memory phase.
        When MySQL is the db, the in-memory phase crashes, as MySQL does not properly
        relinquish some kind of db connection (not fixed yet).
        """
        self.loadprofiles()
        # save the db settings for later
        dbengine = settings.DATABASES['default']['ENGINE']
        dbname = settings.DATABASES['default']['NAME']
        dbdefault = settings.DATABASES['default']

        skipmem = False
        if self.runlabel:
            if self.runlabel == "":
                skipmem = True
            elif self.runlabel[0:2] == "F-":
                skipmem = True
        else:
            skipmem = True

        print("-- ", settings.DATABASES['default']['NAME'], settings.DATABASES['default']['ENGINE'])
        #print "-- DATABASES.default", settings.DATABASES['default']

        if dbname ==":memory:":
            # just run, and save the sql file
            self.runqonce()
            self.memdumpsql() # saved contents of the scratch db, could be imported later..
            self.saveprofiles()
        elif skipmem:
            self.runqonce()
            self.saveprofiles()
        else:
            django.db.close_old_connections() # needed if MySQL is running?
            # run all the imports through :memory: first
            settings.DATABASES['default']['ENGINE'] = 'django.db.backends.sqlite3'
            settings.DATABASES['default']['NAME'] = ":memory:"
            settings.DATABASES['default'] = {'ENGINE': 'django.db.backends.sqlite3',
                                             'AUTOCOMMIT': True,
                                             'ATOMIC_REQUESTS': False,
                                             'NAME': ':memory:',
                                             'CONN_MAX_AGE': 0,
                                             'TIME_ZONE': 'UTC',
                                             'OPTIONS': {},
                                             'HOST': '',
                                             'USER': '',
                                             'TEST': {'COLLATION': None, 'CHARSET': None, 'NAME': None, 'MIRROR': None},
                                             'PASSWORD': '',
                                             'PORT': ''}

print("-- ", settings.DATABASES['default']['NAME'], settings.DATABASES['default']['ENGINE'])
|
||||
#print("-- DATABASES.default", settings.DATABASES['default'])
|
||||
|
||||
# but because the user may be expecting to add this to a db with lots of tables already there,
|
||||
# the jobqueue may not start from scratch so we need to initialise the db properly first
|
||||
# because we are using an empty :memory: database
|
||||
# But initiating twice crashes it; so be sure to do it once only.
|
||||
|
||||
|
||||
# Damn. syncdb() is still calling MySQL somehow **conn_params not sqlite3. So crashes on expo server.
|
||||
if ("reinit",reinit_db) not in self.queue:
|
||||
reinit_db()
|
||||
if ("dirsredirect",dirsredirect) not in self.queue:
|
||||
dirsredirect()
|
||||
if ("caves",import_caves) not in self.queue:
|
||||
import_caves() # sometime extract the initialising code from this and put in reinit...
|
||||
if ("people",import_people) not in self.queue:
|
||||
import_people() # sometime extract the initialising code from this and put in reinit...
|
||||
|
||||
django.db.close_old_connections() # maybe not needed here
|
||||
|
||||
self.runqonce()
|
||||
self.memdumpsql()
|
||||
self.showprofile()
|
||||
|
||||
# restore the original db and import again
|
||||
# if we wanted to, we could re-import the SQL generated in the first pass to be
|
||||
# blazing fast. But for the present just re-import the lot.
|
||||
settings.DATABASES['default'] = dbdefault
|
||||
settings.DATABASES['default']['ENGINE'] = dbengine
|
||||
settings.DATABASES['default']['NAME'] = dbname
|
||||
print("-- ", settings.DATABASES['default']['NAME'], settings.DATABASES['default']['ENGINE'])
|
||||
|
||||
django.db.close_old_connections() # maybe not needed here
|
||||
for j in self.results_order:
|
||||
self.results[j].pop() # throw away results from :memory: run
|
||||
self.results[j].append(None) # append a placeholder
|
||||
|
||||
django.db.close_old_connections() # magic rune. works. found by looking in django.db__init__.py
|
||||
#django.setup() # should this be needed?
|
||||
|
||||
self.runqonce() # crashes because it thinks it has no migrations to apply, when it does.
|
||||
self.saveprofiles()
|
||||
|
||||
return True
|
||||
|
||||
    def showprofile(self):
        """Prints out the time it took to run the jobqueue
        """
        for k in self.results_order:
            if k =="dirsredirect":
                break
            if k =="surveyimgs":
                break
            elif k =="syncuser":
                break
            elif k =="test":
                break
            elif k =="date":
                print("  days ago ", end=' ')
            else:
                print('%10s (s)' % k, end=' ')
            percen=0
            r = self.results[k]

            for i in range(len(r)):
                if k == "runlabel":
                    if r[i]:
                        rp = r[i]
                    else:
                        rp = " - "
                    print('%8s' % rp, end=' ')
                elif k =="date":
                    # Calculate dates as days before present
                    if r[i]:
                        if i == len(r)-1:
                            print("    this", end=' ')
                        else:
                            # prints one place to the left of where you expect
                            if r[len(r)-1]:
                                s = r[i]-r[len(r)-1]
                            else:
                                s = 0
                            days = (s)/(24*60*60)
                            print('%8.2f' % days, end=' ')
                elif r[i]:
                    print('%8.1f' % r[i], end=' ')
                    if i == len(r)-1 and r[i-1]:
                        percen = 100* (r[i] - r[i-1])/r[i-1]
                        if abs(percen) >0.1:
                            print('%8.1f%%' % percen, end=' ')
                else:
                    print(" - ", end=' ')
            print("")
        print("\n")
        return True

def pageredirects():
    for oldURL, newURL in [("indxal.htm", reverse("caveindex"))]:
        f = troggle.flatpages.models.Redirect(originalURL = oldURL, newURL = newURL)
        f.save()

def usage():
    print("""Usage is 'python databaseReset.py <command>'
    print("""Usage is 'python databaseReset.py <command> [runlabel]'
         where command is:
         rebuild - reloads the database and sets up directories & redirects only
         reset   - normal usage: clear the database and reread everything from files - time-consuming
         desc    - NOT WORKING: function resetdesc() missing
         caves   - read in the caves
         logbooks - read in the logbooks, but read in people first
         autologbooks - read in autologbooks
         dumplogbooks - write out autologbooks (not working?)
         people  - read in the people from folk.csv
         QMs     - read in the QM files
         resetend
         scans   - read in the scanned survey notes
         survex  - read in the survex files
         survexpos
         surveys
         tunnel  - read in the Tunnel files
         test    - testing... imports people and prints the profile. Deletes nothing.
         profile - print the profile from previous runs. Imports nothing.

         reset    - normal usage: clear the database and reread everything from files - time-consuming
         caves    - read in the caves (must run first after reset)
         people   - read in the people from folk.csv (must run before logbooks)
         logbooks - read in the logbooks
         QMs      - read in the QM csv files (older caves only)
         scans    - the survey scans in all the wallets (must run before survex)
         survex   - read in the survex files - all the survex blocks, but not the x/y/z positions
         survexpos - set the x/y/z positions for entrances and fixed points

         tunnel   - read in the Tunnel files - which scans the survey scans too

         reinit   - clear the database (delete everything) and make empty tables. Imports nothing.
         syncuser - needed after reloading the database from an SQL backup
         autologbooks - Not used. Read in autologbooks (what are these?)
         dumplogbooks - Not used. Write out autologbooks (not working?)
         surveyimgs - Not used. Read in scans by-expo; must run after "people".

         and [runlabel] is an optional string identifying this run of the script
         in the stored profiling data 'import-profile.json'.
         If [runlabel] is absent or begins with "F-", then it will skip the :memory: pass.

         caves and logbooks must be run on an empty db before the others, as they
         set up db tables used by the others.

         The in-memory phase is run on an empty db, so reinit, caves & people always run in that phase.
         """)

if __name__ == "__main__":

@@ -193,54 +405,66 @@ if __name__ == "__main__":
    import sys
    import django
    django.setup()
    if "desc" in sys.argv:
        resetdesc()
    elif "scans" in sys.argv:
        import_surveyscans()

    if len(sys.argv)>2:
        runlabel = sys.argv[len(sys.argv)-1]
    else:
        runlabel=None

    jq = JobQueue(runlabel)

    if len(sys.argv)==1:
        usage()
        exit()
    elif "test" in sys.argv:
        jq.enq("caves",import_caves)
        jq.enq("people",import_people)
    elif "caves" in sys.argv:
        import_caves()
    elif "people" in sys.argv:
        import_people()
    elif "QMs" in sys.argv:
        import_QMs()
    elif "tunnel" in sys.argv:
        import_tunnelfiles()
    elif "reset" in sys.argv:
        reset()
    elif "resetend" in sys.argv:
        #import_logbooks()
        import_QMs()
        try:
            import_tunnelfiles()
        except:
            print("Tunnel files parser broken.")
        import_surveys()
        import_descriptions()
        parse_descriptions()
    elif "survex" in sys.argv:
        # management.call_command('syncdb', interactive=False) # this sets the path so that import settings works in import_survex
        import_survex()
    elif "survexpos" in sys.argv:
        # management.call_command('syncdb', interactive=False) # this sets the path so that import settings works in import_survex
        import parsers.survex
        parsers.survex.LoadPos()
        jq.enq("caves",import_caves)
    elif "logbooks" in sys.argv:
        # management.call_command('syncdb', interactive=False) # this sets the path so that import settings works in import_survex
        import_logbooks()
    elif "autologbooks" in sys.argv:
        jq.enq("logbooks",import_logbooks)
    elif "people" in sys.argv:
        jq.enq("people",import_people)
    elif "QMs" in sys.argv:
        jq.enq("QMs",import_QMs)
    elif "reset" in sys.argv:
        jq.enq("reinit",reinit_db)
        jq.enq("dirsredirect",dirsredirect)
        jq.enq("caves",import_caves)
        jq.enq("people",import_people)
        jq.enq("scans",import_surveyscans)
        jq.enq("logbooks",import_logbooks)
        jq.enq("QMs",import_QMs)
        jq.enq("tunnel",import_tunnelfiles)
        #jq.enq("survexblks",import_survexblks)
        #jq.enq("survexpos",import_survexpos)
    elif "scans" in sys.argv:
        jq.enq("scans",import_surveyscans)
    elif "survex" in sys.argv:
        jq.enq("survexblks",import_survexblks)
    elif "survexpos" in sys.argv:
        jq.enq("survexpos",import_survexpos)
    elif "tunnel" in sys.argv:
        jq.enq("tunnel",import_tunnelfiles)
    elif "surveyimgs" in sys.argv:
        jq.enq("surveyimgs",import_surveyimgs) # imports into tables which are never read
    elif "autologbooks" in sys.argv: # untested in 2020
        import_auto_logbooks()
    elif "dumplogbooks" in sys.argv:
    elif "dumplogbooks" in sys.argv: # untested in 2020
        dumplogbooks()
    elif "writeCaves" in sys.argv:
        writeCaves()
    elif "surveys" in sys.argv:
        import_surveys()
    # elif "writecaves" in sys.argv: # untested in 2020 - will overwrite input files!!
    #     writeCaves()
    elif "profile" in sys.argv:
        jq.loadprofiles()
        jq.showprofile()
        exit()
    elif "help" in sys.argv:
        usage()
    elif "reload_db" in sys.argv:
        reload_db()
    elif "rebuild" in sys.argv:
        rebuild()
        exit()
    else:
        print("%s not recognised" % sys.argv)
        usage()
        print(("%s not recognised as a command." % sys.argv[1]))
        exit()

    jq.run()
    jq.showprofile()

69  dump.py  Normal file
@@ -0,0 +1,69 @@
# Mimic the sqlite3 console shell's .dump command
# Author: Paul Kippes <kippesp@gmail.com>

# Every identifier in sql is quoted, based on a comment in the sqlite
# documentation: "SQLite adds new keywords from time to time when it
# takes on new features. So to prevent your code from being broken by
# future enhancements, you should normally quote any identifier that
# is an English language word, even if you do not have to."

def _iterdump(connection):
    """
    Returns an iterator to the dump of the database in an SQL text format.

    Used to produce an SQL dump of the database. Useful to save an in-memory
    database for later restoration. This function should not be called
    directly but instead called from the Connection method, iterdump().
    """

    cu = connection.cursor()
    yield('BEGIN TRANSACTION;')

    # sqlite_master table contains the SQL CREATE statements for the database.
    q = """
        SELECT "name", "type", "sql"
        FROM "sqlite_master"
        WHERE "sql" NOT NULL AND
              "type" == 'table'
        ORDER BY "name"
        """
    schema_res = cu.execute(q)
    for table_name, type, sql in schema_res.fetchall():
        if table_name == 'sqlite_sequence':
            yield('DELETE FROM "sqlite_sequence";')
        elif table_name == 'sqlite_stat1':
            yield('ANALYZE "sqlite_master";')
        elif table_name.startswith('sqlite_'):
            continue
        # NOTE: Virtual table support not implemented
        #elif sql.startswith('CREATE VIRTUAL TABLE'):
        #    qtable = table_name.replace("'", "''")
        #    yield("INSERT INTO sqlite_master(type,name,tbl_name,rootpage,sql)"\
        #          "VALUES('table','{0}','{0}',0,'{1}');".format(
        #          qtable,
        #          sql.replace("''")))
        else:
            yield('{0};'.format(sql))

        # Build the insert statement for each row of the current table
        table_name_ident = table_name.replace('"', '""')
        res = cu.execute('PRAGMA table_info("{0}")'.format(table_name_ident))
        column_names = [str(table_info[1]) for table_info in res.fetchall()]
        q = """SELECT 'INSERT INTO "{0}" VALUES({1})' FROM "{0}";""".format(
            table_name_ident,
            ",".join("""'||quote("{0}")||'""".format(col.replace('"', '""')) for col in column_names))
        query_res = cu.execute(q)
        for row in query_res:
            yield(row[0]) # '{0}'.format(row[0]) had unicode errors

    # Now when the type is 'index', 'trigger', or 'view'
    q = """
        SELECT "name", "type", "sql"
        FROM "sqlite_master"
        WHERE "sql" NOT NULL AND
              "type" IN ('index', 'trigger', 'view')
        """
    schema_res = cu.execute(q)
    for name, type, sql in schema_res.fetchall():
        yield('{0};'.format(sql))

    yield('COMMIT;')
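
# A minimal usage sketch (the table and values are made up; any connection
# object exposing cursor() works, which is how memdumpsql() above uses it):
#   import sqlite3
#   conn = sqlite3.connect(":memory:")
#   conn.execute("CREATE TABLE caves (id INTEGER PRIMARY KEY, name TEXT)")
#   conn.execute("INSERT INTO caves (name) VALUES ('KH')")
#   for line in _iterdump(conn):
#       print(line)   # BEGIN TRANSACTION; CREATE TABLE ...; INSERT ...; COMMIT;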

@@ -67,15 +67,24 @@ def flatpage(request, path):
            title, = m.groups()
        else:
            title = ""
        m = re.search(r"<meta([^>]*)noedit", head, re.DOTALL + re.IGNORECASE)
        if m:
            editable = False
        else:
            editable = True

        has_menu = False
        menumatch = re.match('(.*)<div id="menu">', body, re.DOTALL + re.IGNORECASE)
        if menumatch:
            has_menu = True
        menumatch = re.match('(.*)<ul id="links">', body, re.DOTALL + re.IGNORECASE)
        if menumatch:
            has_menu = True
            #body, = menumatch.groups()
        if re.search(r"iso-8859-1", html):
            body = unicode(body, "iso-8859-1")
        body.strip
        return render(request, 'flatpage.html', {'editable': True, 'path': path, 'title': title, 'body': body, 'homepage': (path == "index.htm"), 'has_menu': has_menu})
        return render(request, 'flatpage.html', {'editable': editable, 'path': path, 'title': title, 'body': body, 'homepage': (path == "index.htm"), 'has_menu': has_menu})
    else:
        return HttpResponse(o.read(), content_type=getmimetype(path))
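
# A minimal sketch of the noedit detection above (the sample head string is made up):
#   import re
#   head = '<head><title>Expo</title><meta name="robots" noedit></head>'
#   editable = not re.search(r"<meta([^>]*)noedit", head, re.DOTALL + re.IGNORECASE)
#   # editable is False here, so the flatpage is served without an edit link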

@@ -1,38 +0,0 @@
from django.db.models.loading import cache
from django.core.management.base import BaseCommand, CommandError
from optparse import make_option
from imagekit.models import ImageModel
from imagekit.specs import ImageSpec


class Command(BaseCommand):
    help = ('Clears all ImageKit cached files.')
    args = '[apps]'
    requires_model_validation = True
    can_import_settings = True

    def handle(self, *args, **options):
        return flush_cache(args, options)

def flush_cache(apps, options):
    """ Clears the image cache
    """
    apps = [a.strip(',') for a in apps]
    if apps:
        print 'Flushing cache for %s...' % ', '.join(apps)
    else:
        print 'Flushing caches...'

    for app_label in apps:
        app = cache.get_app(app_label)
        models = [m for m in cache.get_models(app) if issubclass(m, ImageModel)]

        for model in models:
            for obj in model.objects.all():
                for spec in model._ik.specs:
                    prop = getattr(obj, spec.name(), None)
                    if prop is not None:
                        prop._delete()
                        if spec.pre_cache:
                            prop._create()
@@ -1,136 +0,0 @@
import os
from datetime import datetime
from django.conf import settings
from django.core.files.base import ContentFile
from django.db import models
from django.db.models.base import ModelBase
from django.utils.translation import ugettext_lazy as _

from imagekit import specs
from imagekit.lib import *
from imagekit.options import Options
from imagekit.utils import img_to_fobj

# Modify image file buffer size.
ImageFile.MAXBLOCK = getattr(settings, 'PIL_IMAGEFILE_MAXBLOCK', 256 * 2 ** 10)

# Choice tuples for specifying the crop origin.
# These are provided for convenience.
CROP_HORZ_CHOICES = (
    (0, _('left')),
    (1, _('center')),
    (2, _('right')),
)

CROP_VERT_CHOICES = (
    (0, _('top')),
    (1, _('center')),
    (2, _('bottom')),
)


class ImageModelBase(ModelBase):
    """ ImageModel metaclass

    This metaclass parses IKOptions and loads the specified specification
    module.
    """
    def __init__(cls, name, bases, attrs):
        parents = [b for b in bases if isinstance(b, ImageModelBase)]
        if not parents:
            return
        user_opts = getattr(cls, 'IKOptions', None)
        opts = Options(user_opts)
        try:
            module = __import__(opts.spec_module, {}, {}, [''])
        except ImportError:
            raise ImportError('Unable to load imagekit config module: %s' % \
                opts.spec_module)
        for spec in [spec for spec in module.__dict__.values() \
                     if isinstance(spec, type) \
                     and issubclass(spec, specs.ImageSpec) \
                     and spec != specs.ImageSpec]:
            setattr(cls, spec.name(), specs.Descriptor(spec))
            opts.specs.append(spec)
        setattr(cls, '_ik', opts)


class ImageModel(models.Model):
    """ Abstract base class implementing all core ImageKit functionality

    Subclasses of ImageModel are augmented with accessors for each defined
    image specification and can override the inner IKOptions class to customize
    storage locations and other options.
    """
    __metaclass__ = ImageModelBase

    class Meta:
        abstract = True

    class IKOptions:
        pass

    def admin_thumbnail_view(self):
        if not self._imgfield:
            return None
        prop = getattr(self, self._ik.admin_thumbnail_spec, None)
        if prop is None:
            return 'An "%s" image spec has not been defined.' % \
                self._ik.admin_thumbnail_spec
        else:
            if hasattr(self, 'get_absolute_url'):
                return u'<a href="%s"><img src="%s"></a>' % \
                    (self.get_absolute_url(), prop.url)
            else:
                return u'<a href="%s"><img src="%s"></a>' % \
                    (self._imgfield.url, prop.url)
    admin_thumbnail_view.short_description = _('Thumbnail')
    admin_thumbnail_view.allow_tags = True

    @property
    def _imgfield(self):
        return getattr(self, self._ik.image_field)

    def _clear_cache(self):
        for spec in self._ik.specs:
            prop = getattr(self, spec.name())
            prop._delete()

    def _pre_cache(self):
        for spec in self._ik.specs:
            if spec.pre_cache:
                prop = getattr(self, spec.name())
                prop._create()

    def save(self, clear_cache=True, *args, **kwargs):
        is_new_object = self._get_pk_val is None
        super(ImageModel, self).save(*args, **kwargs)
        if is_new_object:
            clear_cache = False
            spec = self._ik.preprocessor_spec
            if spec is not None:
                newfile = self._imgfield.storage.open(str(self._imgfield))
                img = Image.open(newfile)
                img = spec.process(img, None)
                format = img.format or 'JPEG'
                if format != 'JPEG':
                    imgfile = img_to_fobj(img, format)
                else:
                    imgfile = img_to_fobj(img, format,
                                          quality=int(spec.quality),
                                          optimize=True)
                content = ContentFile(imgfile.read())
                newfile.close()
                name = str(self._imgfield)
                self._imgfield.storage.delete(name)
                self._imgfield.storage.save(name, content)
        if clear_cache and self._imgfield != '':
            self._clear_cache()
            self._pre_cache()

    def delete(self):
        assert self._get_pk_val() is not None, "%s object can't be deleted because its %s attribute is set to None." % (self._meta.object_name, self._meta.pk.attname)
        self._clear_cache()
        models.Model.delete(self)
@@ -1,23 +0,0 @@
# Imagekit options
from imagekit import processors
from imagekit.specs import ImageSpec


class Options(object):
    """ Class handling per-model imagekit options
    """
    image_field = 'image'
    crop_horz_field = 'crop_horz'
    crop_vert_field = 'crop_vert'
    preprocessor_spec = None
    cache_dir = 'cache'
    save_count_as = None
    cache_filename_format = "%(filename)s_%(specname)s.%(extension)s"
    admin_thumbnail_spec = 'admin_thumbnail'
    spec_module = 'imagekit.defaults'

    def __init__(self, opts):
        for key, value in opts.__dict__.iteritems():
            setattr(self, key, value)
        self.specs = []
@@ -1,119 +0,0 @@
""" ImageKit image specifications

All imagekit specifications must inherit from the ImageSpec class. Models
inheriting from ImageModel will be modified with a descriptor/accessor for each
spec found.
"""
import os
from StringIO import StringIO
from imagekit.lib import *
from imagekit.utils import img_to_fobj
from django.core.files.base import ContentFile

class ImageSpec(object):
    pre_cache = False
    quality = 70
    increment_count = False
    processors = []

    @classmethod
    def name(cls):
        return getattr(cls, 'access_as', cls.__name__.lower())

    @classmethod
    def process(cls, image, obj):
        processed_image = image.copy()
        for proc in cls.processors:
            processed_image = proc.process(processed_image, obj)
        return processed_image


class Accessor(object):
    def __init__(self, obj, spec):
        self._img = None
        self._obj = obj
        self.spec = spec

    def _get_imgfile(self):
        format = self._img.format or 'JPEG'
        if format != 'JPEG':
            imgfile = img_to_fobj(self._img, format)
        else:
            imgfile = img_to_fobj(self._img, format,
                                  quality=int(self.spec.quality),
                                  optimize=True)
        return imgfile

    def _create(self):
        if self._exists():
            return
        # process the original image file
        fp = self._obj._imgfield.storage.open(self._obj._imgfield.name)
        fp.seek(0)
        fp = StringIO(fp.read())
        try:
            self._img = self.spec.process(Image.open(fp), self._obj)
            # save the new image to the cache
            content = ContentFile(self._get_imgfile().read())
            self._obj._imgfield.storage.save(self.name, content)
        except IOError:
            pass

    def _delete(self):
        self._obj._imgfield.storage.delete(self.name)

    def _exists(self):
        return self._obj._imgfield.storage.exists(self.name)

    def _basename(self):
        filename, extension = \
            os.path.splitext(os.path.basename(self._obj._imgfield.name))
        return self._obj._ik.cache_filename_format % \
            {'filename': filename,
             'specname': self.spec.name(),
             'extension': extension.lstrip('.')}

    @property
    def name(self):
        return os.path.join(self._obj._ik.cache_dir, self._basename())

    @property
    def url(self):
        self._create()
        if self.spec.increment_count:
            fieldname = self._obj._ik.save_count_as
            if fieldname is not None:
                current_count = getattr(self._obj, fieldname)
                setattr(self._obj, fieldname, current_count + 1)
                self._obj.save(clear_cache=False)
        return self._obj._imgfield.storage.url(self.name)

    @property
    def file(self):
        self._create()
        return self._obj._imgfield.storage.open(self.name)

    @property
    def image(self):
        if self._img is None:
            self._create()
            if self._img is None:
                self._img = Image.open(self.file)
        return self._img

    @property
    def width(self):
        return self.image.size[0]

    @property
    def height(self):
        return self.image.size[1]


class Descriptor(object):
    def __init__(self, spec):
        self._spec = spec

    def __get__(self, obj, type=None):
        return Accessor(obj, self._spec)
@@ -1,86 +0,0 @@
import os
import tempfile
import unittest
from django.conf import settings
from django.core.files.base import ContentFile
from django.db import models
from django.test import TestCase

from imagekit import processors
from imagekit.models import ImageModel
from imagekit.specs import ImageSpec
from imagekit.lib import Image


class ResizeToWidth(processors.Resize):
    width = 100

class ResizeToHeight(processors.Resize):
    height = 100

class ResizeToFit(processors.Resize):
    width = 100
    height = 100

class ResizeCropped(ResizeToFit):
    crop = ('center', 'center')

class TestResizeToWidth(ImageSpec):
    access_as = 'to_width'
    processors = [ResizeToWidth]

class TestResizeToHeight(ImageSpec):
    access_as = 'to_height'
    processors = [ResizeToHeight]

class TestResizeCropped(ImageSpec):
    access_as = 'cropped'
    processors = [ResizeCropped]

class TestPhoto(ImageModel):
    """ Minimal ImageModel class for testing """
    image = models.ImageField(upload_to='images')

    class IKOptions:
        spec_module = 'imagekit.tests'


class IKTest(TestCase):
    """ Base TestCase class """
    def setUp(self):
        # create a test image using tempfile and PIL
        self.tmp = tempfile.TemporaryFile()
        Image.new('RGB', (800, 600)).save(self.tmp, 'JPEG')
        self.tmp.seek(0)
        self.p = TestPhoto()
        self.p.image.save(os.path.basename('test.jpg'),
                          ContentFile(self.tmp.read()))
        self.p.save()
        # destroy the temp file
        self.tmp.close()

    def test_setup(self):
        self.assertEqual(self.p.image.width, 800)
        self.assertEqual(self.p.image.height, 600)

    def test_to_width(self):
        self.assertEqual(self.p.to_width.width, 100)
        self.assertEqual(self.p.to_width.height, 75)

    def test_to_height(self):
        self.assertEqual(self.p.to_height.width, 133)
        self.assertEqual(self.p.to_height.height, 100)

    def test_crop(self):
        self.assertEqual(self.p.cropped.width, 100)
        self.assertEqual(self.p.cropped.height, 100)

    def test_url(self):
        tup = (settings.MEDIA_URL, self.p._ik.cache_dir, 'test_to_width.jpg')
        self.assertEqual(self.p.to_width.url, "%s%s/%s" % tup)

    def tearDown(self):
        # make sure the image file is deleted
        path = self.p.image.path
        self.p.delete()
        self.failIf(os.path.isfile(path))
78  localsettings WSL.py  Normal file
@@ -0,0 +1,78 @@
import sys
# link localsettings to this file for use on a Windows 10 machine running WSL1
# expofiles on a different drive

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3', # 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'NAME' : 'troggle.sqlite',              # Or path to database file if using sqlite3.
        'USER' : 'expo',                        # Not used with sqlite3.
        'PASSWORD' : 'sekrit',                  # Not used with sqlite3.
        'HOST' : '',                            # Set to empty string for localhost. Not used with sqlite3.
        'PORT' : '',                            # Set to empty string for default. Not used with sqlite3.
    }
}

EXPOUSER = 'expo'
EXPOUSERPASS = 'nnn:ggggggr'
EXPOUSER_EMAIL = 'philip.sargent@gmail.com'

REPOS_ROOT_PATH = '/mnt/d/CUCC-Expo/'

sys.path.append(REPOS_ROOT_PATH)
sys.path.append(REPOS_ROOT_PATH + 'troggle')

PUBLIC_SITE = False

SURVEX_DATA = REPOS_ROOT_PATH + 'loser/'
TUNNEL_DATA = REPOS_ROOT_PATH + 'drawings/'
THREEDCACHEDIR = REPOS_ROOT_PATH + 'expowebcache/3d/'

CAVERN = 'cavern'
THREEDTOPOS = '3dtopos'
EXPOWEB = REPOS_ROOT_PATH + 'expoweb/'
SURVEYS = REPOS_ROOT_PATH
#SURVEY_SCANS = REPOS_ROOT_PATH + 'expofiles/'
SURVEY_SCANS = '/mnt/f/expofiles/'
#FILES = REPOS_ROOT_PATH + 'expofiles'
FILES = '/mnt/f/expofiles'

EXPOWEB_URL = ''
SURVEYS_URL = '/survey_scans/'

PYTHON_PATH = REPOS_ROOT_PATH + 'troggle/'

URL_ROOT = 'http://127.0.0.1:8000/'
#URL_ROOT = "/mnt/d/CUCC-Expo/expoweb/"
DIR_ROOT = ''  # this should end in / if a value is given

#MEDIA_URL = URL_ROOT + DIR_ROOT + '/site_media/'
MEDIA_URL = '/site_media/'
MEDIA_ROOT = REPOS_ROOT_PATH + 'troggle/media/'
MEDIA_ADMIN_DIR = '/usr/lib/python2.7/site-packages/django/contrib/admin/media/'

STATIC_URL = URL_ROOT + 'static/'
STATIC_ROOT = DIR_ROOT + '/mnt/d/CUCC-Expo/'

JSLIB_URL = URL_ROOT + 'javascript/'

TINY_MCE_MEDIA_ROOT = '/usr/share/tinymce/www/'
TINY_MCE_MEDIA_URL = URL_ROOT + DIR_ROOT + '/tinymce_media/'

TEMPLATE_DIRS = (
    PYTHON_PATH + "templates",
    # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
)

LOGFILE = PYTHON_PATH + 'troggle.log'
@@ -62,16 +62,12 @@ TEMPLATE_DIRS = (
    # Don't forget to use absolute paths, not relative paths.
)

LOGFILE = '/home/expo/troggle/troggle_log.txt'
LOGFILE = '/home/expo/troggle/troggle.log'

FEINCMS_ADMIN_MEDIA='/site_media/feincms/'

EMAIL_HOST = "smtp.gmail.com"
EMAIL_HOST_USER = "cuccexpo@gmail.com"
EMAIL_HOST_PASSWORD = "khvtffkhvtff"
EMAIL_PORT=587
EMAIL_USE_TLS = True
#EMAIL_HOST = "smtp.gmail.com"
#EMAIL_HOST_USER = "cuccexpo@gmail.com"
#EMAIL_HOST_PASSWORD = "khvtffkhvtff"
#EMAIL_PORT=587
#EMAIL_USE_TLS = True

68  logbooksdump.py  Normal file
@@ -0,0 +1,68 @@
import os
import time
import timeit
import settings
os.environ['PYTHONPATH'] = settings.PYTHON_PATH
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings')
from django.core import management
from django.db import connection, close_old_connections
from django.contrib.auth.models import User
from django.http import HttpResponse
from django.core.urlresolvers import reverse
from troggle.core.models import Cave, Entrance
import troggle.flatpages.models

# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def import_auto_logbooks():
    import parsers.logbooks
    import os
    for pt in troggle.core.models.PersonTrip.objects.all():
        pt.delete()
    for lbe in troggle.core.models.LogbookEntry.objects.all():
        lbe.delete()
    for expedition in troggle.core.models.Expedition.objects.all():
        directory = os.path.join(settings.EXPOWEB,
                                 "years",
                                 expedition.year,
                                 "autologbook")
        for root, dirs, filenames in os.walk(directory):
            for filename in filenames:
                print(os.path.join(root, filename))
                parsers.logbooks.parseAutoLogBookEntry(os.path.join(root, filename))

# Temporary function until the definitive source of data is transferred.
from django.template.defaultfilters import slugify
from django.template import Context, loader
def dumplogbooks():
    def get_name(pe):
        if pe.nickname:
            return pe.nickname
        else:
            return pe.person.first_name
    for lbe in troggle.core.models.LogbookEntry.objects.all():
        dateStr = lbe.date.strftime("%Y-%m-%d")
        directory = os.path.join(settings.EXPOWEB,
                                 "years",
                                 lbe.expedition.year,
                                 "autologbook")
        if not os.path.isdir(directory):
            os.mkdir(directory)
        filename = os.path.join(directory,
                                dateStr + "." + slugify(lbe.title)[:50] + ".html")
        if lbe.cave:
            print(lbe.cave.reference())
            trip = {"title": lbe.title, "html":lbe.text, "cave": lbe.cave.reference(), "caveOrLocation": "cave"}
        else:
            trip = {"title": lbe.title, "html":lbe.text, "location":lbe.place, "caveOrLocation": "location"}
        pts = [pt for pt in lbe.persontrip_set.all() if pt.personexpedition]
        persons = [{"name": get_name(pt.personexpedition), "TU": pt.time_underground, "author": pt.is_logbook_entry_author} for pt in pts]
        f = open(filename, "wb")
        template = loader.get_template('dataformat/logbookentry.html')
        context = Context({'trip': trip,
                           'persons': persons,
                           'date': dateStr,
                           'expeditionyear': lbe.expedition.year})
        output = template.render(context)
        f.write(unicode(output).encode( "utf-8" ))
        f.close()
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@@ -48,7 +48,7 @@ def parseCaveQMs(cave,inputFile):
    elif cave=='hauch':
        placeholder, hadToCreate = LogbookEntry.objects.get_or_create(date__year=year, title="placeholder for QMs in 234", text="QMs temporarily attached to this should be re-attached to their actual trips", defaults={"date": date(year, 1, 1),"cave":hauchHl})
    if hadToCreate:
        print(cave + " placeholder logbook entry for " + str(year) + " added to database")
        print((" - placeholder logbook entry for " + cave + " " + str(year) + " added to database"))
    QMnum=re.match(r".*?-\d*?-X?(?P<numb>\d*)",line[0]).group("numb")
    newQM = QM()
    newQM.found_by=placeholder
@@ -71,13 +71,13 @@ def parseCaveQMs(cave,inputFile):
            if preexistingQM.new_since_parsing==False: # if the pre-existing QM has not been modified, overwrite it
                preexistingQM.delete()
                newQM.save()
                print("overwriting " + str(preexistingQM) +"\r")
                #print((" - overwriting " + str(preexistingQM) +"\r"))
            else: # otherwise, print that it was ignored
                print("preserving " + str(preexistingQM) + ", which was edited in admin \r")
                print((" - preserving " + str(preexistingQM) + ", which was edited in admin \r"))

        except QM.DoesNotExist: # if there is no pre-existing QM, save the new one
            newQM.save()
            print("QM "+str(newQM) + ' added to database\r')
            # print("QM "+str(newQM) + ' added to database\r')

    except KeyError: # check on this one
        continue

9  parsers/caves.py  Normal file → Executable file
@@ -152,7 +152,7 @@ def readcave(filename):
                slug = slug,
                primary = primary)
        except:
            message = "Can't find text (slug): %s, skipping %s" % (slug, context)
            message = " ! Can't find text (slug): %s, skipping %s" % (slug, context)
            models.DataIssue.objects.create(parser='caves', message=message)
            print(message)

@@ -164,22 +164,23 @@ def readcave(filename):
                entrance = models.Entrance.objects.get(entranceslug__slug = slug)
                ce = models.CaveAndEntrance.objects.update_or_create(cave = c, entrance_letter = letter, entrance = entrance)
            except:
                message = "Entrance text (slug) %s missing %s" % (slug, context)
                message = " ! Entrance text (slug) %s missing %s" % (slug, context)
                models.DataIssue.objects.create(parser='caves', message=message)
                print(message)


def getXML(text, itemname, minItems = 1, maxItems = None, printwarnings = True, context = ""):
    # this next line is where it crashes horribly if a stray umlaut creeps in. Will fix itself in python3.
    items = re.findall("<%(itemname)s>(.*?)</%(itemname)s>" % {"itemname": itemname}, text, re.S)
    if len(items) < minItems and printwarnings:
        message = "%(count)i %(itemname)s found, at least %(min)i expected" % {"count": len(items),
        message = " ! %(count)i %(itemname)s found, at least %(min)i expected" % {"count": len(items),
                                                                                  "itemname": itemname,
                                                                                  "min": minItems} + context
        models.DataIssue.objects.create(parser='caves', message=message)
        print(message)

    if maxItems is not None and len(items) > maxItems and printwarnings:
        message = "%(count)i %(itemname)s found, no more than %(max)i expected" % {"count": len(items),
        message = " ! %(count)i %(itemname)s found, no more than %(max)i expected" % {"count": len(items),
                                                                                      "itemname": itemname,
                                                                                      "max": maxItems} + context
        models.DataIssue.objects.create(parser='caves', message=message)
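
# A minimal sketch of what getXML() matches (the sample text is made up):
#   import re
#   text = "<cave><name>KH</name><name>Eishoehle</name></cave>"
#   items = re.findall("<%(itemname)s>(.*?)</%(itemname)s>" % {"itemname": "name"}, text, re.S)
#   # items == ['KH', 'Eishoehle']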

@@ -1,20 +1,20 @@
# -*- coding: utf-8 -*-

from django.conf import settings
import troggle.core.models as models

from parsers.people import GetPersonExpeditionNameLookup
from parsers.cavetab import GetCaveLookup

from django.template.defaultfilters import slugify
from django.utils.timezone import get_current_timezone
from django.utils.timezone import make_aware

from __future__ import (absolute_import, division,
                        print_function)
import csv
import re
import datetime
import datetime, time
import os
import pickle

from django.conf import settings
from django.template.defaultfilters import slugify


from troggle.core.models import DataIssue, Expedition
import troggle.core.models as models
from parsers.people import GetPersonExpeditionNameLookup
from parsers.cavetab import GetCaveLookup
from utils import save_carefully

#
@@ -78,13 +78,20 @@ def GetTripCave(place): # need to be fuzzier about matching here. Already a very
    print("No cave found for place " , place)
    return


logentries = [] # the entire logbook is a single object: a list of entries
noncaveplaces = [ "Journey", "Loser Plateau" ]

def EnterLogIntoDbase(date, place, title, text, trippeople, expedition, logtime_underground, entry_type="wiki"):
    """ Saves a logbook entry and its related persontrips """
    global logentries

    entrytuple = (date, place, title, text,
                  trippeople, expedition, logtime_underground, entry_type)
    logentries.append(entrytuple)

    trippersons, author = GetTripPersons(trippeople, expedition, logtime_underground)
    if not author:
        print(" - Skipping logentry: " + title + " - no author for entry")
        print(" * Skipping logentry: " + title + " - no author for entry")
        message = "Skipping logentry: %s - no author for entry in year '%s'" % (title, expedition.year)
        models.DataIssue.objects.create(parser='logbooks', message=message)
        return
@@ -100,14 +107,13 @@ def EnterLogIntoDbase(date, place, title, text, trippeople, expedition, logtime_
    lookupAttribs={'date':date, 'title':title}
    nonLookupAttribs={'place':place, 'text':text, 'expedition':expedition, 'cave':cave, 'slug':slugify(title)[:50], 'entry_type':entry_type}
    lbo, created=save_carefully(models.LogbookEntry, lookupAttribs, nonLookupAttribs)

    for tripperson, time_underground in trippersons:
        lookupAttribs={'personexpedition':tripperson, 'logbook_entry':lbo}
        nonLookupAttribs={'time_underground':time_underground, 'is_logbook_entry_author':(tripperson == author)}
        #print nonLookupAttribs
        save_carefully(models.PersonTrip, lookupAttribs, nonLookupAttribs)

def ParseDate(tripdate, year):
    """ Interprets dates in the expo logbooks and returns a correct datetime.date object """
    mdatestandard = re.match(r"(\d\d\d\d)-(\d\d)-(\d\d)", tripdate)
@@ -123,12 +129,11 @@ def ParseDate(tripdate, year):
        assert False, tripdate
    return datetime.date(year, month, day)

# 2006, 2008 - 2010
# 2006, 2008 - 2009
def Parselogwikitxt(year, expedition, txt):
    trippara = re.findall(r"===(.*?)===([\s\S]*?)(?====)", txt)
    for triphead, triptext in trippara:
        tripheadp = triphead.split("|")
        #print "ttt", tripheadp
        assert len(tripheadp) == 3, (tripheadp, triptext)
        tripdate, tripplace, trippeople = tripheadp
        tripsplace = tripplace.split(" - ")
@@ -136,19 +141,14 @@ def Parselogwikitxt(year, expedition, txt):

        tul = re.findall(r"T/?U:?\s*(\d+(?:\.\d*)?|unknown)\s*(hrs|hours)?", triptext)
        if tul:
            #assert len(tul) <= 1, (triphead, triptext)
            #assert tul[0][1] in ["hrs", "hours"], (triphead, triptext)
            tu = tul[0][0]
        else:
            tu = ""
        #assert tripcave == "Journey", (triphead, triptext)

        #print tripdate
        ldate = ParseDate(tripdate.strip(), year)
        #print "\n", tripcave, "--- ppp", trippeople, len(triptext)
        EnterLogIntoDbase(date = ldate, place = tripcave, title = tripplace, text = triptext, trippeople=trippeople, expedition=expedition, logtime_underground=0)
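
# A minimal sketch of the T/U (time underground) extraction above (sample text made up):
#   import re
#   tul = re.findall(r"T/?U:?\s*(\d+(?:\.\d*)?|unknown)\s*(hrs|hours)?", "Great trip. T/U: 5.5 hrs")
#   # tul == [('5.5', 'hrs')], so tu = tul[0][0] == '5.5'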

# 2002, 2004, 2005, 2007, 2011 - 2018
# 2002, 2004, 2005, 2007, 2010 - now
def Parseloghtmltxt(year, expedition, txt):
    #print(" - Starting log html parser")
    tripparas = re.findall(r"<hr\s*/>([\s\S]*?)(?=<hr)", txt)
@@ -168,28 +168,21 @@ def Parseloghtmltxt(year, expedition, txt):
            ''', trippara)
        if not s:
            if not re.search(r"Rigging Guide", trippara):
                print("can't parse: ", trippara) # this is 2007, which needs editing
                #assert s, trippara
                print(("can't parse: ", trippara)) # this is 2007, which needs editing
            continue
        tripid, tripid1, tripdate, trippeople, triptitle, triptext, tu = s.groups()
        ldate = ParseDate(tripdate.strip(), year)
        #assert tripid[:-1] == "t" + tripdate, (tripid, tripdate)
        #trippeople = re.sub(r"Ol(?!l)", "Olly", trippeople)
        #trippeople = re.sub(r"Wook(?!e)", "Wookey", trippeople)
        triptitles = triptitle.split(" - ")
        if len(triptitles) >= 2:
            tripcave = triptitles[0]
        else:
            tripcave = "UNKNOWN"
        #print("\n", tripcave, "--- ppp", trippeople, len(triptext))
        ltriptext = re.sub(r"</p>", "", triptext)
        ltriptext = re.sub(r"\s*?\n\s*", " ", ltriptext)
        ltriptext = re.sub(r"<p>", "</br></br>", ltriptext).strip()
        EnterLogIntoDbase(date = ldate, place = tripcave, title = triptitle, text = ltriptext,
                          trippeople=trippeople, expedition=expedition, logtime_underground=0,
                          entry_type="html")
    if logbook_entry_count == 0:
        print(" - No trip entries found in logbook, check the syntax matches the htmltxt format")


# main parser for 1991 - 2001. Simpler, because the data has been hacked about so much to fit it.
@@ -203,9 +196,6 @@ def Parseloghtml01(year, expedition, txt):
        tripid = mtripid and mtripid.group(1) or ""
        tripheader = re.sub(r"</?(?:[ab]|span)[^>]*>", "", tripheader)

        #print " ", [tripheader]
        #continue

        tripdate, triptitle, trippeople = tripheader.split("|")
        ldate = ParseDate(tripdate.strip(), year)

@@ -223,19 +213,14 @@ def Parseloghtml01(year, expedition, txt):

        mtail = re.search(r'(?:<a href="[^"]*">[^<]*</a>|\s|/|-|&|</?p>|\((?:same day|\d+)\))*$', ltriptext)
        if mtail:
            #print mtail.group(0)
            ltriptext = ltriptext[:mtail.start(0)]
        ltriptext = re.sub(r"</p>", "", ltriptext)
        ltriptext = re.sub(r"\s*?\n\s*", " ", ltriptext)
        ltriptext = re.sub(r"<p>|<br>", "\n\n", ltriptext).strip()
        #ltriptext = re.sub("[^\s0-9a-zA-Z\-.,:;'!]", "NONASCII", ltriptext)
        ltriptext = re.sub(r"</?u>", "_", ltriptext)
        ltriptext = re.sub(r"</?i>", "''", ltriptext)
        ltriptext = re.sub(r"</?b>", "'''", ltriptext)

        #print ldate, trippeople.strip()
        # could include the tripid (url link for cross referencing)
        EnterLogIntoDbase(date=ldate, place=tripcave, title=triptitle, text=ltriptext,
                          trippeople=trippeople, expedition=expedition, logtime_underground=0,
                          entry_type="html")
@@ -262,7 +247,6 @@ def Parseloghtml03(year, expedition, txt):
            tripcave = triptitles[0]
        else:
            tripcave = "UNKNOWN"
        #print tripcave, "--- ppp", triptitle, trippeople, len(triptext)
        ltriptext = re.sub(r"</p>", "", triptext)
        ltriptext = re.sub(r"\s*?\n\s*", " ", ltriptext)
        ltriptext = re.sub(r"<p>", "\n\n", ltriptext).strip()
@@ -292,53 +276,95 @@ def SetDatesFromLogbookEntries(expedition):


def LoadLogbookForExpedition(expedition):
    """ Parses all logbook entries for one expedition """

    expowebbase = os.path.join(settings.EXPOWEB, "years")
    yearlinks = settings.LOGBOOK_PARSER_SETTINGS

    """ Parses all logbook entries for one expedition
    """
    global logentries
    logbook_parseable = False
    logbook_cached = False
    yearlinks = settings.LOGBOOK_PARSER_SETTINGS
    expologbase = os.path.join(settings.EXPOWEB, "years")

    if expedition.year in yearlinks:
        year_settings = yearlinks[expedition.year]
        file_in = open(os.path.join(expowebbase, year_settings[0]))
        txt = file_in.read().decode("latin1")
        file_in.close()
        parsefunc = year_settings[1]
        logbook_parseable = True
        print(" - Parsing logbook: " + year_settings[0] + "\n - Using parser: " + year_settings[1])
        logbookfile = os.path.join(expologbase, yearlinks[expedition.year][0])
        parsefunc = yearlinks[expedition.year][1]
    else:
        logbookfile = os.path.join(expologbase, expedition.year, settings.DEFAULT_LOGBOOK_FILE)
        parsefunc = settings.DEFAULT_LOGBOOK_PARSER
    cache_filename = logbookfile + ".cache"

    try:
        bad_cache = False
        now = time.time()
        cache_t = os.path.getmtime(cache_filename)
        if os.path.getmtime(logbookfile) - cache_t > 2: # at least 2 secs later
            bad_cache= True
        if now - cache_t > 30*24*60*60:
            bad_cache= True
        if bad_cache:
            print(" - ! Cache is either stale or more than 30 days old. Deleting it.")
            os.remove(cache_filename)
            logentries=[]
            print(" ! Removed stale or corrupt cache file")
            raise
        print(" - Reading cache: " + cache_filename, end='')
        try:
            file_in = open(os.path.join(expowebbase, expedition.year, settings.DEFAULT_LOGBOOK_FILE))
            with open(cache_filename, "rb") as f:
                logentries = pickle.load(f)
            print(" -- Loaded ", len(logentries), " log entries")
            logbook_cached = True
        except:
            print("\n ! Failed to load corrupt cache. Deleting it.\n")
            os.remove(cache_filename)
            logentries=[]
            raise
    except : # no cache found
        #print(" - No cache \"" + cache_filename +"\"")
        try:
            file_in = open(logbookfile,'rb')
            txt = file_in.read().decode("latin1")
            file_in.close()
            logbook_parseable = True
            print("No set parser found, using default")
            parsefunc = settings.DEFAULT_LOGBOOK_PARSER
            print((" - Using: " + parsefunc + " to parse " + logbookfile))
        except (IOError):
            logbook_parseable = False
            print("Couldn't open default logbook file and nothing in settings for expo " + expedition.year)
            print((" ! Couldn't open logbook " + logbookfile))

    if logbook_parseable:
        parser = globals()[parsefunc]
        parser(expedition.year, expedition, txt)
        SetDatesFromLogbookEntries(expedition)
        # and this has also stored all the log entries in logentries[]
        if len(logentries) >0:
            print(" - Caching " , len(logentries), " log entries")
            with open(cache_filename, "wb") as fc:
                pickle.dump(logentries, fc, 2)
        else:
            print(" ! NO TRIP entries found in logbook, check the syntax.")

        logentries=[] # flush for next year
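
    # Restating the cache-staleness rule above as a standalone sketch (thresholds as coded):
    #   def cache_is_bad(logbookfile, cache_filename, now):
    #       cache_t = os.path.getmtime(cache_filename)
    #       if os.path.getmtime(logbookfile) - cache_t > 2:   # logbook edited after cache written
    #           return True
    #       return now - cache_t > 30*24*60*60                # cache more than 30 days old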

    if logbook_cached:
        i=0
        for entrytuple in range(len(logentries)):
            date, place, title, text, trippeople, expedition, logtime_underground, \
                entry_type = logentries[i]
            EnterLogIntoDbase(date, place, title, text, trippeople, expedition, logtime_underground,\
                entry_type)
            i +=1

    #return "TOLOAD: " + year + " " + str(expedition.personexpedition_set.all()[1].logbookentry_set.count()) + " " + str(models.PersonTrip.objects.filter(personexpedition__expedition=expedition).count())


def LoadLogbooks():
    """ This is the master function for parsing all logbooks into the Troggle database. """

    # Clear the logbook data issues as we are reloading
    models.DataIssue.objects.filter(parser='logbooks').delete()
    # Fetch all expos
    expos = models.Expedition.objects.all()
    """ This is the master function for parsing all logbooks into the Troggle database.
    """
    DataIssue.objects.filter(parser='logbooks').delete()
    expos = Expedition.objects.all()
    nologbook = ["1976", "1977","1978","1979","1980","1980","1981","1983","1984",
                 "1985","1986","1987","1988","1989","1990",]
    for expo in expos:
        print("\nLoading Logbook for: " + expo.year)

        # Load logbook for expo
        LoadLogbookForExpedition(expo)
        if expo.year not in nologbook:
            print((" - Logbook for: " + expo.year))
            LoadLogbookForExpedition(expo)


dateRegex = re.compile(r'<span\s+class="date">(\d\d\d\d)-(\d\d)-(\d\d)</span>', re.S)
|
||||
@@ -362,25 +388,25 @@ def parseAutoLogBookEntry(filename):
        year, month, day = [int(x) for x in dateMatch.groups()]
        date = datetime.date(year, month, day)
    else:
        errors.append("Date could not be found")
        errors.append(" - Date could not be found")

    expeditionYearMatch = expeditionYearRegex.search(contents)
    if expeditionYearMatch:
        try:
            expedition = models.Expedition.objects.get(year = expeditionYearMatch.groups()[0])
            personExpeditionNameLookup = GetPersonExpeditionNameLookup(expedition)
        except models.Expedition.DoesNotExist:
            errors.append("Expedition not in database")
        except Expedition.DoesNotExist:
            errors.append(" - Expedition not in database")
    else:
        errors.append("Expediton Year could not be parsed")
        errors.append(" - Expedition Year could not be parsed")

    titleMatch = titleRegex.search(contents)
    if titleMatch:
        title, = titleMatch.groups()
        if len(title) > settings.MAX_LOGBOOK_ENTRY_TITLE_LENGTH:
            errors.append("Title too long")
            errors.append(" - Title too long")
    else:
        errors.append("Title could not be found")
        errors.append(" - Title could not be found")

    caveMatch = caveRegex.search(contents)
    if caveMatch:
@@ -389,7 +415,7 @@ def parseAutoLogBookEntry(filename):
            cave = models.getCaveByReference(caveRef)
        except AssertionError:
            cave = None
            errors.append("Cave not found in database")
            errors.append(" - Cave not found in database")
    else:
        cave = None

@@ -400,13 +426,13 @@ def parseAutoLogBookEntry(filename):
        location = None

    if cave is None and location is None:
        errors.append("Location nor cave could not be found")
        errors.append(" - Location nor cave could not be found")

    reportMatch = reportRegex.search(contents)
    if reportMatch:
        report, = reportMatch.groups()
    else:
        errors.append("Contents could not be found")
        errors.append(" - Contents could not be found")
    if errors:
        return errors # Easiest to bail out at this point as we need to make sure that we know which expedition to look for people from.
    people = []
@@ -417,21 +443,21 @@ def parseAutoLogBookEntry(filename):
        if name.lower() in personExpeditionNameLookup:
            personExpo = personExpeditionNameLookup[name.lower()]
        else:
            errors.append("Person could not be found in database")
            errors.append(" - Person could not be found in database")
        author = bool(author)
    else:
        errors.append("Persons name could not be found")
        errors.append(" - Persons name could not be found")

    TUMatch = TURegex.search(contents)
    if TUMatch:
        TU, = TUMatch.groups()
    else:
        errors.append("TU could not be found")
        errors.append(" - TU could not be found")
    if not errors:
        people.append((name, author, TU))
    if errors:
        return errors # Bail out before commiting to the database
    logbookEntry = models.LogbookEntry(date = date,
        return errors # Bail out before committing to the database
    logbookEntry = LogbookEntry(date = date,
                                expedition = expedition,
                                title = title, cave = cave, place = location,
                                text = report, slug = slugify(title)[:50],
@@ -7,41 +7,48 @@ from utils import save_carefully
from HTMLParser import HTMLParser
from unidecode import unidecode

def saveMugShot(mugShotPath, mugShotFilename, person):
    if mugShotFilename.startswith(r'i/'): #if filename in cell has the directory attached (I think they all do), remove it
        mugShotFilename=mugShotFilename[2:]
    else:
        mugShotFilename=mugShotFilename # just in case one doesn't
# def saveMugShot(mugShotPath, mugShotFilename, person):
#     if mugShotFilename.startswith(r'i/'): #if filename in cell has the directory attached (I think they all do), remove it
#         mugShotFilename=mugShotFilename[2:]
#     else:
#         mugShotFilename=mugShotFilename # just in case one doesn't

    dummyObj=models.DPhoto(file=mugShotFilename)
#     dummyObj=models.DPhoto(file=mugShotFilename)

    #Put a copy of the file in the right place. mugShotObj.file.path is determined by the django filesystemstorage specified in models.py
    if not os.path.exists(dummyObj.file.path):
        shutil.copy(mugShotPath, dummyObj.file.path)
#     #Put a copy of the file in the right place. mugShotObj.file.path is determined by the django filesystemstorage specified in models.py
#     if not os.path.exists(dummyObj.file.path):
#         shutil.copy(mugShotPath, dummyObj.file.path)

    mugShotObj, created = save_carefully(
        models.DPhoto,
        lookupAttribs={'is_mugshot':True, 'file':mugShotFilename},
        nonLookupAttribs={'caption':"Mugshot for "+person.first_name+" "+person.last_name}
    )
#     mugShotObj, created = save_carefully(
#         models.DPhoto,
#         lookupAttribs={'is_mugshot':True, 'file':mugShotFilename},
#         nonLookupAttribs={'caption':"Mugshot for "+person.first_name+" "+person.last_name}
#     )

    if created:
        mugShotObj.contains_person.add(person)
        mugShotObj.save()
#     if created:
#         mugShotObj.contains_person.add(person)
#         mugShotObj.save()

def parseMugShotAndBlurb(personline, header, person):
    """create mugshot Photo instance"""
    mugShotFilename=personline[header["Mugshot"]]
    mugShotPath = os.path.join(settings.EXPOWEB, "folk", mugShotFilename)
    if mugShotPath[-3:]=='jpg': #if person just has an image, add it
        saveMugShot(mugShotPath=mugShotPath, mugShotFilename=mugShotFilename, person=person)
        #saveMugShot(mugShotPath=mugShotPath, mugShotFilename=mugShotFilename, person=person)
        pass
    elif mugShotPath[-3:]=='htm': #if person has an html page, find the image(s) and add it. Also, add the text from the html page to the "blurb" field in his model instance.
        personPageOld=open(mugShotPath,'r').read()
        if not person.blurb:
            person.blurb=re.search('<body>.*<hr',personPageOld,re.DOTALL).group() #this needs to be refined, take care of the HTML and make sure it doesn't match beyond the blurb
            for mugShotFilename in re.findall('i/.*?jpg',personPageOld,re.DOTALL):
                mugShotPath = os.path.join(settings.EXPOWEB, "folk", mugShotFilename)
                saveMugShot(mugShotPath=mugShotPath, mugShotFilename=mugShotFilename, person=person)
            pblurb=re.search('<body>.*<hr',personPageOld,re.DOTALL)
            if pblurb:
                #this needs to be refined, take care of the HTML and make sure it doesn't match beyond the blurb.
                #Only finds the first image, not all of them
                person.blurb=re.search('<body>.*<hr',personPageOld,re.DOTALL).group()
            else:
                print "ERROR: --------------- Broken link or Blurb parse error in ", mugShotFilename
        #for mugShotFilename in re.findall('i/.*?jpg',personPageOld,re.DOTALL):
        #    mugShotPath = os.path.join(settings.EXPOWEB, "folk", mugShotFilename)
        #    saveMugShot(mugShotPath=mugShotPath, mugShotFilename=mugShotFilename, person=person)
    person.save()

def LoadPersonsExpos():
@@ -52,7 +59,7 @@ def LoadPersonsExpos():
    header = dict(zip(headers, range(len(headers))))

    # make expeditions
    print("Loading expeditions")
    print(" - Loading expeditions")
    years = headers[5:]

    for year in years:
@@ -62,7 +69,7 @@ def LoadPersonsExpos():
        save_carefully(models.Expedition, lookupAttribs, nonLookupAttribs)

    # make persons
    print("Loading personexpeditions")
    print(" - Loading personexpeditions")

    for personline in personreader:
        name = personline[header["Name"]]
@@ -132,7 +139,7 @@ def GetPersonExpeditionNameLookup(expedition):
    res = { }
    duplicates = set()

    print("Calculating GetPersonExpeditionNameLookup for " + expedition.year)
    #print("Calculating GetPersonExpeditionNameLookup for " + expedition.year)
    personexpeditions = models.PersonExpedition.objects.filter(expedition=expedition)
    htmlparser = HTMLParser()
    for personexpedition in personexpeditions:
425  parsers/survex.py  (Normal file → Executable file)
@@ -1,47 +1,67 @@
from __future__ import absolute_import, division, print_function

import os
import re
import sys
import time
from datetime import datetime, timedelta
from subprocess import PIPE, Popen, call

from django.utils.timezone import get_current_timezone, make_aware

import troggle.settings as settings
import troggle.core.models as models
import troggle.settings as settings

from subprocess import call, Popen, PIPE

import troggle.core.models_survex as models_survex
from troggle.parsers.people import GetPersonExpeditionNameLookup
from django.utils.timezone import get_current_timezone
from django.utils.timezone import make_aware
from troggle.core.views_caves import MapLocations

import re
import os
from datetime import datetime
"""A 'survex block' is a *begin...*end set of cave data.
A 'survexscansfolder' is what we today call a "survey scans folder" or a "wallet".
"""

line_leg_regex = re.compile(r"[\d\-+.]+$")
survexlegsalllength = 0.0
survexlegsnumber = 0

def LoadSurvexLineLeg(survexblock, stardata, sline, comment, cave):
    # The try catches here need replacing as they are relativly expensive
    global survexlegsalllength
    global survexlegsnumber
    # The try catches here need replacing as they are relatively expensive
    ls = sline.lower().split()
    ssfrom = survexblock.MakeSurvexStation(ls[stardata["from"]])
    ssto = survexblock.MakeSurvexStation(ls[stardata["to"]])

    survexleg = models.SurvexLeg(block=survexblock, stationfrom=ssfrom, stationto=ssto)
    # this next fails for two surface survey svx files which use / for decimal point
    # e.g. '29/09' in the tape measurement, or use decimals but in brackets, e.g. (06.05)
    if stardata["type"] == "normal":
        try:
            survexleg.tape = float(ls[stardata["tape"]])
            survexlegsnumber += 1
        except ValueError:
            print("Tape misread in", survexblock.survexfile.path)
            print("Stardata:", stardata)
            print("Line:", ls)
            survexleg.tape = 1000
            print("! Tape misread in", survexblock.survexfile.path)
            print(" Stardata:", stardata)
            print(" Line:", ls)
            message = ' ! Value Error: Tape misread in line %s in %s' % (ls, survexblock.survexfile.path)
            models.DataIssue.objects.create(parser='survex', message=message)
            survexleg.tape = 0
        try:
            lclino = ls[stardata["clino"]]
        except:
            print("Clino misread in", survexblock.survexfile.path)
            print("Stardata:", stardata)
            print("Line:", ls)
            print("! Clino misread in", survexblock.survexfile.path)
            print(" Stardata:", stardata)
            print(" Line:", ls)
            message = ' ! Value Error: Clino misread in line %s in %s' % (ls, survexblock.survexfile.path)
            models.DataIssue.objects.create(parser='survex', message=message)
            lclino = error
        try:
            lcompass = ls[stardata["compass"]]
        except:
            print("Compass misread in", survexblock.survexfile.path)
            print("Stardata:", stardata)
            print("Line:", ls)
            print("! Compass misread in", survexblock.survexfile.path)
            print(" Stardata:", stardata)
            print(" Line:", ls)
            message = ' ! Value Error: Compass misread in line %s in %s' % (ls, survexblock.survexfile.path)
            models.DataIssue.objects.create(parser='survex', message=message)
            lcompass = error
        if lclino == "up":
            survexleg.compass = 0.0
@@ -53,9 +73,11 @@ def LoadSurvexLineLeg(survexblock, stardata, sline, comment, cave):
            try:
                survexleg.compass = float(lcompass)
            except ValueError:
                print("Compass misread in", survexblock.survexfile.path)
                print("Stardata:", stardata)
                print("Line:", ls)
                print("! Compass misread in", survexblock.survexfile.path)
                print(" Stardata:", stardata)
                print(" Line:", ls)
                message = ' ! Value Error: line %s in %s' % (ls, survexblock.survexfile.path)
                models.DataIssue.objects.create(parser='survex', message=message)
                survexleg.compass = 1000
                survexleg.clino = -90.0
    else:
@@ -68,15 +90,20 @@ def LoadSurvexLineLeg(survexblock, stardata, sline, comment, cave):
    survexleg.cave = cave

    # only save proper legs
    survexleg.save()
    # No need to save as we are measuring lengths only on parsing now.
    # delete the object so that django autosaving doesn't save it.
    survexleg = None
    #survexleg.save()

    itape = stardata.get("tape")
    if itape:
        try:
            survexblock.totalleglength += float(ls[itape])
            survexlegsalllength += float(ls[itape])
        except ValueError:
            print("Length not added")
            survexblock.save()
            print("! Length not added")
    # No need to save as we are measuring lengths only on parsing now.
    #survexblock.save()
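To make the stardata indirection concrete, a hedged sketch of how one normal-style leg line is sliced up (the leg line itself is made up):

    stardata = {"type": "normal", "from": 0, "to": 1, "tape": 2, "compass": 3, "clino": 4}
    sline = "23 24 5.17 137.5 -12"
    ls = sline.lower().split()
    tape    = float(ls[stardata["tape"]])     # 5.17
    compass = float(ls[stardata["compass"]])  # 137.5
    clino   = float(ls[stardata["clino"]])    # -12.0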
def LoadSurvexEquate(survexblock, sline):
@@ -94,62 +121,102 @@ stardatadefault = {"type":"normal", "t":"leg", "from":0, "to":1, "tape":2, "comp
stardataparamconvert = {"length":"tape", "bearing":"compass", "gradient":"clino"}

regex_comment = re.compile(r"([^;]*?)\s*(?:;\s*(.*))?\n?$")
regex_ref = re.compile(r'.*?ref.*?(\d+)\s*#\s*(\d+)')
regex_ref = re.compile(r'.*?ref.*?(\d+)\s*#\s*(X)?\s*(\d+)')
regex_star = re.compile(r'\s*\*[\s,]*(\w+)\s*(.*?)\s*(?:;.*)?$')
# years from 1960 to 2039
regex_starref = re.compile(r'^\s*\*ref[\s.:]*((?:19[6789]\d)|(?:20[0123]\d))\s*#?\s*(X)?\s*(.*?\d+.*?)$(?i)')
# regex_starref = re.compile("""?x                # VERBOSE mode - can't get this to work
#     ^\s*\*ref                                   # look for *ref at start of line
#     [\s.:]*                                     # some spaces, stops or colons
#     ((?:19[6789]\d)|(?:20[0123]\d))             # a date from 1960 to 2039 - captured as one field
#     \s*#                                        # spaces then hash separator
#     ?\s*(X)                                     # optional X - captured
#     ?\s*(.*?\d+.*?)                             # maybe a space, then at least one digit in the string - captured
#     $(?i)""", re.X)                             # the end (do the whole thing case insensitively)

regex_team = re.compile(r"(Insts|Notes|Tape|Dog|Useless|Pics|Helper|Disto|Consultant)\s+(.*)$(?i)")
regex_team_member = re.compile(r" and | / |, | & | \+ |^both$|^none$(?i)")
regex_qm = re.compile(r'^\s*QM(\d)\s+?([a-dA-DxX])\s+([\w\-]+)\.(\d+)\s+(([\w\-]+)\.(\d+)|\-)\s+(.+)$')

insp = ""
callcount = 0
def RecursiveLoad(survexblock, survexfile, fin, textlines):
    """Follows the *include links in all the survex files from the root file 1623.svx
    and reads in the survex blocks, other data and the wallet references (survexscansfolder) as it
    goes. This part of the data import process is where the maximum memory is used and where it
    crashes on memory-constrained machines.
    """
    iblankbegins = 0
    text = [ ]
    stardata = stardatadefault
    teammembers = [ ]
    global insp
    global callcount
    global survexlegsnumber

    # uncomment to print out all files during parsing
    print(" - Reading file: " + survexblock.survexfile.path)
    print(insp+" - Reading file: " + survexblock.survexfile.path + " <> " + survexfile.path)
    stamp = datetime.now()
    lineno = 0

    sys.stderr.flush();
    callcount +=1
    if callcount >=10:
        callcount=0
        print(".", file=sys.stderr,end='')
    # Try to find the cave in the DB if not use the string as before
    path_match = re.search(r"caves-(\d\d\d\d)/(\d+|\d\d\d\d-?\w+-\d+)/", survexblock.survexfile.path)
    if path_match:
        pos_cave = '%s-%s' % (path_match.group(1), path_match.group(2))
        # print('Match')
        # print(pos_cave)
        # print(insp+'Match')
        # print(insp+os_cave)
        cave = models.getCaveByReference(pos_cave)
        if cave:
            survexfile.cave = cave
    svxlines = ''
    svxlines = fin.read().splitlines()
    # print('Cave - preloop ' + str(survexfile.cave))
    # print(survexblock)
    # print(insp+'Cave - preloop ' + str(survexfile.cave))
    # print(insp+survexblock)
    for svxline in svxlines:

        # print(survexblock)
        # print(insp+survexblock)

        # print(svxline)
        # print(insp+svxline)
        # if not svxline:
        #     print(' - Not survex')
        #     print(insp+' - Not survex')
        #     return
        # textlines.append(svxline)

        lineno += 1

        # print(' - Line: %d' % lineno)
        # print(insp+' - Line: %d' % lineno)

        # break the line at the comment
        sline, comment = regex_comment.match(svxline.strip()).groups()
        # detect ref line pointing to the scans directory
        mref = comment and regex_ref.match(comment)
        if mref:
            refscan = "%s#%s" % (mref.group(1), mref.group(2))
            yr, letterx, wallet = mref.groups()
            if not letterx:
                letterx = ""
            else:
                letterx = "X"
            if len(wallet)<2:
                wallet = "0" + wallet
            refscan = "%s#%s%s" % (yr, letterx, wallet )
            #print(insp+' - Wallet ;ref - %s - looking for survexscansfolder' % refscan)
            survexscansfolders = models.SurvexScansFolder.objects.filter(walletname=refscan)
            if survexscansfolders:
                survexblock.survexscansfolder = survexscansfolders[0]
                #survexblock.refscandir = "%s/%s%%23%s" % (mref.group(1), mref.group(1), mref.group(2))
                survexblock.save()
                continue
                # print(insp+' - Wallet ; ref - %s - found in survexscansfolders' % refscan)
            else:
                message = ' ! Wallet ; ref - %s - NOT found in survexscansfolders %s-%s-%s' % (refscan,yr,letterx,wallet)
                print(insp+message)
                models.DataIssue.objects.create(parser='survex', message=message)

        # This whole section should be moved if we can have *QM become a proper survex command
        # Spec of QM in SVX files, currently commented out need to add to survex
@@ -159,7 +226,7 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
        # ;QM1 a hobnob_hallway_2.42 - junction of keyhole passage
        qmline = comment and regex_qm.match(comment)
        if qmline:
            print(qmline.groups())
            # print(insp+qmline.groups())
            #(u'1', u'B', u'miraclemaze', u'1.17', u'-', None, u'\tcontinuation of rift')
            qm_no = qmline.group(1)
            qm_grade = qmline.group(2)
@@ -169,49 +236,74 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
            qm_resolve_station = qmline.group(7)
            qm_notes = qmline.group(8)

            print('Cave - %s' % survexfile.cave)
            print('QM no %d' % int(qm_no))
            print('QM grade %s' % qm_grade)
            print('QM section %s' % qm_from_section)
            print('QM station %s' % qm_from_station)
            print('QM res section %s' % qm_resolve_section)
            print('QM res station %s' % qm_resolve_station)
            print('QM notes %s' % qm_notes)
            # print(insp+'Cave - %s' % survexfile.cave)
            # print(insp+'QM no %d' % int(qm_no))
            # print(insp+'QM grade %s' % qm_grade)
            # print(insp+'QM section %s' % qm_from_section)
            # print(insp+'QM station %s' % qm_from_station)
            # print(insp+'QM res section %s' % qm_resolve_section)
            # print(insp+'QM res station %s' % qm_resolve_station)
            # print(insp+'QM notes %s' % qm_notes)

            # If the QM isn't resolved (has a resolving station) thn load it
            # If the QM isn't resolved (has a resolving station) then load it
            if not qm_resolve_section or qm_resolve_section is not '-' or qm_resolve_section is not 'None':
                from_section = models.SurvexBlock.objects.filter(name=qm_from_section)
                # If we can find a section (survex note chunck, named)
                if len(from_section) > 0:
                    print(from_section[0])
                    # print(insp+from_section[0])
                    from_station = models.SurvexStation.objects.filter(block=from_section[0], name=qm_from_station)
                    # If we can find a from station then we have the nearest station and can import it
                    if len(from_station) > 0:
                        print(from_station[0])
                        # print(insp+from_station[0])
                        qm = models.QM.objects.create(number=qm_no,
                                                      nearest_station=from_station[0],
                                                      grade=qm_grade.upper(),
                                                      location_description=qm_notes)
            else:
                print('QM found but resolved')
                # print(insp+' - QM found but resolved')
                pass

        #print('Cave -sline ' + str(cave))
        #print(insp+'Cave -sline ' + str(cave))
        if not sline:
            continue

        # detect the star ref command
        mstar = regex_starref.match(sline)
        if mstar:
            yr,letterx,wallet = mstar.groups()
            if not letterx:
                letterx = ""
            else:
                letterx = "X"
            if len(wallet)<2:
                wallet = "0" + wallet
            assert (int(yr)>1960 and int(yr)<2039), "Wallet year out of bounds: %s" % yr
            assert (int(wallet)<100), "Wallet number more than 100: %s" % wallet
            refscan = "%s#%s%s" % (yr, letterx, wallet)
            survexscansfolders = models.SurvexScansFolder.objects.filter(walletname=refscan)
            if survexscansfolders:
                survexblock.survexscansfolder = survexscansfolders[0]
                survexblock.save()
                # print(insp+' - Wallet *REF - %s - found in survexscansfolders' % refscan)
            else:
                message = ' ! Wallet *REF - %s - NOT found in survexscansfolders %s-%s-%s' % (refscan,yr,letterx,wallet)
                print(insp+message)
                models.DataIssue.objects.create(parser='survex', message=message)
            continue
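Both the ;ref comment branch and the *ref branch normalise the wallet name the same way; a hypothetical helper capturing that shared logic:

    def make_walletname(yr, letterx, wallet):
        # "2019", None, "5" -> "2019#05";  "1996", "X", "12" -> "1996#X12"
        letterx = "X" if letterx else ""
        if len(wallet) < 2:
            wallet = "0" + wallet
        return "%s#%s%s" % (yr, letterx, wallet)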

        # detect the star command
        mstar = regex_star.match(sline)
        if not mstar:
            if "from" in stardata:
                # print('Cave ' + str(survexfile.cave))
                # print(survexblock)
                # print(insp+'Cave ' + str(survexfile.cave))
                # print(insp+survexblock)
                LoadSurvexLineLeg(survexblock, stardata, sline, comment, survexfile.cave)
                # print(' - From: ')
                #print(stardata)
                # print(insp+' - From: ')
                # print(insp+stardata)
                pass
            elif stardata["type"] == "passage":
                LoadSurvexLinePassage(survexblock, stardata, sline, comment)
                # print(' - Passage: ')
                # print(insp+' - Passage: ')
                #Missing "station" in stardata.
            continue
@@ -220,24 +312,26 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
        cmd = cmd.lower()
        if re.match("include$(?i)", cmd):
            includepath = os.path.join(os.path.split(survexfile.path)[0], re.sub(r"\.svx$", "", line))
            print(' - Include file found including - ' + includepath)
            print(insp+' - Include path found including - ' + includepath)
            # Try to find the cave in the DB if not use the string as before
            path_match = re.search(r"caves-(\d\d\d\d)/(\d+|\d\d\d\d-?\w+-\d+)/", includepath)
            if path_match:
                pos_cave = '%s-%s' % (path_match.group(1), path_match.group(2))
                # print(pos_cave)
                # print(insp+pos_cave)
                cave = models.getCaveByReference(pos_cave)
                if cave:
                    survexfile.cave = cave
            else:
                print('No match for %s' % includepath)
                print(insp+' - No match in DB (i) for %s, so loading..' % includepath)
            includesurvexfile = models.SurvexFile(path=includepath)
            includesurvexfile.save()
            includesurvexfile.SetDirectory()
            if includesurvexfile.exists():
                survexblock.save()
                fininclude = includesurvexfile.OpenFile()
                insp += "> "
                RecursiveLoad(survexblock, includesurvexfile, fininclude, textlines)
                insp = insp[2:]

        elif re.match("begin$(?i)", cmd):
            if line:
@@ -246,23 +340,26 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
                path_match = re.search(r"caves-(\d\d\d\d)/(\d+|\d\d\d\d-?\w+-\d+)/", newsvxpath)
                if path_match:
                    pos_cave = '%s-%s' % (path_match.group(1), path_match.group(2))
                    print(pos_cave)
                    # print(insp+pos_cave)
                    cave = models.getCaveByReference(pos_cave)
                    if cave:
                        survexfile.cave = cave
                else:
                    print('No match for %s' % newsvxpath)
                    print(insp+' - No match (b) for %s' % newsvxpath)

                previousnlegs = survexlegsnumber
                name = line.lower()
                print(' - Begin found for: ' + name)
                # print('Block cave: ' + str(survexfile.cave))
                print(insp+' - Begin found for: ' + name)
                # print(insp+'Block cave: ' + str(survexfile.cave))
                survexblockdown = models.SurvexBlock(name=name, begin_char=fin.tell(), parent=survexblock, survexpath=survexblock.survexpath+"."+name, cave=survexfile.cave, survexfile=survexfile, totalleglength=0.0)
                survexblockdown.save()
                survexblock.save()
                survexblock = survexblockdown
                # print(survexblockdown)
                # print(insp+survexblockdown)
                textlinesdown = [ ]
                insp += "> "
                RecursiveLoad(survexblockdown, survexfile, fin, textlinesdown)
                insp = insp[2:]
            else:
                iblankbegins += 1
@@ -270,17 +367,21 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
            if iblankbegins:
                iblankbegins -= 1
            else:
                survexblock.text = "".join(textlines)
                #survexblock.text = "".join(textlines)
                # .text not used, using it for number of legs per block
                legsinblock = survexlegsnumber - previousnlegs
                print("LEGS: {} (previous: {}, now:{})".format(legsinblock,previousnlegs,survexlegsnumber))
                survexblock.text = str(legsinblock)
                survexblock.save()
                # print(' - End found: ')
                # print(insp+' - End found: ')
                endstamp = datetime.now()
                timetaken = endstamp - stamp
                # print(' - Time to process: ' + str(timetaken))
                # print(insp+' - Time to process: ' + str(timetaken))
                return

        elif re.match("date$(?i)", cmd):
            if len(line) == 10:
                #print(' - Date found: ' + line)
                #print(insp+' - Date found: ' + line)
                survexblock.date = make_aware(datetime.strptime(re.sub(r"\.", "-", line), '%Y-%m-%d'), get_current_timezone())
                expeditions = models.Expedition.objects.filter(year=line[:4])
                if expeditions:
@@ -291,7 +392,7 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):

        elif re.match("team$(?i)", cmd):
            pass
            # print(' - Team found: ')
            # print(insp+' - Team found: ')
            mteammember = regex_team.match(line)
            if mteammember:
                for tm in regex_team_member.split(mteammember.group(2)):
@@ -306,7 +407,7 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
                        personrole.save()

        elif cmd == "title":
            #print(' - Title found: ')
            #print(insp+' - Title found: ')
            survextitle = models.SurvexTitle(survexblock=survexblock, title=line.strip('"'), cave=survexfile.cave)
            survextitle.save()
            pass
@@ -316,11 +417,11 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
            pass

        elif cmd == "data":
            #print(' - Data found: ')
            #print(insp+' - Data found: ')
            ls = line.lower().split()
            stardata = { "type":ls[0] }
            #print(' - Star data: ', stardata)
            #print(ls)
            #print(insp+' - Star data: ', stardata)
            #print(insp+ls)
            for i in range(0, len(ls)):
                stardata[stardataparamconvert.get(ls[i], ls[i])] = i - 1
            if ls[0] in ["normal", "cartesian", "nosurvey"]:
@@ -331,25 +432,30 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
                assert ls[0] == "passage", line
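A worked example of what the *data handling builds: the mapping from survex column names (after stardataparamconvert) to token positions, consumed later by LoadSurvexLineLeg. Note the loop starts at 0, so the style word itself is also stored, harmlessly, with index -1:

    stardataparamconvert = {"length": "tape", "bearing": "compass", "gradient": "clino"}
    ls = "normal from to length bearing gradient".split()
    stardata = {"type": ls[0]}
    for i in range(0, len(ls)):
        stardata[stardataparamconvert.get(ls[i], ls[i])] = i - 1
    # stardata == {"type": "normal", "normal": -1, "from": 0, "to": 1,
    #              "tape": 2, "compass": 3, "clino": 4}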

        elif cmd == "equate":
            #print(' - Equate found: ')
            #print(insp+' - Equate found: ')
            LoadSurvexEquate(survexblock, line)

        elif cmd == "fix":
            #print(' - Fix found: ')
            #print(insp+' - Fix found: ')
            survexblock.MakeSurvexStation(line.split()[0])

        else:
            #print(' - Stuff')
            #print(insp+' - Stuff')
            if cmd not in ["sd", "include", "units", "entrance", "data", "flags", "title", "export", "instrument",
                           "calibrate", "set", "infer", "alias", "ref", "cs", "declination", "case"]:
                print("Unrecognised command in line:", cmd, line, survexblock, survexblock.survexfile.path)
                           "calibrate", "set", "infer", "alias", "cs", "declination", "case"]:
                message = "! Bad svx command in line:%s %s %s %s" % (cmd, line, survexblock, survexblock.survexfile.path)
                print(insp+message)
                models.DataIssue.objects.create(parser='survex', message=message)

    endstamp = datetime.now()
    timetaken = endstamp - stamp
    # print(' - Time to process: ' + str(timetaken))
    # print(insp+' - Time to process: ' + str(timetaken))

def LoadAllSurvexBlocks():
    global survexlegsalllength
    global survexlegsnumber

    print('Loading All Survex Blocks...')
    print(' - Flushing All Survex Blocks...')

    models.SurvexBlock.objects.all().delete()
    models.SurvexFile.objects.all().delete()
@@ -361,12 +467,21 @@ def LoadAllSurvexBlocks():
    models.SurvexStation.objects.all().delete()

    print(" - Data flushed")
    # Clear the data issues as we are reloading
    models.DataIssue.objects.filter(parser='survex').delete()
    print(' - Loading All Survex Blocks...')

    print(' - redirecting stdout to loadsurvexblks.log...')
    stdout_orig = sys.stdout
    # Redirect sys.stdout to the file
    sys.stdout = open('loadsurvexblks.log', 'w')

    survexfile = models.SurvexFile(path=settings.SURVEX_TOPNAME, cave=None)
    survexfile.save()
    survexfile.SetDirectory()

    #Load all
    # this is the first so id=1
    survexblockroot = models.SurvexBlock(name="root", survexpath="", begin_char=0, cave=None, survexfile=survexfile, totalleglength=0.0)
    survexblockroot.save()
    fin = survexfile.OpenFile()
@@ -374,30 +489,150 @@ def LoadAllSurvexBlocks():
    # The real work starts here
    RecursiveLoad(survexblockroot, survexfile, fin, textlines)
    fin.close()
    survexblockroot.text = "".join(textlines)
    survexblockroot.totalleglength = survexlegsalllength
    survexblockroot.text = str(survexlegsnumber)
    #survexblockroot.text = "".join(textlines) these are all blank
    survexblockroot.save()

    # Close the file
    sys.stdout.close()
    print("+", file=sys.stderr)
    sys.stderr.flush();

    # Restore sys.stdout to our old saved file handler
    sys.stdout = stdout_orig
    print(" - total number of survex legs: {}".format(survexlegsnumber))
    print(" - total leg lengths loaded: {}m".format(survexlegsalllength))
    print(' - Loaded All Survex Blocks.')
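If parsing raises, the manual sys.stdout swap above leaves stdout pointing at the log file. On Python 3.4+ the stdlib context manager restores it automatically even on an exception; a hedged alternative using the same names as above:

    import contextlib

    with open('loadsurvexblks.log', 'w') as logfile:
        with contextlib.redirect_stdout(logfile):
            RecursiveLoad(survexblockroot, survexfile, fin, textlines)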

poslineregex = re.compile(r"^\(\s*([+-]?\d*\.\d*),\s*([+-]?\d*\.\d*),\s*([+-]?\d*\.\d*)\s*\)\s*([^\s]+)$")

def LoadPos():
    """Run cavern to produce a complete .3d file, then run 3dtopos to produce a table of
    all survey point positions. Then lookup each position by name to see if we have it in the database
    and if we do, then save the x/y/z coordinates.
    If we don't have it in the database, print an error message and discard it.
    """
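For reference, each line of the 3dtopos output has the shape matched by poslineregex above; a sketch with made-up coordinates and station name:

    line = "(  36670.37,  83317.43,  1903.97 ) 1623.204.trunk.23"
    r = poslineregex.match(line)
    if r:
        x, y, z, name = r.groups()  # ('36670.37', '83317.43', '1903.97', '1623.204.trunk.23')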
    topdata = settings.SURVEX_DATA + settings.SURVEX_TOPNAME
    print(' - Generating a list of Pos from %s.svx and then loading...' % (topdata))

    # Be careful with the cache file.
    # If LoadPos has been run before,
    # but without cave import being run before,
    # then *everything* may be in the fresh 'not found' cache file.

    cachefile = settings.SURVEX_DATA + "posnotfound.cache"
    notfoundbefore = {}
    if os.path.isfile(cachefile):
        # this is not a good test. 1623.svx may never change but *included files may have done.
        # When the *include is unrolled, we will be able to get a proper timestamp to use
        # and can increase the timeout from 3 days to 30 days.
        updtsvx = os.path.getmtime(topdata + ".svx")
        updtcache = os.path.getmtime(cachefile)
        age = updtcache - updtsvx
        print(' svx: %s cache: %s not-found cache is fresher by: %s' % (updtsvx, updtcache, str(timedelta(seconds=age) )))

        now = time.time()
        if now - updtcache > 3*24*60*60:
            print( " cache is more than 3 days old. Deleting.")
            os.remove(cachefile)
        elif age < 0 :
            print(" cache is stale. Deleting.")
            os.remove(cachefile)
        else:
            print(" cache is fresh. Reading...")
            try:
                with open(cachefile, "r") as f:
                    for line in f:
                        l = line.rstrip()
                        if l in notfoundbefore:
                            notfoundbefore[l] +=1 # should not be duplicates
                            print(" DUPLICATE ", line, notfoundbefore[l])
                        else:
                            notfoundbefore[l] =1
            except:
                print(" FAILURE READ opening cache file %s" % (cachefile))
                raise
    notfoundnow =[]
    found = 0
    skip = {}
    print("\n") # extra line because cavern overwrites the text buffer somehow
    # cavern defaults to using same cwd as supplied input file
    call([settings.CAVERN, "--output=%s.3d" % (topdata), "%s.svx" % (topdata)])
    call([settings.THREEDTOPOS, '%s.3d' % (topdata)], cwd = settings.SURVEX_DATA)
    print(" - This next bit takes a while. Matching ~32,000 survey positions. Be patient...")

    print('Loading Pos....')
    mappoints = {}
    for pt in MapLocations().points():
        svxid, number, point_type, label = pt
        mappoints[svxid]=True

    call([settings.CAVERN, "--output=%s%s.3d" % (settings.SURVEX_DATA, settings.SURVEX_TOPNAME), "%s%s.svx" % (settings.SURVEX_DATA, settings.SURVEX_TOPNAME)])
    call([settings.THREEDTOPOS, '%s%s.3d' % (settings.SURVEX_DATA, settings.SURVEX_TOPNAME)], cwd = settings.SURVEX_DATA)
    posfile = open("%s%s.pos" % (settings.SURVEX_DATA, settings.SURVEX_TOPNAME))
    posfile = open("%s.pos" % (topdata))
    posfile.readline() #Drop header

    survexblockroot = models_survex.SurvexBlock.objects.get(id=1)
    for line in posfile.readlines():
        r = poslineregex.match(line)
        if r:
            x, y, z, name = r.groups()
            try:
                ss = models.SurvexStation.objects.lookup(name)
                ss.x = float(x)
                ss.y = float(y)
                ss.z = float(z)
                ss.save()
            except:
                print("%s not parsed in survex" % name)
            x, y, z, id = r.groups()
            if id in notfoundbefore:
                skip[id] = 1
            else:
                for sid in mappoints:
                    if id.endswith(sid):
                        notfoundnow.append(id)
                        # Now that we don't import any stations, we create it rather than look it up
                        # ss = models_survex.SurvexStation.objects.lookup(id)

                        # need to set block_id which means doing a search on all the survex blocks..
                        # remove dot at end and add one at beginning
                        blockpath = "." + id[:-len(sid)].strip(".")
                        try:
                            sbqs = models_survex.SurvexBlock.objects.filter(survexpath=blockpath)
                            if len(sbqs)==1:
                                sb = sbqs[0]
                            if len(sbqs)>1:
                                message = ' ! MULTIPLE SurvexBlocks matching Entrance point {} {}'.format(blockpath, sid)
                                print(message)
                                models.DataIssue.objects.create(parser='survex', message=message)
                                sb = sbqs[0]
                            elif len(sbqs)<=0:
                                message = ' ! ZERO SurvexBlocks matching Entrance point {} {}'.format(blockpath, sid)
                                print(message)
                                models.DataIssue.objects.create(parser='survex', message=message)
                                sb = survexblockroot
                        except:
                            message = ' ! FAIL in getting SurvexBlock matching Entrance point {} {}'.format(blockpath, sid)
                            print(message)
                            models.DataIssue.objects.create(parser='survex', message=message)
                        try:
                            ss = models_survex.SurvexStation(name=id, block=sb)
                            ss.x = float(x)
                            ss.y = float(y)
                            ss.z = float(z)
                            ss.save()
                            found += 1
                        except:
                            message = ' ! FAIL to create SurvexStation Entrance point {} {}'.format(blockpath, sid)
                            print(message)
                            models.DataIssue.objects.create(parser='survex', message=message)
                            raise

    #print(" - %s failed lookups of SurvexStation.objects. %s found. %s skipped." % (len(notfoundnow),found, len(skip)))

    if found > 10: # i.e. a previous cave import has been done
        try:
            with open(cachefile, "w") as f:
                c = len(notfoundnow)+len(skip)
                for i in notfoundnow:
                    pass #f.write("%s\n" % i)
                for j in skip:
                    pass #f.write("%s\n" % j) # NB skip not notfoundbefore
                print((' Not-found cache file written: %s entries' % c))
        except:
            print(" FAILURE WRITE opening cache file %s" % (cachefile))
            raise
@@ -1,16 +1,21 @@
import sys, os, types, logging, stat
#sys.path.append('C:\\Expo\\expoweb')
#from troggle import *
#os.environ['DJANGO_SETTINGS_MODULE']='troggle.settings'
import settings
from troggle.core.models import *
from PIL import Image
#import settings
#import core.models as models
from __future__ import (absolute_import, division,
                        print_function, unicode_literals)

import sys
import os
import types
import logging
import stat
import csv
import re
import datetime

#from PIL import Image
from utils import save_carefully
from functools import reduce

import settings
from troggle.core.models import *

def get_or_create_placeholder(year):
    """ All surveys must be related to a logbookentry. We don't have a way to
@@ -24,142 +29,89 @@ def get_or_create_placeholder(year):
    placeholder_logbook_entry, newly_created = save_carefully(LogbookEntry, lookupAttribs, nonLookupAttribs)
    return placeholder_logbook_entry

# dead
def readSurveysFromCSV():
    try: # could probably combine these two
        surveytab = open(os.path.join(settings.SURVEY_SCANS, "Surveys.csv"))
    except IOError:
        import cStringIO, urllib
        surveytab = cStringIO.StringIO(urllib.urlopen(settings.SURVEY_SCANS + "/Surveys.csv").read())
    dialect=csv.Sniffer().sniff(surveytab.read())
    surveytab.seek(0,0)
    surveyreader = csv.reader(surveytab,dialect=dialect)
    headers = surveyreader.next()
    header = dict(zip(headers, range(len(headers)))) #set up a dictionary where the indexes are header names and the values are column numbers

    # test if the expeditions have been added yet
    if Expedition.objects.count()==0:
        print("There are no expeditions in the database. Please run the logbook parser.")
        sys.exit()

    logging.info("Deleting all scanned images")
    ScannedImage.objects.all().delete()

    logging.info("Deleting all survey objects")
    Survey.objects.all().delete()

    logging.info("Beginning to import surveys from "+str(os.path.join(settings.SURVEYS, "Surveys.csv"))+"\n"+"-"*60+"\n")

    for survey in surveyreader:
        #I hate this, but some surveys have a letter eg 2000#34a. The next line deals with that.
        walletNumberLetter = re.match(r'(?P<number>\d*)(?P<letter>[a-zA-Z]*)',survey[header['Survey Number']])
        # print(walletNumberLetter.groups())
        year=survey[header['Year']]

        surveyobj = Survey(
            expedition = Expedition.objects.filter(year=year)[0],
            wallet_number = walletNumberLetter.group('number'),
            logbook_entry = get_or_create_placeholder(year),
            comments = survey[header['Comments']],
            location = survey[header['Location']]
        )
        surveyobj.wallet_letter = walletNumberLetter.group('letter')
        if survey[header['Finished']]=='Yes':
            #try and find the sketch_scan
            pass
        surveyobj.save()

        logging.info("added survey " + survey[header['Year']] + "#" + surveyobj.wallet_number + "\r")
# dead
def listdir(*directories):
    try:
        return os.listdir(os.path.join(settings.SURVEYS, *directories))
    except:
        import urllib
        import urllib.request, urllib.parse, urllib.error
        url = settings.SURVEYS + reduce(lambda x, y: x + "/" + y, ["listdir"] + list(directories))
        folders = urllib.urlopen(url.replace("#", "%23")).readlines()
        folders = urllib.request.urlopen(url.replace("#", "%23")).readlines()
        return [folder.rstrip(r"/") for folder in folders]
# add survey scans
def parseSurveyScans(expedition, logfile=None):
#    yearFileList = listdir(expedition.year)
    try:
        yearPath=os.path.join(settings.SURVEY_SCANS, "surveyscans", expedition.year)
        yearFileList=os.listdir(yearPath)
        print(yearFileList)
        for surveyFolder in yearFileList:
            try:
                surveyNumber=re.match(r'\d\d\d\d#(X?)0*(\d+)',surveyFolder).groups()
                #scanList = listdir(expedition.year, surveyFolder)
                scanList=os.listdir(os.path.join(yearPath,surveyFolder))
            except AttributeError:
                print("Folder: " + surveyFolder + " ignored\r")
                continue
# def parseSurveyScans(expedition, logfile=None):
#     # yearFileList = listdir(expedition.year)
#     try:
#         yearPath=os.path.join(settings.SURVEY_SCANS, "surveyscans", expedition.year)
#         yearFileList=os.listdir(yearPath)
#         print(yearFileList)
#         for surveyFolder in yearFileList:
#             try:
#                 surveyNumber=re.match(rb'\d\d\d\d#(X?)0*(\d+)',surveyFolder).groups()
#                 #scanList = listdir(expedition.year, surveyFolder)
#                 scanList=os.listdir(os.path.join(yearPath,surveyFolder))
#             except AttributeError:
#                 print(("Ignoring file in year folder: " + surveyFolder + "\r"))
#                 continue

            for scan in scanList:
                try:
                    scanChopped=re.match(r'(?i).*(notes|elev|plan|elevation|extend)(\d*)\.(png|jpg|jpeg)',scan).groups()
                    scanType,scanNumber,scanFormat=scanChopped
                except AttributeError:
                    print("File: " + scan + " ignored\r")
                    continue
                if scanType == 'elev' or scanType == 'extend':
                    scanType = 'elevation'
#             for scan in scanList:
#                 # Why does this insist on renaming all the scanned image files?
#                 # It produces duplicates names and all images have type .jpg in the scanObj.
#                 # It seems to rely on end users being particularly diligent in filenames which is NGtH
#                 try:
#                     #scanChopped=re.match(rb'(?i).*(notes|elev|plan|extend|elevation)-?(\d*)\.(png|jpg|jpeg|pdf)',scan).groups()
#                     scanChopped=re.match(rb'(?i)([a-z_-]*\d?[a-z_-]*)(\d*)\.(png|jpg|jpeg|pdf|top|dxf|svg|tdr|th2|xml|txt)',scan).groups()
#                     scanType,scanNumber,scanFormat=scanChopped
#                 except AttributeError:
#                     print(("Ignored (bad name format): " + surveyFolder + '/' + scan + "\r"))
#                     continue
#                 scanTest = scanType
#                 scanType = 'notes'
#                 match = re.search(rb'(?i)(elev|extend)',scanTest)
#                 if match:
#                     scanType = 'elevation'

                if scanNumber=='':
                    scanNumber=1
#                 match = re.search(rb'(?i)(plan)',scanTest)
#                 if match:
#                     scanType = 'plan'

                if type(surveyNumber)==types.TupleType:
                    surveyLetter=surveyNumber[0]
                    surveyNumber=surveyNumber[1]
                try:
                    placeholder=get_or_create_placeholder(year=int(expedition.year))
                    survey=Survey.objects.get_or_create(wallet_number=surveyNumber, wallet_letter=surveyLetter, expedition=expedition, defaults={'logbook_entry':placeholder})[0]
                except Survey.MultipleObjectsReturned:
                    survey=Survey.objects.filter(wallet_number=surveyNumber, wallet_letter=surveyLetter, expedition=expedition)[0]
                file_=os.path.join(yearPath, surveyFolder, scan)
                scanObj = ScannedImage(
                    file=file_,
                    contents=scanType,
                    number_in_wallet=scanNumber,
                    survey=survey,
                    new_since_parsing=False,
                )
                print("Added scanned image at " + str(scanObj))
                #if scanFormat=="png":
                #    if isInterlacedPNG(os.path.join(settings.SURVEY_SCANS, "surveyscans", file_)):
                #        print file_+ " is an interlaced PNG. No can do."
                #        continue
                scanObj.save()
    except (IOError, OSError):
        yearPath=os.path.join(settings.SURVEY_SCANS, "surveyscans", expedition.year)
        print("No folder found for " + expedition.year + " at:- " + yearPath)
#                 if scanNumber=='':
#                     scanNumber=1

#                 if isinstance(surveyNumber, tuple):
#                     surveyLetter=surveyNumber[0]
#                     surveyNumber=surveyNumber[1]
#                 try:
#                     placeholder=get_or_create_placeholder(year=int(expedition.year))
#                     survey=Survey.objects.get_or_create(wallet_number=surveyNumber, wallet_letter=surveyLetter, expedition=expedition, defaults={'logbook_entry':placeholder})[0]
#                 except Survey.MultipleObjectsReturned:
#                     survey=Survey.objects.filter(wallet_number=surveyNumber, wallet_letter=surveyLetter, expedition=expedition)[0]
#                 file_=os.path.join(yearPath, surveyFolder, scan)
#                 scanObj = ScannedImage(
#                     file=file_,
#                     contents=scanType,
#                     number_in_wallet=scanNumber,
#                     survey=survey,
#                     new_since_parsing=False,
#                 )
#                 print(("Added scanned image at " + str(scanObj)))
#                 #if scanFormat=="png":
#                 #    if isInterlacedPNG(os.path.join(settings.SURVEY_SCANS, "surveyscans", file_)):
#                 #        print file_+ " is an interlaced PNG. No can do."
#                 #        continue
#                 scanObj.save()
#     except (IOError, OSError):
#         yearPath=os.path.join(settings.SURVEY_SCANS, "surveyscans", expedition.year)
#         print((" ! No folder found for " + expedition.year + " at:- " + yearPath))
# dead
def parseSurveys(logfile=None):
    try:
        readSurveysFromCSV()
    except (IOError, OSError):
        print("Survey CSV not found..")
        pass

    for expedition in Expedition.objects.filter(year__gte=2000): #expos since 2000, because paths and filenames were nonstandard before then
        parseSurveyScans(expedition)

# dead
def isInterlacedPNG(filePath): #We need to check for interlaced PNGs because the thumbnail engine can't handle them (uses PIL)
    file=Image.open(filePath)
    print(filePath)
    if 'interlace' in file.info:
        return file.info['interlace']
    else:
        return False
# def isInterlacedPNG(filePath): #We need to check for interlaced PNGs because the thumbnail engine can't handle them (uses PIL)
#     file=Image.open(filePath)
#     print(filePath)
#     if 'interlace' in file.info:
#         return file.info['interlace']
#     else:
#         return False
# handles url or file, so we can refer to a set of scans on another server
@@ -167,7 +119,7 @@ def GetListDir(sdir):
    res = [ ]
    if sdir[:7] == "http://":
        assert False, "Not written"
        s = urllib.urlopen(sdir)
        s = urllib.request.urlopen(sdir)
    else:
        for f in os.listdir(sdir):
            if f[0] != ".":
@@ -178,44 +130,52 @@ def GetListDir(sdir):

def LoadListScansFile(survexscansfolder):
    gld = [ ]

    # flatten out any directories in these book files
    # flatten out any directories in these wallet folders - should not be any
    for (fyf, ffyf, fisdiryf) in GetListDir(survexscansfolder.fpath):
        if fisdiryf:
            gld.extend(GetListDir(ffyf))
        else:
            gld.append((fyf, ffyf, fisdiryf))

    c=0
    for (fyf, ffyf, fisdiryf) in gld:
        #assert not fisdiryf, ffyf
        if re.search(r"\.(?:png|jpg|jpeg)(?i)$", fyf):
        if re.search(r"\.(?:png|jpg|jpeg|pdf|svg|gif)(?i)$", fyf):
            survexscansingle = SurvexScanSingle(ffile=ffyf, name=fyf, survexscansfolder=survexscansfolder)
            survexscansingle.save()
            c+=1
            if c>=10:
                print(".", end='')
                c = 0
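LoadListScansFile flattens one level of subdirectories by hand via GetListDir. Assuming the wallet lives on the local filesystem (GetListDir also has an http:// branch), os.walk yields the same file list; a sketch:

    import os, re

    def list_scan_files(fpath):
        for dirpath, dirnames, filenames in os.walk(fpath):
            for f in filenames:
                if re.search(r"\.(?:png|jpg|jpeg|pdf|svg|gif)$", f, re.I):
                    yield f, os.path.join(dirpath, f)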

# this iterates through the scans directories (either here or on the remote server)
# and builds up the models we can access later
def LoadListScans():

    print('Loading Survey Scans...')
    print(' - Loading Survey Scans')

    SurvexScanSingle.objects.all().delete()
    SurvexScansFolder.objects.all().delete()
    print(' - deleting all scansFolder and scansSingle objects')

    # first do the smkhs (large kh survey scans) directory
    survexscansfoldersmkhs = SurvexScansFolder(fpath=os.path.join(settings.SURVEY_SCANS, "smkhs"), walletname="smkhs")
    survexscansfoldersmkhs = SurvexScansFolder(fpath=os.path.join(settings.SURVEY_SCANS, "../surveys/smkhs"), walletname="smkhs")
    print("smkhs", end=' ')
    if os.path.isdir(survexscansfoldersmkhs.fpath):
        survexscansfoldersmkhs.save()
        LoadListScansFile(survexscansfoldersmkhs)

    # iterate into the surveyscans directory
    for f, ff, fisdir in GetListDir(os.path.join(settings.SURVEY_SCANS, "surveyscans")):
    print(' - ', end=' ')
    for f, ff, fisdir in GetListDir(settings.SURVEY_SCANS):
        if not fisdir:
            continue

        # do the year folders
        if re.match(r"\d\d\d\d$", f):
            print("%s" % f, end=' ')
            for fy, ffy, fisdiry in GetListDir(ff):
                if fisdiry:
                    assert fisdiry, ffy
@@ -232,7 +192,7 @@ def LoadListScans():

def FindTunnelScan(tunnelfile, path):
    scansfolder, scansfile = None, None
    mscansdir = re.search(r"(\d\d\d\d#X?\d+\w?|1995-96kh|92-94Surveybookkh|1991surveybook|smkhs)/(.*?(?:png|jpg))$", path)
    mscansdir = re.search(r"(\d\d\d\d#X?\d+\w?|1995-96kh|92-94Surveybookkh|1991surveybook|smkhs)/(.*?(?:png|jpg|pdf|jpeg))$", path)
    if mscansdir:
        scansfolderl = SurvexScansFolder.objects.filter(walletname=mscansdir.group(1))
        if len(scansfolderl):
@@ -241,8 +201,11 @@ def FindTunnelScan(tunnelfile, path):
        if scansfolder:
            scansfilel = scansfolder.survexscansingle_set.filter(name=mscansdir.group(2))
            if len(scansfilel):
                print(scansfilel, len(scansfilel))
                assert len(scansfilel) == 1
                if len(scansfilel) > 1:
                    print("BORK more than one image filename matches filter query. ", scansfilel[0])
                    print("BORK ", tunnelfile.tunnelpath, path)
                    print("BORK ", mscansdir.group(1), mscansdir.group(2), len(scansfilel))
                #assert len(scansfilel) == 1
                scansfile = scansfilel[0]

        if scansfolder:
@@ -250,9 +213,9 @@ def FindTunnelScan(tunnelfile, path):
        if scansfile:
            tunnelfile.survexscans.add(scansfile)

    elif path and not re.search(r"\.(?:png|jpg|jpeg)$(?i)", path):
    elif path and not re.search(r"\.(?:png|jpg|pdf|jpeg)$(?i)", path):
        name = os.path.split(path)[1]
        print("ttt", tunnelfile.tunnelpath, path, name)
        #print("debug-tunnelfileobjects ", tunnelfile.tunnelpath, path, name)
        rtunnelfilel = TunnelFile.objects.filter(tunnelname=name)
        if len(rtunnelfilel):
            assert len(rtunnelfilel) == 1, ("two paths with name of", path, "need more discrimination coded")
@@ -266,19 +229,22 @@ def FindTunnelScan(tunnelfile, path):

def SetTunnelfileInfo(tunnelfile):
    ff = os.path.join(settings.TUNNEL_DATA, tunnelfile.tunnelpath)
    tunnelfile.filesize = os.stat(ff)[stat.ST_SIZE]
    fin = open(ff)
    fin = open(ff,'rb')
    ttext = fin.read()
    fin.close()

    mtype = re.search("<(fontcolours|sketch)", ttext)
    if tunnelfile.filesize <= 0:
        print("DEBUG - zero length xml file", ff)
        return
    mtype = re.search(r"<(fontcolours|sketch)", ttext)

    assert mtype, ff
    tunnelfile.bfontcolours = (mtype.group(1)=="fontcolours")
    tunnelfile.npaths = len(re.findall("<skpath", ttext))
    tunnelfile.npaths = len(re.findall(r"<skpath", ttext))
    tunnelfile.save()

    # <tunnelxml tunnelversion="version2009-06-21 Matienzo" tunnelproject="ireby" tunneluser="goatchurch" tunneldate="2009-06-29 23:22:17">
    # <pcarea area_signal="frame" sfscaledown="12.282584" sfrotatedeg="-90.76982" sfxtrans="11.676667377221136" sfytrans="-15.677173422877454" sfsketch="204description/scans/plan(38).png" sfstyle="" nodeconnzsetrelative="0.0">
    for path, style in re.findall('<pcarea area_signal="frame".*?sfsketch="([^"]*)" sfstyle="([^"]*)"', ttext):
    for path, style in re.findall(r'<pcarea area_signal="frame".*?sfsketch="([^"]*)" sfstyle="([^"]*)"', ttext):
        FindTunnelScan(tunnelfile, path)

    # should also scan and look for survex blocks that might have been included
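A worked example of the pcarea scraping above, with a made-up fragment of Tunnel XML (the loop pulls out the sfsketch path of each frame pcarea):

    ttext = '<pcarea area_signal="frame" sfscaledown="12.3" sfsketch="2009#11/notes1.jpg" sfstyle="">'
    for path, style in re.findall(r'<pcarea area_signal="frame".*?sfsketch="([^"]*)" sfstyle="([^"]*)"', ttext):
        print(path)   # -> 2009#11/notes1.jpg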
60  pathreport.py  (Normal file)
@@ -0,0 +1,60 @@
#!/usr/bin/python
from settings import *
import sys
import os
import string
import re
import urlparse
import django

pathsdict={
    "ADMIN_MEDIA_PREFIX" : ADMIN_MEDIA_PREFIX,
    "ADMIN_MEDIA_PREFIX" : ADMIN_MEDIA_PREFIX,
    "CAVEDESCRIPTIONSX" : CAVEDESCRIPTIONS,
    "DIR_ROOT" : DIR_ROOT,
    #"EMAIL_HOST" : EMAIL_HOST,
    #"EMAIL_HOST_USER" : EMAIL_HOST_USER,
    "ENTRANCEDESCRIPTIONS" : ENTRANCEDESCRIPTIONS,
    "EXPOUSER_EMAIL" : EXPOUSER_EMAIL,
    "EXPOUSERPASS" :"<redacted>",
    "EXPOUSER" : EXPOUSER,
    "EXPOWEB" : EXPOWEB,
    "EXPOWEB_URL" : EXPOWEB_URL,
    "FILES" : FILES,
    "JSLIB_URL" : JSLIB_URL,
    "LOGFILE" : LOGFILE,
    "LOGIN_REDIRECT_URL" : LOGIN_REDIRECT_URL,
    "MEDIA_ADMIN_DIR" : MEDIA_ADMIN_DIR,
    "MEDIA_ROOT" : MEDIA_ROOT,
    "MEDIA_URL" : MEDIA_URL,
    #"PHOTOS_ROOT" : PHOTOS_ROOT,
    "PHOTOS_URL" : PHOTOS_URL,
    "PYTHON_PATH" : PYTHON_PATH,
    "REPOS_ROOT_PATH" : REPOS_ROOT_PATH,
    "ROOT_URLCONF" : ROOT_URLCONF,
    "STATIC_ROOT" : STATIC_ROOT,
    "STATIC_URL" : STATIC_URL,
    "SURVEX_DATA" : SURVEX_DATA,
    "SURVEY_SCANS" : SURVEY_SCANS,
    "SURVEYS" : SURVEYS,
    "SURVEYS_URL" : SURVEYS_URL,
    "SVX_URL" : SVX_URL,
    "TEMPLATE_DIRS" : TEMPLATE_DIRS,
    "THREEDCACHEDIR" : THREEDCACHEDIR,
    "TINY_MCE_MEDIA_ROOT" : TINY_MCE_MEDIA_ROOT,
    "TINY_MCE_MEDIA_URL" : TINY_MCE_MEDIA_URL,
    "TUNNEL_DATA" : TUNNEL_DATA,
    "URL_ROOT" : URL_ROOT
}

sep="\r\t\t\t" # ugh nasty - terminal output only
sep2="\r\t\t\t\t\t\t\t" # ugh nasty - terminal output only

bycodes = sorted(pathsdict)
for p in bycodes:
    print p, sep , pathsdict[p]

byvals = sorted(pathsdict, key=pathsdict.__getitem__)
for p in byvals:
    print pathsdict[p] , sep2, p
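pathreport.py as committed is Python 2 (bare print statements, urlparse). A hedged Python 3 rendering of the two report loops, unchanged otherwise:

    for p in sorted(pathsdict):
        print(p, sep, pathsdict[p])

    for p in sorted(pathsdict, key=pathsdict.__getitem__):
        print(pathsdict[p], sep2, p)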
7  requirements.txt  (Normal file)
@@ -0,0 +1,7 @@
Django==1.7
django-extensions==2.2.9
django-registration==2.0
django-tinymce==2.0.1
six==1.14.0
Unidecode==1.1.1
Pillow==7.1.2
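With this file in the repository root, the pinned set installs in one step (assuming pip for the matching Python interpreter):

    pip install -r requirements.txt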
9  settings.py  (Normal file → Executable file)
@@ -1,4 +1,5 @@
from localsettings import * #inital localsettings call so that urljoins work
from localsettings import *
#inital localsettings call so that urljoins work
import os
import urlparse
import django
@@ -44,7 +45,7 @@ NOTABLECAVESHREFS = [ "161", "204", "258", "76", "107", "264" ]
# trailing slash.
# Examples: "http://foo.com/media/", "/media/".
ADMIN_MEDIA_PREFIX = '/troggle/media-admin/'
PHOTOS_ROOT = os.path.join(EXPOWEB, 'photos')
#PHOTOS_ROOT = os.path.join(EXPOWEB, 'mugshot-data')
CAVEDESCRIPTIONS = os.path.join(EXPOWEB, "cave_data")
ENTRANCEDESCRIPTIONS = os.path.join(EXPOWEB, "entrance_data")

@@ -69,7 +70,7 @@ LOGBOOK_PARSER_SETTINGS = {
    "2013": ("2013/logbook.html", "Parseloghtmltxt"),
    "2012": ("2012/logbook.html", "Parseloghtmltxt"),
    "2011": ("2011/logbook.html", "Parseloghtmltxt"),
    "2010": ("2010/logbook.html", "Parselogwikitxt"),
    "2010": ("2010/logbook.html", "Parseloghtmltxt"),
    "2009": ("2009/2009logbook.txt", "Parselogwikitxt"),
    "2008": ("2008/2008logbook.txt", "Parselogwikitxt"),
    "2007": ("2007/logbook.html", "Parseloghtmltxt"),
|
||||
@@ -130,7 +131,7 @@ INSTALLED_APPS = (
|
||||
'troggle.profiles',
|
||||
'troggle.core',
|
||||
'troggle.flatpages',
|
||||
'troggle.imagekit',
|
||||
#'troggle.imagekit',
|
||||
)
|
||||
|
||||
MIDDLEWARE_CLASSES = (
|
||||
|
||||
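The settings.py hunks above switch the 2010 logbook from the wiki-text parser to the HTML parser and comment out the imagekit app. A hedged sketch of how a (filename, parser-name) table like LOGBOOK_PARSER_SETTINGS is typically consumed; the load_logbook function and parsers_module argument are assumptions for illustration, not troggle's actual loader:

    # Assumed consumer of LOGBOOK_PARSER_SETTINGS; troggle's real
    # logbook loader may differ in detail.
    LOGBOOK_PARSER_SETTINGS = {
        "2010": ("2010/logbook.html", "Parseloghtmltxt"),
    }

    def load_logbook(year, parsers_module):
        # Look up which file to read and which parser to run for that
        # year, then dispatch to the parser function by name.
        filename, parsername = LOGBOOK_PARSER_SETTINGS[year]
        parsefunc = getattr(parsers_module, parsername)
        return parsefunc(filename)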
@@ -35,13 +35,11 @@
<a href="{% url "survexcaveslist" %}">All Survex</a> |
<a href="{% url "surveyscansfolders" %}">Scans</a> |
<a href="{% url "tunneldata" %}">Tunneldata</a> |
-<a href="{% url "survexcavessingle" 107 %}">107</a> |
-<a href="{% url "survexcavessingle" 161 %}">161</a> |
-<a href="{% url "survexcavessingle" 204 %}">204</a> |
-<a href="{% url "survexcavessingle" 258 %}">258</a> |
-<a href="{% url "survexcavessingle" 264 %}">264</a> |
-<a href="{% url "expedition" 2016 %}">Expo2016</a> |
-<a href="{% url "expedition" 2017 %}">Expo2017</a> |
+<a href="{% url "survexcavessingle" "caves-1623/290/290.svx" %}">290</a> |
+<a href="{% url "survexcavessingle" "caves-1623/291/291.svx" %}">291</a> |
+<a href="{% url "survexcavessingle" "caves-1626/359/359.svx" %}">359</a> |
+<a href="{% url "survexcavessingle" "caves-1623/258/258.svx" %}">258</a> |
+<a href="{% url "survexcavessingle" "caves-1623/264/264.svx" %}">264</a> |
<a href="{% url "expedition" 2018 %}">Expo2018</a> |
<a href="{% url "expedition" 2019 %}">Expo2019</a> |
+<a href="{% url "expedition" 2020 %}">Expo2020</a> |
@@ -1,4 +1,4 @@
-<html>
+<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
@@ -15,7 +15,20 @@
{% endfor %}
</ul>
-<h3>1623</h3>
+<h3>1626</h3>
+
+<ul class="searchable">
+{% for cave in caves1626 %}
+<li> <a href="{{ cave.url }}">{% if cave.kataster_number %}{{ cave.kataster_number }} {{cave.official_name|safe}}</a> {% if cave.unofficial_number %}({{cave.unofficial_number }}){% endif %}{% else %}{{cave.unofficial_number }} {{cave.official_name|safe}}</a> {% endif %}
+</li>
+{% endfor %}
+</ul>
+<p style="text-align:right">
+<a href="{% url "newcave" %}">New Cave</a>
+</p>
+<h3>1623</h3>
<table class="searchable">
{% for cave in caves1623 %}
@@ -25,17 +38,7 @@
{% endfor %}
</table>
-
-<h3>1626</h3>
-
-<ul class="searchable">
-{% for cave in caves1626 %}
-<li> <a href="{{ cave.url }}">{% if cave.kataster_number %}{{ cave.kataster_number }} {{cave.official_name|safe}}</a> {% if cave.unofficial_number %}({{cave.unofficial_number }}){% endif %}{% else %}{{cave.unofficial_number }} {{cave.official_name|safe}}</a> {% endif %}
-</li>
-{% endfor %}
-</ul>
-
<p style="text-align:right">
<a href="{% url "newcave" %}">New Cave</a>
-
</p>
{% endblock %}
@@ -16,7 +16,7 @@
{% if error %}
<div class="noticeBox">
{{ error }}
<a href="#" class="closeDiv">dismiss this message</a>
</div>
{% endif %}
@@ -96,61 +96,44 @@
</tr>

<tr>
<td>
surveys to Surveys.csv
</td>
<td>
</td>
<td>
<form name="export" method="get" action={% url "downloadlogbook" %}>
<p>Download a logbook file which is dynamically generated by Troggle.</p>
<p>
Expedition year:
<select name="year">
{% for expedition in expeditions %}
<option value="{{expedition}}"> {{expedition}} </option>
{% endfor %}
</select>
</p>
<p>
Output style:
<select name="extension">
<option value="txt">.txt file with MediaWiki markup - 2008 style</option>
<option value="html">.html file - 2005 style</option>
</select>
</p>
<p>
<input name="download_logbook" type="submit" value="Download logbook" />
</p>
</form>
</td>
</tr>

<tr>
<td>
surveys to Surveys.csv
</td>
<td>
<form name="export" method="post" action="">
<p>Overwrite the existing Surveys.csv file with one generated by Troggle.</p>
<input disabled name="export_surveys" type="submit" value="Update {{settings.SURVEYS}}noinfo/Surveys.csv" />
</form>
</td>
<td>
<form name="export" method="get" action={% url "downloadsurveys" %}>
<p>Download a Surveys.csv file which is dynamically generated by Troggle.</p>
<input disabled name="download_surveys" type="submit" value="Download Surveys.csv" />
</form>
</td>
</tr>

<tr>
<td>qms to qms.csv</td><td>
<form name="export_qms" method="get" action="downloadqms">

<!--This is for choosing caves by area (drilldown).

<select id="qmcaveareachooser" class="searchable" >
@@ -158,12 +141,12 @@
-->
Choose a cave.
<select name="cave_id" id="qmcavechooser">
{% for cave in caves %}
<option value="{{cave.kataster_number}}">{{cave}}
</option>
{% endfor %}
</select>
@@ -174,4 +157,4 @@
</table>
</form>
{% endblock %}
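The download-logbook form above submits year and extension as GET parameters to the downloadlogbook URL, which urls.py (below) routes to views_other.downloadLogbook. This is only a guess at the shape of such a handler; generate_logbook is a hypothetical helper:

    from django.http import HttpResponse

    def downloadLogbook(request):
        # The form supplies these as GET parameters.
        year = request.GET["year"]
        extension = request.GET.get("extension", "txt")
        content = generate_logbook(year, extension)  # hypothetical helper
        response = HttpResponse(content, content_type="text/plain")
        response["Content-Disposition"] = (
            "attachment; filename=logbook%s.%s" % (year, extension))
        return response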
@@ -1,4 +1,3 @@
<!DOCTYPE html>
<!-- Only put one cave in this file -->
-<!-- If you edit this file, make sure you update the websites database -->
<html lang="en">

@@ -1,4 +1,3 @@
<!DOCTYPE html>
<!-- Only put one entrance in this file -->
-<!-- If you edit this file, make sure you update the websites database -->
<html lang="en">
4
templates/experimental.html
Normal file → Executable file
@@ -8,7 +8,9 @@
<h1>Expo Experimental</h1>

-<p>Number of survey legs: {{nsurvexlegs}}, total length: {{totalsurvexlength}}</p>
+<p>Number of survey legs: {{nsurvexlegs}}<br />
+Total length: {{totalsurvexlength}} m on importing survex files.<br />
+Total length: {{addupsurvexlength}} m adding up all the years below.</p>

<table>
<tr><th>Year</th><th>Surveys</th><th>Survey Legs</th><th>Total length</th></tr>
@@ -1,11 +1,10 @@
-<!DOCTYPE html>
<html>
<head>
-<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>{% block title %}{% endblock %}
</title>
<link rel="stylesheet" type="text/css" href="../css/main2.css" />
</head>
<body>
<div id="mainmenu">
@@ -13,17 +12,19 @@
<li><a href="/index.htm">Expo website home</a></li>
<li><a href="/intro.html">Introduction</a></li>
<li><a href="/infodx.htm">Main index</a></li>
-<li><a href="/indxal.htm">Cave index</a></li>
+<li><a href="/caves">Cave index</a></li>
{% if cavepage %}
<ul>
<li><a href="{% url "survexcaveslist" %}">All Survex</a></li>
<li><a href="{% url "surveyscansfolders" %}">Scans</a></li>
<li><a href="{% url "tunneldata" %}">Tunneldata</a></li>
-<li><a href="{% url "survexcavessingle" 161 %}">161</a></li>
-<li><a href="{% url "survexcavessingle" 204 %}">204</a></li>
-<li><a href="{% url "survexcavessingle" 258 %}">258</a></li>
-<li><a href="{% url "expedition" 2012 %}">Expo2012</a></li>
-<li><a href="{% url "expedition" 2013 %}">Expo2013</a></li>
+<li><a href="{% url "survexcavessingle" "caves-1623/290/290.svx" %}">290</a></li>
+<li><a href="{% url "survexcavessingle" "caves-1623/291/291.svx" %}">291</a></li>
+<li><a href="{% url "survexcavessingle" "caves-1626/359/359.svx" %}">359</a></li>
+<li><a href="{% url "survexcavessingle" "caves-1623/258/258.svx" %}">258</a></li>
+<li><a href="{% url "survexcavessingle" "caves-1623/264/264.svx" %}">264</a></li>
+<li><a href="{% url "expedition" 2018 %}">Expo2018</a></li>
+<li><a href="{% url "expedition" 2019 %}">Expo2019</a></li>
<li><a href="/admin">Django admin</a></li>
</ul>
{% endif %}
@@ -38,7 +38,7 @@
<div id="col1">
<h3>Welcome</h3>
<p class="indent">
-This is Troggle, the information portal for Cambridge University Caving Club's Expeditions to Austria.
+This is Troggle, the online system for Cambridge University Caving Club's Expeditions to Austria.
</p>

<p class="indent">
@@ -46,7 +46,7 @@ Here you will find information about the {{expedition.objects.count}} expedition
</p>

<p class="indent">
-If you are an expedition member, please sign up using the link to the top right and begin editing.
+If you are an expedition member, please sign up using the link to the top right.
</p>

{% endblock content %}
@@ -2,11 +2,14 @@
<ul id="links">
<li><a href="/index.htm">Home</a></li>
<li><a href="/infodx.htm">Main Index</a></li>
-<li><a href="/troggle">Troggle</a></li>
-<li><a href="/areas.htm">Areas</a></li>
-<li><a href="/indxal.htm">Caves</a></li>
<li><a href="/handbook/index.htm">Handbook</a></li>
<li><a href="/pubs.htm">Reports</a></li>
+<li><a href="/areas.htm">Areas</a></li>
+<li><a href="/caves">Caves</a></li>
+<li><a href="/expedition/2019">Troggle</a></li>
+<li><form name=P method=get action="/search" target="_top">
+<input id="omega-autofocus" type=search name=P value="testing" size=8 autofocus>
+<input type=submit value="Search"></li>
{% if editable %}<li><a href="{% url "editflatpage" path %}" class="editlink"><strong>Edit this page</strong></a></li>{% endif %}
+{% if cave_editable %}<li><a href="{% url "edit_cave" cave_editable %}" class="editlink"><strong>Edit this cave</strong></a></li>{% endif %}
</ul>
45
templates/pathsreport.html
Normal file
@@ -0,0 +1,45 @@
{% extends "base.html" %}
{% load wiki_markup %}
{% load link %}

{% block title %}Troggle paths report{% endblock %}

{% block content %}

<h1>Expo Troggle paths report</h1>

<p>
<table style="font-family: Consolas, Lucida Console, monospace;">
<tr><th>Code</th><th>Path</th></tr>
{% for c,p in bycodeslist %}
<tr>
<td>
{{c}}
</td>
<td>
{{p}}
</td>
</tr>
{% endfor %}
</table>

<p>
<table style="font-family: Consolas, Lucida Console, monospace;">
<tr><th>Path</th><th>Code</th></tr>
{% for c,p in bypathslist %}
<tr>
<td>
{{p}}
</td>
<td>
{{c}}
</td>
</tr>
{% endfor %}
</table>
<p>
There are {{ ncodes }} different path codes defined.
{% endblock %}
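The template above expects bycodeslist, bypathslist and ncodes in its context; the new /pathsreport URL (see urls.py below) routes to views_logbooks.pathsreport. A guess at the view, reusing the sort logic from pathreport.py; the two-entry dict is a stand-in for the real settings values:

    from django.shortcuts import render

    def pathsreport(request):
        pathsdict = {"MEDIA_URL": "/media/", "STATIC_URL": "/static/"}  # sample
        bycodeslist = sorted(pathsdict.items())                         # (code, path) by code
        bypathslist = sorted(pathsdict.items(), key=lambda cp: cp[1])   # by path
        return render(request, "pathsreport.html",
                      {"bycodeslist": bycodeslist,
                       "bypathslist": bypathslist,
                       "ncodes": len(pathsdict)})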
@@ -17,6 +17,7 @@
</tr>
{% endfor %}
</table>
+<p>This is based purely on attendance, not on activities, surveying or usefulness of any kind. But as Woody Allen said: "90% of success is just turning up". It should really be called "Notably recent expoers", as the metric is just a geometric "recency": 1/2 for attending last year, 1/3 for the year before, etc., added up. The display cutoff is 1/3.

<h2>All expoers</h2>
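The recency metric described in the added paragraph above is easy to state in code; this is just the formula as written there, not troggle's implementation:

    def recency_score(years_attended, this_year=2020):
        # 1/2 for attending last year, 1/3 for the year before, etc.
        return sum(1.0 / (this_year - year + 1) for year in years_attended)

    print(recency_score([2019, 2018]))  # 0.5 + 0.333... = 0.833...
    # A person is displayed only if the score exceeds the cutoff of 1/3.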
@@ -2,11 +2,11 @@
{% load wiki_markup %}
{% load survex_markup %}

-{% block title %}Survex Scans Folder{% endblock %}
+{% block title %}Survey Scans Folder{% endblock %}

{% block content %}

-<h3>Survex Scans in: {{survexscansfolder.walletname}}</h3>
+<h3>Survey Scans in: {{survexscansfolder.walletname}}</h3>
<table>
{% for survexscansingle in survexscansfolder.survexscansingle_set.all %}
<tr>
@@ -20,7 +20,7 @@
{% endfor %}
</table>

-<h3>Surveys referring to this wallet</h3>
+<h3>Survex surveys referring to this wallet</h3>

<table>
{% for survexblock in survexscansfolder.survexblock_set.all %}
@@ -2,11 +2,15 @@
{% load wiki_markup %}
{% load survex_markup %}

-{% block title %}All Survex scans folders{% endblock %}
+{% block title %}All Survey scans folders (wallets){% endblock %}

{% block content %}

-<h3>All Survex scans folders</h3>
+<h3>All Survey scans folders (wallets)</h3>
+<p>Each wallet contains the scanned original in-cave survey notes and sketches of
+plans and elevations. It also contains scans of centre-line survex output on which
+hand-drawn passage sections are drawn. These hand-drawn passages will eventually be
+traced to produce Tunnel or Therion drawings and eventually the final complete cave survey.
<table>
<tr><th>Scans folder</th><th>Files</th><th>Survex blocks</th></tr>
{% for survexscansfolder in survexscansfolders %}
@@ -6,14 +6,13 @@
{% block content %}

-<h3>All Tunnel files</h3>
+<h3>All Tunnel files - references to wallets and survey scans</h3>
<table>
-<tr><th>File</th><th>Font</th><th>SurvexBlocks</th><th>Size</th><th>Paths</th><th>Scans folder</th><th>Scan files</th><th>Frames</th></tr>
+<tr><th>File</th><th>Font</th><th>Size</th><th>Paths</th><th>Scans folder</th><th>Scan files</th><th>Frames</th></tr>
{% for tunnelfile in tunnelfiles %}
<tr>
<td><a href="{% url "tunnelfile" tunnelfile.tunnelpath %}">{{tunnelfile.tunnelpath}}</a></td>
<td>{{tunnelfile.bfontcolours}}</td>
-<td></td>
<td>{{tunnelfile.filesize}}</td>
<td>{{tunnelfile.npaths}}</td>
38
urls.py
Normal file → Executable file
@@ -15,21 +15,17 @@ admin.autodiscover()
# type url probably means it's used.

# HOW DOES THIS WORK:
# url( <regular expression that matches the thing in the web browser>,
#      <reference to python function in 'core' folder>,
#      <name optional argument for URL reversing (doesn't do much)>)

actualurlpatterns = patterns('',

-url(r'^testingurl/?$' , views_caves.millenialcaves, name="testing"),
-
-url(r'^millenialcaves/?$', views_caves.millenialcaves, name="millenialcaves"),
-
url(r'^troggle$', views_other.frontpage, name="frontpage"),
url(r'^todo/$', views_other.todo, name="todo"),

-url(r'^caves/?$', views_caves.caveindex, name="caveindex"),
+url(r'^caves$', views_caves.caveindex, name="caveindex"),
url(r'^people/?$', views_logbooks.personindex, name="personindex"),

url(r'^newqmnumber/?$', views_other.ajax_QM_number, ),
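As the comment block above says, each url() line maps a regex to a view function, and named groups such as (?P<caveslug>.*) arrive as keyword arguments. A minimal self-contained illustration with an invented view body; the patterns('', ...) wrapper used in the file above produces the same kind of pattern list:

    from django.conf.urls import url
    from django.http import HttpResponse

    def get_entrances(request, caveslug):
        # The (?P<caveslug>.*) group is delivered as the caveslug kwarg.
        return HttpResponse("entrances for %s" % caveslug)

    urlpatterns = [
        url(r'^getEntrances/(?P<caveslug>.*)', get_entrances, name="get_entrances"),
    ]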
@@ -48,7 +44,7 @@ actualurlpatterns = patterns('',
url(r'^newfile', views_other.newFile, name="newFile"),

url(r'^getEntrances/(?P<caveslug>.*)', views_caves.get_entrances, name = "get_entrances"),
-url(r'^getQMs/(?P<caveslug>.*)', views_caves.get_qms, name = "get_qms"),
+url(r'^getQMs/(?P<caveslug>.*)', views_caves.get_qms, name = "get_qms"), # no template "get_qms"?
url(r'^getPeople/(?P<expeditionslug>.*)', views_logbooks.get_people, name = "get_people"),
url(r'^getLogBookEntries/(?P<expeditionslug>.*)', views_logbooks.get_logbook_entries, name = "get_logbook_entries"),
@@ -58,7 +54,7 @@ actualurlpatterns = patterns('',
url(r'^caveslug/([^/]+)/?$', views_caves.caveSlug, name="caveSlug"),
url(r'^cave/entrance/([^/]+)/?$', views_caves.caveEntrance),
url(r'^cave/description/([^/]+)/?$', views_caves.caveDescription),
-url(r'^cave/qms/([^/]+)/?$', views_caves.caveQMs),
+url(r'^cave/qms/([^/]+)/?$', views_caves.caveQMs), # blank page
url(r'^cave/logbook/([^/]+)/?$', views_caves.caveLogbook),
url(r'^entrance/(?P<caveslug>[^/]+)/(?P<slug>[^/]+)/edit/', views_caves.editEntrance, name = "editentrance"),
url(r'^entrance/new/(?P<caveslug>[^/]+)/', views_caves.editEntrance, name = "newentrance"),
@@ -87,9 +83,8 @@ actualurlpatterns = patterns('',
url(r'^survey/?$', surveyindex, name="survey"),
url(r'^survey/(?P<year>\d\d\d\d)\#(?P<wallet_number>\d*)$', survey, name="survey"),

+# Is all this lot out of date ? Maybe the logbooks work?
url(r'^controlpanel/?$', views_other.controlPanel, name="controlpanel"),
url(r'^CAVETAB2\.CSV/?$', views_other.downloadCavetab, name="downloadcavetab"),
url(r'^Surveys\.csv/?$', views_other.downloadSurveys, name="downloadsurveys"),
url(r'^logbook(?P<year>\d\d\d\d)\.(?P<extension>.*)/?$',views_other.downloadLogbook),
url(r'^logbook/?$',views_other.downloadLogbook, name="downloadlogbook"),
url(r'^cave/(?P<cave_id>[^/]+)/qm\.csv/?$', views_other.downloadQMs, name="downloadqms"),
@@ -111,6 +106,10 @@ actualurlpatterns = patterns('',

# (r'^personform/(.*)$', personForm),

+(r'^expofiles/(?P<path>.*)$', 'django.views.static.serve',
+    {'document_root': settings.EXPOFILES, 'show_indexes': True}),
(r'^static/(?P<path>.*)$', 'django.views.static.serve',
    {'document_root': settings.STATIC_ROOT, 'show_indexes': True}),
(r'^site_media/(?P<path>.*)$', 'django.views.static.serve',
    {'document_root': settings.MEDIA_ROOT, 'show_indexes': True}),
(r'^tinymce_media/(?P<path>.*)$', 'django.views.static.serve',
@@ -124,9 +123,9 @@ actualurlpatterns = patterns('',
url(r'^survexfile/(?P<survex_file>.*?)\.err$', views_survex.err),

url(r'^survexfile/caves/$', views_survex.survexcaveslist, name="survexcaveslist"),
-url(r'^survexfile/caves/(?P<survex_cave>.*)$', views_survex.survexcavesingle, name="survexcavessingle"),
+url(r'^survexfile/(?P<survex_cave>.*)$', views_survex.survexcavesingle, name="survexcavessingle"),
url(r'^survexfileraw/(?P<survex_file>.*?)\.svx$', views_survex.svxraw, name="svxraw"),

(r'^survey_files/listdir/(?P<path>.*)$', view_surveys.listdir),
@@ -138,7 +137,7 @@ actualurlpatterns = patterns('',
#(r'^survey_scans/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.SURVEY_SCANS, 'show_indexes':True}),
url(r'^survey_scans/$', view_surveys.surveyscansfolders, name="surveyscansfolders"),
url(r'^survey_scans/(?P<path>[^/]+)/$', view_surveys.surveyscansfolder, name="surveyscansfolder"),
-url(r'^survey_scans/(?P<path>[^/]+)/(?P<file>[^/]+(?:png|jpg|jpeg))$',
+url(r'^survey_scans/(?P<path>[^/]+)/(?P<file>[^/]+(?:png|jpg|jpeg|pdf|PNG|JPG|JPEG|PDF))$',
    view_surveys.surveyscansingle, name="surveyscansingle"),

url(r'^tunneldata/$', view_surveys.tunneldata, name="tunneldata"),
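The survey_scans change above widens the file-extension alternation to accept PDFs and upper-case extensions. The pattern can be checked standalone, outside Django; the sample wallet filenames are invented:

    import re

    pat = re.compile(r'^(?P<path>[^/]+)/'
                     r'(?P<file>[^/]+(?:png|jpg|jpeg|pdf|PNG|JPG|JPEG|PDF))$')
    for name in ("2019#01/notes1.png", "2019#01/plan2.PDF", "2019#01/elev.txt"):
        print(name, bool(pat.match(name)))
    # -> True, True, False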
@@ -147,8 +146,8 @@ actualurlpatterns = patterns('',
#url(r'^tunneldatainfo/(?P<path>.+?\.xml)$', view_surveys.tunnelfileinfo, name="tunnelfileinfo"),

-(r'^photos/(?P<path>.*)$', 'django.views.static.serve',
-    {'document_root': settings.PHOTOS_ROOT, 'show_indexes':True}),
+#(r'^photos/(?P<path>.*)$', 'django.views.static.serve',
+#    {'document_root': settings.PHOTOS_ROOT, 'show_indexes':True}),

url(r'^prospecting/(?P<name>[^.]+).png$', prospecting_image, name="prospecting_image"),

@@ -157,6 +156,7 @@ actualurlpatterns = patterns('',
# for those silly ideas
url(r'^experimental.*$', views_logbooks.experimental, name="experimental"),
+url(r'^pathsreport.*$', views_logbooks.pathsreport, name="pathsreport"),

#url(r'^trip_report/?$',views_other.tripreport,name="trip_report")