9 Commits

Author      SHA1        Message                                                        Date
Sam Wenham  43dfe946b6  Just removing dud whitespace                                   2020-02-24 15:04:07 +00:00
Sam Wenham  656ddcfe93  Merge branch 'django-1.10' of ssh://expo.survex.com/~/troggle into django-1.10  2020-02-22 15:45:20 +00:00
Sam Wenham  505bc48331  Show coordinates for entrance; use filter to find coordinates  2020-02-22 15:38:22 +00:00
Sam Wenham  92b273e45f  Whitespace cleanup                                             2020-02-21 14:26:14 +00:00
Sam Wenham  978270b152  Remove dud settings; allow any site for dev                    2020-02-21 14:25:13 +00:00
Sam Wenham  291e3baabf  Get media working (at least in development)                    2020-02-21 14:19:37 +00:00
Sam Wenham  eb5406f325  Add django migrations; these are needed on newer django installs to maintain the database  2020-02-20 11:55:35 +00:00
Sam Wenham  de22b071b0  Improve README; make new style QMs from survexfiles work       2019-07-19 01:04:18 +01:00
Sam Wenham  08a41941f9  Part one of getting troggle to work with django 1.10; major rework of how survex is processed  2019-07-16 00:07:37 +01:00
84 changed files with 2301 additions and 2748 deletions

.gitignore (vendored): 20 lines changed

@@ -14,23 +14,3 @@ media/images/*
.vscode/*
.swp
imagekit-off/
localsettings-expo-live.py
.gitignore
desktop.ini
troggle-reset.log
troggle-reset0.log
troggle-surveys.log
troggle.log
troggle.sqlite
troggle.sqlite.0
troggle.sqlite.1
my_project.dot
memdump.sql
troggle-sqlite.sql
import_profile.json
import_times.json
ignored-files.log
tunnel-import.log
posnotfound
troggle.sqlite-journal
loadsurvexblks.log

.hgignore (new file): 16 lines changed

@@ -0,0 +1,16 @@
# use glob syntax
syntax: glob
*.pyc
db*
localsettings.py
*~
parsing_log.txt
troggle
troggle_log.txt
.idea/*
*.orig
media/images/*
.vscode/*
.swp
imagekit-off/

@@ -7,95 +7,36 @@ Troggle setup
Python, Django, and Database setup
-----------------------------------
Troggle requires Django 1.4 or greater, and any version of Python that works with it.
It is currently (Feb.2020) on django 1.7.11 (1.7.11-1+deb8u5).
Troggle requires Django 1.10, and Python 2.7.
Install Django with the following command:
sudo apt install python-django (on debian/ubuntu) -- does not work now as we need specific version
apt-get install python-django (on debian/ubuntu)
requirements.txt:
Django==1.7.11
django-registration==2.1.2
mysql
#imagekit
#django-imagekit
Image
django-tinymce==2.7.0
smartencoding
unidecode
If you want to use MySQL or Postgresql, download and install them. However, you can also use Django with Sqlite3, which is included in Python and thus requires no extra installation.
Install like this:
sudo apt install python-pip # does not work on Ubuntu 20.04 for python 2.7. Have to install from source. Use 18.04
pip install django==1.7
pip install django-tinymce==2.0.1
sudo apt install libfreetype6-dev
pip install django-registration==2.0
pip install unidecode
pip install --no-cache-dir pillow==2.7.0 # fails horribly on installing Ubuntu 20.04
pip install --no-cache-dir pillow # installs on Ubuntu 20.04 , don't know if it works though
pip install pygraphviz
apt install survex
pip install django-extensions
pip install pygraphviz # fails to install
pip install pyparsing pydot # installs fine
django extension graph_models # https://django-extensions.readthedocs.io/en/latest/graph_models.html
Or use a python3 virtual environment (python3.5, not later):
$ cd troggle
$ cd ..
$ python3.5 -m venv pyth35d2
(creates folder with virtual env)
$ cd pyth35d2
$ source bin/activate
(now install everything - not working yet..)
$ pip install -r requirements.txt
MariaDB database
----------------
Start it up with
$ sudo mysql -u root -p
which will prompt you for the password. Get this by reading the settings.py file in use on the server.
then
> CREATE DATABASE troggle;
> use troggle;
> exit;
Note the semicolons.
You can check the status of the db service:
$ sudo systemctl status mysql
You can start and stop the db service with
$ sudo systemctl restart mysql.service
$ sudo systemctl stop mysql.service
$ sudo systemctl start mysql.service
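For reference, a minimal sketch of the database section of localsettings.py, using the stock Django 1.10 setting names (the user/password values here are placeholders, not the real ones, which live in the settings.py in use on the server):

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',  # or 'django.db.backends.sqlite3'
            'NAME': 'troggle',                     # for sqlite3, a path to the db file
            'USER': 'expo',                        # placeholder
            'PASSWORD': '<see server settings.py>',
            'HOST': 'localhost',
            'PORT': '',
        }
    }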
Troggle itself
-------------
Choose a directory where you will keep troggle, and git clone Troggle into it using the following command:
git clone git://expo.survex.com/troggle
git clone git://expo.survex.com/~/troggle
or more reliably
git clone ssh://expo@expo.survex.com/home/expo/troggle
Running in development
----------------------
The simplest way to run Troggle in development is through the docker-compose setup;
see the docker folder in the repo for details.
If you want to work on the source code and be able to commit, your account will need to be added to the troggle project members list. Contact wookey at wookware dot org to get this set up.
Next, you need to fill in your local settings. Copy either localsettingsubuntu.py or localsettingsserver.py to a new file called localsettings.py. Follow the instructions contained in the file to fill out your settings.
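As a hedged sketch only (setting names are taken from references elsewhere in this changeset; the values are illustrative), localsettings.py also needs entries along these lines:

    EXPOUSER = 'expo'                        # superuser created by a database reset
    EXPOUSERPASS = '<secret>'                # placeholder
    EXPOUSER_EMAIL = 'expo@example.com'      # placeholder
    LOGFILE = '/tmp/troggle.log'             # any path writable by the server
    PHOTOS_ROOT = '/home/expo/webphotos'     # created on reset if missing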
Setting up survex
-----------------
You need to have survex installed, as the command-line tool 'cavern' is used as part of the survex import process.
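Purely as an illustration (this is not troggle's actual parser code), calling cavern from Python can be as simple as:

    import subprocess

    def run_cavern(svx_path):
        # 'cavern' comes from the survex package and must be on PATH.
        # It processes the .svx survey file and writes the .3d output.
        subprocess.check_call(["cavern", svx_path])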
Setting up tables and importing legacy data
------------------------------------------
Run "sudo python databaseReset.py reset" from the troggle directory.
Run "python databaseReset.py reset" from the troggle directory.
Once troggle is running, you can also log in and then go to "Import / export" data under "admin" on the menu.
@@ -105,38 +46,7 @@ folk/folk.csv table - a year doesn't exist until that is done.
Running a Troggle server
------------------------
For high volume use, Troggle should be run using a web server like apache. However, a quick way to get started is to use the development server built into Django. This is limited though: directory
redirection needs apache.
For high volume use, Troggle should be run using a web server like apache. However, a quick way to get started is to use the development server built into Django.
To do this, run "python manage.py runserver" from the troggle directory.
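For example:

    cd troggle
    python manage.py runserver               # serves on http://127.0.0.1:8000/
    python manage.py runserver 0.0.0.0:8000  # listen on all interfaces (dev only)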
Running a Troggle server with Apache
------------------------------------
Troggle also needs these aliases to be configured. These are set in
/home/expo/config/apache/expo.conf
on the expo server.
At least these need setting:
DocumentRoot /home/expo/expoweb
WSGIScriptAlias / /home/expo/troggle/wsgi.py
<Directory /home/expo/troggle>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
Alias /expofiles /home/expo/expofiles
Alias /photos /home/expo/webphotos
Alias /map /home/expo/expoweb/map
Alias /javascript /usr/share/javascript
Alias /static/ /home/expo/static/
ScriptAlias /repositories /home/expo/config/apache/services/hgweb/hgweb.cgi
(The last is just for mercurial, which will be removed during 2020.)
Unlike the "runserver" method, apache requires a restart before it will use
any changed files:
apache2ctl stop
apache2ctl start
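The wsgi.py referenced above is the standard Django WSGI entry point; a minimal sketch (the settings module name is an assumption, check the real file in the repo) looks like:

    import os
    from django.core.wsgi import get_wsgi_application

    # assumption: troggle's settings are importable as the 'settings' module
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")
    application = get_wsgi_application()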

@@ -1,27 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Troggle - Coding Documentation</title>
<link rel="stylesheet" type="text/css" href="../media/css/main2.css" />
</head>
<body>
<h1>Troggle Code - README</h1>
<h2>Contents of README.txt file</h2>
<iframe name="erriframe" width="90%" height="45%"
src="../readme.txt" frameborder="1" ></iframe>
<h2>Troggle documentation in the Expo Handbook</h2>
<ul>
<li><a href="http://expo.survex.com/handbook/troggle/trogintro.html">Intro</a>
<li><a href="http://expo.survex.com/handbook/troggle/trogmanual.html">Troggle manual</a>
<li><a href="http://expo.survex.com/handbook/troggle/trogarch.html">Troggle data model</a>
<li><a href="http://expo.survex.com/handbook/troggle/trogimport.html">Troggle importing data</a>
<li><a href="http://expo.survex.com/handbook/troggle/trogdesign.html">Troggle design decisions</a>
<li><a href="http://expo.survex.com/handbook/troggle/trogdesignx.html">Troggle future architectures</a>
<li><a href="http://expo.survex.com/handbook/troggle/trogsimpler.html">a kinder simpler Troggle?</a>
</ul>
<hr />
</body></html>

@@ -9,12 +9,12 @@ from troggle.core.views_other import downloadLogbook
class TroggleModelAdmin(admin.ModelAdmin):
def save_model(self, request, obj, form, change):
"""overriding admin save to fill the new_since parsing_field"""
obj.new_since_parsing=True
obj.save()
class Media:
js = ('jquery/jquery.min.js','js/QM_helper.js')
@@ -28,6 +28,10 @@ class SurvexBlockAdmin(TroggleModelAdmin):
inlines = (RoleInline,)
class SurvexStationAdmin(TroggleModelAdmin):
search_fields = ('name', 'block__name')
class ScannedImageInline(admin.TabularInline):
model = ScannedImage
extra = 4
@@ -40,7 +44,7 @@ class OtherCaveInline(admin.TabularInline):
class SurveyAdmin(TroggleModelAdmin):
inlines = (ScannedImageInline,)
search_fields = ('expedition__year','wallet_number')
class QMsFoundInline(admin.TabularInline):
@@ -48,12 +52,12 @@ class QMsFoundInline(admin.TabularInline):
fk_name='found_by'
fields=('number','grade','location_description','comment')#need to add foreignkey to cave part
extra=1
# class PhotoInline(admin.TabularInline):
# model = DPhoto
# exclude = ['is_mugshot' ]
# extra = 1
class PhotoInline(admin.TabularInline):
model = DPhoto
exclude = ['is_mugshot' ]
extra = 1
class PersonTripInline(admin.TabularInline):
@@ -64,21 +68,20 @@ class PersonTripInline(admin.TabularInline):
#class LogbookEntryAdmin(VersionAdmin):
class LogbookEntryAdmin(TroggleModelAdmin):
prepopulated_fields = {'slug':("title",)}
search_fields = ('title','expedition__year')
date_hierarchy = 'date'
# inlines = (PersonTripInline, PhotoInline, QMsFoundInline)
inlines = (PersonTripInline, QMsFoundInline)
inlines = (PersonTripInline, PhotoInline, QMsFoundInline)
class Media:
css = {
"all": ("css/troggleadmin.css",)
}
actions=('export_logbook_entries_as_html','export_logbook_entries_as_txt')
def export_logbook_entries_as_html(self, modeladmin, request, queryset):
response=downloadLogbook(request=request, queryset=queryset, extension='html')
return response
def export_logbook_entries_as_txt(self, modeladmin, request, queryset):
response=downloadLogbook(request=request, queryset=queryset, extension='txt')
return response
@@ -96,11 +99,11 @@ class PersonAdmin(TroggleModelAdmin):
class QMAdmin(TroggleModelAdmin):
search_fields = ('found_by__cave__kataster_number','number','found_by__date')
list_display = ('__unicode__','grade','found_by','ticked_off_by')
list_display = ('__unicode__','grade','found_by','ticked_off_by','nearest_station')
list_display_links = ('__unicode__',)
list_editable = ('found_by','ticked_off_by','grade')
list_editable = ('found_by','ticked_off_by','grade','nearest_station')
list_per_page = 20
raw_id_fields=('found_by','ticked_off_by')
raw_id_fields=('found_by','ticked_off_by','nearest_station')
class PersonExpeditionAdmin(TroggleModelAdmin):
@@ -117,26 +120,29 @@ class EntranceAdmin(TroggleModelAdmin):
search_fields = ('caveandentrance__cave__kataster_number',)
#admin.site.register(DPhoto)
admin.site.register(DPhoto)
admin.site.register(Cave, CaveAdmin)
admin.site.register(CaveSlug)
admin.site.register(Area)
#admin.site.register(OtherCaveName)
admin.site.register(CaveAndEntrance)
admin.site.register(NewSubCave)
admin.site.register(CaveDescription)
admin.site.register(Entrance, EntranceAdmin)
admin.site.register(SurvexBlock, SurvexBlockAdmin)
admin.site.register(Expedition)
admin.site.register(Person,PersonAdmin)
admin.site.register(SurvexPersonRole)
admin.site.register(PersonExpedition,PersonExpeditionAdmin)
admin.site.register(LogbookEntry, LogbookEntryAdmin)
#admin.site.register(PersonTrip)
admin.site.register(QM, QMAdmin)
admin.site.register(Survey, SurveyAdmin)
admin.site.register(ScannedImage)
admin.site.register(SurvexStation)
admin.site.register(SurvexDirectory)
admin.site.register(SurvexFile)
admin.site.register(SurvexStation, SurvexStationAdmin)
admin.site.register(SurvexBlock)
admin.site.register(SurvexPersonRole)
admin.site.register(SurvexScansFolder)
admin.site.register(SurvexScanSingle)

@@ -15,7 +15,7 @@ def listdir(*path):
for p in os.listdir(root):
if os.path.isdir(os.path.join(root, p)):
l += p + "/\n"
elif os.path.isfile(os.path.join(root, p)):
l += p + "\n"
#Ignore non-files and non-directories
@@ -28,7 +28,7 @@ def listdir(*path):
c = c.replace("#", "%23")
print("FILE: ", settings.FILES + "listdir/" + c)
return urllib.urlopen(settings.FILES + "listdir/" + c).read()
def dirsAsList(*path):
return [d for d in listdir(*path).split("\n") if len(d) > 0 and d[-1] == "/"]

@@ -16,7 +16,7 @@ class CaveForm(ModelForm):
underground_centre_line = forms.CharField(required = False, widget=forms.Textarea())
notes = forms.CharField(required = False, widget=forms.Textarea())
references = forms.CharField(required = False, widget=forms.Textarea())
url = forms.CharField(required = True)
class Meta:
model = Cave
exclude = ("filename",)
@@ -24,9 +24,9 @@ class CaveForm(ModelForm):
def clean(self):
if self.cleaned_data.get("kataster_number") == "" and self.cleaned_data.get("unofficial_number") == "":
self._errors["unofficial_number"] = self.error_class(["Either the kataster or unoffical number is required."])
self._errors["unofficial_number"] = self.error_class(["Either the kataster or unoffical number is required."])
if self.cleaned_data.get("kataster_number") != "" and self.cleaned_data.get("official_name") == "":
self._errors["official_name"] = self.error_class(["This field is required when there is a kataster number."])
self._errors["official_name"] = self.error_class(["This field is required when there is a kataster number."])
if self.cleaned_data.get("area") == []:
self._errors["area"] = self.error_class(["This field is required."])
if self.cleaned_data.get("url") and self.cleaned_data.get("url").startswith("/"):
@@ -82,11 +82,11 @@ class EntranceLetterForm(ModelForm):
# This function returns html-formatted paragraphs for each of the
# wikilink types that are related to this logbookentry. Each paragraph
# contains a list of all of the related wikilinks.
#
# Perhaps an admin javascript solution would be better.
# """
# res = ["Please use the following wikilinks, which are related to this logbook entry:"]
#
# res.append(r'</p><p style="float: left;"><b>QMs found:</b>')
# for QM in LogbookEntry.instance.QMs_found.all():
# res.append(QM.wiki_link())
@@ -94,12 +94,12 @@ class EntranceLetterForm(ModelForm):
# res.append(r'</p><p style="float: left;"><b>QMs ticked off:</b>')
# for QM in LogbookEntry.instance.QMs_ticked_off.all():
# res.append(QM.wiki_link())
# res.append(r'</p><p style="float: left; "><b>People</b>')
# for persontrip in LogbookEntry.instance.persontrip_set.all():
# res.append(persontrip.wiki_link())
# res.append(r'</p>')
# return string.join(res, r'<br />')
# def __init__(self, *args, **kwargs):
@@ -107,7 +107,7 @@ class EntranceLetterForm(ModelForm):
# self.fields['text'].help_text=self.wikiLinkHints()#
#class CaveForm(forms.Form):
# html = forms.CharField(widget=TinyMCE(attrs={'cols': 80, 'rows': 30}))
def getTripForm(expedition):
@@ -118,18 +118,18 @@ def getTripForm(expedition):
caves.sort()
caves = ["-----"] + caves
cave = forms.ChoiceField([(c, c) for c in caves], required=False)
location = forms.CharField(max_length=200, required=False)
caveOrLocation = forms.ChoiceField([("cave", "Cave"), ("location", "Location")], widget = forms.widgets.RadioSelect())
html = forms.CharField(widget=TinyMCE(attrs={'cols': 80, 'rows': 30}))
def clean(self):
print(dir(self))
if self.cleaned_data.get("caveOrLocation") == "cave" and not self.cleaned_data.get("cave"):
self._errors["cave"] = self.error_class(["This field is required"])
self._errors["cave"] = self.error_class(["This field is required"])
if self.cleaned_data.get("caveOrLocation") == "location" and not self.cleaned_data.get("location"):
self._errors["location"] = self.error_class(["This field is required"])
self._errors["location"] = self.error_class(["This field is required"])
return self.cleaned_data
class PersonTripForm(forms.Form):
names = [get_name(pe) for pe in PersonExpedition.objects.filter(expedition = expedition)]
names.sort()
@@ -141,7 +141,7 @@ def getTripForm(expedition):
PersonTripFormSet = formset_factory(PersonTripForm, extra=1)
return PersonTripFormSet, TripForm
def get_name(pe):
if pe.nickname:
return pe.nickname
@@ -162,18 +162,18 @@ def get_name(pe):
# caves = ["-----"] + caves
# cave = forms.ChoiceField([(c, c) for c in caves], required=False)
# entrance = forms.ChoiceField([("-----", "Please select a cave"), ], required=False)
# qm = forms.ChoiceField([("-----", "Please select a cave"), ], required=False)
# expeditions = [e.year for e in Expedition.objects.all()]
# expeditions.sort()
# expeditions = ["-----"] + expeditions
# expedition = forms.ChoiceField([(e, e) for e in expeditions], required=False)
# logbookentry = forms.ChoiceField([("-----", "Please select an expedition"), ], required=False)
# person = forms.ChoiceField([("-----", "Please select an expedition"), ], required=False)
# survey_point = forms.CharField()

@@ -1,21 +1,21 @@
from imagekit.specs import ImageSpec
from imagekit import processors
class ResizeThumb(processors.Resize):
width = 100
crop = False
class ResizeDisplay(processors.Resize):
width = 600
#class EnhanceThumb(processors.Adjustment):
#contrast = 1.2
#sharpness = 2
class Thumbnail(ImageSpec):
access_as = 'thumbnail_image'
pre_cache = True
processors = [ResizeThumb]
class Display(ImageSpec):
increment_count = True

@@ -1,33 +1,183 @@
import os
import time # for time.asctime() in import_logbooks
from optparse import make_option
from django.db import connection
from django.core import management
from django.core.management.base import BaseCommand, CommandError
from django.core.urlresolvers import reverse
from django.contrib.auth.models import User
from troggle.core.models import Cave, Entrance
import troggle.flatpages.models
import settings
"""Pretty much all of this is now replaced by databaseRest.py
I don't know why this still exists. Needs testing to see if
removing it makes django misbehave.
"""
databasename=settings.DATABASES['default']['NAME']
expouser=settings.EXPOUSER
expouserpass=settings.EXPOUSERPASS
expouseremail=settings.EXPOUSER_EMAIL
class Command(BaseCommand):
help = 'Removed as redundant - use databaseReset.py'
help = 'This is normal usage, clear database and reread everything'
option_list = BaseCommand.option_list + (
make_option('--reset',
action='store_true',
dest='reset',
default=False,
help='Removed as redundant'),
help='Reset the entire DB from files'),
)
def handle(self, *args, **options):
print(args)
print(options)
if "desc" in args:
self.resetdesc()
elif "scans" in args:
self.import_surveyscans()
elif "caves" in args:
self.reload_db()
self.make_dirs()
self.pageredirects()
self.import_caves()
elif "people" in args:
self.import_people()
elif "QMs" in args:
self.import_QMs()
elif "tunnel" in args:
self.import_tunnelfiles()
elif options['reset']:
self.reset(self)
elif "survex" in args:
self.import_survex()
elif "survexpos" in args:
import parsers.survex
parsers.survex.LoadPos()
elif "logbooks" in args:
self.import_logbooks()
elif "autologbooks" in args:
self.import_auto_logbooks()
elif "dumplogbooks" in args:
self.dumplogbooks()
elif "writeCaves" in args:
self.writeCaves()
elif options.get('foo'):
self.stdout.write(self.style.WARNING('Testing....'))
else:
self.stdout.write("%s not recognised" % (args,))
self.troggle_usage()
def reload_db(obj):
if settings.DATABASES['default']['ENGINE'] == 'django.db.backends.sqlite3':
try:
os.remove(databasename)
except OSError:
pass
else:
cursor = connection.cursor()
cursor.execute("DROP DATABASE %s" % databasename)
cursor.execute("CREATE DATABASE %s" % databasename)
cursor.execute("ALTER DATABASE %s CHARACTER SET=utf8" % databasename)
cursor.execute("USE %s" % databasename)
management.call_command('migrate', interactive=False)
# management.call_command('syncdb', interactive=False)
user = User.objects.create_user(expouser, expouseremail, expouserpass)
user.is_staff = True
user.is_superuser = True
user.save()
def make_dirs(obj):
"""Make directories that troggle requires"""
# should also deal with permissions here.
if not os.path.isdir(settings.PHOTOS_ROOT):
os.mkdir(settings.PHOTOS_ROOT)
def import_caves(obj):
import parsers.caves
print("Importing Caves")
parsers.caves.readcaves()
def import_people(obj):
import parsers.people
parsers.people.LoadPersonsExpos()
def import_logbooks(obj):
# The line below was causing errors I didn't understand (it said LOGFILE was a string), and I couldn't be bothered to figure
# out what was going on so I just catch the error with a try. - AC 21 May
try:
settings.LOGFILE.write('\nBegun importing logbooks at ' + time.asctime() + '\n' + '-' * 60)
except:
pass
import parsers.logbooks
parsers.logbooks.LoadLogbooks()
def import_survex(obj):
import parsers.survex
parsers.survex.LoadAllSurvexBlocks()
parsers.survex.LoadPos()
def import_QMs(obj):
import parsers.QMs
def import_surveys(obj):
import parsers.surveys
parsers.surveys.parseSurveys(logfile=settings.LOGFILE)
def import_surveyscans(obj):
import parsers.surveys
parsers.surveys.LoadListScans()
def import_tunnelfiles(obj):
import parsers.surveys
parsers.surveys.LoadTunnelFiles()
def reset(self, mgmt_obj):
""" Wipe the troggle database and import everything from legacy data
"""
self.reload_db()
self.make_dirs()
self.pageredirects()
self.import_caves()
self.import_people()
self.import_surveyscans()
self.import_survex()
self.import_logbooks()
self.import_QMs()
try:
self.import_tunnelfiles()
except:
print("Tunnel files parser broken.")
self.import_surveys()
def pageredirects(obj):
for oldURL, newURL in [("indxal.htm", reverse("caveindex"))]:
f = troggle.flatpages.models.Redirect(originalURL=oldURL, newURL=newURL)
f.save()
def writeCaves(obj):
for cave in Cave.objects.all():
cave.writeDataFile()
for entrance in Entrance.objects.all():
entrance.writeDataFile()
def troggle_usage(obj):
print("""Usage is 'manage.py reset_db <command>'
where command is:
reset - this is normal usage, clear database and reread everything
desc
caves - read in the caves
logbooks - read in the logbooks
autologbooks
dumplogbooks
people
QMs - read in the QM files
resetend
scans - read in the scanned survey notes
survex - read in the survex files
survexpos
tunnel - read in the Tunnel files
writeCaves
""")

@@ -0,0 +1,575 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2020-02-18 16:01
from __future__ import unicode_literals
from django.conf import settings
import django.core.files.storage
from django.db import migrations, models
import django.db.models.deletion
import troggle.core.models
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Area',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('short_name', models.CharField(max_length=100)),
('name', models.CharField(blank=True, max_length=200, null=True)),
('description', models.TextField(blank=True, null=True)),
('parent', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.Area')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Cave',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('official_name', models.CharField(max_length=160)),
('kataster_code', models.CharField(blank=True, max_length=20, null=True)),
('kataster_number', models.CharField(blank=True, max_length=10, null=True)),
('unofficial_number', models.CharField(blank=True, max_length=60, null=True)),
('explorers', models.TextField(blank=True, null=True)),
('underground_description', models.TextField(blank=True, null=True)),
('equipment', models.TextField(blank=True, null=True)),
('references', models.TextField(blank=True, null=True)),
('survey', models.TextField(blank=True, null=True)),
('kataster_status', models.TextField(blank=True, null=True)),
('underground_centre_line', models.TextField(blank=True, null=True)),
('notes', models.TextField(blank=True, null=True)),
('length', models.CharField(blank=True, max_length=100, null=True)),
('depth', models.CharField(blank=True, max_length=100, null=True)),
('extent', models.CharField(blank=True, max_length=100, null=True)),
('survex_file', models.CharField(blank=True, max_length=100, null=True)),
('description_file', models.CharField(blank=True, max_length=200, null=True)),
('url', models.CharField(blank=True, max_length=200, null=True)),
('filename', models.CharField(max_length=200)),
('area', models.ManyToManyField(blank=True, to='core.Area')),
],
options={
'ordering': ('kataster_code', 'unofficial_number'),
},
),
migrations.CreateModel(
name='CaveAndEntrance',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('entrance_letter', models.CharField(blank=True, max_length=20, null=True)),
('cave', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.Cave')),
],
),
migrations.CreateModel(
name='CaveDescription',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('short_name', models.CharField(max_length=50, unique=True)),
('long_name', models.CharField(blank=True, max_length=200, null=True)),
('description', models.TextField(blank=True, null=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='CaveSlug',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('slug', models.SlugField(unique=True)),
('primary', models.BooleanField(default=False)),
('cave', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.Cave')),
],
),
migrations.CreateModel(
name='DataIssue',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('date', models.DateTimeField(auto_now_add=True)),
('parser', models.CharField(blank=True, max_length=50, null=True)),
('message', models.CharField(blank=True, max_length=400, null=True)),
],
options={
'ordering': ['date'],
},
),
migrations.CreateModel(
name='DPhoto',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('caption', models.CharField(blank=True, max_length=1000, null=True)),
('file', models.ImageField(storage=django.core.files.storage.FileSystemStorage(base_url=b'http://127.0.0.1:8000/photos/', location=b'/expo/expoweb/photos'), upload_to=b'.')),
('is_mugshot', models.BooleanField(default=False)),
('lon_utm', models.FloatField(blank=True, null=True)),
('lat_utm', models.FloatField(blank=True, null=True)),
('contains_cave', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.Cave')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Entrance',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('name', models.CharField(blank=True, max_length=100, null=True)),
('entrance_description', models.TextField(blank=True, null=True)),
('explorers', models.TextField(blank=True, null=True)),
('map_description', models.TextField(blank=True, null=True)),
('location_description', models.TextField(blank=True, null=True)),
('approach', models.TextField(blank=True, null=True)),
('underground_description', models.TextField(blank=True, null=True)),
('photo', models.TextField(blank=True, null=True)),
('marking', models.CharField(choices=[(b'P', b'Paint'), (b'P?', b'Paint (?)'), (b'T', b'Tag'), (b'T?', b'Tag (?)'), (b'R', b'Needs Retag'), (b'S', b'Spit'), (b'S?', b'Spit (?)'), (b'U', b'Unmarked'), (b'?', b'Unknown')], max_length=2)),
('marking_comment', models.TextField(blank=True, null=True)),
('findability', models.CharField(blank=True, choices=[(b'?', b'To be confirmed ...'), (b'S', b'Coordinates'), (b'L', b'Lost'), (b'R', b'Refindable')], max_length=1, null=True)),
('findability_description', models.TextField(blank=True, null=True)),
('alt', models.TextField(blank=True, null=True)),
('northing', models.TextField(blank=True, null=True)),
('easting', models.TextField(blank=True, null=True)),
('tag_station', models.TextField(blank=True, null=True)),
('exact_station', models.TextField(blank=True, null=True)),
('other_station', models.TextField(blank=True, null=True)),
('other_description', models.TextField(blank=True, null=True)),
('bearings', models.TextField(blank=True, null=True)),
('url', models.CharField(blank=True, max_length=200, null=True)),
('filename', models.CharField(max_length=200)),
('cached_primary_slug', models.CharField(blank=True, max_length=200, null=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='EntranceSlug',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('slug', models.SlugField(unique=True)),
('primary', models.BooleanField(default=False)),
('entrance', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.Entrance')),
],
),
migrations.CreateModel(
name='Expedition',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('year', models.CharField(max_length=20, unique=True)),
('name', models.CharField(max_length=100)),
],
options={
'ordering': ('-year',),
'get_latest_by': 'year',
},
),
migrations.CreateModel(
name='ExpeditionDay',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('date', models.DateField()),
('expedition', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.Expedition')),
],
options={
'ordering': ('date',),
},
),
migrations.CreateModel(
name='LogbookEntry',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('date', models.DateTimeField()),
('title', models.CharField(max_length=200)),
('cave_slug', models.SlugField()),
('place', models.CharField(blank=True, help_text=b"Only use this if you haven't chosen a cave", max_length=100, null=True)),
('text', models.TextField()),
('slug', models.SlugField()),
('filename', models.CharField(max_length=200, null=True)),
('entry_type', models.CharField(choices=[(b'wiki', b'Wiki style logbook'), (b'html', b'Html style logbook')], default=b'wiki', max_length=50, null=True)),
('expedition', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.Expedition')),
('expeditionday', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='core.ExpeditionDay')),
],
options={
'ordering': ('-date',),
'verbose_name_plural': 'Logbook Entries',
},
),
migrations.CreateModel(
name='NewSubCave',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('name', models.CharField(max_length=200, unique=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='OtherCaveName',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('name', models.CharField(max_length=160)),
('cave', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.Cave')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Person',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('first_name', models.CharField(max_length=100)),
('last_name', models.CharField(max_length=100)),
('fullname', models.CharField(max_length=200)),
('is_vfho', models.BooleanField(default=False, help_text=b'VFHO is the Vereines f&uuml;r H&ouml;hlenkunde in Obersteier, a nearby Austrian caving club.')),
('mug_shot', models.CharField(blank=True, max_length=100, null=True)),
('blurb', models.TextField(blank=True, null=True)),
('orderref', models.CharField(max_length=200)),
('user', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ('orderref',),
'verbose_name_plural': 'People',
},
),
migrations.CreateModel(
name='PersonExpedition',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('slugfield', models.SlugField(blank=True, null=True)),
('is_guest', models.BooleanField(default=False)),
('expo_committee_position', models.CharField(blank=True, choices=[(b'leader', b'Expo leader'), (b'medical', b'Expo medical officer'), (b'treasurer', b'Expo treasurer'), (b'sponsorship', b'Expo sponsorship coordinator'), (b'research', b'Expo research coordinator')], max_length=200, null=True)),
('nickname', models.CharField(blank=True, max_length=100, null=True)),
('expedition', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.Expedition')),
('person', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.Person')),
],
options={
'ordering': ('-expedition',),
},
),
migrations.CreateModel(
name='PersonTrip',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('time_underground', models.FloatField(help_text=b'In decimal hours')),
('is_logbook_entry_author', models.BooleanField(default=False)),
('logbook_entry', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.LogbookEntry')),
('personexpedition', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='core.PersonExpedition')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='QM',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('number', models.IntegerField(help_text=b'this is the sequential number in the year')),
('grade', models.CharField(choices=[(b'A', b'A: Large obvious lead'), (b'B', b'B: Average lead'), (b'C', b'C: Tight unpromising lead'), (b'D', b'D: Dig'), (b'X', b'X: Unclimbable aven')], max_length=1)),
('location_description', models.TextField(blank=True)),
('nearest_station_description', models.CharField(blank=True, max_length=400, null=True)),
('nearest_station_name', models.CharField(blank=True, max_length=200, null=True)),
('area', models.CharField(blank=True, max_length=100, null=True)),
('completion_description', models.TextField(blank=True, null=True)),
('comment', models.TextField(blank=True, null=True)),
('found_by', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='QMs_found', to='core.LogbookEntry')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='ScannedImage',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('file', models.ImageField(storage=django.core.files.storage.FileSystemStorage(base_url=b'/survey_scans/', location=b'/expo/expofiles/'), upload_to=troggle.core.models.get_scan_path)),
('scanned_on', models.DateField(null=True)),
('contents', models.CharField(choices=[(b'notes', b'notes'), (b'plan', b'plan_sketch'), (b'elevation', b'elevation_sketch')], max_length=20)),
('number_in_wallet', models.IntegerField(null=True)),
('lon_utm', models.FloatField(blank=True, null=True)),
('lat_utm', models.FloatField(blank=True, null=True)),
('scanned_by', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.Person')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='SurvexBlock',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=100)),
('text', models.TextField()),
('date', models.DateTimeField(blank=True, null=True)),
('begin_char', models.IntegerField()),
('survexpath', models.CharField(max_length=200)),
('totalleglength', models.FloatField()),
('cave', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.Cave')),
('expedition', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.Expedition')),
('expeditionday', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='core.ExpeditionDay')),
('parent', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.SurvexBlock')),
],
options={
'ordering': ('id',),
},
),
migrations.CreateModel(
name='SurvexDirectory',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('path', models.CharField(max_length=200)),
('cave', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.Cave')),
],
options={
'ordering': ('path',),
},
),
migrations.CreateModel(
name='SurvexEquate',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('cave', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.Cave')),
],
),
migrations.CreateModel(
name='SurvexFile',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('path', models.CharField(max_length=200)),
('cave', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.Cave')),
('survexdirectory', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.SurvexDirectory')),
],
options={
'ordering': ('id',),
},
),
migrations.CreateModel(
name='SurvexLeg',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('tape', models.FloatField()),
('compass', models.FloatField()),
('clino', models.FloatField()),
('block', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.SurvexBlock')),
],
),
migrations.CreateModel(
name='SurvexPersonRole',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('nrole', models.CharField(blank=True, choices=[(b'insts', b'Instruments'), (b'dog', b'Other'), (b'notes', b'Notes'), (b'pics', b'Pictures'), (b'tape', b'Tape measure'), (b'useless', b'Useless'), (b'helper', b'Helper'), (b'disto', b'Disto'), (b'consultant', b'Consultant')], max_length=200, null=True)),
('personname', models.CharField(max_length=100)),
('expeditionday', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='core.ExpeditionDay')),
('person', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.Person')),
('personexpedition', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.PersonExpedition')),
('persontrip', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.PersonTrip')),
('survexblock', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.SurvexBlock')),
],
),
migrations.CreateModel(
name='SurvexScansFolder',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('fpath', models.CharField(max_length=200)),
('walletname', models.CharField(max_length=200)),
],
options={
'ordering': ('walletname',),
},
),
migrations.CreateModel(
name='SurvexScanSingle',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('ffile', models.CharField(max_length=200)),
('name', models.CharField(max_length=200)),
('survexscansfolder', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='core.SurvexScansFolder')),
],
options={
'ordering': ('name',),
},
),
migrations.CreateModel(
name='SurvexStation',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=100)),
('x', models.FloatField(blank=True, null=True)),
('y', models.FloatField(blank=True, null=True)),
('z', models.FloatField(blank=True, null=True)),
('block', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.SurvexBlock')),
('equate', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.SurvexEquate')),
],
),
migrations.CreateModel(
name='SurvexTitle',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=200)),
('cave', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.Cave')),
('survexblock', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.SurvexBlock')),
],
),
migrations.CreateModel(
name='Survey',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('new_since_parsing', models.BooleanField(default=False, editable=False)),
('non_public', models.BooleanField(default=False)),
('wallet_number', models.IntegerField(blank=True, null=True)),
('wallet_letter', models.CharField(blank=True, max_length=1, null=True)),
('comments', models.TextField(blank=True, null=True)),
('location', models.CharField(blank=True, max_length=400, null=True)),
('centreline_printed_on', models.DateField(blank=True, null=True)),
('tunnel_file', models.FileField(blank=True, null=True, upload_to=b'surveyXMLfiles')),
('integrated_into_main_sketch_on', models.DateField(blank=True, null=True)),
('rendered_image', models.ImageField(blank=True, null=True, upload_to=b'renderedSurveys')),
('centreline_printed_by', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='centreline_printed_by', to='core.Person')),
('expedition', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.Expedition')),
('integrated_into_main_sketch_by', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='integrated_into_main_sketch_by', to='core.Person')),
('logbook_entry', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.LogbookEntry')),
('subcave', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.NewSubCave')),
('survex_block', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.SurvexBlock')),
('tunnel_main_sketch', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.Survey')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='TunnelFile',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('tunnelpath', models.CharField(max_length=200)),
('tunnelname', models.CharField(max_length=200)),
('bfontcolours', models.BooleanField(default=False)),
('filesize', models.IntegerField(default=0)),
('npaths', models.IntegerField(default=0)),
('survexblocks', models.ManyToManyField(to='core.SurvexBlock')),
('survexscans', models.ManyToManyField(to='core.SurvexScanSingle')),
('survexscansfolders', models.ManyToManyField(to='core.SurvexScansFolder')),
('survextitles', models.ManyToManyField(to='core.SurvexTitle')),
('tunnelcontains', models.ManyToManyField(to='core.TunnelFile')),
],
options={
'ordering': ('tunnelpath',),
},
),
migrations.AddField(
model_name='survexleg',
name='stationfrom',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='stationfrom', to='core.SurvexStation'),
),
migrations.AddField(
model_name='survexleg',
name='stationto',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='stationto', to='core.SurvexStation'),
),
migrations.AddField(
model_name='survexdirectory',
name='primarysurvexfile',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='primarysurvexfile', to='core.SurvexFile'),
),
migrations.AddField(
model_name='survexblock',
name='survexfile',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.SurvexFile'),
),
migrations.AddField(
model_name='survexblock',
name='survexscansfolder',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='core.SurvexScansFolder'),
),
migrations.AddField(
model_name='scannedimage',
name='survey',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.Survey'),
),
migrations.AddField(
model_name='qm',
name='nearest_station',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.SurvexStation'),
),
migrations.AddField(
model_name='qm',
name='ticked_off_by',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='QMs_ticked_off', to='core.LogbookEntry'),
),
migrations.AddField(
model_name='dphoto',
name='contains_entrance',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='photo_file', to='core.Entrance'),
),
migrations.AddField(
model_name='dphoto',
name='contains_logbookentry',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.LogbookEntry'),
),
migrations.AddField(
model_name='dphoto',
name='contains_person',
field=models.ManyToManyField(blank=True, to='core.Person'),
),
migrations.AddField(
model_name='dphoto',
name='nearest_QM',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.QM'),
),
migrations.AddField(
model_name='cavedescription',
name='linked_entrances',
field=models.ManyToManyField(blank=True, to='core.Entrance'),
),
migrations.AddField(
model_name='cavedescription',
name='linked_qms',
field=models.ManyToManyField(blank=True, to='core.QM'),
),
migrations.AddField(
model_name='cavedescription',
name='linked_subcaves',
field=models.ManyToManyField(blank=True, to='core.NewSubCave'),
),
migrations.AddField(
model_name='caveandentrance',
name='entrance',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.Entrance'),
),
]
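With this initial migration in place, creating the schema is the standard Django sequence (the reload_db() code earlier calls the same 'migrate' command programmatically):

    python manage.py migrate             # create/upgrade the tables
    python manage.py createsuperuser     # optional: interactive admin account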

@@ -10,14 +10,13 @@ from django.db.models import Min, Max
from django.conf import settings
from decimal import Decimal, getcontext
from django.core.urlresolvers import reverse
from imagekit.models import ImageModel
from imagekit.models import ProcessedImageField #ImageModel
from django.template import Context, loader
import settings
getcontext().prec=2 #use 2 significant figures for decimal calculations
from troggle.core.models_survex import *
def get_related_by_wikilinks(wiki_text):
found=re.findall(settings.QM_PATTERN,wiki_text)
res=[]
@@ -28,10 +27,10 @@ def get_related_by_wikilinks(wiki_text):
qm=QM.objects.get(found_by__cave_slug__in = cave_slugs,
found_by__date__year = qmdict['year'],
number = qmdict['number'])
res.append(qm)
except QM.DoesNotExist:
print('fail on '+str(wikilink))
return res
try:
@@ -39,8 +38,10 @@ try:
filename=settings.LOGFILE,
filemode='w')
except:
# Opening of file for writing is going to fail currently, so decide it doesn't matter for now
pass
subprocess.call(settings.FIX_PERMISSIONS)
logging.basicConfig(level=logging.DEBUG,
filename=settings.LOGFILE,
filemode='w')
#This class is for adding fields and methods which all of our models will have.
class TroggleModel(models.Model):
@@ -57,7 +58,7 @@ class TroggleModel(models.Model):
class TroggleImageModel(models.Model):
new_since_parsing = models.BooleanField(default=False, editable=False)
def object_name(self):
return self._meta.object_name
@@ -68,23 +69,23 @@ class TroggleImageModel(models.Model):
class Meta:
abstract = True
#
# single Expedition, usually seen by year
#
class Expedition(TroggleModel):
year = models.CharField(max_length=20, unique=True)
name = models.CharField(max_length=100)
def __unicode__(self):
return self.year
class Meta:
ordering = ('-year',)
get_latest_by = 'year'
def get_absolute_url(self):
return urlparse.urljoin(settings.URL_ROOT, reverse('expedition', args=[self.year]))
# construction function. should be moved out
def get_expedition_day(self, date):
expeditiondays = self.expeditionday_set.filter(date=date)
@@ -94,13 +95,13 @@ class Expedition(TroggleModel):
res = ExpeditionDay(expedition=self, date=date)
res.save()
return res
def day_min(self):
res = self.expeditionday_set.all()
return res and res[0] or None
def day_max(self):
res = self.expeditionday_set.all()
return res and res[len(res) - 1] or None
@@ -112,9 +113,12 @@ class ExpeditionDay(TroggleModel):
ordering = ('date',)
def GetPersonTrip(self, personexpedition):
personexpeditions = self.persontrip_set.filter(expeditionday=self)
return personexpeditions and personexpeditions[0] or None
def __unicode__(self):
return str(self.expedition) + ' ' + str(self.date)
#
# single Person, can go on many years
#
@@ -125,23 +129,23 @@ class Person(TroggleModel):
is_vfho = models.BooleanField(help_text="VFHO is the Vereines f&uuml;r H&ouml;hlenkunde in Obersteier, a nearby Austrian caving club.", default=False)
mug_shot = models.CharField(max_length=100, blank=True,null=True)
blurb = models.TextField(blank=True,null=True)
#href = models.CharField(max_length=200)
orderref = models.CharField(max_length=200) # for alphabetic
user = models.OneToOneField(User, null=True, blank=True)
def get_absolute_url(self):
return urlparse.urljoin(settings.URL_ROOT,reverse('person',kwargs={'first_name':self.first_name,'last_name':self.last_name}))
class Meta:
verbose_name_plural = "People"
ordering = ('orderref',) # "Wookey" makes too complex for: ('last_name', 'first_name')
def __unicode__(self):
if self.last_name:
return "%s %s" % (self.first_name, self.last_name)
return self.first_name
def notability(self):
notability = Decimal(0)
max_expo_val = 0
@@ -151,21 +155,21 @@ class Person(TroggleModel):
for personexpedition in self.personexpedition_set.all():
if not personexpedition.is_guest:
print(personexpedition.expedition.year)
# print(personexpedition.expedition.year)
notability += Decimal(1) / (max_expo_val - int(personexpedition.expedition.year))
return notability
def bisnotable(self):
return self.notability() > Decimal(1)/Decimal(3)
def surveyedleglength(self):
return sum([personexpedition.surveyedleglength() for personexpedition in self.personexpedition_set.all()])
def first(self):
return self.personexpedition_set.order_by('-expedition')[0]
def last(self):
return self.personexpedition_set.order_by('expedition')[0]
#def Sethref(self):
#if self.last_name:
#self.href = self.first_name.lower() + "_" + self.last_name.lower()
@@ -174,7 +178,7 @@ class Person(TroggleModel):
# self.href = self.first_name.lower()
#self.orderref = self.first_name
#self.notability = 0.0 # set temporarily
#
# Person's attendance to one Expo
@@ -183,8 +187,8 @@ class PersonExpedition(TroggleModel):
expedition = models.ForeignKey(Expedition)
person = models.ForeignKey(Person)
slugfield = models.SlugField(max_length=50,blank=True,null=True)
is_guest = models.BooleanField(default=False)
COMMITTEE_CHOICES = (
('leader','Expo leader'),
('medical','Expo medical officer'),
@@ -194,7 +198,7 @@ class PersonExpedition(TroggleModel):
)
expo_committee_position = models.CharField(blank=True,null=True,choices=COMMITTEE_CHOICES,max_length=200)
nickname = models.CharField(max_length=100,blank=True,null=True)
def GetPersonroles(self):
res = [ ]
for personrole in self.personrole_set.order_by('survexblock'):
@@ -210,8 +214,8 @@ class PersonExpedition(TroggleModel):
def __unicode__(self):
return "%s: (%s)" % (self.person, self.expedition)
#why is the below a function in personexpedition, rather than in person? - AC 14 Feb 09
def name(self):
if self.nickname:
@@ -222,11 +226,11 @@ class PersonExpedition(TroggleModel):
def get_absolute_url(self):
return urlparse.urljoin(settings.URL_ROOT, reverse('personexpedition',kwargs={'first_name':self.person.first_name,'last_name':self.person.last_name,'year':self.expedition.year}))
def surveyedleglength(self):
survexblocks = [personrole.survexblock for personrole in self.personrole_set.all() ]
return sum([survexblock.totalleglength for survexblock in set(survexblocks)])
# would prefer to return actual person trips so we could link to first and last ones
def day_min(self):
res = self.persontrip_set.aggregate(day_min=Min("expeditionday__date"))
@@ -238,7 +242,7 @@ class PersonExpedition(TroggleModel):
#
# Single parsed entry from Logbook
#
class LogbookEntry(TroggleModel):
LOGBOOK_ENTRY_TYPES = (
@@ -246,7 +250,7 @@ class LogbookEntry(TroggleModel):
("html", "Html style logbook")
)
date = models.DateField()#MJG wants to turn this into a datetime such that multiple Logbook entries on the same day can be ordered.ld()
date = models.DateTimeField()#MJG wants to turn this into a datetime such that multiple Logbook entries on the same day can be ordered.ld()
expeditionday = models.ForeignKey("ExpeditionDay", null=True)#MJG wants to KILL THIS (redundant information)
expedition = models.ForeignKey(Expedition,blank=True,null=True) # yes this is double-
title = models.CharField(max_length=settings.MAX_LOGBOOK_ENTRY_TITLE_LENGTH)
@@ -261,7 +265,7 @@ class LogbookEntry(TroggleModel):
verbose_name_plural = "Logbook Entries"
# several PersonTrips point in to this object
ordering = ('-date',)
def __getattribute__(self, item):
if item == "cave": #Allow a logbookentries cave to be directly accessed despite not having a proper foreignkey
return CaveSlug.objects.get(slug = self.cave_slug).cave
@@ -310,18 +314,18 @@ class LogbookEntry(TroggleModel):
#
class PersonTrip(TroggleModel):
personexpedition = models.ForeignKey("PersonExpedition",null=True)
#expeditionday = models.ForeignKey("ExpeditionDay")#MJG wants to KILL THIS (redundant information)
#date = models.DateField() #MJG wants to KILL THIS (redundant information)
time_underground = models.FloatField(help_text="In decimal hours")
logbook_entry = models.ForeignKey(LogbookEntry)
is_logbook_entry_author = models.BooleanField(default=False)
# sequencing by person (difficult to solve locally)
#persontrip_next = models.ForeignKey('PersonTrip', related_name='pnext', blank=True,null=True)#MJG wants to KILL THIS (and use function persontrip_next_auto)
#persontrip_prev = models.ForeignKey('PersonTrip', related_name='pprev', blank=True,null=True)#MJG wants to KILL THIS (and use function persontrip_prev_auto)
def persontrip_next(self):
futurePTs = PersonTrip.objects.filter(personexpedition = self.personexpedition, logbook_entry__date__gt = self.logbook_entry.date).order_by('logbook_entry__date').all()
if len(futurePTs) > 0:
@@ -341,7 +345,7 @@ class PersonTrip(TroggleModel):
def __unicode__(self):
return "%s (%s)" % (self.personexpedition, self.logbook_entry.date)
##########################################
@@ -368,19 +372,22 @@ class CaveAndEntrance(models.Model):
cave = models.ForeignKey('Cave')
entrance = models.ForeignKey('Entrance')
entrance_letter = models.CharField(max_length=20,blank=True,null=True)
def __unicode__(self):
return unicode(self.cave) + unicode(self.entrance_letter)
class CaveSlug(models.Model):
cave = models.ForeignKey('Cave')
slug = models.SlugField(max_length=50, unique = True)
primary = models.BooleanField(default=False)
def __unicode__(self):
return self.slug
class Cave(TroggleModel):
# too much here perhaps,
official_name = models.CharField(max_length=160)
area = models.ManyToManyField(Area, blank=True, null=True)
area = models.ManyToManyField(Area, blank=True)
kataster_code = models.CharField(max_length=20,blank=True,null=True)
kataster_number = models.CharField(max_length=10,blank=True, null=True)
unofficial_number = models.CharField(max_length=60,blank=True, null=True)
@@ -404,13 +411,13 @@ class Cave(TroggleModel):
#class Meta:
# unique_together = (("area", "kataster_number"), ("area", "unofficial_number"))
# FIXME Kataster Areas and CUCC defined sub areas need separating
#href = models.CharField(max_length=100)
class Meta:
ordering = ('kataster_code', 'unofficial_number')
def hassurvey(self):
if not self.underground_centre_line:
@@ -425,7 +432,7 @@ class Cave(TroggleModel):
if self.survex_file:
return "Yes"
return "Missing"
def slug(self):
primarySlugs = self.caveslug_set.filter(primary = True)
if primarySlugs:
@@ -443,7 +450,7 @@ class Cave(TroggleModel):
return "%s-%s" % (self.kat_area(), self.kataster_number)
else:
return "%s-%s" % (self.kat_area(), self.unofficial_number)
def get_absolute_url(self):
if self.kataster_number:
href = self.kataster_number
@@ -455,10 +462,10 @@ class Cave(TroggleModel):
return urlparse.urljoin(settings.URL_ROOT, reverse('cave',kwargs={'cave_id':href,}))
def __unicode__(self, sep = u": "):
return unicode("slug:"+self.slug())
return unicode(self.slug())
def get_QMs(self):
return QM.objects.filter(found_by__cave_slug=self.caveslug_set.all())
return QM.objects.filter(nearest_station__block__cave__caveslug=self.caveslug_set.all())
def new_QM_number(self, year=datetime.date.today().year):
"""Given a cave and the current year, returns the next QM number."""
@@ -472,13 +479,13 @@ class Cave(TroggleModel):
for a in self.area.all():
if a.kat_area():
return a.kat_area()
def entrances(self):
return CaveAndEntrance.objects.filter(cave=self)
def singleentrance(self):
return len(CaveAndEntrance.objects.filter(cave=self)) == 1
def entrancelist(self):
rs = []
res = ""
@@ -506,12 +513,12 @@ class Cave(TroggleModel):
else:
res += "&ndash;" + prevR
return res
def writeDataFile(self):
try:
f = open(os.path.join(settings.CAVEDESCRIPTIONS, self.filename), "w")
except:
subprocess.call(settings.FIX_PERMISSIONS)
f = open(os.path.join(settings.CAVEDESCRIPTIONS, self.filename), "w")
t = loader.get_template('dataformat/cave.xml')
c = Context({'cave': self})
@@ -519,7 +526,7 @@ class Cave(TroggleModel):
u8 = u.encode("utf-8")
f.write(u8)
f.close()
def getArea(self):
areas = self.area.all()
lowestareas = list(areas)
@@ -548,7 +555,7 @@ class OtherCaveName(TroggleModel):
cave = models.ForeignKey(Cave)
def __unicode__(self):
return unicode(self.name)
class EntranceSlug(models.Model):
entrance = models.ForeignKey('Entrance')
slug = models.SlugField(max_length=50, unique = True)
@@ -599,31 +606,35 @@ class Entrance(TroggleModel):
def exact_location(self):
return SurvexStation.objects.lookup(self.exact_station)
def other_location(self):
return SurvexStation.objects.lookup(self.other_station)
def find_location(self):
r = {'': 'To be entered ',
'?': 'To be confirmed:',
'S': '',
'L': 'Lost:',
'R': 'Refindable:'}[self.findability]
if self.tag_station:
try:
s = SurvexStation.objects.lookup(self.tag_station)
s = SurvexStation.objects.filter(name=self.tag_station)[:1]
s = s[0]
return r + "%0.0fE %0.0fN %0.0fAlt" % (s.x, s.y, s.z)
except:
return r + "%s Tag Station not in dataset" % self.tag_station
if self.exact_station:
try:
s = SurvexStation.objects.lookup(self.exact_station)
s = SurvexStation.objects.filter(name=self.exact_station)[:1]
s = s[0]
return r + "%0.0fE %0.0fN %0.0fAlt" % (s.x, s.y, s.z)
except:
return r + "%s Exact Station not in dataset" % self.tag_station
if self.other_station:
try:
s = SurvexStation.objects.lookup(self.other_station)
s = SurvexStation.objects.filter(name=self.other_station)[:1]
s = s[0]
return r + "%0.0fE %0.0fN %0.0fAlt %s" % (s.x, s.y, s.z, self.other_description)
except:
return r + "%s Other Station not in dataset" % self.tag_station
@@ -658,28 +669,28 @@ class Entrance(TroggleModel):
for f in self.FINDABLE_CHOICES:
if f[0] == self.findability:
return f[1]
def tag(self):
return SurvexStation.objects.lookup(self.tag_station)
def needs_surface_work(self):
return self.findability != "S" or not self.has_photo or self.marking != "T"
def get_absolute_url(self):
ancestor_titles='/'.join([subcave.title for subcave in self.get_ancestors()])
if ancestor_titles:
res = '/'.join((self.get_root().cave.get_absolute_url(), ancestor_titles, self.title))
else:
res = '/'.join((self.get_root().cave.get_absolute_url(), self.title))
return res
def slug(self):
if not self.cached_primary_slug:
primarySlugs = self.entranceslug_set.filter(primary = True)
if primarySlugs:
self.cached_primary_slug = primarySlugs[0].slug
self.save()
else:
@@ -693,7 +704,7 @@ class Entrance(TroggleModel):
try:
f = open(os.path.join(settings.ENTRANCEDESCRIPTIONS, self.filename), "w")
except:
subprocess.call(settings.FIX_PERMISSIONS)
f = open(os.path.join(settings.ENTRANCEDESCRIPTIONS, self.filename), "w")
t = loader.get_template('dataformat/entrance.xml')
c = Context({'entrance': self})
@@ -706,19 +717,19 @@ class CaveDescription(TroggleModel):
short_name = models.CharField(max_length=50, unique = True)
long_name = models.CharField(max_length=200, blank=True, null=True)
description = models.TextField(blank=True,null=True)
linked_subcaves = models.ManyToManyField("NewSubCave", blank=True,null=True)
linked_entrances = models.ManyToManyField("Entrance", blank=True,null=True)
linked_qms = models.ManyToManyField("QM", blank=True,null=True)
linked_subcaves = models.ManyToManyField("NewSubCave", blank=True)
linked_entrances = models.ManyToManyField("Entrance", blank=True)
linked_qms = models.ManyToManyField("QM", blank=True)
def __unicode__(self):
if self.long_name:
return unicode(self.long_name)
else:
return unicode(self.short_name)
def get_absolute_url(self):
return urlparse.urljoin(settings.URL_ROOT, reverse('cavedescription', args=(self.short_name,)))
def save(self):
"""
Overridden save method which stores wikilinks in text as links in database.
@@ -735,20 +746,20 @@ class NewSubCave(TroggleModel):
return unicode(self.name)
class QM(TroggleModel):
# based on qm.csv in trunk/expoweb/1623/204 which has the fields:
# "Number","Grade","Area","Description","Page reference","Nearest station","Completion description","Comment"
found_by = models.ForeignKey(LogbookEntry, related_name='QMs_found',blank=True, null=True )
ticked_off_by = models.ForeignKey(LogbookEntry, related_name='QMs_ticked_off',null=True,blank=True)
# cave = models.ForeignKey(Cave)
# expedition = models.ForeignKey(Expedition)
number = models.IntegerField(help_text="this is the sequential number in the year", )
GRADE_CHOICES=(
('A', 'A: Large obvious lead'),
('B', 'B: Average lead'),
('C', 'C: Tight unpromising lead'),
('D', 'D: Dig'),
('X', 'X: Unclimbable aven')
)
grade = models.CharField(max_length=1, choices=GRADE_CHOICES)
location_description = models.TextField(blank=True)
@@ -763,11 +774,19 @@ class QM(TroggleModel):
return u"%s %s" % (self.code(), self.grade)
def code(self):
return u"%s-%s-%s" % (unicode(self.found_by.cave)[6:], self.found_by.date.year, self.number)
if self.found_by:
# Old style QMs where found_by is a logbook entry
return u"%s-%s-%s" % (unicode(self.found_by.cave)[6:], self.found_by.date.year, self.number)
elif self.nearest_station:
# New style QMs where QMs are stored in SVX files and nearest station is a foreign key
return u"%s-%s-%s" % (self.nearest_station.block.name, self.nearest_station.name, self.number)
else:
# Just give up!!
return u"%s" % (self.number)
def get_absolute_url(self):
#return settings.URL_ROOT + '/cave/' + self.found_by.cave.kataster_number + '/' + str(self.found_by.date.year) + '-' + '%02d' %self.number
return urlparse.urljoin(settings.URL_ROOT, reverse('qm',kwargs={'cave_id':self.found_by.cave.kataster_number,'year':self.found_by.date.year,'qm_id':self.number,'grade':self.grade}))
return urlparse.urljoin(settings.URL_ROOT, reverse('qm',kwargs={'qm_id':self.id}))
def get_next_by_id(self):
return QM.objects.get(id=self.id+1)
@@ -778,32 +797,31 @@ class QM(TroggleModel):
def wiki_link(self):
return u"%s%s%s" % ('[[QM:',self.code(),']]')
#photoFileStorage = FileSystemStorage(location=settings.PHOTOS_ROOT, base_url=settings.PHOTOS_URL)
#class DPhoto(TroggleImageModel):
#caption = models.CharField(max_length=1000,blank=True,null=True)
#contains_logbookentry = models.ForeignKey(LogbookEntry,blank=True,null=True)
#contains_person = models.ManyToManyField(Person,blank=True,null=True)
# replace link to copied file with link to original file location
#file = models.ImageField(storage=photoFileStorage, upload_to='.',)
#is_mugshot = models.BooleanField(default=False)
#contains_cave = models.ForeignKey(Cave,blank=True,null=True)
#contains_entrance = models.ForeignKey(Entrance, related_name="photo_file",blank=True,null=True)
photoFileStorage = FileSystemStorage(location=settings.PHOTOS_ROOT, base_url=settings.PHOTOS_URL)
class DPhoto(TroggleImageModel):
caption = models.CharField(max_length=1000,blank=True,null=True)
contains_logbookentry = models.ForeignKey(LogbookEntry,blank=True,null=True)
contains_person = models.ManyToManyField(Person,blank=True)
file = models.ImageField(storage=photoFileStorage, upload_to='.',)
is_mugshot = models.BooleanField(default=False)
contains_cave = models.ForeignKey(Cave,blank=True,null=True)
contains_entrance = models.ForeignKey(Entrance, related_name="photo_file",blank=True,null=True)
#nearest_survey_point = models.ForeignKey(SurveyStation,blank=True,null=True)
#nearest_QM = models.ForeignKey(QM,blank=True,null=True)
#lon_utm = models.FloatField(blank=True,null=True)
#lat_utm = models.FloatField(blank=True,null=True)
# class IKOptions:
# spec_module = 'core.imagekit_specs'
# cache_dir = 'thumbs'
# image_field = 'file'
nearest_QM = models.ForeignKey(QM,blank=True,null=True)
lon_utm = models.FloatField(blank=True,null=True)
lat_utm = models.FloatField(blank=True,null=True)
class IKOptions:
spec_module = 'core.imagekit_specs'
cache_dir = 'thumbs'
image_field = 'file'
#content_type = models.ForeignKey(ContentType)
#object_id = models.PositiveIntegerField()
#location = generic.GenericForeignKey('content_type', 'object_id')
# def __unicode__(self):
# return self.caption
def __unicode__(self):
return self.caption
scansFileStorage = FileSystemStorage(location=settings.SURVEY_SCANS, base_url=settings.SURVEYS_URL)
def get_scan_path(instance, filename):
@@ -814,7 +832,7 @@ def get_scan_path(instance, filename):
number=str(instance.survey.wallet_letter) + number #two strings formatting because convention is 2009#01 or 2009#X01
return os.path.join('./',year,year+r'#'+number,str(instance.contents)+str(instance.number_in_wallet)+r'.jpg')
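# Example of the resulting path (values invented): a 2009 wallet
# lettered "X" and numbered "01", holding a notes scan numbered 2,
# is stored as ./2009/2009#X01/notes2.jpg, per the 2009#01 / 2009#X01
# wallet convention noted above.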
class ScannedImage(TroggleImageModel):
file = models.ImageField(storage=scansFileStorage, upload_to=get_scan_path)
scanned_by = models.ForeignKey(Person,blank=True, null=True)
scanned_on = models.DateField(null=True)
@@ -857,8 +875,9 @@ class Survey(TroggleModel):
integrated_into_main_sketch_on = models.DateField(blank=True,null=True)
integrated_into_main_sketch_by = models.ForeignKey('Person' ,related_name='integrated_into_main_sketch_by', blank=True,null=True)
rendered_image = models.ImageField(upload_to='renderedSurveys',blank=True,null=True)
def __unicode__(self):
return self.expedition.year+"#"+"%02d" % int(self.wallet_number)
return self.expedition.year+"#" + "%s%02d" % (self.wallet_letter, int(self.wallet_number))
def notes(self):
return self.scannedimage_set.filter(contents='notes')


@@ -9,7 +9,7 @@ from django.core.urlresolvers import reverse
###########################################################
# These will allow browsing and editing of the survex data
###########################################################
# Needs to add:
# Equates
# reloading
@@ -18,29 +18,37 @@ class SurvexDirectory(models.Model):
cave = models.ForeignKey('Cave', blank=True, null=True)
primarysurvexfile = models.ForeignKey('SurvexFile', related_name='primarysurvexfile', blank=True, null=True)
# could also include files in directory but not referenced
def __unicode__(self):
return self.path
class Meta:
ordering = ('id',)
ordering = ('path',)
class SurvexFile(models.Model):
path = models.CharField(max_length=200)
survexdirectory = models.ForeignKey("SurvexDirectory", blank=True, null=True)
cave = models.ForeignKey('Cave', blank=True, null=True)
class Meta:
ordering = ('id',)
def __unicode__(self):
return self.path + '.svx' or 'no file'
def exists(self):
fname = os.path.join(settings.SURVEX_DATA, self.path + ".svx")
return os.path.isfile(fname)
def OpenFile(self):
fname = os.path.join(settings.SURVEX_DATA, self.path + ".svx")
return open(fname)
def SetDirectory(self):
dirpath = os.path.split(self.path)[0]
survexdirectorylist = SurvexDirectory.objects.filter(cave=self.cave, path=dirpath)
# if self.cave is '' or self.cave is None:
# print('No cave set for survex dir %s' % self.path)
if survexdirectorylist:
self.survexdirectory = survexdirectorylist[0]
else:
@@ -59,14 +67,20 @@ class SurvexStationLookUpManager(models.Manager):
name__iexact = stationname)
class SurvexStation(models.Model):
name = models.CharField(max_length=100)
block = models.ForeignKey('SurvexBlock')
equate = models.ForeignKey('SurvexEquate', blank=True, null=True)
objects = SurvexStationLookUpManager()
x = models.FloatField(blank=True, null=True)
y = models.FloatField(blank=True, null=True)
z = models.FloatField(blank=True, null=True)
def __unicode__(self):
if self.block.cave:
# If we haven't got a cave we can't have a slug, saves a nonetype return
return self.block.cave.slug() + '/' + self.block.name + '/' + self.name or 'No station name'
else:
return str(self.block.cave) + '/' + self.block.name + '/' + self.name or 'No station name'
def path(self):
r = self.name
b = self.block
@@ -89,8 +103,8 @@ class SurvexLeg(models.Model):
#
# Single SurvexBlock
#
class SurvexBlockLookUpManager(models.Manager):
def lookup(self, name):
if name == "":
@@ -108,20 +122,20 @@ class SurvexBlock(models.Model):
parent = models.ForeignKey('SurvexBlock', blank=True, null=True)
text = models.TextField()
cave = models.ForeignKey('Cave', blank=True, null=True)
date = models.DateField(blank=True, null=True)
date = models.DateTimeField(blank=True, null=True)
expeditionday = models.ForeignKey("ExpeditionDay", null=True)
expedition = models.ForeignKey('Expedition', blank=True, null=True)
survexfile = models.ForeignKey("SurvexFile", blank=True, null=True)
begin_char = models.IntegerField() # code for where in the survex data files this block sits
survexpath = models.CharField(max_length=200) # the path for the survex stations
survexscansfolder = models.ForeignKey("SurvexScansFolder", null=True)
#refscandir = models.CharField(max_length=100)
totalleglength = models.FloatField()
class Meta:
ordering = ('id',)
@@ -130,7 +144,7 @@ class SurvexBlock(models.Model):
def __unicode__(self):
return self.name and unicode(self.name) or 'no name'
def GetPersonroles(self):
res = [ ]
for personrole in self.personrole_set.order_by('personexpedition'):
@@ -147,12 +161,12 @@ class SurvexBlock(models.Model):
return ssl[0]
#print name
ss = SurvexStation(name=name, block=self)
#ss.save()
ss.save()
return ss
def DayIndex(self):
return list(self.expeditionday.survexblock_set.all()).index(self)
class SurvexTitle(models.Model):
survexblock = models.ForeignKey('SurvexBlock')
@@ -177,45 +191,45 @@ ROLE_CHOICES = (
class SurvexPersonRole(models.Model):
survexblock = models.ForeignKey('SurvexBlock')
nrole = models.CharField(choices=ROLE_CHOICES, max_length=200, blank=True, null=True)
# increasing levels of precision
personname = models.CharField(max_length=100)
person = models.ForeignKey('Person', blank=True, null=True)
personexpedition = models.ForeignKey('PersonExpedition', blank=True, null=True)
persontrip = models.ForeignKey('PersonTrip', blank=True, null=True)
expeditionday = models.ForeignKey("ExpeditionDay", null=True)
def __unicode__(self):
return unicode(self.person) + " - " + unicode(self.survexblock) + " - " + unicode(self.nrole)
class SurvexScansFolder(models.Model):
fpath = models.CharField(max_length=200)
walletname = models.CharField(max_length=200)
class Meta:
ordering = ('walletname',)
def __unicode__(self):
return self.walletname or 'no wallet'
def get_absolute_url(self):
return urlparse.urljoin(settings.URL_ROOT, reverse('surveyscansfolder', kwargs={"path":re.sub("#", "%23", self.walletname)}))
def __unicode__(self):
return unicode(self.walletname) + " (Survey Scans Folder)"
class SurvexScanSingle(models.Model):
ffile = models.CharField(max_length=200)
name = models.CharField(max_length=200)
survexscansfolder = models.ForeignKey("SurvexScansFolder", null=True)
class Meta:
ordering = ('name',)
def __unicode__(self):
return self.survexscansfolder.walletname + '/' + self.name
def get_absolute_url(self):
return urlparse.urljoin(settings.URL_ROOT, reverse('surveyscansingle', kwargs={"path":re.sub("#", "%23", self.survexscansfolder.walletname), "file":self.name}))
def __unicode__(self):
return "Survey Scan Image: " + unicode(self.name) + " in " + unicode(self.survexscansfolder)
class TunnelFile(models.Model):
tunnelpath = models.CharField(max_length=200)
tunnelname = models.CharField(max_length=200)
@@ -227,8 +241,8 @@ class TunnelFile(models.Model):
filesize = models.IntegerField(default=0)
npaths = models.IntegerField(default=0)
survextitles = models.ManyToManyField("SurvexTitle")
class Meta:
ordering = ('tunnelpath',)


@@ -3,7 +3,7 @@ from django.utils.html import conditional_escape
from django.template.defaultfilters import stringfilter
from django.utils.safestring import mark_safe
from django.conf import settings
from troggle.core.models import QM, LogbookEntry, Cave
from troggle.core.models import QM, DPhoto, LogbookEntry, Cave
import re, urlparse
register = template.Library()
@@ -120,13 +120,13 @@ def wiki_to_html_short(value, autoescape=None):
except KeyError:
linkText=None
# try:
# photo=DPhoto.objects.get(file=matchdict['photoName'])
# if not linkText:
# linkText=str(photo)
# res=r'<a href=' + photo.get_admin_url() +'>' + linkText + '</a>'
# except Photo.DoesNotExist:
# res = r'<a class="redtext" href="">make new photo</a>'
try:
photo=DPhoto.objects.get(file=matchdict['photoName'])
if not linkText:
linkText=str(photo)
res=r'<a href=' + photo.get_admin_url() +'>' + linkText + '</a>'
except DPhoto.DoesNotExist:
res = r'<a class="redtext" href="">make new photo</a>'
return res
def photoSrcRepl(matchobj):


@@ -1,6 +1,6 @@
from django.conf import settings
import fileAbstraction
from django.shortcuts import render_to_response
from django.shortcuts import render
from django.http import HttpResponse, Http404
import os, stat
import re
@@ -8,7 +8,7 @@ from troggle.core.models import SurvexScansFolder, SurvexScanSingle, SurvexBlock
import parsers.surveys
import urllib
# inline fileabstraction into here if it's not going to be useful anywhere else
# keep things simple and ignore exceptions everywhere for now
@@ -33,7 +33,7 @@ def upload(request, path):
def download(request, path):
#try:
return HttpResponse(fileAbstraction.readFile(path), content_type=getMimeType(path.split(".")[-1]))
#except:
# raise Http404
@@ -49,32 +49,32 @@ extmimetypes = {".txt": "text/plain",
".jpg": "image/jpeg",
".jpeg": "image/jpeg",
}
# dead
def jgtfile(request, f):
fp = os.path.join(settings.SURVEYS, f)
# could also surf through SURVEX_DATA
# directory listing
if os.path.isdir(fp):
listdirfiles = [ ]
listdirdirs = [ ]
for lf in sorted(os.listdir(fp)):
hpath = os.path.join(f, lf) # not absolute path
if lf[0] == "." or lf[-1] == "~":
continue
hpath = hpath.replace("\\", "/") # for windows users
href = hpath.replace("#", "%23") # '#' in file name annoyance
flf = os.path.join(fp, lf)
if os.path.isdir(flf):
nfiles = len([sf for sf in os.listdir(flf) if sf[0] != "."])
listdirdirs.append((href, hpath + "/", nfiles))
else:
listdirfiles.append((href, hpath, os.path.getsize(flf)))
upperdirs = [ ]
lf = f
while lf:
@@ -85,9 +85,9 @@ def jgtfile(request, f):
lf = os.path.split(lf)[0]
upperdirs.append((href, hpath))
upperdirs.append(("", "/"))
return render_to_response('listdir.html', {'file':f, 'listdirfiles':listdirfiles, 'listdirdirs':listdirdirs, 'upperdirs':upperdirs, 'settings': settings})
return render(request, 'listdir.html', {'file':f, 'listdirfiles':listdirfiles, 'listdirdirs':listdirdirs, 'upperdirs':upperdirs, 'settings': settings})
# flat output of file when loaded
if os.path.isfile(fp):
ext = os.path.splitext(fp)[1].lower()
@@ -123,16 +123,16 @@ def SaveImageInDir(name, imgdir, project, fdata, bbinary):
print "*** Making directory", fprojdir
os.mkdir(fprojdir)
print "hhh"
fname = os.path.join(fprojdir, name)
print fname, "fff"
fname = UniqueFile(fname)
p2, p1 = os.path.split(fname)
p3, p2 = os.path.split(p2)
p4, p3 = os.path.split(p3)
res = os.path.join(p3, p2, p1)
print "saving file", fname
fout = open(fname, (bbinary and "wb" or "w"))
fout.write(fdata.read())
@@ -163,73 +163,73 @@ def jgtuploadfile(request):
#print ("FFF", request.FILES.values())
message = ""
print "gothere"
return render_to_response('fileupload.html', {'message':message, 'filesuploaded':filesuploaded, 'settings': settings})
return render(request, 'fileupload.html', {'message':message, 'filesuploaded':filesuploaded, 'settings': settings})
def surveyscansfolder(request, path):
#print [ s.walletname for s in SurvexScansFolder.objects.all() ]
survexscansfolder = SurvexScansFolder.objects.get(walletname=urllib.unquote(path))
return render_to_response('survexscansfolder.html', { 'survexscansfolder':survexscansfolder, 'settings': settings })
return render(request, 'survexscansfolder.html', { 'survexscansfolder':survexscansfolder, 'settings': settings })
def surveyscansingle(request, path, file):
survexscansfolder = SurvexScansFolder.objects.get(walletname=urllib.unquote(path))
survexscansingle = SurvexScanSingle.objects.get(survexscansfolder=survexscansfolder, name=file)
return HttpResponse(content=open(survexscansingle.ffile), content_type=getMimeType(path.split(".")[-1]))
#return render_to_response('survexscansfolder.html', { 'survexscansfolder':survexscansfolder, 'settings': settings })
#return render(request, 'survexscansfolder.html', { 'survexscansfolder':survexscansfolder, 'settings': settings })
def surveyscansfolders(request):
survexscansfolders = SurvexScansFolder.objects.all()
return render_to_response('survexscansfolders.html', { 'survexscansfolders':survexscansfolders, 'settings': settings })
return render(request, 'survexscansfolders.html', { 'survexscansfolders':survexscansfolders, 'settings': settings })
def tunneldata(request):
tunnelfiles = TunnelFile.objects.all()
return render_to_response('tunnelfiles.html', { 'tunnelfiles':tunnelfiles, 'settings': settings })
return render(request, 'tunnelfiles.html', { 'tunnelfiles':tunnelfiles, 'settings': settings })
def tunnelfile(request, path):
tunnelfile = TunnelFile.objects.get(tunnelpath=urllib.unquote(path))
tfile = os.path.join(settings.TUNNEL_DATA, tunnelfile.tunnelpath)
return HttpResponse(content=open(tfile), content_type="text/plain")
def tunnelfileupload(request, path):
tunnelfile = TunnelFile.objects.get(tunnelpath=urllib.unquote(path))
tfile = os.path.join(settings.TUNNEL_DATA, tunnelfile.tunnelpath)
project, user, password, tunnelversion = request.POST["tunnelproject"], request.POST["tunneluser"], request.POST["tunnelpassword"], request.POST["tunnelversion"]
print (project, user, tunnelversion)
assert len(request.FILES.values()) == 1, "only one file to upload"
uploadedfile = request.FILES.values()[0]
if uploadedfile.field_name != "sketch":
return HttpResponse(content="Error: non-sketch file uploaded", content_type="text/plain")
if uploadedfile.content_type != "text/plain":
return HttpResponse(content="Error: non-plain content type", content_type="text/plain")
# could use this to add new files
if os.path.split(path)[1] != uploadedfile.name:
return HttpResponse(content="Error: name disagrees", content_type="text/plain")
orgsize = tunnelfile.filesize # = os.stat(tfile)[stat.ST_SIZE]
ttext = uploadedfile.read()
# could check that the user and projects agree here
fout = open(tfile, "w")
fout.write(ttext)
fout.close()
# redo its settings of
parsers.surveys.SetTunnelfileInfo(tunnelfile)
tunnelfile.save()
uploadedfile.close()
message = "File size %d overwritten with size %d" % (orgsize, tunnelfile.filesize)
return HttpResponse(content=message, content_type="text/plain")

core/views_caves.py Executable file → Normal file

@@ -1,7 +1,5 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import (absolute_import, division,
print_function, unicode_literals)
from troggle.core.models import CaveSlug, Cave, CaveAndEntrance, Survey, Expedition, QM, CaveDescription, EntranceSlug, Entrance, Area, SurvexStation
from troggle.core.forms import CaveForm, CaveAndEntranceFormSet, VersionControlCommentForm, EntranceForm, EntranceLetterForm
@@ -9,44 +7,18 @@ import troggle.core.models as models
import troggle.settings as settings
from troggle.helper import login_required_if_public
from PIL import Image, ImageDraw, ImageFont
from django.forms.models import modelformset_factory
from django import forms
from django.core.urlresolvers import reverse
from django.http import HttpResponse, HttpResponseRedirect
from django.conf import settings
import re
import os
import urlparse
#import urllib.parse
import re, urlparse
from django.shortcuts import get_object_or_404, render
import settings
class MapLocations(object):
p = [
("laser.0_7", "BNase", "Reference", "Br&auml;uning Nase laser point"),
("226-96", "BZkn", "Reference", "Br&auml;uning Zinken trig point"),
("vd1","VD1","Reference", "VD1 survey point"),
("laser.kt114_96","HSK","Reference", "Hinterer Schwarzmooskogel trig point"),
("2000","Nipple","Reference", "Nipple (Wei&szlig;e Warze)"),
("3000","VSK","Reference", "Vorderer Schwarzmooskogel summit"),
("topcamp", "OTC", "Reference", "Old Top Camp"),
("laser.0", "LSR0", "Reference", "Laser Point 0"),
("laser.0_1", "LSR1", "Reference", "Laser Point 0/1"),
("laser.0_3", "LSR3", "Reference", "Laser Point 0/3"),
("laser.0_5", "LSR5", "Reference", "Laser Point 0/5"),
("225-96", "BAlm", "Reference", "Br&auml;uning Alm trig point")
]
def points(self):
for ent in Entrance.objects.all():
if ent.best_station():
areaName = ent.caveandentrance_set.all()[0].cave.getArea().short_name
self.p.append((ent.best_station(), "%s-%s" % (areaName, str(ent)[5:]), ent.needs_surface_work(), str(ent)))
return self.p
def __str__(self):
return "{} map locations".format(len(self.p))
from PIL import Image, ImageDraw, ImageFont
import string, os, sys, subprocess
def getCave(cave_id):
"""Returns a cave object when given a cave name or number. It is used by views including cavehref, ent, and qm."""
@@ -57,7 +29,7 @@ def getCave(cave_id):
return cave
def pad5(x):
return "0" * (5 -len(x.group(0))) + x.group(0)
return "0" * (5 -len(x.group(0))) + x.group(0)
def padnumber(x):
return re.sub("\d+", pad5, x)
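# Worked example (illustrative): pad5 zero-pads each digit run to five
# characters, so padnumber("9a") -> "00009a" and padnumber("10b") ->
# "00010b"; plain string comparison of the padded forms then orders
# kataster numbers numerically ("9a" before "10b").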
def numericalcmp(x, y):
@@ -65,7 +37,7 @@ def numericalcmp(x, y):
def caveCmp(x, y):
if x.kataster_number:
if y.kataster_number:
return numericalcmp(x.kataster_number, y.kataster_number) # Note that cave kataster numbers are not generally integers.
@@ -74,11 +46,11 @@ def caveCmp(x, y):
else:
if y.kataster_number:
return 1
else:
return numericalcmp(x.unofficial_number, y.unofficial_number)
def caveindex(request):
caves = Cave.objects.all()
#caves = Cave.objects.all()
notablecavehrefs = settings.NOTABLECAVESHREFS
notablecaves = [Cave.objects.get(kataster_number=kataster_number) for kataster_number in notablecavehrefs ]
caves1623 = list(Cave.objects.filter(area__short_name = "1623"))
@@ -90,7 +62,6 @@ def caveindex(request):
def millenialcaves(request):
#RW messing around area
return HttpResponse("Test text", content_type="text/plain")
def cave3d(request, cave_id=''):
@@ -134,6 +105,7 @@ def caveQMs(request, slug):
return render(request,'nonpublic.html', {'instance': cave})
else:
return render(request,'cave_qms.html', {'cave': cave})
def caveLogbook(request, slug):
cave = Cave.objects.get(caveslug__slug = slug)
if cave.non_public and settings.PUBLIC_SITE and not request.user.is_authenticated():
@@ -181,14 +153,14 @@ def edit_cave(request, slug=None):
ceinst.cave = cave
ceinst.save()
cave.writeDataFile()
return HttpResponseRedirect("/" + cave.url)
return HttpResponseRedirect("/" + cave.url)
else:
form = CaveForm(instance=cave)
ceFormSet = CaveAndEntranceFormSet(queryset=cave.caveandentrance_set.all())
versionControlForm = VersionControlCommentForm()
return render(request,
'editcave2.html',
{'form': form,
'caveAndEntranceFormSet': ceFormSet,
'versionControlForm': versionControlForm
@@ -196,7 +168,7 @@ def edit_cave(request, slug=None):
@login_required_if_public
def editEntrance(request, caveslug, slug=None):
cave = Cave.objects.get(caveslug__slug = caveslug)
if slug is not None:
entrance = Entrance.objects.get(entranceslug__slug = slug)
else:
@@ -223,7 +195,7 @@ def editEntrance(request, caveslug, slug=None):
el.entrance = entrance
el.save()
entrance.writeDataFile()
return HttpResponseRedirect("/" + cave.url)
return HttpResponseRedirect("/" + cave.url)
else:
form = EntranceForm(instance = entrance)
versionControlForm = VersionControlCommentForm()
@@ -231,27 +203,25 @@ def editEntrance(request, caveslug, slug=None):
entletter = EntranceLetterForm(request.POST)
else:
entletter = None
return render(request,
'editentrance.html',
{'form': form,
'versionControlForm': versionControlForm,
'entletter': entletter
})
def qm(request,cave_id,qm_id,year,grade=None):
year=int(year)
def qm(request,qm_id):
try:
qm=getCave(cave_id).get_QMs().get(number=qm_id,found_by__date__year=year)
qm=QM.objects.get(id=qm_id)
return render(request,'qm.html',locals())
except QM.DoesNotExist:
url=urllib.parse.urljoin(settings.URL_ROOT, r'/admin/core/qm/add/'+'?'+ r'number=' + qm_id)
url=urlparse.urljoin(settings.URL_ROOT, r'/admin/core/qm/add/'+'?'+ r'number=' + qm_id)
if grade:
url += r'&grade=' + grade
return HttpResponseRedirect(url)
def ent(request, cave_id, ent_letter):
cave = Cave.objects.filter(kataster_number = cave_id)[0]
cave_and_ent = CaveAndEntrance.objects.filter(cave = cave).filter(entrance_letter = ent_letter)[0]
@@ -283,7 +253,7 @@ def survey(request,year,wallet_number):
surveys=Survey.objects.all()
expeditions=Expedition.objects.order_by("-year")
current_expedition=Expedition.objects.filter(year=year)[0]
if wallet_number!='':
current_survey=Survey.objects.filter(expedition=current_expedition,wallet_number=wallet_number)[0]
notes=current_survey.scannedimage_set.filter(contents='notes')
@@ -305,30 +275,30 @@ def get_qms(request, caveslug):
return render(request,'options.html', {"items": [(e.entrance.slug(), e.entrance.slug()) for e in cave.entrances()]})
areanames = [
#('', 'Location unclear'),
('1a', '1a &ndash; Plateau: around Top Camp'),
('1b', '1b &ndash; Western plateau near 182'),
('1c', '1c &ndash; Eastern plateau near 204 walk-in path'),
('1d', '1d &ndash; Further plateau around 76'),
('2a', '2a &ndash; Southern Schwarzmooskogel near 201 path and the Nipple'),
('2b', '2b &ndash; Eish&ouml;hle area'),
('2b or 4 (unclear)', '2b or 4 (unclear)'),
('2c', '2c &ndash; Kaninchenh&ouml;hle area'),
('2d', '2d &ndash; Steinbr&uuml;ckenh&ouml;hle area'),
('3', '3 &ndash; Br&auml;uning Alm'),
('4', '4 &ndash; Kratzer valley'),
('5', '5 &ndash; Schwarzmoos-Wildensee'),
('6', '6 &ndash; Far plateau'),
('1626 or 6 (borderline)', '1626 or 6 (borderline)'),
('7', '7 &ndash; Egglgrube'),
('8a', '8a &ndash; Loser south face'),
('8b', '8b &ndash; Loser below Dimmelwand'),
('8c', '8c &ndash; Augst See'),
('8d', '8d &ndash; Loser-Hochganger ridge'),
('9', '9 &ndash; Gschwandt Alm'),
('10', '10 &ndash; Altaussee'),
('11', '11 &ndash; Augstbach')
]
def prospecting(request):
@@ -341,29 +311,29 @@ def prospecting(request):
caves.sort(caveCmp)
areas.append((name, a, caves))
return render(request, 'prospecting.html', {"areas": areas})
# Parameters for big map and zoomed subarea maps:
# big map first (zoom factor ignored)
maps = {
# id left top right bottom zoom
# G&K G&K G&K G&K factor
"all": [33810.4, 85436.5, 38192.0, 81048.2, 0.35,
"All"],
"40": [36275.6, 82392.5, 36780.3, 81800.0, 3.0,
"Eish&ouml;hle"],
"76": [35440.0, 83220.0, 36090.0, 82670.0, 1.3,
"Eislufth&ouml;hle"],
"204": [36354.1, 84154.5, 37047.4, 83300, 3.0,
"Steinbr&uuml;ckenh&ouml;hle"],
"tc": [35230.0, 82690.0, 36110.0, 82100.0, 3.0,
"Near Top Camp"],
"grieß":
[36000.0, 86300.0, 38320.0, 84400.0, 4.0,
"Grießkogel Area"],
}
for n in list(maps.keys()):
for n in maps.keys():
L, T, R, B, S, name = maps[n]
W = (R-L)/2
H = (T-B)/2
@@ -371,7 +341,7 @@ for n in list(maps.keys()):
for j in range(2):
maps["%s%i%i" % (n, i, j)] = [L + i * W, T - j * H, L + (i + 1) * W, T - (j + 1) * H, S, name]
# Keys in the order in which we want the maps output
mapcodes = ["all", "grieß","40", "76", "204", "tc"]
mapcodes = ["all", "grieß","40", "76", "204", "tc"]
# Field codes
L = 0
T = 1
@@ -381,61 +351,60 @@ ZOOM = 4
DESC = 5
areacolours = {
'1a' : '#00ffff',
'1b' : '#ff00ff',
'1c' : '#ffff00',
'1d' : '#ffffff',
'2a' : '#ff0000',
'2b' : '#00ff00',
'2c' : '#008800',
'2d' : '#ff9900',
'3' : '#880000',
'4' : '#0000ff',
'6' : '#000000', # doubles for surface fixed pts, and anything else
'7' : '#808080'
}
for FONT in [
"/usr/share/fonts/truetype/freefont/FreeSans.ttf",
"/usr/X11R6/lib/X11/fonts/truetype/arial.ttf",
"/mnt/c/windows/fonts/arial.ttf",
"C:\WINNT\Fonts\ARIAL.TTF"
]:
if os.path.isfile(FONT): break
"/usr/share/fonts/truetype/freefont/FreeSans.ttf",
"/usr/X11R6/lib/X11/fonts/truetype/arial.ttf",
"C:\WINNT\Fonts\ARIAL.TTF"
]:
if os.path.isfile(FONT): break
TEXTSIZE = 16
CIRCLESIZE =8
LINEWIDTH = 2
myFont = ImageFont.truetype(FONT, TEXTSIZE)
def mungecoord(x, y, mapcode, img):
# Top of Zinken is 73 1201 = dataset 34542 81967
# Top of Hinter is 1073 562 = dataset 36670 83317
# image is 1417 by 2201
# FACTOR1 = 1000.0 / (36670.0-34542.0)
# FACTOR2 = (1201.0-562.0) / (83317 - 81967)
# FACTOR = (FACTOR1 + FACTOR2)/2
# The factors aren't the same as the scanned map's at a slight angle. I
# can't be bothered to fix this. Since we zero on the Hinter it makes
# very little difference for caves in the areas round 76 or 204.
# xoffset = (x - 36670)*FACTOR
# yoffset = (y - 83317)*FACTOR
# return (1073 + xoffset, 562 - yoffset)
m = maps[mapcode]
factorX, factorY = img.size[0] / (m[R] - m[L]), img.size[1] / (m[T] - m[B])
return ((x - m[L]) * factorX, (m[T] - y) * factorY)
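# Worked example with invented numbers: for bounds L=36000, R=38000,
# T=86000, B=84000 and a 1000x1000 image, factorX = factorY =
# 1000/2000 = 0.5, so the point (37000, 85000) maps to
# ((37000-36000)*0.5, (86000-85000)*0.5) = (500, 500), the image
# centre; y is inverted so that north ends up at the top.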
COL_TYPES = {True: "red",
False: "#dddddd",
"Reference": "#dddddd"}
def plot(surveypoint, number, point_type, label, mapcode, draw, img):
try:
ss = SurvexStation.objects.lookup(surveypoint)
E, N = ss.x, ss.y
shortnumber = number.replace("&mdash;","")
(x,y) = list(map(int, mungecoord(E, N, mapcode, img)))
(x,y) = map(int, mungecoord(E, N, mapcode, img))
#imgmaps[maparea].append( [x-4, y-SIZE/2, x+4+draw.textsize(shortnumber)[0], y+SIZE/2, shortnumber, label] )
draw.rectangle([(x+CIRCLESIZE, y-TEXTSIZE/2), (x+CIRCLESIZE*2+draw.textsize(shortnumber)[0], y+TEXTSIZE/2)], fill="#ffffff")
draw.text((x+CIRCLESIZE * 1.5,y-TEXTSIZE/2), shortnumber, fill="#000000")
@@ -447,7 +416,7 @@ def prospecting_image(request, name):
mainImage = Image.open(os.path.join(settings.SURVEY_SCANS, "location_maps", "pguidemap.jpg"))
if settings.PUBLIC_SITE and not request.user.is_authenticated():
mainImage = Image.new("RGB", mainImage.size, '#ffffff')
m = maps[name]
#imgmaps = []
if name == "all":
@@ -466,7 +435,7 @@ def prospecting_image(request, name):
draw = ImageDraw.Draw(img)
draw.setfont(myFont)
if name == "all":
for maparea in list(maps.keys()):
for maparea in maps.keys():
if maparea == "all":
continue
localm = maps[maparea]
@@ -496,7 +465,7 @@ def prospecting_image(request, name):
plot("laser.0_7", "BNase", "Reference", "Br&auml;uning Nase laser point", name, draw, img)
plot("226-96", "BZkn", "Reference", "Br&auml;uning Zinken trig point", name, draw, img)
plot("vd1","VD1","Reference", "VD1 survey point", name, draw, img)
plot("laser.kt114_96","HSK","Reference", "Hinterer Schwarzmooskogel trig point", name, draw, img)
plot("laser.kt114_96","HSK","Reference", "Hinterer Schwarzmooskogel trig point", name, draw, img)
plot("2000","Nipple","Reference", "Nipple (Wei&szlig;e Warze)", name, draw, img)
plot("3000","VSK","Reference", "Vorderer Schwarzmooskogel summit", name, draw, img)
plot("topcamp", "TC", "Reference", "Top Camp", name, draw, img)
@@ -510,20 +479,21 @@ def prospecting_image(request, name):
if station:
#try:
areaName = entrance.caveandentrance_set.all()[0].cave.getArea().short_name
plot(station, "%s-%s" % (areaName, str(entrance)[5:]), entrance.needs_surface_work(), str(entrance), name, draw, img)
#except:
# pass
for (N, E, D, num) in [(35975.37, 83018.21, 100,"177"), # Calculated from bearings
(35350.00, 81630.00, 50, "71"), # From Auer map
(36025.00, 82475.00, 50, "146"), # From mystery map
(35600.00, 82050.00, 50, "35"), # From Auer map
(35650.00, 82025.00, 50, "44"), # From Auer map
(36200.00, 82925.00, 50, "178"), # Calculated from bearings
(35232.64, 82910.37, 25, "181"), # Calculated from bearings
(35323.60, 81357.83, 50, "74") # From Auer map
]:
(N,E,D) = list(map(float, (N, E, D)))
(N,E,D) = map(float, (N, E, D))
maparea = Cave.objects.get(kataster_number = num).getArea().short_name
lo = mungecoord(N-D, E+D, name, img)
hi = mungecoord(N+D, E-D, name, img)
@@ -537,8 +507,8 @@ def prospecting_image(request, name):
del draw
img.save(response, "PNG")
return response
STATIONS = {}
poslineregex = re.compile("^\(\s*([+-]?\d*\.\d*),\s*([+-]?\d*\.\d*),\s*([+-]?\d*\.\d*)\s*\)\s*([^\s]+)$")
def LoadPos():
call([settings.CAVERN, "--output=%s/all.3d" % settings.SURVEX_DATA, "%s/all.svx" % settings.SURVEX_DATA])
@@ -546,7 +516,7 @@ def LoadPos():
posfile = open("%sall.pos" % settings.SURVEX_DATA)
posfile.readline()#Drop header
for line in posfile.readlines():
r = poslineregex.match(line)
r = poslineregex.match(line)
if r:
x, y, z, name = r.groups()
STATIONS[name] = (x, y, z)
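# A .pos line of the form this regex expects (example values invented):
#   (   12.34,    56.78,  1234.56 ) caves-1623.204.somestation
# r.groups() yields the x, y, z strings plus the dotted station name,
# stored as STATIONS[name] = (x, y, z).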

core/views_logbooks.py Executable file → Normal file

@@ -37,7 +37,7 @@ def getNotablePersons():
for person in Person.objects.all():
if person.bisnotable():
notablepersons.append(person)
return notablepersons
def personindex(request):
@@ -48,7 +48,7 @@ def personindex(request):
nc = (len(persons) + ncols - 1) / ncols
for i in range(ncols):
personss.append(persons[i * nc: (i + 1) * nc])
notablepersons = []
for person in Person.objects.all():
if person.bisnotable():
@@ -67,16 +67,20 @@ def expedition(request, expeditionname):
for personexpedition in this_expedition.personexpedition_set.all():
prow = [ ]
for date in dates:
pcell = { "persontrips": PersonTrip.objects.filter(personexpedition=personexpedition,
pcell = { "persontrips": PersonTrip.objects.filter(personexpedition=personexpedition,
logbook_entry__date=date) }
pcell["survexblocks"] = set(SurvexBlock.objects.filter(survexpersonrole__personexpedition=personexpedition,
date = date))
pcell["survexblocks"] = set(SurvexBlock.objects.filter(survexpersonrole__personexpedition=personexpedition,
date=date))
prow.append(pcell)
personexpeditiondays.append({"personexpedition":personexpedition, "personrow":prow})
if "reload" in request.GET:
LoadLogbookForExpedition(this_expedition)
return render(request,'expedition.html', {'expedition': this_expedition, 'expeditions':expeditions, 'personexpeditiondays':personexpeditiondays, 'settings':settings, 'dateditems': dateditems })
return render(request,'expedition.html', {'this_expedition': this_expedition,
'expeditions':expeditions,
'personexpeditiondays':personexpeditiondays,
'settings':settings,
'dateditems': dateditems })
def get_absolute_url(self):
return ('expedition', (expedition.year))
@@ -93,14 +97,14 @@ class ExpeditionListView(ListView):
def person(request, first_name='', last_name='', ):
this_person = Person.objects.get(first_name = first_name, last_name = last_name)
# This is for removing the reference to the user's profile, in case they set it to the wrong person
if request.method == 'GET':
if request.GET.get('clear_profile')=='True':
this_person.user=None
this_person.save()
return HttpResponseRedirect(reverse('profiles_select_profile'))
return render(request,'person.html', {'person': this_person, })
@@ -113,19 +117,19 @@ def GetPersonChronology(personexpedition):
for personrole in personexpedition.survexpersonrole_set.all():
a = res.setdefault(personrole.survexblock.date, { })
a.setdefault("personroles", [ ]).append(personrole.survexblock)
# build up the tables
rdates = res.keys()
rdates.sort()
res2 = [ ]
for rdate in rdates:
persontrips = res[rdate].get("persontrips", [])
personroles = res[rdate].get("personroles", [])
for n in range(max(len(persontrips), len(personroles))):
res2.append(((n == 0 and rdate or "--"), (n < len(persontrips) and persontrips[n]), (n < len(personroles) and personroles[n])))
return res2
@@ -164,95 +168,22 @@ def personForm(request,pk):
form=PersonForm(instance=person)
return render(request,'personform.html', {'form':form,})
from settings import *
def pathsreport(request):
pathsdict={
"ADMIN_MEDIA_PREFIX" : ADMIN_MEDIA_PREFIX,
"ADMIN_MEDIA_PREFIX" : ADMIN_MEDIA_PREFIX,
"CAVEDESCRIPTIONSX" : CAVEDESCRIPTIONS,
"DIR_ROOT" : DIR_ROOT,
"ENTRANCEDESCRIPTIONS" : ENTRANCEDESCRIPTIONS,
"EXPOUSER_EMAIL" : EXPOUSER_EMAIL,
"EXPOUSERPASS" :"<redacted>",
"EXPOUSER" : EXPOUSER,
"EXPOWEB" : EXPOWEB,
"EXPOWEB_URL" : EXPOWEB_URL,
"FILES" : FILES,
"JSLIB_URL" : JSLIB_URL,
"LOGFILE" : LOGFILE,
"LOGIN_REDIRECT_URL" : LOGIN_REDIRECT_URL,
"MEDIA_ADMIN_DIR" : MEDIA_ADMIN_DIR,
"MEDIA_ROOT" : MEDIA_ROOT,
"MEDIA_URL" : MEDIA_URL,
#"PHOTOS_ROOT" : PHOTOS_ROOT,
"PHOTOS_URL" : PHOTOS_URL,
"PYTHON_PATH" : PYTHON_PATH,
"REPOS_ROOT_PATH" : REPOS_ROOT_PATH,
"ROOT_URLCONF" : ROOT_URLCONF,
"STATIC_ROOT" : STATIC_ROOT,
"STATIC_URL" : STATIC_URL,
"SURVEX_DATA" : SURVEX_DATA,
"SURVEY_SCANS" : SURVEY_SCANS,
"SURVEYS" : SURVEYS,
"SURVEYS_URL" : SURVEYS_URL,
"SVX_URL" : SVX_URL,
"TEMPLATE_DIRS" : TEMPLATE_DIRS,
"THREEDCACHEDIR" : THREEDCACHEDIR,
"TINY_MCE_MEDIA_ROOT" : TINY_MCE_MEDIA_ROOT,
"TINY_MCE_MEDIA_URL" : TINY_MCE_MEDIA_URL,
"TUNNEL_DATA" : TUNNEL_DATA,
"URL_ROOT" : URL_ROOT
}
ncodes = len(pathsdict)
bycodeslist = sorted(pathsdict.iteritems())
bypathslist = sorted(pathsdict.iteritems(), key=lambda x: x[1])
return render(request, 'pathsreport.html', {
"pathsdict":pathsdict,
"bycodeslist":bycodeslist,
"bypathslist":bypathslist,
"ncodes":ncodes})
def experimental(request):
blockroots = models.SurvexBlock.objects.filter(name="root")
if len(blockroots)>1:
print(" ! more than one root survexblock {}".format(len(blockroots)))
for sbr in blockroots:
print("{} {} {} {}".format(sbr.id, sbr.name, sbr.text, sbr.date))
sbr = blockroots[0]
totalsurvexlength = sbr.totalleglength
try:
nimportlegs = int(sbr.text)
except:
print("{} {} {} {}".format(sbr.id, sbr.name, sbr.text, sbr.date))
nimportlegs = -1
legsbyexpo = [ ]
addupsurvexlength = 0
for expedition in Expedition.objects.all():
survexblocks = expedition.survexblock_set.all()
#survexlegs = [ ]
legsyear=0
survexlegs = [ ]
survexleglength = 0.0
for survexblock in survexblocks:
#survexlegs.extend(survexblock.survexleg_set.all())
survexlegs.extend(survexblock.survexleg_set.all())
survexleglength += survexblock.totalleglength
try:
legsyear += int(survexblock.text)
except:
pass
addupsurvexlength += survexleglength
legsbyexpo.append((expedition, {"nsurvexlegs":legsyear, "survexleglength":survexleglength}))
legsbyexpo.reverse()
legsbyexpo.append((expedition, {"nsurvexlegs":len(survexlegs), "survexleglength":survexleglength}))
legsbyexpo.reverse()
#removing survexleg objects completely
#survexlegs = models.SurvexLeg.objects.all()
#totalsurvexlength = sum([survexleg.tape for survexleg in survexlegs])
return render(request, 'experimental.html', { "nsurvexlegs":nimportlegs, "totalsurvexlength":totalsurvexlength, "addupsurvexlength":addupsurvexlength, "legsbyexpo":legsbyexpo })
survexlegs = models.SurvexLeg.objects.all()
totalsurvexlength = sum([survexleg.tape for survexleg in survexlegs])
return render(request, 'experimental.html', { "nsurvexlegs":len(survexlegs), "totalsurvexlength":totalsurvexlength, "legsbyexpo":legsbyexpo })
@login_required_if_public
def newLogbookEntry(request, expeditionyear, pdate = None, pslug = None):
@@ -267,11 +198,11 @@ def newLogbookEntry(request, expeditionyear, pdate = None, pslug = None):
personTripFormSet = PersonTripFormSet(request.POST)
if tripForm.is_valid() and personTripFormSet.is_valid(): # All validation rules pass
dateStr = tripForm.cleaned_data["date"].strftime("%Y-%m-%d")
directory = os.path.join(settings.EXPOWEB,
"years",
expedition.year,
"autologbook")
filename = os.path.join(directory,
dateStr + "." + slugify(tripForm.cleaned_data["title"])[:50] + ".html")
if not os.path.isdir(directory):
os.mkdir(directory)
@@ -279,7 +210,7 @@ def newLogbookEntry(request, expeditionyear, pdate = None, pslug = None):
delLogbookEntry(previouslbe)
f = open(filename, "w")
template = loader.get_template('dataformat/logbookentry.html')
context = Context({'trip': tripForm.cleaned_data,
'persons': personTripFormSet.cleaned_data,
'date': dateStr,
'expeditionyear': expeditionyear})
@@ -303,11 +234,11 @@ def newLogbookEntry(request, expeditionyear, pdate = None, pslug = None):
"location": previouslbe.place,
"caveOrLocation": "location",
"html": previouslbe.text})
personTripFormSet = PersonTripFormSet(initial=[{"name": get_name(py.personexpedition),
"TU": py.time_underground,
"author": py.is_logbook_entry_author}
for py in previouslbe.persontrip_set.all()])
else:
tripForm = TripForm() # An unbound form
personTripFormSet = PersonTripFormSet()


@@ -1,5 +1,5 @@
from troggle.core.models import Cave, Expedition, Person, LogbookEntry, PersonExpedition, PersonTrip, QM
#from troggle.core.forms import UploadFileForm, DPhoto
from troggle.core.models import Cave, Expedition, Person, LogbookEntry, PersonExpedition, PersonTrip, DPhoto, QM
#from troggle.core.forms import UploadFileForm
from django.conf import settings
from django import forms
from django.template import loader, Context
@@ -30,12 +30,12 @@ def frontpage(request):
expeditions = Expedition.objects.order_by("-year")
logbookentry = LogbookEntry
cave = Cave
#photo = DPhoto
photo = DPhoto
from django.contrib.admin.templatetags import log
return render(request,'frontpage.html', locals())
def todo(request):
message = "no test message" #reverse('personn', kwargs={"name":"hkjhjh"})
message = "no test message" #reverse('personn', kwargs={"name":"hkjhjh"})
if "reloadexpos" in request.GET:
message = LoadPersonsExpos()
message = "Reloaded personexpos"
@@ -52,12 +52,11 @@ def controlPanel(request):
jobs_completed=[]
if request.method=='POST':
if request.user.is_superuser:
#importlist is mostly here so that things happen in the correct order.
#http post data seems to come in an unpredictable order, so we do it this way.
importlist=['reinit_db', 'import_people', 'import_caves', 'import_logbooks',
'import_survexblks', 'import_QMs', 'import_survexpos', 'import_surveyscans', 'import_tunnelfiles']
databaseReset.dirsredirect()
importlist=['reload_db', 'import_people', 'import_cavetab', 'import_logbooks', 'import_surveys', 'import_QMs']
databaseReset.make_dirs()
for item in importlist:
if item in request.POST:
print("running"+ " databaseReset."+item+"()")
@@ -71,8 +70,22 @@ def controlPanel(request):
return render(request,'controlPanel.html', {'caves':Cave.objects.all(),'expeditions':Expedition.objects.all(),'jobs_completed':jobs_completed})
def downloadCavetab(request):
from export import tocavetab
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename=CAVETAB2.CSV'
tocavetab.writeCaveTab(response)
return response
def downloadSurveys(request):
from export import tosurveys
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename=Surveys.csv'
tosurveys.writeCaveTab(response)
return response
def downloadLogbook(request,year=None,extension=None,queryset=None):
if year:
current_expedition=Expedition.objects.get(year=year)
logbook_entries=LogbookEntry.objects.filter(expedition=current_expedition)
@@ -83,7 +96,7 @@ def downloadLogbook(request,year=None,extension=None,queryset=None):
else:
response = HttpResponse(content_type='text/plain')
return response(r"Error: Logbook downloader doesn't know what year you want")
if 'year' in request.GET:
year=request.GET['year']
if 'extension' in request.GET:
@@ -95,14 +108,14 @@ def downloadLogbook(request,year=None,extension=None,queryset=None):
elif extension == 'html':
response = HttpResponse(content_type='text/html')
style='2005'
template='logbook'+style+'style.'+extension
response['Content-Disposition'] = 'attachment; filename='+filename+'.'+extension
t=loader.get_template(template)
c=Context({'logbook_entries':logbook_entries})
response.write(t.render(c))
return response
def downloadQMs(request):
# Note to self: use get_cave method for the below
@@ -118,14 +131,14 @@ def downloadQMs(request):
response['Content-Disposition'] = 'attachment; filename=qm.csv'
toqms.writeQmTable(response,cave)
return response
def ajax_test(request):
post_text = request.POST['post_data']
return HttpResponse("{'response_text': '"+post_text+" recieved.'}",
return HttpResponse("{'response_text': '"+post_text+" recieved.'}",
content_type="application/json")
def eyecandy(request):
return
def ajax_QM_number(request):
if request.method=='POST':
@@ -145,14 +158,14 @@ def logbook_entry_suggestions(request):
unwiki_QM_pattern=r"(?P<whole>(?P<explorer_code>[ABC]?)(?P<cave>\d*)-?(?P<year>\d\d\d?\d?)-(?P<number>\d\d)(?P<grade>[ABCDXV]?))"
unwiki_QM_pattern=re.compile(unwiki_QM_pattern)
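# Illustrative matches (example text invented): "C204-1999-12B" parses
# as explorer_code "C", cave "204", year "1999", number "12", grade "B";
# the explorer code and grade groups are optional, so "204-1999-12"
# also matches with those groups empty.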
#wikilink_QM_pattern=settings.QM_PATTERN
slug=request.POST['slug']
date=request.POST['date']
lbo=LogbookEntry.objects.get(slug=slug, date=date)
#unwiki_QMs=re.findall(unwiki_QM_pattern,lbo.text)
unwiki_QMs=[m.groupdict() for m in unwiki_QM_pattern.finditer(lbo.text)]
print(unwiki_QMs)
for qm in unwiki_QMs:
#try:
@@ -167,7 +180,7 @@ def logbook_entry_suggestions(request):
lbo=LogbookEntry.objects.get(date__year=qm['year'],title__icontains="placeholder for QMs in")
except:
print("failed to get placeholder for year "+str(qm['year']))
temp_QM=QM(found_by=lbo,number=qm['number'],grade=qm['grade'])
temp_QM.grade=qm['grade']
qm['wikilink']=temp_QM.wiki_link()
@@ -175,16 +188,16 @@ def logbook_entry_suggestions(request):
#print 'failed'
print(unwiki_QMs)
#wikilink_QMs=re.findall(wikilink_QM_pattern,lbo.text)
attached_QMs=lbo.QMs_found.all()
unmentioned_attached_QMs=''#not implemented, fill this in by subtracting wiklink_QMs from attached_QMs
#Find unattached_QMs. We only look at the QMs with a proper wiki link.
#for qm in wikilink_QMs:
#Try to look up the QM.
print('got 208')
any_suggestions=True
print('got 210')
@@ -204,11 +217,11 @@ def newFile(request, pslug = None):
# personTripFormSet = PersonTripFormSet(request.POST)
# if tripForm.is_valid() and personTripFormSet.is_valid(): # All validation rules pass
# dateStr = tripForm.cleaned_data["date"].strftime("%Y-%m-%d")
# directory = os.path.join(settings.EXPOWEB,
# "years",
# expedition.year,
# "autologbook")
# filename = os.path.join(directory,
# dateStr + "." + slugify(tripForm.cleaned_data["title"])[:50] + ".html")
# if not os.path.isdir(directory):
# os.mkdir(directory)
@@ -216,7 +229,7 @@ def newFile(request, pslug = None):
# delLogbookEntry(previouslbe)
# f = open(filename, "w")
# template = loader.get_template('dataformat/logbookentry.html')
# context = Context({'trip': tripForm.cleaned_data,
# 'persons': personTripFormSet.cleaned_data,
# 'date': dateStr,
# 'expeditionyear': expeditionyear})
@@ -241,11 +254,11 @@ def newFile(request, pslug = None):
# "location": previouslbe.place,
# "caveOrLocation": "location",
# "html": previouslbe.text})
# personTripFormSet = PersonTripFormSet(initial=[{"name": get_name(py.personexpedition),
# "TU": py.time_underground,
# personTripFormSet = PersonTripFormSet(initial=[{"name": get_name(py.personexpedition),
# "TU": py.time_underground,
# "author": py.is_logbook_entry_author}
# for py in previouslbe.persontrip_set.all()])
# else:
# fileform = UploadFileForm() # An unbound form
return render(request, 'editfile.html', {

core/views_survex.py Executable file → Normal file

@@ -1,7 +1,8 @@
from django import forms
from django.http import HttpResponseRedirect, HttpResponse
from django.shortcuts import render_to_response, render
from django.core.context_processors import csrf
from django.shortcuts import render
from django.views.decorators import csrf
from django.views.decorators.csrf import csrf_protect
from django.http import HttpResponse, Http404
import re
import os
@@ -15,76 +16,47 @@ from parsers.people import GetPersonExpeditionNameLookup
import troggle.settings as settings
import parsers.survex
survextemplatefile = """; *** THIS IS A TEMPLATE FILE NOT WHAT YOU MIGHT BE EXPECTING ***
*** DO NOT SAVE THIS FILE WITHOUT RENAMING IT !! ***
;[Stuff in square brackets is example text to be replaced with real data,
; removing the square brackets]
survextemplatefile = """; Locn: Totes Gebirge, Austria - Loser/Augst-Eck Plateau (kataster group 1623)
; Cave:
*begin [surveyname]
; stations linked into other surveys (or likely to)
*export [1 8 12 34]
*export [connecting stations]
; Cave:
; Area in cave/QM:
*title ""
*date [2040.07.04] ; <-- CHANGE THIS DATE
*team Insts [Fred Fossa]
*team Notes [Brenda Badger]
*team Pics [Luke Lynx]
*team Tape [Albert Aadvark]
*instrument [SAP #+Laser Tape/DistoX/Compass # ; Clino #]
; Calibration: [Where, readings]
*ref [2040#00] ; <-- CHANGE THIS TOO
; the #number is on the clear pocket containing the original notes
*title "area title"
*date 2999.99.99
*team Insts [Caver]
*team Insts [Caver]
*team Notes [Caver]
*instrument [set number]
; if using a tape:
*calibrate tape +0.0 ; +ve if tape was too short, -ve if too long
;ref.: 2009#NN
; Centreline data
*data normal from to length bearing gradient ignoreall
[ 1 2 5.57 034.5 -12.8 ]
*calibrate tape +0.0 ; +ve if tape was too short, -ve if too long
;-----------
;recorded station details (leave commented out)
;(NP=Nail Polish, LHW/RHW=Left/Right Hand Wall)
;Station Left Right Up Down Description
;[Red] nail varnish markings
[;1 0.8 0 5.3 1.6 ; NP on boulder. pt 23 on foo survey ]
[;2 0.3 1.2 6 1.2 ; NP '2' LHW ]
[;3 1.3 0 3.4 0.2 ; Rock on floor - not refindable ]
*data normal from to tape compass clino
1 2 3.90 298 -20
*data passage station left right up down ignoreall
1 [L] [R] [U] [D] comment
*end [surveyname]"""
;LRUDs arranged into passage tubes
;new *data command for each 'passage',
;repeat stations and adjust numbers as needed
*data passage station left right up down
;[ 1 0.8 0 5.3 1.6 ]
;[ 2 0.3 1.2 6 1.2 ]
*data passage station left right up down
;[ 1 1.3 1.5 5.3 1.6 ]
;[ 3 2.4 0 3.4 0.2 ]
;-----------
;Question Mark List ;(leave commented-out)
; The nearest-station is the name of the survey and station which are nearest to
; the QM. The resolution-station is either '-' to indicate that the QM hasn't
; been checked; or the name of the survey and station which push that QM. If a
; QM doesn't go anywhere, set the resolution-station to be the same as the
; nearest-station. Include any relevant details of how to find or push the QM in
; the textual description.
;Serial number grade(A/B/C/X) nearest-station resolution-station description
;[ QM1 A surveyname.3 - description of QM ]
;[ QM2 B surveyname.5 - description of QM ]
;------------
;Cave description ;(leave commented-out)
;freeform text describing this section of the cave
*end [surveyname]
"""
def ReplaceTabs(stext):
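# Expand tabs against 4-column tab stops, tracking the column position
# since the last newline (nsl) so that alignment is preserved.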
res = [ ]
nsl = 0
for s in re.split("(\t|\n)", stext):
if s == "\t":
res.append(" " * (4 - (nsl % 4)))
nsl = 0
continue
if s == "\n":
nsl = 0
else:
nsl += len(s)
res.append(s)
return "".join(res)
class SvxForm(forms.Form):
@@ -92,17 +64,18 @@ class SvxForm(forms.Form):
filename = forms.CharField(widget=forms.TextInput(attrs={"readonly":True}))
datetime = forms.DateTimeField(widget=forms.TextInput(attrs={"readonly":True}))
outputtype = forms.CharField(widget=forms.TextInput(attrs={"readonly":True}))
code = forms.CharField(widget=forms.Textarea(attrs={"cols":150, "rows":36}))
code = forms.CharField(widget=forms.Textarea(attrs={"cols":150, "rows":18}))
def GetDiscCode(self):
fname = settings.SURVEX_DATA + self.data['filename'] + ".svx"
if not os.path.isfile(fname):
return survextemplatefile
fin = open(fname, "rt")
svxtext = fin.read().encode("utf8")
fin = open(fname, "rb")
svxtext = fin.read().decode("latin1") # unicode(a, "latin1")
svxtext = ReplaceTabs(svxtext).strip()
fin.close()
return svxtext
def DiffCode(self, rcode):
code = self.GetDiscCode()
difftext = difflib.unified_diff(code.splitlines(), rcode.splitlines())
@@ -112,28 +85,19 @@ class SvxForm(forms.Form):
def SaveCode(self, rcode):
fname = settings.SURVEX_DATA + self.data['filename'] + ".svx"
if not os.path.isfile(fname):
if re.search(r"\[|\]", rcode):
return "Error: remove all []s from the text. They are only template guidance."
# only save if appears valid
if re.search(r"\[|\]", rcode):
return "Error: clean up all []s from the text"
mbeginend = re.search(r"(?s)\*begin\s+(\w+).*?\*end\s+(\w+)", rcode)
if not mbeginend:
return "Error: no begin/end block here"
if mbeginend.group(1) != mbeginend.group(2):
return "Error: mismatching begin/end labels"
return "Error: mismatching beginend"
# Make this create new survex folders if needed
try:
fout = open(fname, "wb")
except IOError:
pth = os.path.dirname(self.data['filename'])
newpath = os.path.join(settings.SURVEX_DATA, pth)
if not os.path.exists(newpath):
os.makedirs(newpath)
fout = open(fname, "wb")
# javascript seems to insert CRLF on WSL1 whatever you say. So fix that:
res = fout.write(rcode.replace("\r",""))
fout = open(fname, "w")
res = fout.write(rcode.encode("latin1"))
fout.close()
return "SAVED ."
return "SAVED"
def Process(self):
print("....\n\n\n....Processing\n\n\n")
@@ -141,34 +105,34 @@ class SvxForm(forms.Form):
os.chdir(os.path.split(settings.SURVEX_DATA + self.data['filename'])[0])
os.system(settings.CAVERN + " --log " + settings.SURVEX_DATA + self.data['filename'] + ".svx")
os.chdir(cwd)
fin = open(settings.SURVEX_DATA + self.data['filename'] + ".log", "rt")
fin = open(settings.SURVEX_DATA + self.data['filename'] + ".log", "rb")
log = fin.read()
fin.close()
log = re.sub("(?s).*?(Survey contains)", "\\1", log)
return log
@csrf_protect
def svx(request, survex_file):
# get the basic data from the file given in the URL
dirname = os.path.split(survex_file)[0]
dirname += "/"
nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
outputtype = "normal"
form = SvxForm({'filename':survex_file, 'dirname':dirname, 'datetime':nowtime, 'outputtype':outputtype})
# if the form has been returned
difflist = [ ]
logmessage = ""
message = ""
if request.method == 'POST': # If the form has been submitted...
rform = SvxForm(request.POST) #
if rform.is_valid(): # All validation rules pass (how do we check it against the filename and users?)
rcode = rform.cleaned_data['code']
outputtype = rform.cleaned_data['outputtype']
difflist = form.DiffCode(rcode)
#print "ssss", rform.data
if "revert" in rform.data:
pass
if "process" in rform.data:
@@ -181,6 +145,7 @@ def svx(request, survex_file):
form.data['code'] = rcode
if "save" in rform.data:
if request.user.is_authenticated():
#print("sssavvving")
message = form.SaveCode(rcode)
else:
message = "You do not have authority to save this file"
@@ -188,20 +153,20 @@ def svx(request, survex_file):
form.data['code'] = rcode
if "diff" in rform.data:
form.data['code'] = rcode
#process(survex_file)
if 'code' not in form.data:
form.data['code'] = form.GetDiscCode()
if not difflist:
difflist.append("none")
if message:
difflist.insert(0, message)
#print [ form.data['code'] ]
svxincludes = re.findall(r'\*include\s+(\S+)(?i)', form.data['code'] or "")
vmap = {'settings': settings,
'has_3d': os.path.isfile(settings.SURVEX_DATA + survex_file + ".3d"),
'title': survex_file,
@@ -209,13 +174,13 @@ def svx(request, survex_file):
'difflist': difflist,
'logmessage':logmessage,
'form':form}
vmap.update(csrf(request))
# vmap.update(csrf(request))
if outputtype == "ajax":
return render_to_response('svxfiledifflistonly.html', vmap)
return render_to_response('svxfile.html', vmap)
return render(request, 'svxfiledifflistonly.html', vmap)
return render(request, 'svxfile.html', vmap)
def svxraw(request, survex_file):
svx = open(os.path.join(settings.SURVEX_DATA, survex_file+".svx"), "rt",encoding='utf8')
svx = open(os.path.join(settings.SURVEX_DATA, survex_file+".svx"), "rb")
return HttpResponse(svx, content_type="text")
@@ -230,25 +195,25 @@ def process(survex_file):
def threed(request, survex_file):
process(survex_file)
try:
threed = open(settings.SURVEX_DATA + survex_file + ".3d", "rt",encoding='utf8')
threed = open(settings.SURVEX_DATA + survex_file + ".3d", "rb")
return HttpResponse(threed, content_type="model/3d")
except:
log = open(settings.SURVEX_DATA + survex_file + ".log", "rt",encoding='utf8')
log = open(settings.SURVEX_DATA + survex_file + ".log", "rb")
return HttpResponse(log, content_type="text")
def log(request, survex_file):
process(survex_file)
log = open(settings.SURVEX_DATA + survex_file + ".log", "rt",encoding='utf8')
log = open(settings.SURVEX_DATA + survex_file + ".log", "rb")
return HttpResponse(log, content_type="text")
def err(request, survex_file):
process(survex_file)
err = open(settings.SURVEX_DATA + survex_file + ".err", "rt",encoding='utf8')
err = open(settings.SURVEX_DATA + survex_file + ".err", "rb")
return HttpResponse(err, content_type="text")
def identifycavedircontents(gcavedir):
# find the primary survex file in each cave directory
name = os.path.split(gcavedir)[1]
@@ -262,13 +227,13 @@ def identifycavedircontents(gcavedir):
pass
elif name == "115" and (f in ["115cufix.svx", "115fix.svx"]):
pass
elif os.path.isdir(os.path.join(gcavedir, f)):
if f[0] != ".":
subdirs.append(f)
elif f[-4:] == ".svx":
nf = f[:-4]
if nf.lower() == name.lower() or nf[:3] == "all" or (name, nf) in [("resurvey2005", "145-2005"), ("cucc", "cu115")]:
if primesvx:
if nf[:3] == "all":
@@ -288,38 +253,50 @@ def identifycavedircontents(gcavedir):
if primesvx:
subsvx.insert(0, primesvx)
return subdirs, subsvx
# direct local non-database browsing through the svx file repositories
# perhaps should use the database and have a reload button for it
def survexcaveslist(request):
cavesdir = os.path.join(settings.SURVEX_DATA, "caves-1623")
#cavesdircontents = { }
kat_areas = settings.KAT_AREAS
fnumlist = []
kat_areas = ['1623']
for area in kat_areas:
print(area)
cavesdir = os.path.join(settings.SURVEX_DATA, "caves-%s" % area)
print(cavesdir)
#cavesdircontents = { }
fnumlist += [ (-int(re.match(r"\d*", f).group(0) or "0"), f, area) for f in os.listdir(cavesdir) ]
print(fnumlist)
print(len(fnumlist))
# first sort the file list
fnumlist.sort()
onefilecaves = [ ]
multifilecaves = [ ]
subdircaves = [ ]
# first sort the file list
fnumlist = [ (-int(re.match(r"\d*", f).group(0) or "0"), f) for f in os.listdir(cavesdir) ]
fnumlist.sort()
print(fnumlist)
# go through the list and identify the contents of each cave directory
for num, cavedir in fnumlist:
for num, cavedir, area in fnumlist:
if cavedir in ["144", "40"]:
continue
cavesdir = os.path.join(settings.SURVEX_DATA, "caves-%s" % area)
gcavedir = os.path.join(cavesdir, cavedir)
if os.path.isdir(gcavedir) and cavedir[0] != ".":
subdirs, subsvx = identifycavedircontents(gcavedir)
survdirobj = [ ]
for lsubsvx in subsvx:
survdirobj.append(("caves-1623/"+cavedir+"/"+lsubsvx, lsubsvx))
survdirobj.append(("caves-" + area + "/"+cavedir+"/"+lsubsvx, lsubsvx))
# caves with subdirectories
if subdirs:
subsurvdirs = [ ]
@@ -328,10 +305,10 @@ def survexcaveslist(request):
assert not dsubdirs
lsurvdirobj = [ ]
for lsubsvx in dsubsvx:
lsurvdirobj.append(("caves-1623/"+cavedir+"/"+subdir+"/"+lsubsvx, lsubsvx))
lsurvdirobj.append(("caves-" + area + "/"+cavedir+"/"+subdir+"/"+lsubsvx, lsubsvx))
subsurvdirs.append((lsurvdirobj[0], lsurvdirobj[1:]))
subdircaves.append((cavedir, (survdirobj[0], survdirobj[1:]), subsurvdirs))
# multifile caves
elif len(survdirobj) > 1:
multifilecaves.append((survdirobj[0], survdirobj[1:]))
@@ -340,24 +317,22 @@ def survexcaveslist(request):
#print("survdirobj = ")
#print(survdirobj)
onefilecaves.append(survdirobj[0])
return render_to_response('svxfilecavelist.html', {'settings': settings, "onefilecaves":onefilecaves, "multifilecaves":multifilecaves, "subdircaves":subdircaves })
return render(request, 'svxfilecavelist.html', {"onefilecaves":onefilecaves, "multifilecaves":multifilecaves, "subdircaves":subdircaves })
# parsing all the survex files of a single cave and showing that it's consistent and can find all the files and people
# doesn't use recursion. just writes it twice
def survexcavesingle(request, survex_cave):
breload = False
cave = Cave.objects.get(kataster_number=survex_cave)
cave = Cave.objects.filter(kataster_number=survex_cave)
if len(cave) < 1:
cave = Cave.objects.filter(unofficial_number=survex_cave)
if breload:
parsers.survex.ReloadSurvexCave(survex_cave)
return render_to_response('svxcavesingle.html', {'settings': settings, "cave":cave })
if len(cave) > 0:
return render(request, 'svxcavesingle.html', {"cave":cave[0] })
else:
return render(request, 'svxcavesingle.html', {"cave":cave })

databaseReset.py Executable file → Normal file

@@ -1,403 +1,195 @@
from __future__ import (absolute_import, division,
print_function)
import os
import time
import timeit
import json
import settings
if os.geteuid() == 0:
print("This script should be run as expo not root - quitting")
exit()
os.environ['PYTHONPATH'] = settings.PYTHON_PATH
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings')
if __name__ == '__main__':
import django
django.setup()
from django.core import management
from django.db import connection, close_old_connections
from django.db import connection
from django.contrib.auth.models import User
from django.http import HttpResponse
from django.core.urlresolvers import reverse
from troggle.core.models import Cave, Entrance
import troggle.settings
import troggle.flatpages.models
import troggle.logbooksdump
# NOTE databaseReset.py is *imported* by views_other.py as it is used in the control panel
# presented there.
databasename=settings.DATABASES['default']['NAME']
expouser=settings.EXPOUSER
expouserpass=settings.EXPOUSERPASS
expouseremail=settings.EXPOUSER_EMAIL
def reinit_db():
"""Rebuild database from scratch. Deletes the file first if sqlite is used,
otherwise it drops the database and creates it.
"""
currentdbname = settings.DATABASES['default']['NAME']
def reload_db():
if settings.DATABASES['default']['ENGINE'] == 'django.db.backends.sqlite3':
try:
os.remove(currentdbname)
os.remove(databasename)
except OSError:
pass
else:
cursor = connection.cursor()
cursor.execute("DROP DATABASE %s" % currentdbname)
cursor.execute("CREATE DATABASE %s" % currentdbname)
cursor.execute("ALTER DATABASE %s CHARACTER SET=utf8" % currentdbname)
cursor.execute("USE %s" % currentdbname)
syncuser()
def syncuser():
"""Sync user - needed after reload
"""
print("Synchronizing user")
cursor.execute("DROP DATABASE %s" % databasename)
cursor.execute("CREATE DATABASE %s" % databasename)
cursor.execute("ALTER DATABASE %s CHARACTER SET=utf8" % databasename)
cursor.execute("USE %s" % databasename)
management.call_command('migrate', interactive=False)
user = User.objects.create_user(expouser, expouseremail, expouserpass)
user.is_staff = True
user.is_superuser = True
user.save()
def dirsredirect():
"""Make directories that troggle requires and sets up page redirects
"""
def make_dirs():
"""Make directories that troggle requires"""
#should also deal with permissions here.
#if not os.path.isdir(settings.PHOTOS_ROOT):
#os.mkdir(settings.PHOTOS_ROOT)
# for oldURL, newURL in [("indxal.htm", reverse("caveindex"))]:
# f = troggle.flatpages.models.Redirect(originalURL = oldURL, newURL = newURL)
# f.save()
if not os.path.isdir(settings.PHOTOS_ROOT):
os.mkdir(settings.PHOTOS_ROOT)
def import_caves():
import troggle.parsers.caves
import parsers.caves
print("Importing Caves")
troggle.parsers.caves.readcaves()
parsers.caves.readcaves()
def import_people():
import troggle.parsers.people
print("Importing People (folk.csv)")
troggle.parsers.people.LoadPersonsExpos()
import parsers.people
parsers.people.LoadPersonsExpos()
def import_logbooks():
import troggle.parsers.logbooks
print("Importing Logbooks")
troggle.parsers.logbooks.LoadLogbooks()
# The line below was causing errors I didn't understand (it said LOGFILE was a string), and I couldn't be bothered to figure
# out what was going on, so I just catch the error with a try. - AC 21 May
try:
settings.LOGFILE.write('\nBegun importing logbooks at ' + time.asctime() +'\n'+'-'*60)
except:
pass
import parsers.logbooks
parsers.logbooks.LoadLogbooks()
def import_survex():
import parsers.survex
parsers.survex.LoadAllSurvexBlocks()
parsers.survex.LoadPos()
def import_QMs():
print("Importing QMs (old caves)")
import troggle.parsers.QMs
# import process itself runs on qm.csv in only 3 old caves, not the modern ones!
def import_survexblks():
import troggle.parsers.survex
print("Importing Survex Blocks")
troggle.parsers.survex.LoadAllSurvexBlocks()
import parsers.QMs
def import_survexpos():
import troggle.parsers.survex
print("Importing Survex x/y/z Positions")
troggle.parsers.survex.LoadPos()
def import_surveyimgs():
"""This appears to store data in unused objects. The code is kept
for future re-working to manage progress against notes, plans and elevs.
"""
#import troggle.parsers.surveys
print("NOT Importing survey images")
#troggle.parsers.surveys.parseSurveys(logfile=settings.LOGFILE)
def import_surveys():
import parsers.surveys
parsers.surveys.parseSurveys(logfile=settings.LOGFILE)
def import_surveyscans():
import troggle.parsers.surveys
print("Importing Survey Scans")
troggle.parsers.surveys.LoadListScans()
import parsers.surveys
parsers.surveys.LoadListScans()
def import_tunnelfiles():
import troggle.parsers.surveys
print("Importing Tunnel files")
troggle.parsers.surveys.LoadTunnelFiles()
import parsers.surveys
parsers.surveys.LoadTunnelFiles()
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# These functions moved to a different file - not used currently.
#import logbooksdump
#def import_auto_logbooks():
#def dumplogbooks():
#def writeCaves():
# Writes out all cave and entrance HTML files to
# folder specified in settings.CAVEDESCRIPTIONS
# for cave in Cave.objects.all():
# cave.writeDataFile()
# for entrance in Entrance.objects.all():
# entrance.writeDataFile()
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
class JobQueue():
"""A list of import operations to run. Always reports profile times
in the same order.
def reset():
""" Wipe the troggle database and import everything from legacy data
"""
def __init__(self,run):
self.runlabel = run
self.queue = [] # tuples of (jobname, jobfunction)
self.results = {}
self.results_order=[
"date","runlabel","reinit", "caves", "people",
"logbooks", "QMs", "scans", "survexblks", "survexpos",
"tunnel", "surveyimgs", "test", "dirsredirect", "syncuser" ]
for k in self.results_order:
self.results[k]=[]
self.tfile = "import_profile.json"
self.htmlfile = "profile.html" # for HTML results table. Not yet done.
reload_db()
make_dirs()
pageredirects()
import_caves()
import_people()
import_surveyscans()
#Adding elements to queue - enqueue
def enq(self,label,func):
self.queue.append((label,func))
return True
import_logbooks()
import_QMs()
#Removing the last element from the queue - dequeue
# def deq(self):
# if len(self.queue)>0:
# return self.queue.pop()
# return ("Queue Empty!")
import_survex()
try:
import_tunnelfiles()
except:
print("Tunnel files parser broken.")
def loadprofiles(self):
"""Load timings for previous runs from file
"""
if os.path.isfile(self.tfile):
try:
f = open(self.tfile, "r")
data = json.load(f)
for j in data:
self.results[j] = data[j]
except:
print("FAILURE parsing JSON file %s" % (self.tfile))
# Python bug: https://github.com/ShinNoNoir/twitterwebsearch/issues/12
f.close()
for j in self.results_order:
self.results[j].append(None) # append a placeholder
return True
def saveprofiles(self):
with open(self.tfile, 'w') as f:
json.dump(self.results, f)
return True
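# Illustrative shape of the JSON written above (an assumption from this
# code, not a verbatim dump): each key in results_order maps to one value
# per recorded run, with null where that job was skipped, e.g.
# {"date": [1582243200.0], "runlabel": ["F-test"], "caves": [12.3], ...}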
def memdumpsql(self):
djconn = django.db.connection
from dump import _iterdump
with open('memdump.sql', 'w') as f:
for line in _iterdump(djconn):
f.write('%s\n' % line.encode("utf8"))
return True
def runqonce(self):
"""Run all the jobs in the queue provided - once
"""
print("** Running job ", self.runlabel)
jobstart = time.time()
self.results["date"].pop()
self.results["date"].append(jobstart)
self.results["runlabel"].pop()
self.results["runlabel"].append(self.runlabel)
for i in self.queue:
start = time.time()
i[1]() # looks ugly but invokes function passed in the second item in the tuple
duration = time.time()-start
print("\n*- Ended \"", i[0], "\" %.1f seconds" % duration)
self.results[i[0]].pop() # the null item
self.results[i[0]].append(duration)
jobend = time.time()
jobduration = jobend-jobstart
print("** Ended job %s - %.1f seconds total." % (self.runlabel,jobduration))
return True
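# A minimal sketch of how this queue is driven, mirroring the __main__
# block further down ("F-test" is a made-up label):
# jq = JobQueue("F-test")
# jq.enq("caves", import_caves)
# jq.enq("people", import_people)
# jq.run()
# jq.showprofile()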
def run(self):
"""First runs all the jobs in the queue against a scratch in-memory db
then re-runs the import against the db specified in settings.py
Default behaviour is to skip the in-memory phase.
When MySQL is the db the in-memory phase crashes as MySQL does not properly
relinquish some kind of db connection (not fixed yet)
"""
self.loadprofiles()
# save db settings for later
dbengine = settings.DATABASES['default']['ENGINE']
dbname = settings.DATABASES['default']['NAME']
dbdefault = settings.DATABASES['default']
skipmem = False
if self.runlabel:
if self.runlabel == "":
skipmem = True
elif self.runlabel[0:2] == "F-":
skipmem = True
else:
skipmem = True
print("-- ", settings.DATABASES['default']['NAME'], settings.DATABASES['default']['ENGINE'])
#print "-- DATABASES.default", settings.DATABASES['default']
if dbname ==":memory:":
# just run, and save the sql file
self.runqonce()
self.memdumpsql() # saved contents of scratch db, could be imported later..
self.saveprofiles()
elif skipmem:
self.runqonce()
self.saveprofiles()
else:
django.db.close_old_connections() # needed if MySQL running?
# run all the imports through :memory: first
settings.DATABASES['default']['ENGINE'] = 'django.db.backends.sqlite3'
settings.DATABASES['default']['NAME'] = ":memory:"
settings.DATABASES['default'] = {'ENGINE': 'django.db.backends.sqlite3',
'AUTOCOMMIT': True,
'ATOMIC_REQUESTS': False,
'NAME': ':memory:',
'CONN_MAX_AGE': 0,
'TIME_ZONE': 'UTC',
'OPTIONS': {},
'HOST': '',
'USER': '',
'TEST': {'COLLATION': None, 'CHARSET': None, 'NAME': None, 'MIRROR': None},
'PASSWORD': '',
'PORT': ''}
import_surveys()
print("-- ", settings.DATABASES['default']['NAME'], settings.DATABASES['default']['ENGINE'])
#print("-- DATABASES.default", settings.DATABASES['default'])
def import_auto_logbooks():
import parsers.logbooks
import os
for pt in troggle.core.models.PersonTrip.objects.all():
pt.delete()
for lbe in troggle.core.models.LogbookEntry.objects.all():
lbe.delete()
for expedition in troggle.core.models.Expedition.objects.all():
directory = os.path.join(settings.EXPOWEB,
"years",
expedition.year,
"autologbook")
for root, dirs, filenames in os.walk(directory):
for filename in filenames:
print(os.path.join(root, filename))
parsers.logbooks.parseAutoLogBookEntry(os.path.join(root, filename))
# but because the user may be expecting to add this to a db with lots of tables already there,
# the jobqueue may not start from scratch so we need to initialise the db properly first
# because we are using an empty :memory: database
# But initiating twice crashes it; so be sure to do it once only.
# Damn. syncdb() is still calling MySQL somehow (**conn_params, not sqlite3), so it crashes on the expo server.
if ("reinit",reinit_db) not in self.queue:
reinit_db()
if ("dirsredirect",dirsredirect) not in self.queue:
dirsredirect()
if ("caves",import_caves) not in self.queue:
import_caves() # sometime extract the initialising code from this and put in reinit...
if ("people",import_people) not in self.queue:
import_people() # sometime extract the initialising code from this and put in reinit...
django.db.close_old_connections() # maybe not needed here
self.runqonce()
self.memdumpsql()
self.showprofile()
# restore the original db and import again
# if we wanted to, we could re-import the SQL generated in the first pass to be
# blazing fast. But for the present just re-import the lot.
settings.DATABASES['default'] = dbdefault
settings.DATABASES['default']['ENGINE'] = dbengine
settings.DATABASES['default']['NAME'] = dbname
print("-- ", settings.DATABASES['default']['NAME'], settings.DATABASES['default']['ENGINE'])
django.db.close_old_connections() # maybe not needed here
for j in self.results_order:
self.results[j].pop() # throw away results from :memory: run
self.results[j].append(None) # append a placeholder
django.db.close_old_connections() # magic rune: works; found by looking in django/db/__init__.py
#django.setup() # should this be needed?
self.runqonce() # crashes because it thinks it has no migrations to apply, when it does.
self.saveprofiles()
return True
def showprofile(self):
"""Prints out the time it took to run the jobqueue
"""
for k in self.results_order:
if k =="dirsredirect":
break
if k =="surveyimgs":
break
elif k =="syncuser":
break
elif k =="test":
break
elif k =="date":
print(" days ago ", end=' ')
#Temporary function until definitive source of data is transferred.
from django.template.defaultfilters import slugify
from django.template import Context, loader
def dumplogbooks():
def get_name(pe):
if pe.nickname:
return pe.nickname
else:
print('%10s (s)' % k, end=' ')
percen=0
r = self.results[k]
for i in range(len(r)):
if k == "runlabel":
if r[i]:
rp = r[i]
else:
rp = " - "
print('%8s' % rp, end=' ')
elif k =="date":
# Calculate dates as days before present
if r[i]:
if i == len(r)-1:
print(" this", end=' ')
else:
# prints one place to the left of where you expect
if r[len(r)-1]:
s = r[i]-r[len(r)-1]
else:
s = 0
days = (s)/(24*60*60)
print('%8.2f' % days, end=' ')
elif r[i]:
print('%8.1f' % r[i], end=' ')
if i == len(r)-1 and r[i-1]:
percen = 100* (r[i] - r[i-1])/r[i-1]
if abs(percen) >0.1:
print('%8.1f%%' % percen, end=' ')
else:
print(" - ", end=' ')
print("")
print("\n")
return True
return pe.person.first_name
for lbe in troggle.core.models.LogbookEntry.objects.all():
dateStr = lbe.date.strftime("%Y-%m-%d")
directory = os.path.join(settings.EXPOWEB,
"years",
lbe.expedition.year,
"autologbook")
if not os.path.isdir(directory):
os.mkdir(directory)
filename = os.path.join(directory,
dateStr + "." + slugify(lbe.title)[:50] + ".html")
if lbe.cave:
print(lbe.cave.reference())
trip = {"title": lbe.title, "html":lbe.text, "cave": lbe.cave.reference(), "caveOrLocation": "cave"}
else:
trip = {"title": lbe.title, "html":lbe.text, "location":lbe.place, "caveOrLocation": "location"}
pts = [pt for pt in lbe.persontrip_set.all() if pt.personexpedition]
persons = [{"name": get_name(pt.personexpedition), "TU": pt.time_underground, "author": pt.is_logbook_entry_author} for pt in pts]
f = open(filename, "wb")
template = loader.get_template('dataformat/logbookentry.html')
context = Context({'trip': trip,
'persons': persons,
'date': dateStr,
'expeditionyear': lbe.expedition.year})
output = template.render(context)
f.write(unicode(output).encode( "utf-8" ))
f.close()
def pageredirects():
for oldURL, newURL in [("indxal.htm", reverse("caveindex"))]:
f = troggle.flatpages.models.Redirect(originalURL = oldURL, newURL = newURL)
f.save()
def writeCaves():
for cave in Cave.objects.all():
cave.writeDataFile()
for entrance in Entrance.objects.all():
entrance.writeDataFile()
def usage():
print("""Usage is 'python databaseReset.py <command> [runlabel]'
print("""Usage is 'python databaseReset.py <command>'
where command is:
test - testing... imports people and prints profile. Deletes nothing.
profile - print the profile from previous runs. Import nothing.
reset - normal usage: clear database and reread everything from files - time-consuming
caves - read in the caves (must run first after reset)
people - read in the people from folk.csv (must run before logbooks)
logbooks - read in the logbooks
QMs - read in the QM csv files (older caves only)
scans - the survey scans in all the wallets (must run before survex)
survex - read in the survex files - all the survex blocks but not the x/y/z positions
survexpos - set the x/y/z positions for entrances and fixed points
tunnel - read in the Tunnel files - which scans the survey scans too
reinit - clear database (delete everything) and make empty tables. Import nothing.
syncuser - needed after reloading database from SQL backup
autologbooks - Not used. read in autologbooks (what are these?)
dumplogbooks - Not used. write out autologbooks (not working?)
surveyimgs - Not used. read in scans by-expo, must run after "people".
and [runlabel] is an optional string identifying this run of the script
in the stored profiling data 'import_profile.json'
if [runlabel] is absent or begins with "F-" then it will skip the :memory: pass
caves and logbooks must be run on an empty db before the others as they
set up db tables used by the others.
the in-memory phase is on an empty db, so always runs reinit, caves & people for this phase
reset - this is normal usage, clear database and reread everything
desc
caves - read in the caves
logbooks - read in the logbooks
autologbooks
dumplogbooks
people
QMs - read in the QM files
resetend
scans - read in the scanned survey notes
survex - read in the survex files
survexpos
surveys
tunnel - read in the Tunnel files
writeCaves
""")
if __name__ == "__main__":
@@ -405,66 +197,52 @@ if __name__ == "__main__":
import sys
import django
django.setup()
if len(sys.argv)>2:
runlabel = sys.argv[len(sys.argv)-1]
else:
runlabel=None
jq = JobQueue(runlabel)
if len(sys.argv)==1:
usage()
exit()
elif "test" in sys.argv:
jq.enq("caves",import_caves)
jq.enq("people",import_people)
elif "caves" in sys.argv:
jq.enq("caves",import_caves)
elif "logbooks" in sys.argv:
jq.enq("logbooks",import_logbooks)
elif "people" in sys.argv:
jq.enq("people",import_people)
elif "QMs" in sys.argv:
jq.enq("QMs",import_QMs)
elif "reset" in sys.argv:
jq.enq("reinit",reinit_db)
jq.enq("dirsredirect",dirsredirect)
jq.enq("caves",import_caves)
jq.enq("people",import_people)
jq.enq("scans",import_surveyscans)
jq.enq("logbooks",import_logbooks)
jq.enq("QMs",import_QMs)
jq.enq("tunnel",import_tunnelfiles)
#jq.enq("survexblks",import_survexblks)
#jq.enq("survexpos",import_survexpos)
if "desc" in sys.argv:
resetdesc()
elif "scans" in sys.argv:
jq.enq("scans",import_surveyscans)
elif "survex" in sys.argv:
jq.enq("survexblks",import_survexblks)
elif "survexpos" in sys.argv:
jq.enq("survexpos",import_survexpos)
import_surveyscans()
elif "caves" in sys.argv:
import_caves()
elif "people" in sys.argv:
import_people()
elif "QMs" in sys.argv:
import_QMs()
elif "tunnel" in sys.argv:
jq.enq("tunnel",import_tunnelfiles)
elif "surveyimgs" in sys.argv:
jq.enq("surveyimgs",import_surveyimgs) # imports into tables which are never read
elif "autologbooks" in sys.argv: # untested in 2020
import_tunnelfiles()
elif "reset" in sys.argv:
reset()
elif "resetend" in sys.argv:
#import_logbooks()
import_QMs()
try:
import_tunnelfiles()
except:
print("Tunnel files parser broken.")
import_surveys()
import_descriptions()
parse_descriptions()
elif "survex" in sys.argv:
# management.call_command('syncdb', interactive=False) # this sets the path so that import settings works in import_survex
import_survex()
elif "survexpos" in sys.argv:
# management.call_command('syncdb', interactive=False) # this sets the path so that import settings works in import_survex
import parsers.survex
parsers.survex.LoadPos()
elif "logbooks" in sys.argv:
# management.call_command('syncdb', interactive=False) # this sets the path so that import settings works in import_survex
import_logbooks()
elif "autologbooks" in sys.argv:
import_auto_logbooks()
elif "dumplogbooks" in sys.argv: # untested in 2020
elif "dumplogbooks" in sys.argv:
dumplogbooks()
# elif "writecaves" in sys.argv: # untested in 2020 - will overwrite input files!!
# writeCaves()
elif "profile" in sys.argv:
jq.loadprofiles()
jq.showprofile()
exit()
elif "writeCaves" in sys.argv:
writeCaves()
elif "surveys" in sys.argv:
import_surveys()
elif "help" in sys.argv:
usage()
exit()
elif "reload_db" in sys.argv:
reload_db()
else:
print("%s not recognised" % sys.argv)
usage()
print(("%s not recognised as a command." % sys.argv[1]))
exit()
jq.run()
jq.showprofile()


@@ -2,15 +2,17 @@ FROM python:2.7-stretch
#COPY backports.list /etc/apt/sources.list.d/
RUN apt-get -y update && apt-get install -y mercurial fonts-freefont-ttf locales survex
RUN apt-get -y update && apt-get install -y mercurial \
fonts-freefont-ttf locales survex python-levenshtein \
python-pygraphviz
#RUN apt-get -y -t -backports install survex
# Set the locale
RUN locale-gen en_GB.UTF-8
ENV LANG en_GB.UTF-8
ENV LANGUAGE en_GB:en
ENV LC_ALL en_GB.UTF-8
WORKDIR /opt/expo/troggle
COPY requirements.txt .


@@ -1 +1 @@
requirements.txt.dj-1.7.11
requirements.txt.dj-1.10


@@ -0,0 +1,13 @@
Django==1.10.8
django-registration==2.1.2
mysql
django-imagekit
Image
django-tinymce
smartencoding
fuzzywuzzy
GitPython
unidecode
django-extensions
pygraphviz
python-Levenshtein


@@ -6,4 +6,7 @@ django-imagekit
Image
django-tinymce==2.7.0
smartencoding
fuzzywuzzy
GitPython
unidecode
django-extensions

dump.py

@@ -1,69 +0,0 @@
# Mimic the sqlite3 console shell's .dump command
# Author: Paul Kippes <kippesp@gmail.com>
# Every identifier in sql is quoted based on a comment in sqlite
# documentation "SQLite adds new keywords from time to time when it
# takes on new features. So to prevent your code from being broken by
# future enhancements, you should normally quote any identifier that
# is an English language word, even if you do not have to."
def _iterdump(connection):
"""
Returns an iterator to the dump of the database in an SQL text format.
Used to produce an SQL dump of the database. Useful to save an in-memory
database for later restoration. This function should not be called
directly but instead called from the Connection method, iterdump().
"""
cu = connection.cursor()
yield('BEGIN TRANSACTION;')
# sqlite_master table contains the SQL CREATE statements for the database.
q = """
SELECT "name", "type", "sql"
FROM "sqlite_master"
WHERE "sql" NOT NULL AND
"type" == 'table'
ORDER BY "name"
"""
schema_res = cu.execute(q)
for table_name, type, sql in schema_res.fetchall():
if table_name == 'sqlite_sequence':
yield('DELETE FROM "sqlite_sequence";')
elif table_name == 'sqlite_stat1':
yield('ANALYZE "sqlite_master";')
elif table_name.startswith('sqlite_'):
continue
# NOTE: Virtual table support not implemented
#elif sql.startswith('CREATE VIRTUAL TABLE'):
# qtable = table_name.replace("'", "''")
# yield("INSERT INTO sqlite_master(type,name,tbl_name,rootpage,sql)"\
# "VALUES('table','{0}','{0}',0,'{1}');".format(
# qtable,
# sql.replace("''")))
else:
yield('{0};'.format(sql))
# Build the insert statement for each row of the current table
table_name_ident = table_name.replace('"', '""')
res = cu.execute('PRAGMA table_info("{0}")'.format(table_name_ident))
column_names = [str(table_info[1]) for table_info in res.fetchall()]
q = """SELECT 'INSERT INTO "{0}" VALUES({1})' FROM "{0}";""".format(
table_name_ident,
",".join("""'||quote("{0}")||'""".format(col.replace('"', '""')) for col in column_names))
query_res = cu.execute(q)
for row in query_res:
yield(row[0]) # '{0}'.format(row[0]) had unicode errors
# Now when the type is 'index', 'trigger', or 'view'
q = """
SELECT "name", "type", "sql"
FROM "sqlite_master"
WHERE "sql" NOT NULL AND
"type" IN ('index', 'trigger', 'view')
"""
schema_res = cu.execute(q)
for name, type, sql in schema_res.fetchall():
yield('{0};'.format(sql))
yield('COMMIT;')
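# Minimal usage sketch, assuming a standard sqlite3 connection (this is
# essentially what databaseReset.memdumpsql does with the django connection):
# import sqlite3
# conn = sqlite3.connect(":memory:")
# with open("memdump.sql", "w") as f:
#     for line in _iterdump(conn):
#         f.write(line + "\n")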


@@ -33,4 +33,3 @@ def writeQmTable(outfile,cave):
cavewriter.writerow(headers)
for qm in cave.get_QMs():
cavewriter.writerow(qmRow(qm))


@@ -0,0 +1,34 @@
# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2020-02-18 16:01
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
('core', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='EntranceRedirect',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('originalURL', models.CharField(max_length=200)),
('entrance', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.Entrance')),
],
),
migrations.CreateModel(
name='Redirect',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('originalURL', models.CharField(max_length=200, unique=True)),
('newURL', models.CharField(max_length=200)),
],
),
]



@@ -33,7 +33,6 @@ def flatpage(request, path):
except EntranceRedirect.DoesNotExist:
pass
if path.startswith("noinfo") and settings.PUBLIC_SITE and not request.user.is_authenticated():
print("flat path noinfo", path)
return HttpResponseRedirect(reverse("auth_login") + '?next=%s' % request.path)
@@ -48,7 +47,7 @@ def flatpage(request, path):
path = path + "index.htm"
except IOError:
return render(request, 'pagenotfound.html', {'path': path})
else:
try:
filetobeopened = os.path.normpath(settings.EXPOWEB + path)
o = open(filetobeopened, "rb")
@@ -56,7 +55,7 @@ def flatpage(request, path):
return render(request, 'pagenotfound.html', {'path': path})
if path.endswith(".htm") or path.endswith(".html"):
html = o.read()
m = re.search(r"(.*)<\s*head([^>]*)>(.*)<\s*/head\s*>(.*)<\s*body([^>]*)>(.*)<\s*/body\s*>(.*)", html, re.DOTALL + re.IGNORECASE)
if m:
preheader, headerattrs, head, postheader, bodyattrs, body, postbody = m.groups()
@@ -67,24 +66,15 @@ def flatpage(request, path):
title, = m.groups()
else:
title = ""
m = re.search(r"<meta([^>]*)noedit", head, re.DOTALL + re.IGNORECASE)
if m:
editable = False
else:
editable = True
has_menu = False
menumatch = re.match('(.*)<div id="menu">', body, re.DOTALL + re.IGNORECASE)
if menumatch:
has_menu = True
menumatch = re.match('(.*)<ul id="links">', body, re.DOTALL + re.IGNORECASE)
if menumatch:
has_menu = True
#body, = menumatch.groups()
if re.search(r"iso-8859-1", html):
body = unicode(body, "iso-8859-1")
body = body.strip()
return render(request, 'flatpage.html', {'editable': editable, 'path': path, 'title': title, 'body': body, 'homepage': (path == "index.htm"), 'has_menu': has_menu})
return render(request, 'flatpage.html', {'editable': True, 'path': path, 'title': title, 'body': body, 'homepage': (path == "index.htm"), 'has_menu': has_menu})
else:
return HttpResponse(o.read(), content_type=getmimetype(path))
@@ -134,7 +124,7 @@ def editflatpage(request, path):
return HttpResponse("Page could not be split into header and body")
except IOError:
filefound = False
if request.method == 'POST': # If the form has been submitted...
flatpageForm = FlatPageForm(request.POST) # A form bound to the POST data
@@ -151,7 +141,7 @@ def editflatpage(request, path):
headerargs = ""
postheader = ""
bodyargs = ""
postbody = "</html>"
postbody = "</html>"
body = flatpageForm.cleaned_data["html"]
body = body.replace("\r", "")
result = u"%s<head%s>%s</head>%s<body%s>\n%s</body>%s" % (preheader, headerargs, head, postheader, bodyargs, body, postbody)
@@ -162,7 +152,7 @@ def editflatpage(request, path):
else:
if filefound:
m = re.search(r"<title>(.*)</title>", head, re.DOTALL + re.IGNORECASE)
if m:
title, = m.groups()
else:
title = ""


@@ -1,13 +0,0 @@
"""
Django ImageKit
Author: Justin Driscoll <justin.driscoll@gmail.com>
Version: 0.2
"""
VERSION = "0.2"


@@ -1,21 +0,0 @@
""" Default ImageKit configuration """
from imagekit.specs import ImageSpec
from imagekit import processors
class ResizeThumbnail(processors.Resize):
width = 100
height = 50
crop = True
class EnhanceSmall(processors.Adjustment):
contrast = 1.2
sharpness = 1.1
class SampleReflection(processors.Reflection):
size = 0.5
background_color = "#000000"
class DjangoAdminThumbnail(ImageSpec):
access_as = 'admin_thumbnail'
processors = [ResizeThumbnail, EnhanceSmall, SampleReflection]


@@ -1,17 +0,0 @@
# Required PIL classes may or may not be available from the root namespace
# depending on the installation method used.
try:
import Image
import ImageFile
import ImageFilter
import ImageEnhance
import ImageColor
except ImportError:
try:
from PIL import Image
from PIL import ImageFile
from PIL import ImageFilter
from PIL import ImageEnhance
from PIL import ImageColor
except ImportError:
raise ImportError("ImageKit was unable to import the Python Imaging Library. Please confirm it's installed and available on your current Python path.")


@@ -1 +0,0 @@


@@ -1 +0,0 @@


@@ -1,134 +0,0 @@
""" Imagekit Image "ImageProcessors"
A processor defines a set of class variables (optional) and a
class method named "process" which processes the supplied image using
the class properties as settings. The process method can be overridden as well, allowing the user to define their
own effects/processes entirely.
"""
from imagekit.lib import *
class ImageProcessor(object):
""" Base image processor class """
@classmethod
def process(cls, image, obj=None):
return image
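# A processor is just class attributes plus a "process" classmethod, so a
# minimal custom one (hypothetical example using PIL's standard convert())
# would be:
# class Grayscale(ImageProcessor):
#     @classmethod
#     def process(cls, image, obj=None):
#         return image.convert("L") # 8-bit grayscale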
class Adjustment(ImageProcessor):
color = 1.0
brightness = 1.0
contrast = 1.0
sharpness = 1.0
@classmethod
def process(cls, image, obj=None):
for name in ['Color', 'Brightness', 'Contrast', 'Sharpness']:
factor = getattr(cls, name.lower())
if factor != 1.0:
image = getattr(ImageEnhance, name)(image).enhance(factor)
return image
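# The loop above uses the standard PIL enhancement idiom, equivalent to e.g.
# image = ImageEnhance.Contrast(image).enhance(1.2) # factor >1.0 boosts contrast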
class Reflection(ImageProcessor):
background_color = '#FFFFFF'
size = 0.0
opacity = 0.6
@classmethod
def process(cls, image, obj=None):
# convert bgcolor string to rgb value
background_color = ImageColor.getrgb(cls.background_color)
# copy original image and flip the orientation
reflection = image.copy().transpose(Image.FLIP_TOP_BOTTOM)
# create a new image filled with the bgcolor the same size
background = Image.new("RGB", image.size, background_color)
# calculate our alpha mask
start = int(255 - (255 * cls.opacity)) # The start of our gradient
steps = int(255 * cls.size) # the number of intermediate values
increment = (255 - start) / float(steps)
mask = Image.new('L', (1, 255))
for y in range(255):
if y < steps:
val = int(y * increment + start)
else:
val = 255
mask.putpixel((0, y), val)
alpha_mask = mask.resize(image.size)
# merge the reflection onto our background color using the alpha mask
reflection = Image.composite(background, reflection, alpha_mask)
# crop the reflection
reflection_height = int(image.size[1] * cls.size)
reflection = reflection.crop((0, 0, image.size[0], reflection_height))
# create new image sized to hold both the original image and the reflection
composite = Image.new("RGB", (image.size[0], image.size[1]+reflection_height), background_color)
# paste the original image and the reflection into the composite image
composite.paste(image, (0, 0))
composite.paste(reflection, (0, image.size[1]))
# return the image complete with reflection effect
return composite
class Resize(ImageProcessor):
width = None
height = None
crop = False
upscale = False
@classmethod
def process(cls, image, obj=None):
cur_width, cur_height = image.size
if cls.crop:
crop_horz = getattr(obj, obj._ik.crop_horz_field, 1)
crop_vert = getattr(obj, obj._ik.crop_vert_field, 1)
ratio = max(float(cls.width)/cur_width, float(cls.height)/cur_height)
resize_x, resize_y = ((cur_width * ratio), (cur_height * ratio))
crop_x, crop_y = (abs(cls.width - resize_x), abs(cls.height - resize_y))
x_diff, y_diff = (int(crop_x / 2), int(crop_y / 2))
box_left, box_right = {
0: (0, cls.width),
1: (int(x_diff), int(x_diff + cls.width)),
2: (int(crop_x), int(resize_x)),
}[crop_horz]
box_upper, box_lower = {
0: (0, cls.height),
1: (int(y_diff), int(y_diff + cls.height)),
2: (int(crop_y), int(resize_y)),
}[crop_vert]
box = (box_left, box_upper, box_right, box_lower)
image = image.resize((int(resize_x), int(resize_y)), Image.ANTIALIAS).crop(box)
else:
if not cls.width is None and not cls.height is None:
ratio = min(float(cls.width)/cur_width,
float(cls.height)/cur_height)
else:
if cls.width is None:
ratio = float(cls.height)/cur_height
else:
ratio = float(cls.width)/cur_width
new_dimensions = (int(round(cur_width*ratio)),
int(round(cur_height*ratio)))
if new_dimensions[0] > cur_width or \
new_dimensions[1] > cur_height:
if not cls.upscale:
return image
image = image.resize(new_dimensions, Image.ANTIALIAS)
return image
class Transpose(ImageProcessor):
""" Rotates or flips the image
Method should be one of the following strings:
- FLIP_LEFT_RIGHT
- FLIP_TOP_BOTTOM
- ROTATE_90
- ROTATE_270
- ROTATE_180
"""
method = 'FLIP_LEFT_RIGHT'
@classmethod
def process(cls, image, obj=None):
return image.transpose(getattr(Image, cls.method))


@@ -1,15 +0,0 @@
""" ImageKit utility functions """
import tempfile
def img_to_fobj(img, format, **kwargs):
tmp = tempfile.TemporaryFile()
if format != 'JPEG':
try:
img.save(tmp, format, **kwargs)
tmp.seek(0)
return tmp
except KeyError:
pass
img.save(tmp, format, **kwargs)
tmp.seek(0)
return tmp


@@ -1,78 +0,0 @@
import sys
# link localsettings to this file for use on a Windows 10 machine running WSL1
# expofiles on a different drive
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3', # 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
'NAME' : 'troggle.sqlite', # Or path to database file if using sqlite3.
'USER' : 'expo', # Not used with sqlite3.
'PASSWORD' : 'sekrit', # Not used with sqlite3.
'HOST' : '', # Set to empty string for localhost. Not used with sqlite3.
'PORT' : '', # Set to empty string for default. Not used with sqlite3.
}
}
EXPOUSER = 'expo'
EXPOUSERPASS = 'nnn:ggggggr'
EXPOUSER_EMAIL = 'philip.sargent@gmail.com'
REPOS_ROOT_PATH = '/mnt/d/CUCC-Expo/'
sys.path.append(REPOS_ROOT_PATH)
sys.path.append(REPOS_ROOT_PATH + 'troggle')
PUBLIC_SITE = False
SURVEX_DATA = REPOS_ROOT_PATH + 'loser/'
TUNNEL_DATA = REPOS_ROOT_PATH + 'drawings/'
THREEDCACHEDIR = REPOS_ROOT_PATH + 'expowebcache/3d/'
CAVERN = 'cavern'
THREEDTOPOS = '3dtopos'
EXPOWEB = REPOS_ROOT_PATH + 'expoweb/'
SURVEYS = REPOS_ROOT_PATH
#SURVEY_SCANS = REPOS_ROOT_PATH + 'expofiles/'
SURVEY_SCANS = '/mnt/f/expofiles/'
#FILES = REPOS_ROOT_PATH + 'expofiles'
FILES = '/mnt/f/expofiles'
EXPOWEB_URL = ''
SURVEYS_URL = '/survey_scans/'
PYTHON_PATH = REPOS_ROOT_PATH + 'troggle/'
URL_ROOT = 'http://127.0.0.1:8000/'
#URL_ROOT = "/mnt/d/CUCC-Expo/expoweb/"
DIR_ROOT = ''#this should end in / if a value is given
#MEDIA_URL = URL_ROOT + DIR_ROOT + '/site_media/'
MEDIA_URL = '/site_media/'
MEDIA_ROOT = REPOS_ROOT_PATH + 'troggle/media/'
MEDIA_ADMIN_DIR = '/usr/lib/python2.7/site-packages/django/contrib/admin/media/'
STATIC_URL = URL_ROOT + 'static/'
STATIC_ROOT = DIR_ROOT + '/mnt/d/CUCC-Expo/'
JSLIB_URL = URL_ROOT + 'javascript/'
TINY_MCE_MEDIA_ROOT = '/usr/share/tinymce/www/'
TINY_MCE_MEDIA_URL = URL_ROOT + DIR_ROOT + '/tinymce_media/'
TEMPLATE_DIRS = (
PYTHON_PATH + "templates",
# Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
LOGFILE = PYTHON_PATH + 'troggle.log'


@@ -2,7 +2,7 @@ import sys
# This is the local settings for use with the docker compose dev setup. It is imported automatically
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql', # 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
'NAME' : 'troggle', # Or path to database file if using sqlite3.
'USER' : 'troggleuser', # Not used with sqlite3.
@@ -12,6 +12,8 @@ DATABASES = {
}
}
ALLOWED_HOSTS = ['*']
EXPOUSER = 'expo'
EXPOUSERPASS = 'somepasshere'
EXPOUSER_EMAIL = 'wookey@wookware.org'
@@ -55,11 +57,4 @@ JSLIB_URL = URL_ROOT + 'javascript/'
TINY_MCE_MEDIA_ROOT = STATIC_ROOT + '/tiny_mce/'
TINY_MCE_MEDIA_URL = STATIC_ROOT + '/tiny_mce/'
TEMPLATE_DIRS = (
PYTHON_PATH + "templates",
# Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
LOGFILE = PYTHON_PATH + 'troggle_log.txt'


@@ -15,6 +15,8 @@ DATABASES = {
}
}
ALLOWED_HOSTS = ['*']
REPOS_ROOT_PATH = '/home/expo/'
sys.path.append(REPOS_ROOT_PATH)
sys.path.append(REPOS_ROOT_PATH + 'troggle')
@@ -53,13 +55,6 @@ JSLIB_PATH = '/usr/share/javascript/'
TINY_MCE_MEDIA_ROOT = '/usr/share/tinymce/www/'
TINY_MCE_MEDIA_URL = URL_ROOT + DIR_ROOT + 'tinymce_media/'
TEMPLATE_DIRS = (
PYTHON_PATH + "templates",
# Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
LOGFILE = '/home/expo/troggle/troggle_log.txt'
FEINCMS_ADMIN_MEDIA='/site_media/feincms/'


@@ -1,6 +1,6 @@
import sys
# This is an example file. Copy it to localsettings.py, set the
# password and _don't_ check that file back to the repo as it exposes
# your/our password to the world!
DATABASES = {
@@ -14,6 +14,8 @@ DATABASES = {
}
}
ALLOWED_HOSTS = ['*']
EXPOUSER = 'expo'
EXPOUSERPASS = 'realpasshere'
EXPOUSER_EMAIL = 'wookey@wookware.org'
@@ -55,19 +57,16 @@ JSLIB_URL = URL_ROOT + 'javascript/'
TINY_MCE_MEDIA_ROOT = STATIC_ROOT + '/tiny_mce/'
TINY_MCE_MEDIA_URL = STATIC_ROOT + '/tiny_mce/'
TEMPLATE_DIRS = (
PYTHON_PATH + "templates",
# Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
LOGFILE = '/home/expo/troggle/troggle.log'
LOGFILE = '/home/expo/troggle/troggle_log.txt'
FEINCMS_ADMIN_MEDIA='/site_media/feincms/'
#EMAIL_HOST = "smtp.gmail.com"
#EMAIL_HOST_USER = "cuccexpo@gmail.com"
#EMAIL_HOST_PASSWORD = "khvtffkhvtff"
#EMAIL_PORT=587
#EMAIL_USE_TLS = True
EMAIL_HOST = "smtp.gmail.com"
EMAIL_HOST_USER = "cuccexpo@gmail.com"
EMAIL_HOST_PASSWORD = "khvtffkhvtff"
EMAIL_PORT=587
EMAIL_USE_TLS = True


@@ -2,7 +2,7 @@ import sys
# link localsettings to this file for use on expo computer in austria
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql', # 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
'NAME' : 'troggle', # Or path to database file if using sqlite3.
'USER' : 'expo', # Not used with sqlite3.
@@ -12,6 +12,8 @@ DATABASES = {
}
}
ALLOWED_HOSTS = ['*']
EXPOUSER = 'expo'
EXPOUSERPASS = 'realpasshere'
EXPOUSER_EMAIL = 'wookey@wookware.org'
@@ -57,11 +59,4 @@ JSLIB_URL = URL_ROOT + 'javascript/'
TINY_MCE_MEDIA_ROOT = '/usr/share/tinymce/www/'
TINY_MCE_MEDIA_URL = URL_ROOT + DIR_ROOT + '/tinymce_media/'
TEMPLATE_DIRS = (
PYTHON_PATH + "templates",
# Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
LOGFILE = PYTHON_PATH + 'troggle_log.txt'


@@ -9,6 +9,8 @@ DATABASES = {
}
}
ALLOWED_HOSTS = ['*']
EXPOUSER = 'expo'
EXPOUSERPASS = 'realpasshere'
EXPOUSER_EMAIL = 'wookey@wookware.org'
@@ -30,7 +32,7 @@ URL_ROOT = 'http://127.0.0.1:8000'
DIR_ROOT = ''#this should end in / if a value is given
PUBLIC_SITE = False
TINY_MCE_MEDIA_ROOT = '/usr/share/tinymce/www/'
TINY_MCE_MEDIA_URL = URL_ROOT + DIR_ROOT + 'tinymce_media/'
PYTHON_PATH = 'C:\\expoweb\\troggle\\'
@@ -56,14 +58,3 @@ EMAIL_USE_TLS = True
# URL that handles the media served from MEDIA_ROOT. Make sure to use a
# trailing slash if there is a path component (optional in other cases).
# Examples: "http://media.lawrence.com", "http://example.com/media/"
TEMPLATE_DIRS = (
"C:/Expo/expoweb/troggle/templates",
# Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)


@@ -1,68 +0,0 @@
import os
import time
import timeit
import settings
os.environ['PYTHONPATH'] = settings.PYTHON_PATH
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings')
from django.core import management
from django.db import connection, close_old_connections
from django.contrib.auth.models import User
from django.http import HttpResponse
from django.core.urlresolvers import reverse
from troggle.core.models import Cave, Entrance
import troggle.flatpages.models
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
def import_auto_logbooks():
import parsers.logbooks
import os
for pt in troggle.core.models.PersonTrip.objects.all():
pt.delete()
for lbe in troggle.core.models.LogbookEntry.objects.all():
lbe.delete()
for expedition in troggle.core.models.Expedition.objects.all():
directory = os.path.join(settings.EXPOWEB,
"years",
expedition.year,
"autologbook")
for root, dirs, filenames in os.walk(directory):
for filename in filenames:
print(os.path.join(root, filename))
parsers.logbooks.parseAutoLogBookEntry(os.path.join(root, filename))
#Temporary function until definitive source of data is transferred.
from django.template.defaultfilters import slugify
from django.template import Context, loader
def dumplogbooks():
def get_name(pe):
if pe.nickname:
return pe.nickname
else:
return pe.person.first_name
for lbe in troggle.core.models.LogbookEntry.objects.all():
dateStr = lbe.date.strftime("%Y-%m-%d")
directory = os.path.join(settings.EXPOWEB,
"years",
lbe.expedition.year,
"autologbook")
if not os.path.isdir(directory):
os.mkdir(directory)
filename = os.path.join(directory,
dateStr + "." + slugify(lbe.title)[:50] + ".html")
if lbe.cave:
print(lbe.cave.reference())
trip = {"title": lbe.title, "html":lbe.text, "cave": lbe.cave.reference(), "caveOrLocation": "cave"}
else:
trip = {"title": lbe.title, "html":lbe.text, "location":lbe.place, "caveOrLocation": "location"}
pts = [pt for pt in lbe.persontrip_set.all() if pt.personexpedition]
persons = [{"name": get_name(pt.personexpedition), "TU": pt.time_underground, "author": pt.is_logbook_entry_author} for pt in pts]
f = open(filename, "wb")
template = loader.get_template('dataformat/logbookentry.html')
context = Context({'trip': trip,
'persons': persons,
'date': dateStr,
'expeditionyear': lbe.expedition.year})
output = template.render(context)
f.write(unicode(output).encode( "utf-8" ))
f.close()
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
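# A hedged note on the Context usage in dumplogbooks() above: under Django 1.10
# the loader returns a backend template whose render() takes a plain dict, so
# the equivalent call (variables as bound in the loop above) is simply:
from django.template import loader
template = loader.get_template('dataformat/logbookentry.html')
output = template.render({'trip': trip, 'persons': persons,
                          'date': dateStr, 'expeditionyear': lbe.expedition.year})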


@@ -29,12 +29,12 @@
}
function redirectSurvey(){
window.location = "{{ settings.URL_ROOT }}/survey/" + document.getElementById("expeditionChooser").value + "%23" + document.getElementById("surveyChooser").value;
window.location = "{{ URL_ROOT }}/survey/" + document.getElementById("expeditionChooser").value + "%23" + document.getElementById("surveyChooser").value;
document.getElementById("progressTableContent").style.display='hidden'
}
function redirectYear(){
window.location = "{{ settings.URL_ROOT }}/survey/" + document.getElementById("expeditionChooser").value + "%23"
window.location = "{{ URL_ROOT }}/survey/" + document.getElementById("expeditionChooser").value + "%23"
}
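# A hedged sketch of how the bare {{ URL_ROOT }} used above could be supplied
# to every template: a small context processor (module path hypothetical),
# registered under TEMPLATES['OPTIONS']['context_processors']:
from django.conf import settings

def url_root(request):
    # exposes settings.URL_ROOT to templates without the settings. prefix
    return {'URL_ROOT': settings.URL_ROOT}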


@@ -46,4 +46,4 @@ def _resolves(url):
return True
except http.Http404:
return False

modelviz.py Normal file → Executable file


@@ -30,7 +30,7 @@ def parseCaveQMs(cave,inputFile):
kh=Cave.objects.get(official_name="Kaninchenh&ouml;hle")
except Cave.DoesNotExist:
print("KH is not in the database. Please run parsers.cavetab first.")
parse_KH_QMs(kh, inputFile=inputFile)
return
qmPath = settings.EXPOWEB+inputFile
@@ -46,9 +46,9 @@ def parseCaveQMs(cave,inputFile):
if cave=='stein':
placeholder, hadToCreate = LogbookEntry.objects.get_or_create(date__year=year, title="placeholder for QMs in 204", text="QMs temporarily attached to this should be re-attached to their actual trips", defaults={"date": date(year, 1, 1),"cave":steinBr})
elif cave=='hauch':
placeholder, hadToCreate = LogbookEntry.objects.get_or_create(date__year=year, title="placeholder for QMs in 234", text="QMs temporarily attached to this should be re-attached to their actual trips", defaults={"date": date(year, 1, 1),"cave":hauchHl})
placeholder, hadToCreate = LogbookEntry.objects.get_or_create(date__year=year, title="placeholder for QMs in 234", text="QMs temporarily attached to this should be re-attached to their actual trips", defaults={"date": date(year, 1, 1),"cave":hauchHl})
if hadToCreate:
print((" - placeholder logbook entry for " + cave + " " + str(year) + " added to database"))
print(cave + " placeholder logbook entry for " + str(year) + " added to database")
QMnum=re.match(r".*?-\d*?-X?(?P<numb>\d*)",line[0]).group("numb")
newQM = QM()
newQM.found_by=placeholder
@@ -59,7 +59,7 @@ def parseCaveQMs(cave,inputFile):
newQM.grade=line[1]
newQM.area=line[2]
newQM.location_description=line[3]
newQM.completion_description=line[4]
newQM.nearest_station_description=line[5]
if newQM.completion_description: # Troggle checks if QMs are completed by checking if they have a ticked_off_by trip. In the table, completion is indicated by the presence of a completion description.
@@ -71,14 +71,14 @@ def parseCaveQMs(cave,inputFile):
if preexistingQM.new_since_parsing==False: #if the pre-existing QM has not been modified, overwrite it
preexistingQM.delete()
newQM.save()
#print((" - overwriting " + str(preexistingQM) +"\r"))
print("overwriting " + str(preexistingQM) +"\r")
else: # otherwise, print that it was ignored
print((" - preserving " + str(preexistingQM) + ", which was edited in admin \r"))
print("preserving " + str(preexistingQM) + ", which was edited in admin \r")
except QM.DoesNotExist: #if there is no pre-existing QM, save the new one
newQM.save()
# print("QM "+str(newQM) + ' added to database\r')
newQM.save()
print("QM "+str(newQM) + ' added to database\r')
except KeyError: #check on this one
continue
except IndexError:
@@ -106,9 +106,9 @@ def parse_KH_QMs(kh, inputFile):
'nearest_station_name':res['nearest_station'],
'location_description':res['description']
}
save_carefully(QM,lookupArgs,nonLookupArgs)
parseCaveQMs(cave='stein',inputFile=r"1623/204/qm.csv")
parseCaveQMs(cave='hauch',inputFile=r"1623/234/qm.csv")
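# A minimal sketch of the save_carefully() calling convention used throughout
# the parsers above: look up on the first dict, update or create with the
# second, and get back (object, created) - values here are illustrative:
lookupArgs = {'number': '1', 'found_by': placeholder}
nonLookupArgs = {'grade': 'B', 'location_description': 'continuation of rift'}
qm, created = save_carefully(QM, lookupArgs, nonLookupArgs)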

parsers/caves.py Executable file → Normal file

@@ -152,10 +152,10 @@ def readcave(filename):
slug = slug,
primary = primary)
except:
message = " ! Can't find text (slug): %s, skipping %s" % (slug, context)
message = "Can't find text (slug): %s, skipping %s" % (slug, context)
models.DataIssue.objects.create(parser='caves', message=message)
print(message)
primary = False
for entrance in entrances:
slug = getXML(entrance, "entranceslug", maxItems = 1, context = context)[0]
@@ -164,23 +164,22 @@ def readcave(filename):
entrance = models.Entrance.objects.get(entranceslug__slug = slug)
ce = models.CaveAndEntrance.objects.update_or_create(cave = c, entrance_letter = letter, entrance = entrance)
except:
message = " ! Entrance text (slug) %s missing %s" % (slug, context)
message = "Entrance text (slug) %s missing %s" % (slug, context)
models.DataIssue.objects.create(parser='caves', message=message)
print(message)
def getXML(text, itemname, minItems = 1, maxItems = None, printwarnings = True, context = ""):
# this next line is where it crashes horribly if a stray umlaut creeps in. Will fix itself in python3
items = re.findall("<%(itemname)s>(.*?)</%(itemname)s>" % {"itemname": itemname}, text, re.S)
if len(items) < minItems and printwarnings:
message = " ! %(count)i %(itemname)s found, at least %(min)i expected" % {"count": len(items),
message = "%(count)i %(itemname)s found, at least %(min)i expected" % {"count": len(items),
"itemname": itemname,
"min": minItems} + context
models.DataIssue.objects.create(parser='caves', message=message)
print(message)
if maxItems is not None and len(items) > maxItems and printwarnings:
message = " ! %(count)i %(itemname)s found, no more than %(max)i expected" % {"count": len(items),
message = "%(count)i %(itemname)s found, no more than %(max)i expected" % {"count": len(items),
"itemname": itemname,
"max": maxItems} + context
models.DataIssue.objects.create(parser='caves', message=message)


@@ -1,23 +1,24 @@
# -*- coding: utf-8 -*-
from __future__ import (absolute_import, division,
print_function)
import csv
import re
import datetime, time
import os
import pickle
from django.conf import settings
from django.template.defaultfilters import slugify
from troggle.core.models import DataIssue, Expedition
import troggle.core.models as models
from parsers.people import GetPersonExpeditionNameLookup
from parsers.cavetab import GetCaveLookup
from django.template.defaultfilters import slugify
from django.utils.timezone import get_current_timezone
from django.utils.timezone import make_aware
import csv
import re
import datetime
import os
from fuzzywuzzy import fuzz
from utils import save_carefully
#
#
# When we edit logbook entries, allow a "?" after any piece of data to say we've frigged it and
# it can be checked up later from the hard-copy if necessary; or it's not possible to determine (name, trip place, etc)
#
@@ -31,6 +32,7 @@ def GetTripPersons(trippeople, expedition, logtime_underground):
round_bracket_regex = re.compile(r"[\(\[].*?[\)\]]")
for tripperson in re.split(r",|\+|&amp;|&(?!\w+;)| and ", trippeople):
tripperson = tripperson.strip()
tripperson = tripperson.strip('.')
mul = re.match(r"<u>(.*?)</u>$(?i)", tripperson)
if mul:
tripperson = mul.group(1).strip()
@@ -42,6 +44,15 @@ def GetTripPersons(trippeople, expedition, logtime_underground):
print(" - No name match for: '%s'" % tripperson)
message = "No name match for: '%s' in year '%s'" % (tripperson, expedition.year)
models.DataIssue.objects.create(parser='logbooks', message=message)
print(" - Let's try something fuzzy")
fuzzy_matches = {}
for person in GetPersonExpeditionNameLookup(expedition):
fuzz_num = fuzz.ratio(tripperson.lower(), person)
if fuzz_num > 50:
#print(" - %s -> %s = %d" % (tripperson.lower(), person, fuzz_num))
fuzzy_matches[person] = fuzz_num
for i in sorted(fuzzy_matches.items(), key = lambda kv:(kv[1]), reverse=True):
print(' - %s -> %s' % (i[0], i[1]))
res.append((personyear, logtime_underground))
if mul:
author = personyear
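# A self-contained sketch of the fuzzy fallback above, assuming the
# fuzzywuzzy package is installed (fuzz.ratio scores string similarity 0-100;
# names are illustrative, taken from elsewhere in this changeset):
from fuzzywuzzy import fuzz
lookup = ["becka lawson", "julian todd", "mark shinwell"]
scores = {name: fuzz.ratio("beka lawson", name) for name in lookup}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    if score > 50:
        print(' - %s -> %s' % (name, score))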
@@ -78,20 +89,13 @@ def GetTripCave(place): #need to be fuzzier about matching here. Already a very
print("No cave found for place " , place)
return
logentries = [] # the entire logbook is a single object: a list of entries
noncaveplaces = [ "Journey", "Loser Plateau" ]
noncaveplaces = [ "Journey", "Loser Plateau" ]
def EnterLogIntoDbase(date, place, title, text, trippeople, expedition, logtime_underground, entry_type="wiki"):
""" saves a logbook entry and related persontrips """
global logentries
entrytuple = (date, place, title, text,
trippeople, expedition, logtime_underground, entry_type)
logentries.append(entrytuple)
trippersons, author = GetTripPersons(trippeople, expedition, logtime_underground)
if not author:
print(" * Skipping logentry: " + title + " - no author for entry")
print(" - Skipping logentry: " + title + " - no author for entry")
message = "Skipping logentry: %s - no author for entry in year '%s'" % (title, expedition.year)
models.DataIssue.objects.create(parser='logbooks', message=message)
return
@@ -108,12 +112,13 @@ def EnterLogIntoDbase(date, place, title, text, trippeople, expedition, logtime_
nonLookupAttribs={'place':place, 'text':text, 'expedition':expedition, 'cave':cave, 'slug':slugify(title)[:50], 'entry_type':entry_type}
lbo, created=save_carefully(models.LogbookEntry, lookupAttribs, nonLookupAttribs)
for tripperson, time_underground in trippersons:
lookupAttribs={'personexpedition':tripperson, 'logbook_entry':lbo}
nonLookupAttribs={'time_underground':time_underground, 'is_logbook_entry_author':(tripperson == author)}
#print nonLookupAttribs
save_carefully(models.PersonTrip, lookupAttribs, nonLookupAttribs)
def ParseDate(tripdate, year):
""" Interprets dates in the expo logbooks and returns a correct datetime.date object """
mdatestandard = re.match(r"(\d\d\d\d)-(\d\d)-(\d\d)", tripdate)
@@ -127,13 +132,14 @@ def ParseDate(tripdate, year):
day, month, year = int(mdategoof.group(1)), int(mdategoof.group(2)), int(mdategoof.group(4)) + yadd
else:
assert False, tripdate
return datetime.date(year, month, day)
return make_aware(datetime.datetime(year, month, day), get_current_timezone())
# 2006, 2008 - 2009
# 2006, 2008 - 2010
def Parselogwikitxt(year, expedition, txt):
trippara = re.findall(r"===(.*?)===([\s\S]*?)(?====)", txt)
for triphead, triptext in trippara:
tripheadp = triphead.split("|")
#print "ttt", tripheadp
assert len(tripheadp) == 3, (tripheadp, triptext)
tripdate, tripplace, trippeople = tripheadp
tripsplace = tripplace.split(" - ")
@@ -141,14 +147,19 @@ def Parselogwikitxt(year, expedition, txt):
tul = re.findall(r"T/?U:?\s*(\d+(?:\.\d*)?|unknown)\s*(hrs|hours)?", triptext)
if tul:
#assert len(tul) <= 1, (triphead, triptext)
#assert tul[0][1] in ["hrs", "hours"], (triphead, triptext)
tu = tul[0][0]
else:
tu = ""
#assert tripcave == "Journey", (triphead, triptext)
#print tripdate
ldate = ParseDate(tripdate.strip(), year)
#print "\n", tripcave, "--- ppp", trippeople, len(triptext)
EnterLogIntoDbase(date = ldate, place = tripcave, title = tripplace, text = triptext, trippeople=trippeople, expedition=expedition, logtime_underground=0)
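# A quick, standalone check of the T/U (time underground) pattern above:
import re
tul = re.findall(r"T/?U:?\s*(\d+(?:\.\d*)?|unknown)\s*(hrs|hours)?",
                 "Great trip, nobody drowned. T/U 7.5 hrs")
print(tul)   # [('7.5', 'hrs')]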
# 2002, 2004, 2005, 2007, 2010 - now
# 2002, 2004, 2005, 2007, 2011 - 2018
def Parseloghtmltxt(year, expedition, txt):
#print(" - Starting log html parser")
tripparas = re.findall(r"<hr\s*/>([\s\S]*?)(?=<hr)", txt)
@@ -156,7 +167,7 @@ def Parseloghtmltxt(year, expedition, txt):
for trippara in tripparas:
#print(" - HR detected - maybe a trip?")
logbook_entry_count += 1
s = re.match(r'''(?x)(?:\s*<div\sclass="tripdate"\sid=".*?">.*?</div>\s*<p>)? # second date
\s*(?:<a\s+id="(.*?)"\s*/>\s*</a>)?
\s*<div\s+class="tripdate"\s*(?:id="(.*?)")?>(.*?)</div>(?:<p>)?
@@ -168,7 +179,8 @@ def Parseloghtmltxt(year, expedition, txt):
''', trippara)
if not s:
if not re.search(r"Rigging Guide", trippara):
print(("can't parse: ", trippara)) # this is 2007 which needs editing
print("can't parse: ", trippara) # this is 2007 which needs editing
#assert s, trippara
continue
tripid, tripid1, tripdate, trippeople, triptitle, triptext, tu = s.groups()
ldate = ParseDate(tripdate.strip(), year)
@@ -177,12 +189,15 @@ def Parseloghtmltxt(year, expedition, txt):
tripcave = triptitles[0]
else:
tripcave = "UNKNOWN"
#print("\n", tripcave, "--- ppp", trippeople, len(triptext))
ltriptext = re.sub(r"</p>", "", triptext)
ltriptext = re.sub(r"\s*?\n\s*", " ", ltriptext)
ltriptext = re.sub(r"<p>", "</br></br>", ltriptext).strip()
ltriptext = re.sub(r"<p>", "\n\n", ltriptext).strip()
EnterLogIntoDbase(date = ldate, place = tripcave, title = triptitle, text = ltriptext,
trippeople=trippeople, expedition=expedition, logtime_underground=0,
entry_type="html")
if logbook_entry_count == 0:
print(" - No trip entrys found in logbook, check the syntax matches htmltxt format")
# main parser for 1991 - 2001. simpler because the data has been hacked so much to fit it
@@ -196,9 +211,12 @@ def Parseloghtml01(year, expedition, txt):
tripid = mtripid and mtripid.group(1) or ""
tripheader = re.sub(r"</?(?:[ab]|span)[^>]*>", "", tripheader)
#print " ", [tripheader]
#continue
tripdate, triptitle, trippeople = tripheader.split("|")
ldate = ParseDate(tripdate.strip(), year)
mtu = re.search(r'<p[^>]*>(T/?U.*)', triptext)
if mtu:
tu = mtu.group(1)
@@ -210,17 +228,21 @@ def Parseloghtml01(year, expedition, txt):
tripcave = triptitles[0].strip()
ltriptext = triptext
mtail = re.search(r'(?:<a href="[^"]*">[^<]*</a>|\s|/|-|&amp;|</?p>|\((?:same day|\d+)\))*$', ltriptext)
if mtail:
#print mtail.group(0)
ltriptext = ltriptext[:mtail.start(0)]
ltriptext = re.sub(r"</p>", "", ltriptext)
ltriptext = re.sub(r"\s*?\n\s*", " ", ltriptext)
ltriptext = re.sub(r"<p>|<br>", "\n\n", ltriptext).strip()
#ltriptext = re.sub("[^\s0-9a-zA-Z\-.,:;'!]", "NONASCII", ltriptext)
ltriptext = re.sub(r"</?u>", "_", ltriptext)
ltriptext = re.sub(r"</?i>", "''", ltriptext)
ltriptext = re.sub(r"</?b>", "'''", ltriptext)
#print ldate, trippeople.strip()
# could include the tripid (url link for cross referencing)
EnterLogIntoDbase(date=ldate, place=tripcave, title=triptitle, text=ltriptext,
trippeople=trippeople, expedition=expedition, logtime_underground=0,
entry_type="html")
@@ -247,6 +269,7 @@ def Parseloghtml03(year, expedition, txt):
tripcave = triptitles[0]
else:
tripcave = "UNKNOWN"
#print tripcave, "--- ppp", triptitle, trippeople, len(triptext)
ltriptext = re.sub(r"</p>", "", triptext)
ltriptext = re.sub(r"\s*?\n\s*", " ", ltriptext)
ltriptext = re.sub(r"<p>", "\n\n", ltriptext).strip()
@@ -276,95 +299,53 @@ def SetDatesFromLogbookEntries(expedition):
def LoadLogbookForExpedition(expedition):
""" Parses all logbook entries for one expedition
"""
global logentries
logbook_parseable = False
logbook_cached = False
yearlinks = settings.LOGBOOK_PARSER_SETTINGS
expologbase = os.path.join(settings.EXPOWEB, "years")
if expedition.year in yearlinks:
logbookfile = os.path.join(expologbase, yearlinks[expedition.year][0])
parsefunc = yearlinks[expedition.year][1]
else:
logbookfile = os.path.join(expologbase, expedition.year, settings.DEFAULT_LOGBOOK_FILE)
parsefunc = settings.DEFAULT_LOGBOOK_PARSER
cache_filename = logbookfile + ".cache"
""" Parses all logbook entries for one expedition """
try:
bad_cache = False
now = time.time()
cache_t = os.path.getmtime(cache_filename)
if os.path.getmtime(logbookfile) - cache_t > 2: # at least 2 secs later
bad_cache= True
if now - cache_t > 30*24*60*60:
bad_cache= True
if bad_cache:
print(" - ! Cache is either stale or more than 30 days old. Deleting it.")
os.remove(cache_filename)
logentries=[]
print(" ! Removed stale or corrupt cache file")
raise
print(" - Reading cache: " + cache_filename, end='')
expowebbase = os.path.join(settings.EXPOWEB, "years")
yearlinks = settings.LOGBOOK_PARSER_SETTINGS
logbook_parseable = False
if expedition.year in yearlinks:
year_settings = yearlinks[expedition.year]
file_in = open(os.path.join(expowebbase, year_settings[0]))
txt = file_in.read().decode("latin1")
file_in.close()
parsefunc = year_settings[1]
logbook_parseable = True
print(" - Parsing logbook: " + year_settings[0] + "\n - Using parser: " + year_settings[1])
else:
try:
with open(cache_filename, "rb") as f:
logentries = pickle.load(f)
print(" -- Loaded ", len(logentries), " log entries")
logbook_cached = True
except:
print("\n ! Failed to load corrupt cache. Deleting it.\n")
os.remove(cache_filename)
logentries=[]
raise
except : # no cache found
#print(" - No cache \"" + cache_filename +"\"")
try:
file_in = open(logbookfile,'rb')
file_in = open(os.path.join(expowebbase, expedition.year, settings.DEFAULT_LOGBOOK_FILE))
txt = file_in.read().decode("latin1")
file_in.close()
logbook_parseable = True
print((" - Using: " + parsefunc + " to parse " + logbookfile))
print("No set parser found using default")
parsefunc = settings.DEFAULT_LOGBOOK_PARSER
except (IOError):
logbook_parseable = False
print((" ! Couldn't open logbook " + logbookfile))
print("Couldn't open default logbook file and nothing in settings for expo " + expedition.year)
if logbook_parseable:
parser = globals()[parsefunc]
parser(expedition.year, expedition, txt)
SetDatesFromLogbookEntries(expedition)
# and this has also stored all the log entries in logentries[]
if len(logentries) >0:
print(" - Cacheing " , len(logentries), " log entries")
with open(cache_filename, "wb") as fc:
pickle.dump(logentries, fc, 2)
else:
print(" ! NO TRIP entries found in logbook, check the syntax.")
logentries=[] # flush for next year
if logbook_cached:
i=0
for entrytuple in range(len(logentries)):
date, place, title, text, trippeople, expedition, logtime_underground, \
entry_type = logentries[i]
EnterLogIntoDbase(date, place, title, text, trippeople, expedition, logtime_underground,\
entry_type)
i +=1
#return "TOLOAD: " + year + " " + str(expedition.personexpedition_set.all()[1].logbookentry_set.count()) + " " + str(models.PersonTrip.objects.filter(personexpedition__expedition=expedition).count())
def LoadLogbooks():
""" This is the master function for parsing all logbooks into the Troggle database.
"""
DataIssue.objects.filter(parser='logbooks').delete()
expos = Expedition.objects.all()
nologbook = ["1976", "1977","1978","1979","1980","1980","1981","1983","1984",
"1985","1986","1987","1988","1989","1990",]
""" This is the master function for parsing all logbooks into the Troggle database. """
# Clear the logbook data issues as we are reloading
models.DataIssue.objects.filter(parser='logbooks').delete()
# Fetch all expos
expos = models.Expedition.objects.all()
for expo in expos:
if expo.year not in nologbook:
print((" - Logbook for: " + expo.year))
LoadLogbookForExpedition(expo)
print("\nLoading Logbook for: " + expo.year)
# Load logbook for expo
LoadLogbookForExpedition(expo)
dateRegex = re.compile(r'<span\s+class="date">(\d\d\d\d)-(\d\d)-(\d\d)</span>', re.S)
@@ -388,25 +369,25 @@ def parseAutoLogBookEntry(filename):
year, month, day = [int(x) for x in dateMatch.groups()]
date = datetime.date(year, month, day)
else:
errors.append(" - Date could not be found")
errors.append("Date could not be found")
expeditionYearMatch = expeditionYearRegex.search(contents)
if expeditionYearMatch:
try:
expedition = models.Expedition.objects.get(year = expeditionYearMatch.groups()[0])
personExpeditionNameLookup = GetPersonExpeditionNameLookup(expedition)
except Expedition.DoesNotExist:
errors.append(" - Expedition not in database")
except models.Expedition.DoesNotExist:
errors.append("Expedition not in database")
else:
errors.append(" - Expedition Year could not be parsed")
errors.append("Expediton Year could not be parsed")
titleMatch = titleRegex.search(contents)
if titleMatch:
title, = titleMatch.groups()
if len(title) > settings.MAX_LOGBOOK_ENTRY_TITLE_LENGTH:
errors.append(" - Title too long")
errors.append("Title too long")
else:
errors.append(" - Title could not be found")
errors.append("Title could not be found")
caveMatch = caveRegex.search(contents)
if caveMatch:
@@ -415,24 +396,24 @@ def parseAutoLogBookEntry(filename):
cave = models.getCaveByReference(caveRef)
except AssertionError:
cave = None
errors.append(" - Cave not found in database")
errors.append("Cave not found in database")
else:
cave = None
locationMatch = locationRegex.search(contents)
if locationMatch:
location, = locationMatch.groups()
else:
location = None
if cave is None and location is None:
errors.append(" - Location nor cave could not be found")
errors.append("Location nor cave could not be found")
reportMatch = reportRegex.search(contents)
if reportMatch:
report, = reportMatch.groups()
else:
errors.append(" - Contents could not be found")
errors.append("Contents could not be found")
if errors:
return errors # Easiest to bail out at this point as we need to make sure that we know which expedition to look for people from.
people = []
@@ -443,29 +424,29 @@ def parseAutoLogBookEntry(filename):
if name.lower() in personExpeditionNameLookup:
personExpo = personExpeditionNameLookup[name.lower()]
else:
errors.append(" - Person could not be found in database")
errors.append("Person could not be found in database")
author = bool(author)
else:
errors.append(" - Persons name could not be found")
errors.append("Persons name could not be found")
TUMatch = TURegex.search(contents)
if TUMatch:
TU, = TUMatch.groups()
else:
errors.append(" - TU could not be found")
errors.append("TU could not be found")
if not errors:
people.append((name, author, TU))
if errors:
return errors # Bail out before committing to the database
logbookEntry = LogbookEntry(date = date,
return errors # Bail out before committing to the database
logbookEntry = models.LogbookEntry(date = date,
expedition = expedition,
title = title, cave = cave, place = location,
text = report, slug = slugify(title)[:50],
filename = filename)
logbookEntry.save()
for name, author, TU in people:
models.PersonTrip(personexpedition = personExpo,
time_underground = TU,
logbook_entry = logbookEntry,
is_logbook_entry_author = author).save()
print(logbookEntry)


@@ -7,69 +7,62 @@ from utils import save_carefully
from HTMLParser import HTMLParser
from unidecode import unidecode
# def saveMugShot(mugShotPath, mugShotFilename, person):
# if mugShotFilename.startswith(r'i/'): #if filename in cell has the directory attached (I think they all do), remove it
# mugShotFilename=mugShotFilename[2:]
# else:
# mugShotFilename=mugShotFilename # just in case one doesn't
# dummyObj=models.DPhoto(file=mugShotFilename)
# #Put a copy of the file in the right place. mugShotObj.file.path is determined by the django filesystemstorage specified in models.py
# if not os.path.exists(dummyObj.file.path):
# shutil.copy(mugShotPath, dummyObj.file.path)
# mugShotObj, created = save_carefully(
# models.DPhoto,
# lookupAttribs={'is_mugshot':True, 'file':mugShotFilename},
# nonLookupAttribs={'caption':"Mugshot for "+person.first_name+" "+person.last_name}
# )
# if created:
# mugShotObj.contains_person.add(person)
# mugShotObj.save()
def saveMugShot(mugShotPath, mugShotFilename, person):
if mugShotFilename.startswith(r'i/'): #if filename in cell has the directory attached (I think they all do), remove it
mugShotFilename=mugShotFilename[2:]
else:
mugShotFilename=mugShotFilename # just in case one doesn't
dummyObj=models.DPhoto(file=mugShotFilename)
#Put a copy of the file in the right place. mugShotObj.file.path is determined by the django filesystemstorage specified in models.py
if not os.path.exists(dummyObj.file.path):
shutil.copy(mugShotPath, dummyObj.file.path)
mugShotObj, created = save_carefully(
models.DPhoto,
lookupAttribs={'is_mugshot':True, 'file':mugShotFilename},
nonLookupAttribs={'caption':"Mugshot for "+person.first_name+" "+person.last_name}
)
if created:
mugShotObj.contains_person.add(person)
mugShotObj.save()
def parseMugShotAndBlurb(personline, header, person):
"""create mugshot Photo instance"""
mugShotFilename=personline[header["Mugshot"]]
mugShotPath = os.path.join(settings.EXPOWEB, "folk", mugShotFilename)
if mugShotPath[-3:]=='jpg': #if person just has an image, add it
#saveMugShot(mugShotPath=mugShotPath, mugShotFilename=mugShotFilename, person=person)
pass
saveMugShot(mugShotPath=mugShotPath, mugShotFilename=mugShotFilename, person=person)
elif mugShotPath[-3:]=='htm': #if person has an html page, find the image(s) and add it. Also, add the text from the html page to the "blurb" field in his model instance.
personPageOld=open(mugShotPath,'r').read()
if not person.blurb:
pblurb=re.search('<body>.*<hr',personPageOld,re.DOTALL)
if pblurb:
#this needs to be refined, take care of the HTML and make sure it doesn't match beyond the blurb.
#Only finds the first image, not all of them
person.blurb=re.search('<body>.*<hr',personPageOld,re.DOTALL).group()
else:
print "ERROR: --------------- Broken link or Blurb parse error in ", mugShotFilename
#for mugShotFilename in re.findall('i/.*?jpg',personPageOld,re.DOTALL):
# mugShotPath = os.path.join(settings.EXPOWEB, "folk", mugShotFilename)
# saveMugShot(mugShotPath=mugShotPath, mugShotFilename=mugShotFilename, person=person)
person.blurb=re.search('<body>.*<hr',personPageOld,re.DOTALL).group() #this needs to be refined, take care of the HTML and make sure it doesn't match beyond the blurb
for mugShotFilename in re.findall('i/.*?jpg',personPageOld,re.DOTALL):
mugShotPath = os.path.join(settings.EXPOWEB, "folk", mugShotFilename)
saveMugShot(mugShotPath=mugShotPath, mugShotFilename=mugShotFilename, person=person)
person.save()
def LoadPersonsExpos():
persontab = open(os.path.join(settings.EXPOWEB, "folk", "folk.csv"))
personreader = csv.reader(persontab)
headers = personreader.next()
header = dict(zip(headers, range(len(headers))))
# make expeditions
print(" - Loading expeditions")
print("Loading expeditions")
years = headers[5:]
for year in years:
lookupAttribs = {'year':year}
nonLookupAttribs = {'name':"CUCC expo %s" % year}
save_carefully(models.Expedition, lookupAttribs, nonLookupAttribs)
# make persons
print(" - Loading personexpeditions")
print("Loading personexpeditions")
for personline in personreader:
name = personline[header["Name"]]
@@ -94,11 +87,11 @@ def LoadPersonsExpos():
lastname = ""
lookupAttribs={'first_name':firstname, 'last_name':(lastname or "")}
nonLookupAttribs={'is_vfho':personline[header["VfHO member"]], 'fullname':fullname}
nonLookupAttribs={'is_vfho':bool(personline[header["VfHO member"]]), 'fullname':fullname}
person, created = save_carefully(models.Person, lookupAttribs, nonLookupAttribs)
parseMugShotAndBlurb(personline=personline, header=header, person=person)
# make person expedition from table
for year, attended in zip(headers, personline)[5:]:
expedition = models.Expedition.objects.get(year=year)
@@ -107,26 +100,6 @@ def LoadPersonsExpos():
nonLookupAttribs = {'nickname':nickname, 'is_guest':(personline[header["Guest"]] == "1")}
save_carefully(models.PersonExpedition, lookupAttribs, nonLookupAttribs)
# this fills in those people for whom 2008 was their first expo
#print "Loading personexpeditions 2008"
#expoers2008 = """Edvin Deadman,Kathryn Hopkins,Djuke Veldhuis,Becka Lawson,Julian Todd,Natalie Uomini,Aaron Curtis,Tony Rooke,Ollie Stevens,Frank Tully,Martin Jahnke,Mark Shinwell,Jess Stirrups,Nial Peters,Serena Povia,Olly Madge,Steve Jones,Pete Harley,Eeva Makiranta,Keith Curtis""".split(",")
#expomissing = set(expoers2008)
#for name in expomissing:
# firstname, lastname = name.split()
# is_guest = name in ["Eeva Makiranta", "Keith Curtis"]
# print "2008:", name
# persons = list(models.Person.objects.filter(first_name=firstname, last_name=lastname))
# if not persons:
# person = models.Person(first_name=firstname, last_name = lastname, is_vfho = False, mug_shot = "")
# #person.Sethref()
# person.save()
# else:
# person = persons[0]
# expedition = models.Expedition.objects.get(year="2008")
# personexpedition = models.PersonExpedition(person=person, expedition=expedition, nickname="", is_guest=is_guest)
# personexpedition.save()
# used in other referencing parser functions
# expedition name lookup cached for speed (it's a very big list)
Gpersonexpeditionnamelookup = { }
@@ -135,11 +108,11 @@ def GetPersonExpeditionNameLookup(expedition):
res = Gpersonexpeditionnamelookup.get(expedition.name)
if res:
return res
res = { }
duplicates = set()
#print("Calculating GetPersonExpeditionNameLookup for " + expedition.year)
print("Calculating GetPersonExpeditionNameLookup for " + expedition.year)
personexpeditions = models.PersonExpedition.objects.filter(expedition=expedition)
htmlparser = HTMLParser()
for personexpedition in personexpeditions:
@@ -166,16 +139,16 @@ def GetPersonExpeditionNameLookup(expedition):
possnames.append(personexpedition.nickname.lower() + " " + l[0])
if str(personexpedition.nickname.lower() + l[0]) not in possnames:
possnames.append(personexpedition.nickname.lower() + l[0])
for possname in possnames:
if possname in res:
duplicates.add(possname)
else:
res[possname] = personexpedition
for possname in duplicates:
del res[possname]
Gpersonexpeditionnamelookup[expedition.name] = res
return res
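# A toy run of the duplicate-suppression logic above: an alias claimed by two
# different people is ambiguous and is dropped from the lookup entirely
# (names illustrative, taken from elsewhere in this changeset):
res, duplicates = {}, set()
for possname, personexpedition in [("mark", "Mark Shinwell"),
                                   ("olly", "Olly Madge"),
                                   ("olly", "Ollie Stevens")]:
    if possname in res:
        duplicates.add(possname)
    else:
        res[possname] = personexpedition
for possname in duplicates:
    del res[possname]
print(res)   # {'mark': 'Mark Shinwell'}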


@@ -1,5 +1,7 @@
'''
This module is the part of troggle that parses descriptions of cave parts (subcaves) from the legacy html files and saves them in the troggle database as instances of the model Subcave. Unfortunately, this parser can not be very flexible because the legacy format is poorly structured.
This module is the part of troggle that parses descriptions of cave parts (subcaves) from the legacy html
files and saves them in the troggle database as instances of the model Subcave.
Unfortunately, this parser can not be very flexible because the legacy format is poorly structured.
'''
import sys, os
@@ -29,12 +31,12 @@ def importSubcaves(cave):
link[0])
subcaveFile=open(subcaveFilePath,'r')
description=subcaveFile.read().decode('iso-8859-1').encode('utf-8')
lookupAttribs={'title':link[1], 'cave':cave}
nonLookupAttribs={'description':description}
newSubcave=save_carefully(Subcave,lookupAttribs=lookupAttribs,nonLookupAttribs=nonLookupAttribs)
logging.info("Added " + unicode(newSubcave) + " to " + unicode(cave))
logging.info("Added " + unicode(newSubcave) + " to " + unicode(cave))
except IOError:
logging.info("Subcave import couldn't open "+subcaveFilePath)

parsers/survex.py Executable file → Normal file

@@ -1,67 +1,47 @@
from __future__ import absolute_import, division, print_function
import os
import re
import sys
import time
from datetime import datetime, timedelta
from subprocess import PIPE, Popen, call
from django.utils.timezone import get_current_timezone, make_aware
import troggle.settings as settings
import troggle.core.models as models
import troggle.core.models_survex as models_survex
from troggle.parsers.people import GetPersonExpeditionNameLookup
from troggle.core.views_caves import MapLocations
import troggle.settings as settings
"""A 'survex block' is a *begin...*end set of cave data.
A 'survexscansfolder' is what we today call a "survey scans folder" or a "wallet".
"""
from subprocess import call, Popen, PIPE
from troggle.parsers.people import GetPersonExpeditionNameLookup
from django.utils.timezone import get_current_timezone
from django.utils.timezone import make_aware
import re
import os
from datetime import datetime
line_leg_regex = re.compile(r"[\d\-+.]+$")
survexlegsalllength = 0.0
survexlegsnumber = 0
def LoadSurvexLineLeg(survexblock, stardata, sline, comment, cave):
global survexlegsalllength
global survexlegsnumber
# The try catches here need replacing as they are relatively expensive
ls = sline.lower().split()
ssfrom = survexblock.MakeSurvexStation(ls[stardata["from"]])
ssto = survexblock.MakeSurvexStation(ls[stardata["to"]])
survexleg = models.SurvexLeg(block=survexblock, stationfrom=ssfrom, stationto=ssto)
# this next fails for two surface survey svx files which use / for decimal point
# e.g. '29/09' in the tape measurement, or use decimals but in brackets, e.g. (06.05)
if stardata["type"] == "normal":
try:
survexleg.tape = float(ls[stardata["tape"]])
survexlegsnumber += 1
except ValueError:
print("! Tape misread in", survexblock.survexfile.path)
print(" Stardata:", stardata)
print(" Line:", ls)
message = ' ! Value Error: Tape misread in line %s in %s' % (ls, survexblock.survexfile.path)
models.DataIssue.objects.create(parser='survex', message=message)
survexleg.tape = 0
print("Tape misread in", survexblock.survexfile.path)
print("Stardata:", stardata)
print("Line:", ls)
survexleg.tape = 1000
try:
lclino = ls[stardata["clino"]]
except:
print("! Clino misread in", survexblock.survexfile.path)
print(" Stardata:", stardata)
print(" Line:", ls)
message = ' ! Value Error: Clino misread in line %s in %s' % (ls, survexblock.survexfile.path)
models.DataIssue.objects.create(parser='survex', message=message)
print("Clino misread in", survexblock.survexfile.path)
print("Stardata:", stardata)
print("Line:", ls)
lclino = error
try:
lcompass = ls[stardata["compass"]]
except:
print("! Compass misread in", survexblock.survexfile.path)
print(" Stardata:", stardata)
print(" Line:", ls)
message = ' ! Value Error: Compass misread in line %s in %s' % (ls, survexblock.survexfile.path)
models.DataIssue.objects.create(parser='survex', message=message)
print("Compass misread in", survexblock.survexfile.path)
print("Stardata:", stardata)
print("Line:", ls)
lcompass = error
if lclino == "up":
survexleg.compass = 0.0
@@ -73,11 +53,9 @@ def LoadSurvexLineLeg(survexblock, stardata, sline, comment, cave):
try:
survexleg.compass = float(lcompass)
except ValueError:
print("! Compass misread in", survexblock.survexfile.path)
print(" Stardata:", stardata)
print(" Line:", ls)
message = ' ! Value Error: line %s in %s' % (ls, survexblock.survexfile.path)
models.DataIssue.objects.create(parser='survex', message=message)
print("Compass misread in", survexblock.survexfile.path)
print("Stardata:", stardata)
print("Line:", ls)
survexleg.compass = 1000
survexleg.clino = -90.0
else:
@@ -90,20 +68,15 @@ def LoadSurvexLineLeg(survexblock, stardata, sline, comment, cave):
survexleg.cave = cave
# only save proper legs
# No need to save as we are measuring lengths only on parsing now.
# delete the object so that django autosaving doesn't save it.
survexleg = None
#survexleg.save()
survexleg.save()
itape = stardata.get("tape")
if itape:
try:
survexblock.totalleglength += float(ls[itape])
survexlegsalllength += float(ls[itape])
except ValueError:
print("! Length not added")
# No need to save as we are measuring lengths only on parsing now.
#survexblock.save()
print("Length not added")
survexblock.save()
def LoadSurvexEquate(survexblock, sline):
@@ -121,102 +94,62 @@ stardatadefault = {"type":"normal", "t":"leg", "from":0, "to":1, "tape":2, "comp
stardataparamconvert = {"length":"tape", "bearing":"compass", "gradient":"clino"}
regex_comment = re.compile(r"([^;]*?)\s*(?:;\s*(.*))?\n?$")
regex_ref = re.compile(r'.*?ref.*?(\d+)\s*#\s*(X)?\s*(\d+)')
regex_ref = re.compile(r'.*?ref.*?(\d+)\s*#\s*(\d+)')
regex_star = re.compile(r'\s*\*[\s,]*(\w+)\s*(.*?)\s*(?:;.*)?$')
# years from 1960 to 2039
regex_starref = re.compile(r'^\s*\*ref[\s.:]*((?:19[6789]\d)|(?:20[0123]\d))\s*#?\s*(X)?\s*(.*?\d+.*?)$(?i)')
# regex_starref = re.compile("""?x # VERBOSE mode - can't get this to work
# ^\s*\*ref # look for *ref at start of line
# [\s.:]* # some spaces, stops or colons
# ((?:19[6789]\d)|(?:20[0123]\d)) # a date from 1960 to 2039 - captured as one field
# \s*# # spaces then hash separator
# ?\s*(X) # optional X - captured
# ?\s*(.*?\d+.*?) # maybe a space, then at least one digit in the string - captured
# $(?i)""", re.X) # the end (do the whole thing case insensitively)
regex_team = re.compile(r"(Insts|Notes|Tape|Dog|Useless|Pics|Helper|Disto|Consultant)\s+(.*)$(?i)")
regex_team_member = re.compile(r" and | / |, | & | \+ |^both$|^none$(?i)")
regex_qm = re.compile(r'^\s*QM(\d)\s+?([a-dA-DxX])\s+([\w\-]+)\.(\d+)\s+(([\w\-]+)\.(\d+)|\-)\s+(.+)$')
insp = ""
callcount = 0
def RecursiveLoad(survexblock, survexfile, fin, textlines):
"""Follows the *include links in all the survex files from the root file 1623.svx
and reads in the survex blocks, other data and the wallet references (survexscansfolder) as it
goes. This part of the data import process is where the maximum memory is used and where it
crashes on memory-constrained machines.
"""
iblankbegins = 0
text = [ ]
stardata = stardatadefault
teammembers = [ ]
global insp
global callcount
global survexlegsnumber
# uncomment to print out all files during parsing
print(insp+" - Reading file: " + survexblock.survexfile.path + " <> " + survexfile.path)
# uncomment to print out all files during parsing
print(" - Reading file: " + survexblock.survexfile.path)
stamp = datetime.now()
lineno = 0
sys.stderr.flush();
callcount +=1
if callcount >=10:
callcount=0
print(".", file=sys.stderr,end='')
# Try to find the cave in the DB if not use the string as before
path_match = re.search(r"caves-(\d\d\d\d)/(\d+|\d\d\d\d-?\w+-\d+)/", survexblock.survexfile.path)
if path_match:
pos_cave = '%s-%s' % (path_match.group(1), path_match.group(2))
# print(insp+'Match')
# print(insp+os_cave)
# print('Match')
# print(pos_cave)
cave = models.getCaveByReference(pos_cave)
if cave:
survexfile.cave = cave
svxlines = ''
svxlines = fin.read().splitlines()
# print(insp+'Cave - preloop ' + str(survexfile.cave))
# print(insp+survexblock)
# print('Cave - preloop ' + str(survexfile.cave))
# print(survexblock)
for svxline in svxlines:
# print(insp+survexblock)
# print(survexblock)
# print(insp+svxline)
# print(svxline)
# if not svxline:
# print(insp+' - Not survex')
# print(' - Not survex')
# return
# textlines.append(svxline)
lineno += 1
# print(insp+' - Line: %d' % lineno)
# print(' - Line: %d' % lineno)
# break the line at the comment
sline, comment = regex_comment.match(svxline.strip()).groups()
# detect ref line pointing to the scans directory
mref = comment and regex_ref.match(comment)
if mref:
yr, letterx, wallet = mref.groups()
if not letterx:
letterx = ""
else:
letterx = "X"
if len(wallet)<2:
wallet = "0" + wallet
refscan = "%s#%s%s" % (yr, letterx, wallet )
#print(insp+' - Wallet ;ref - %s - looking for survexscansfolder' % refscan)
refscan = "%s#%s" % (mref.group(1), mref.group(2))
survexscansfolders = models.SurvexScansFolder.objects.filter(walletname=refscan)
if survexscansfolders:
survexblock.survexscansfolder = survexscansfolders[0]
#survexblock.refscandir = "%s/%s%%23%s" % (mref.group(1), mref.group(1), mref.group(2))
survexblock.save()
# print(insp+' - Wallet ; ref - %s - found in survexscansfolders' % refscan)
else:
message = ' ! Wallet ; ref - %s - NOT found in survexscansfolders %s-%s-%s' % (refscan,yr,letterx,wallet)
print(insp+message)
models.DataIssue.objects.create(parser='survex', message=message)
continue
# This whole section should be moved if we can have *QM become a proper survex command
# Spec of QM in SVX files; currently commented out, needs to be added to survex
@@ -226,7 +159,7 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
# ;QM1 a hobnob_hallway_2.42 - junction of keyhole passage
qmline = comment and regex_qm.match(comment)
if qmline:
# print(insp+qmline.groups())
print(qmline.groups())
#(u'1', u'B', u'miraclemaze', u'1.17', u'-', None, u'\tcontinuation of rift')
qm_no = qmline.group(1)
qm_grade = qmline.group(2)
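# A standalone check that regex_qm matches the documented comment form above:
import re
regex_qm = re.compile(r'^\s*QM(\d)\s+?([a-dA-DxX])\s+([\w\-]+)\.(\d+)\s+(([\w\-]+)\.(\d+)|\-)\s+(.+)$')
print(regex_qm.match('QM1 a hobnob_hallway_2.42 - junction of keyhole passage').groups())
# ('1', 'a', 'hobnob_hallway_2', '42', '-', None, None, 'junction of keyhole passage')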
@@ -236,74 +169,49 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
qm_resolve_station = qmline.group(7)
qm_notes = qmline.group(8)
# print(insp+'Cave - %s' % survexfile.cave)
# print(insp+'QM no %d' % int(qm_no))
# print(insp+'QM grade %s' % qm_grade)
# print(insp+'QM section %s' % qm_from_section)
# print(insp+'QM station %s' % qm_from_station)
# print(insp+'QM res section %s' % qm_resolve_section)
# print(insp+'QM res station %s' % qm_resolve_station)
# print(insp+'QM notes %s' % qm_notes)
print('Cave - %s' % survexfile.cave)
print('QM no %d' % int(qm_no))
print('QM grade %s' % qm_grade)
print('QM section %s' % qm_from_section)
print('QM station %s' % qm_from_station)
print('QM res section %s' % qm_resolve_section)
print('QM res station %s' % qm_resolve_station)
print('QM notes %s' % qm_notes)
# If the QM isn't resolved (has no resolving station) then load it
if not qm_resolve_section or qm_resolve_section == '-' or qm_resolve_section == 'None':
from_section = models.SurvexBlock.objects.filter(name=qm_from_section)
# If we can find a section (survex note chunk, named)
if len(from_section) > 0:
# print(insp+from_section[0])
print(from_section[0])
from_station = models.SurvexStation.objects.filter(block=from_section[0], name=qm_from_station)
# If we can find a from station then we have the nearest station and can import it
if len(from_station) > 0:
# print(insp+from_station[0])
print(from_station[0])
qm = models.QM.objects.create(number=qm_no,
nearest_station=from_station[0],
grade=qm_grade.upper(),
location_description=qm_notes)
else:
# print(insp+' - QM found but resolved')
pass
print('QM found but resolved')
#print(insp+'Cave -sline ' + str(cave))
#print('Cave -sline ' + str(cave))
if not sline:
continue
# detect the star ref command
mstar = regex_starref.match(sline)
if mstar:
yr,letterx,wallet = mstar.groups()
if not letterx:
letterx = ""
else:
letterx = "X"
if len(wallet)<2:
wallet = "0" + wallet
assert (int(yr)>1960 and int(yr)<2039), "Wallet year out of bounds: %s" % yr
assert (int(wallet)<100), "Wallet number more than 100: %s" % wallet
refscan = "%s#%s%s" % (yr, letterx, wallet)
survexscansfolders = models.SurvexScansFolder.objects.filter(walletname=refscan)
if survexscansfolders:
survexblock.survexscansfolder = survexscansfolders[0]
survexblock.save()
# print(insp+' - Wallet *REF - %s - found in survexscansfolders' % refscan)
else:
message = ' ! Wallet *REF - %s - NOT found in survexscansfolders %s-%s-%s' % (refscan,yr,letterx,wallet)
print(insp+message)
models.DataIssue.objects.create(parser='survex', message=message)
continue
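# A tiny illustration of the wallet-name normalisation above: year, an
# optional X marker and a zero-padded number become e.g. "1999#X02":
yr, letterx, wallet = "1999", "X", "2"
if len(wallet) < 2:
    wallet = "0" + wallet
print("%s#%s%s" % (yr, letterx, wallet))   # 1999#X02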
# detect the star command
mstar = regex_star.match(sline)
if not mstar:
if "from" in stardata:
# print(insp+'Cave ' + str(survexfile.cave))
# print(insp+survexblock)
# print('Cave ' + str(survexfile.cave))
# print(survexblock)
LoadSurvexLineLeg(survexblock, stardata, sline, comment, survexfile.cave)
# print(insp+' - From: ')
# print(insp+stardata)
# print(' - From: ')
#print(stardata)
pass
elif stardata["type"] == "passage":
LoadSurvexLinePassage(survexblock, stardata, sline, comment)
# print(insp+' - Passage: ')
# print(' - Passage: ')
#Missing "station" in stardata.
continue
@@ -312,26 +220,24 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
cmd = cmd.lower()
if re.match("include$(?i)", cmd):
includepath = os.path.join(os.path.split(survexfile.path)[0], re.sub(r"\.svx$", "", line))
print(insp+' - Include path found including - ' + includepath)
print(' - Include file found including - ' + includepath)
# Try to find the cave in the DB if not use the string as before
path_match = re.search(r"caves-(\d\d\d\d)/(\d+|\d\d\d\d-?\w+-\d+)/", includepath)
if path_match:
pos_cave = '%s-%s' % (path_match.group(1), path_match.group(2))
# print(insp+pos_cave)
# print(pos_cave)
cave = models.getCaveByReference(pos_cave)
if cave:
survexfile.cave = cave
else:
print(insp+' - No match in DB (i) for %s, so loading..' % includepath)
print('No match for %s' % includepath)
includesurvexfile = models.SurvexFile(path=includepath)
includesurvexfile.save()
includesurvexfile.SetDirectory()
if includesurvexfile.exists():
survexblock.save()
fininclude = includesurvexfile.OpenFile()
insp += "> "
RecursiveLoad(survexblock, includesurvexfile, fininclude, textlines)
insp = insp[2:]
elif re.match("begin$(?i)", cmd):
if line:
@@ -340,26 +246,23 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
path_match = re.search(r"caves-(\d\d\d\d)/(\d+|\d\d\d\d-?\w+-\d+)/", newsvxpath)
if path_match:
pos_cave = '%s-%s' % (path_match.group(1), path_match.group(2))
# print(insp+pos_cave)
print(pos_cave)
cave = models.getCaveByReference(pos_cave)
if cave:
survexfile.cave = cave
else:
print(insp+' - No match (b) for %s' % newsvxpath)
print('No match for %s' % newsvxpath)
previousnlegs = survexlegsnumber
name = line.lower()
print(insp+' - Begin found for: ' + name)
# print(insp+'Block cave: ' + str(survexfile.cave))
print(' - Begin found for: ' + name)
# print('Block cave: ' + str(survexfile.cave))
survexblockdown = models.SurvexBlock(name=name, begin_char=fin.tell(), parent=survexblock, survexpath=survexblock.survexpath+"."+name, cave=survexfile.cave, survexfile=survexfile, totalleglength=0.0)
survexblockdown.save()
survexblock.save()
survexblock = survexblockdown
# print(insp+survexblockdown)
# print(survexblockdown)
textlinesdown = [ ]
insp += "> "
RecursiveLoad(survexblockdown, survexfile, fin, textlinesdown)
insp = insp[2:]
else:
iblankbegins += 1
@@ -367,21 +270,17 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
if iblankbegins:
iblankbegins -= 1
else:
#survexblock.text = "".join(textlines)
# .text not used, using it for number of legs per block
legsinblock = survexlegsnumber - previousnlegs
print("LEGS: {} (previous: {}, now:{})".format(legsinblock,previousnlegs,survexlegsnumber))
survexblock.text = str(legsinblock)
survexblock.text = "".join(textlines)
survexblock.save()
# print(insp+' - End found: ')
# print(' - End found: ')
endstamp = datetime.now()
timetaken = endstamp - stamp
# print(insp+' - Time to process: ' + str(timetaken))
# print(' - Time to process: ' + str(timetaken))
return
elif re.match("date$(?i)", cmd):
if len(line) == 10:
#print(insp+' - Date found: ' + line)
#print(' - Date found: ' + line)
survexblock.date = make_aware(datetime.strptime(re.sub(r"\.", "-", line), '%Y-%m-%d'), get_current_timezone())
expeditions = models.Expedition.objects.filter(year=line[:4])
if expeditions:
@@ -392,7 +291,7 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
elif re.match("team$(?i)", cmd):
pass
# print(insp+' - Team found: ')
# print(' - Team found: ')
mteammember = regex_team.match(line)
if mteammember:
for tm in regex_team_member.split(mteammember.group(2)):
@@ -407,7 +306,7 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
personrole.save()
elif cmd == "title":
#print(insp+' - Title found: ')
#print(' - Title found: ')
survextitle = models.SurvexTitle(survexblock=survexblock, title=line.strip('"'), cave=survexfile.cave)
survextitle.save()
pass
@@ -417,11 +316,11 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
pass
elif cmd == "data":
#print(insp+' - Data found: ')
#print(' - Data found: ')
ls = line.lower().split()
stardata = { "type":ls[0] }
#print(insp+' - Star data: ', stardata)
#print(insp+ls)
#print(' - Star data: ', stardata)
#print(ls)
for i in range(0, len(ls)):
stardata[stardataparamconvert.get(ls[i], ls[i])] = i - 1
if ls[0] in ["normal", "cartesian", "nosurvey"]:
@@ -432,30 +331,25 @@ def RecursiveLoad(survexblock, survexfile, fin, textlines):
assert ls[0] == "passage", line
elif cmd == "equate":
#print(insp+' - Equate found: ')
#print(' - Equate found: ')
LoadSurvexEquate(survexblock, line)
elif cmd == "fix":
#print(insp+' - Fix found: ')
#print(' - Fix found: ')
survexblock.MakeSurvexStation(line.split()[0])
else:
#print(insp+' - Stuff')
#print(' - Stuff')
if cmd not in ["sd", "include", "units", "entrance", "data", "flags", "title", "export", "instrument",
"calibrate", "set", "infer", "alias", "cs", "declination", "case"]:
message = "! Bad svx command in line:%s %s %s %s" % (cmd, line, survexblock, survexblock.survexfile.path)
print(insp+message)
models.DataIssue.objects.create(parser='survex', message=message)
"calibrate", "set", "infer", "alias", "ref", "cs", "declination", "case"]:
print("Unrecognised command in line:", cmd, line, survexblock, survexblock.survexfile.path)
endstamp = datetime.now()
timetaken = endstamp - stamp
# print(insp+' - Time to process: ' + str(timetaken))
# print(' - Time to process: ' + str(timetaken))
def LoadAllSurvexBlocks():
global survexlegsalllength
global survexlegsnumber
print(' - Flushing All Survex Blocks...')
print('Loading All Survex Blocks...')
models.SurvexBlock.objects.all().delete()
models.SurvexFile.objects.all().delete()
@@ -467,21 +361,12 @@ def LoadAllSurvexBlocks():
models.SurvexStation.objects.all().delete()
print(" - Data flushed")
# Clear the data issues as we are reloading
models.DataIssue.objects.filter(parser='survex').delete()
print(' - Loading All Survex Blocks...')
print(' - redirecting stdout to loadsurvexblks.log...')
stdout_orig = sys.stdout
# Redirect sys.stdout to the file
sys.stdout = open('loadsurvexblks.log', 'w')
survexfile = models.SurvexFile(path=settings.SURVEX_TOPNAME, cave=None)
survexfile.save()
survexfile.SetDirectory()
#Load all
# this is the first so id=1
survexblockroot = models.SurvexBlock(name="root", survexpath="", begin_char=0, cave=None, survexfile=survexfile, totalleglength=0.0)
survexblockroot.save()
fin = survexfile.OpenFile()
@@ -489,150 +374,30 @@ def LoadAllSurvexBlocks():
# The real work starts here
RecursiveLoad(survexblockroot, survexfile, fin, textlines)
fin.close()
survexblockroot.totalleglength = survexlegsalllength
survexblockroot.text = str(survexlegsnumber)
#survexblockroot.text = "".join(textlines) these are all blank
survexblockroot.text = "".join(textlines)
survexblockroot.save()
# Close the file
sys.stdout.close()
print("+", file=sys.stderr)
sys.stderr.flush();
# Restore sys.stdout to our old saved file handler
sys.stdout = stdout_orig
print(" - total number of survex legs: {}".format(survexlegsnumber))
print(" - total leg lengths loaded: {}m".format(survexlegsalllength))
print(' - Loaded All Survex Blocks.')
poslineregex = re.compile(r"^\(\s*([+-]?\d*\.\d*),\s*([+-]?\d*\.\d*),\s*([+-]?\d*\.\d*)\s*\)\s*([^\s]+)$")
def LoadPos():
"""Run cavern to produce a complete .3d file, then run 3dtopos to produce a table of
all survey point positions. Then lookup each position by name to see if we have it in the database
and if we do, then save the x/y/z coordinates.
If we don't have it in the database, print an error message and discard it.
"""
topdata = settings.SURVEX_DATA + settings.SURVEX_TOPNAME
print(' - Generating a list of Pos from %s.svx and then loading...' % (topdata))
# Be careful with the cache file.
# If LoadPos has been run before,
# but without cave import being run before,
# then *everything* may be in the fresh 'not found' cache file.
cachefile = settings.SURVEX_DATA + "posnotfound.cache"
notfoundbefore = {}
if os.path.isfile(cachefile):
# this is not a good test. 1623.svx may never change but *included files may have done.
# When the *include is unrolled, we will be able to get a proper timestamp to use
# and can increase the timeout from 3 days to 30 days.
updtsvx = os.path.getmtime(topdata + ".svx")
updtcache = os.path.getmtime(cachefile)
age = updtcache - updtsvx
print(' svx: %s cache: %s not-found cache is fresher by: %s' % (updtsvx, updtcache, str(timedelta(seconds=age) )))
now = time.time()
if now - updtcache > 3*24*60*60:
print( " cache is more than 3 days old. Deleting.")
os.remove(cachefile)
elif age < 0 :
print(" cache is stale. Deleting.")
os.remove(cachefile)
else:
print(" cache is fresh. Reading...")
try:
with open(cachefile, "r") as f:
for line in f:
l = line.rstrip()
if l in notfoundbefore:
notfoundbefore[l] +=1 # should not be duplicates
print(" DUPLICATE ", line, notfoundbefore[l])
else:
notfoundbefore[l] =1
except:
print(" FAILURE READ opening cache file %s" % (cachefile))
raise
notfoundnow =[]
found = 0
skip = {}
print("\n") # extra line because cavern overwrites the text buffer somehow
# cavern defaults to using same cwd as supplied input file
call([settings.CAVERN, "--output=%s.3d" % (topdata), "%s.svx" % (topdata)])
call([settings.THREEDTOPOS, '%s.3d' % (topdata)], cwd = settings.SURVEX_DATA)
print(" - This next bit takes a while. Matching ~32,000 survey positions. Be patient...")
mappoints = {}
for pt in MapLocations().points():
svxid, number, point_type, label = pt
mappoints[svxid]=True
print('Loading Pos....')
posfile = open("%s.pos" % (topdata))
call([settings.CAVERN, "--output=%s%s.3d" % (settings.SURVEX_DATA, settings.SURVEX_TOPNAME), "%s%s.svx" % (settings.SURVEX_DATA, settings.SURVEX_TOPNAME)])
call([settings.THREEDTOPOS, '%s%s.3d' % (settings.SURVEX_DATA, settings.SURVEX_TOPNAME)], cwd = settings.SURVEX_DATA)
posfile = open("%s%s.pos" % (settings.SURVEX_DATA, settings.SURVEX_TOPNAME))
posfile.readline() #Drop header
survexblockroot = models_survex.SurvexBlock.objects.get(id=1)
for line in posfile.readlines():
r = poslineregex.match(line)
if r:
x, y, z, id = r.groups()
if id in notfoundbefore:
skip[id] = 1
else:
for sid in mappoints:
if id.endswith(sid):
notfoundnow.append(id)
# Now that we don't import any stations, we create it rather than look it up
# ss = models_survex.SurvexStation.objects.lookup(id)
# need to set block_id which means doing a search on all the survex blocks..
# remove dot at end and add one at beginning
blockpath = "." + id[:-len(sid)].strip(".")
try:
sbqs = models_survex.SurvexBlock.objects.filter(survexpath=blockpath)
if len(sbqs)==1:
sb = sbqs[0]
if len(sbqs)>1:
message = ' ! MULTIPLE SurvexBlocks matching Entrance point {} {}'.format(blockpath, sid)
print(message)
models.DataIssue.objects.create(parser='survex', message=message)
sb = sbqs[0]
elif len(sbqs)<=0:
message = ' ! ZERO SurvexBlocks matching Entrance point {} {}'.format(blockpath, sid)
print(message)
models.DataIssue.objects.create(parser='survex', message=message)
sb = survexblockroot
except:
message = ' ! FAIL in getting SurvexBlock matching Entrance point {} {}'.format(blockpath, sid)
print(message)
models.DataIssue.objects.create(parser='survex', message=message)
try:
ss = models_survex.SurvexStation(name=id, block=sb)
ss.x = float(x)
ss.y = float(y)
ss.z = float(z)
ss.save()
found += 1
except:
message = ' ! FAIL to create SurvexStation Entrance point {} {}'.format(blockpath, sid)
print(message)
models.DataIssue.objects.create(parser='survex', message=message)
raise
#print(" - %s failed lookups of SurvexStation.objects. %s found. %s skipped." % (len(notfoundnow),found, len(skip)))
if found > 10: # i.e. a previous cave import has been done
try:
with open(cachefile, "w") as f:
c = len(notfoundnow)+len(skip)
for i in notfoundnow:
pass #f.write("%s\n" % i)
for j in skip:
pass #f.write("%s\n" % j) # NB skip not notfoundbefore
print((' Not-found cache file written: %s entries' % c))
except:
print(" FAILURE WRITE opening cache file %s" % (cachefile))
raise
x, y, z, name = r.groups()
try:
ss = models.SurvexStation.objects.lookup(name)
ss.x = float(x)
ss.y = float(y)
ss.z = float(z)
ss.save()
except:
print("%s not parsed in survex" % name)


@@ -1,21 +1,11 @@
from __future__ import (absolute_import, division,
print_function, unicode_literals)
import sys
import os
import types
import logging
import stat
import sys, os, types, logging, stat
import settings
from troggle.core.models import *
from PIL import Image
import csv
import re
import datetime
#from PIL import Image
from utils import save_carefully
from functools import reduce
import settings
from troggle.core.models import *
def get_or_create_placeholder(year):
""" All surveys must be related to a logbookentry. We don't have a way to
@@ -29,89 +19,138 @@ def get_or_create_placeholder(year):
placeholder_logbook_entry, newly_created = save_carefully(LogbookEntry, lookupAttribs, nonLookupAttribs)
return placeholder_logbook_entry
# dead
def readSurveysFromCSV():
try: # could probably combine these two
surveytab = open(os.path.join(settings.SURVEY_SCANS, "Surveys.csv"))
except IOError:
import cStringIO, urllib
surveytab = cStringIO.StringIO(urllib.urlopen(settings.SURVEY_SCANS + "/Surveys.csv").read())
dialect=csv.Sniffer().sniff(surveytab.read())
surveytab.seek(0,0)
surveyreader = csv.reader(surveytab,dialect=dialect)
headers = surveyreader.next()
header = dict(zip(headers, range(len(headers)))) #set up a dictionary where the indexes are header names and the values are column numbers
# test if the expeditions have been added yet
if Expedition.objects.count()==0:
print("There are no expeditions in the database. Please run the logbook parser.")
sys.exit()
logging.info("Deleting all scanned images")
ScannedImage.objects.all().delete()
logging.info("Deleting all survey objects")
Survey.objects.all().delete()
logging.info("Beginning to import surveys from "+str(os.path.join(settings.SURVEYS, "Surveys.csv"))+"\n"+"-"*60+"\n")
for survey in surveyreader:
# I hate this, but some surveys have a letter eg 2000#34a. The next line deals with that.
walletNumberLetter = re.match(r'(?P<number>\d*)(?P<letter>[a-zA-Z]*)',survey[header['Survey Number']])
# print(walletNumberLetter.groups())
year=survey[header['Year']]
surveyobj = Survey(
expedition = Expedition.objects.filter(year=year)[0],
wallet_number = walletNumberLetter.group('number'),
logbook_entry = get_or_create_placeholder(year),
comments = survey[header['Comments']],
location = survey[header['Location']]
)
surveyobj.wallet_letter = walletNumberLetter.group('letter')
if survey[header['Finished']]=='Yes':
#try and find the sketch_scan
pass
surveyobj.save()
logging.info("added survey " + survey[header['Year']] + "#" + surveyobj.wallet_number + "\r")
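The walletNumberLetter regex above splits a wallet id such as the "34a" in 2000#34a into its numeric and letter parts; a quick self-contained check:

import re
m = re.match(r'(?P<number>\d*)(?P<letter>[a-zA-Z]*)', "34a")
assert m.group('number') == "34"
assert m.group('letter') == "a"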
# dead
def listdir(*directories):
try:
return os.listdir(os.path.join(settings.SURVEYS, *directories))
except:
import urllib.request, urllib.parse, urllib.error
import urllib
url = settings.SURVEYS + reduce(lambda x, y: x + "/" + y, ["listdir"] + list(directories))
folders = urllib.request.urlopen(url.replace("#", "%23")).readlines()
folders = urllib.urlopen(url.replace("#", "%23")).readlines()
return [folder.rstrip(r"/") for folder in folders]
# add survey scans
# def parseSurveyScans(expedition, logfile=None):
# # yearFileList = listdir(expedition.year)
# try:
# yearPath=os.path.join(settings.SURVEY_SCANS, "surveyscans", expedition.year)
# yearFileList=os.listdir(yearPath)
# print(yearFileList)
# for surveyFolder in yearFileList:
# try:
# surveyNumber=re.match(rb'\d\d\d\d#(X?)0*(\d+)',surveyFolder).groups()
# #scanList = listdir(expedition.year, surveyFolder)
# scanList=os.listdir(os.path.join(yearPath,surveyFolder))
# except AttributeError:
# print(("Ignoring file in year folder: " + surveyFolder + "\r"))
# continue
def parseSurveyScans(expedition, logfile=None):
# yearFileList = listdir(expedition.year)
try:
yearPath=os.path.join(settings.SURVEY_SCANS, "surveyscans", expedition.year)
yearFileList=os.listdir(yearPath)
print(yearFileList)
for surveyFolder in yearFileList:
try:
surveyNumber=re.match(r'\d\d\d\d#(X?)0*(\d+)',surveyFolder).groups()
#scanList = listdir(expedition.year, surveyFolder)
scanList=os.listdir(os.path.join(yearPath,surveyFolder))
except AttributeError:
print("Folder: " + surveyFolder + " ignored\r")
continue
# for scan in scanList:
# # Why does this insist on renaming all the scanned image files?
# It produces duplicate names, and all images end up with type .jpg in the scanObj.
# It seems to rely on end users being particularly diligent with filenames, which is NGtH
# try:
# #scanChopped=re.match(rb'(?i).*(notes|elev|plan|extend|elevation)-?(\d*)\.(png|jpg|jpeg|pdf)',scan).groups()
# scanChopped=re.match(rb'(?i)([a-z_-]*\d?[a-z_-]*)(\d*)\.(png|jpg|jpeg|pdf|top|dxf|svg|tdr|th2|xml|txt)',scan).groups()
# scanType,scanNumber,scanFormat=scanChopped
# except AttributeError:
# print(("Ignored (bad name format): " + surveyFolder + '/' + scan + "\r"))
# continue
# scanTest = scanType
# scanType = 'notes'
# match = re.search(rb'(?i)(elev|extend)',scanTest)
# if match:
# scanType = 'elevation'
for scan in scanList:
try:
scanChopped=re.match(r'(?i).*(notes|elev|plan|elevation|extend)(\d*)\.(png|jpg|jpeg)',scan).groups()
scanType,scanNumber,scanFormat=scanChopped
except AttributeError:
print("File: " + scan + " ignored\r")
continue
if scanType == 'elev' or scanType == 'extend':
scanType = 'elevation'
# match = re.search(rb'(?i)(plan)',scanTest)
# if match:
# scanType = 'plan'
if scanNumber=='':
scanNumber=1
# if scanNumber=='':
# scanNumber=1
if type(surveyNumber)==types.TupleType:
surveyLetter=surveyNumber[0]
surveyNumber=surveyNumber[1]
try:
placeholder=get_or_create_placeholder(year=int(expedition.year))
survey=Survey.objects.get_or_create(wallet_number=surveyNumber, wallet_letter=surveyLetter, expedition=expedition, defaults={'logbook_entry':placeholder})[0]
except Survey.MultipleObjectsReturned:
survey=Survey.objects.filter(wallet_number=surveyNumber, wallet_letter=surveyLetter, expedition=expedition)[0]
file_=os.path.join(yearPath, surveyFolder, scan)
scanObj = ScannedImage(
file=file_,
contents=scanType,
number_in_wallet=scanNumber,
survey=survey,
new_since_parsing=False,
)
print("Added scanned image at " + str(scanObj))
#if scanFormat=="png":
#if isInterlacedPNG(os.path.join(settings.SURVEY_SCANS, "surveyscans", file_)):
# print file_+ " is an interlaced PNG. No can do."
#continue
scanObj.save()
except (IOError, OSError):
yearPath=os.path.join(settings.SURVEY_SCANS, "surveyscans", expedition.year)
print("No folder found for " + expedition.year + " at:- " + yearPath)
# if isinstance(surveyNumber, tuple):
# surveyLetter=surveyNumber[0]
# surveyNumber=surveyNumber[1]
# try:
# placeholder=get_or_create_placeholder(year=int(expedition.year))
# survey=Survey.objects.get_or_create(wallet_number=surveyNumber, wallet_letter=surveyLetter, expedition=expedition, defaults={'logbook_entry':placeholder})[0]
# except Survey.MultipleObjectsReturned:
# survey=Survey.objects.filter(wallet_number=surveyNumber, wallet_letter=surveyLetter, expedition=expedition)[0]
# file_=os.path.join(yearPath, surveyFolder, scan)
# scanObj = ScannedImage(
# file=file_,
# contents=scanType,
# number_in_wallet=scanNumber,
# survey=survey,
# new_since_parsing=False,
# )
# print(("Added scanned image at " + str(scanObj)))
# #if scanFormat=="png":
# #if isInterlacedPNG(os.path.join(settings.SURVEY_SCANS, "surveyscans", file_)):
# # print file_+ " is an interlaced PNG. No can do."
# #continue
# scanObj.save()
# except (IOError, OSError):
# yearPath=os.path.join(settings.SURVEY_SCANS, "surveyscans", expedition.year)
# print((" ! No folder found for " + expedition.year + " at:- " + yearPath))
def parseSurveys(logfile=None):
try:
readSurveysFromCSV()
except (IOError, OSError):
print("Survey CSV not found.")
pass
for expedition in Expedition.objects.filter(year__gte=2000): #expos since 2000, because paths and filenames were nonstandard before then
parseSurveyScans(expedition)
# dead
# def isInterlacedPNG(filePath): #We need to check for interlaced PNGs because the thumbnail engine can't handle them (uses PIL)
# file=Image.open(filePath)
# print(filePath)
# if 'interlace' in file.info:
# return file.info['interlace']
# else:
# return False
def isInterlacedPNG(filePath): #We need to check for interlaced PNGs because the thumbnail engine can't handle them (uses PIL)
file=Image.open(filePath)
print(filePath)
if 'interlace' in file.info:
return file.info['interlace']
else:
return False
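Because the PIL-based thumbnail engine cannot handle interlaced PNGs, a caller would be expected to test and skip them; a minimal usage sketch (the path here is hypothetical):

import os
scanpath = os.path.join("surveyscans", "2004", "2004#11", "notes1.png")  # hypothetical path
if scanpath.lower().endswith(".png") and os.path.exists(scanpath):
    if isInterlacedPNG(scanpath):
        print(scanpath + " is an interlaced PNG - skipping thumbnail")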
# handles url or file, so we can refer to a set of scans on another server
@@ -119,7 +158,7 @@ def GetListDir(sdir):
res = [ ]
if sdir[:7] == "http://":
assert False, "Not written"
s = urllib.request.urlopen(sdir)
s = urllib.urlopen(sdir)
else:
for f in os.listdir(sdir):
if f[0] != ".":
@@ -130,69 +169,61 @@ def GetListDir(sdir):
def LoadListScansFile(survexscansfolder):
gld = [ ]
# flatten out any directories in these wallet folders - should not be any
# flatten out any directories in these book files
for (fyf, ffyf, fisdiryf) in GetListDir(survexscansfolder.fpath):
if fisdiryf:
gld.extend(GetListDir(ffyf))
else:
gld.append((fyf, ffyf, fisdiryf))
c=0
for (fyf, ffyf, fisdiryf) in gld:
#assert not fisdiryf, ffyf
if re.search(r"\.(?:png|jpg|jpeg|pdf|svg|gif)(?i)$", fyf):
if re.search(r"\.(?:png|jpg|jpeg)(?i)$", fyf):
survexscansingle = SurvexScanSingle(ffile=ffyf, name=fyf, survexscansfolder=survexscansfolder)
survexscansingle.save()
c+=1
if c>=10:
print(".", end='')
c = 0
# this iterates through the scans directories (either here or on the remote server)
# and builds up the models we can access later
def LoadListScans():
print(' - Loading Survey Scans')
print('Loading Survey Scans...')
SurvexScanSingle.objects.all().delete()
SurvexScansFolder.objects.all().delete()
print(' - deleting all scansFolder and scansSingle objects')
# first do the smkhs (large kh survey scans) directory
survexscansfoldersmkhs = SurvexScansFolder(fpath=os.path.join(settings.SURVEY_SCANS, "../surveys/smkhs"), walletname="smkhs")
print("smkhs", end=' ')
survexscansfoldersmkhs = SurvexScansFolder(fpath=os.path.join(settings.SURVEY_SCANS, "smkhs"), walletname="smkhs")
if os.path.isdir(survexscansfoldersmkhs.fpath):
survexscansfoldersmkhs.save()
LoadListScansFile(survexscansfoldersmkhs)
# iterate into the surveyscans directory
print(' - ', end=' ')
for f, ff, fisdir in GetListDir(settings.SURVEY_SCANS):
for f, ff, fisdir in GetListDir(os.path.join(settings.SURVEY_SCANS, "surveyscans")):
if not fisdir:
continue
# do the year folders
if re.match(r"\d\d\d\d$", f):
print("%s" % f, end=' ')
for fy, ffy, fisdiry in GetListDir(ff):
if fisdiry:
assert fisdiry, ffy
survexscansfolder = SurvexScansFolder(fpath=ffy, walletname=fy)
survexscansfolder.save()
LoadListScansFile(survexscansfolder)
# do the
# do the
elif f != "thumbs":
survexscansfolder = SurvexScansFolder(fpath=ff, walletname=f)
survexscansfolder.save()
LoadListScansFile(survexscansfolder)
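The year-folder test above is a plain four-digit match anchored at the end of the name; a quick self-contained check of what it accepts:

import re
assert re.match(r"\d\d\d\d$", "2004")
assert not re.match(r"\d\d\d\d$", "thumbs")
assert not re.match(r"\d\d\d\d$", "2004x")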
def FindTunnelScan(tunnelfile, path):
scansfolder, scansfile = None, None
mscansdir = re.search(r"(\d\d\d\d#X?\d+\w?|1995-96kh|92-94Surveybookkh|1991surveybook|smkhs)/(.*?(?:png|jpg|pdf|jpeg))$", path)
mscansdir = re.search(r"(\d\d\d\d#X?\d+\w?|1995-96kh|92-94Surveybookkh|1991surveybook|smkhs)/(.*?(?:png|jpg))$", path)
if mscansdir:
scansfolderl = SurvexScansFolder.objects.filter(walletname=mscansdir.group(1))
if len(scansfolderl):
@@ -201,21 +232,18 @@ def FindTunnelScan(tunnelfile, path):
if scansfolder:
scansfilel = scansfolder.survexscansingle_set.filter(name=mscansdir.group(2))
if len(scansfilel):
if len(scansfilel) > 1:
print("BORK more than one image filename matches filter query. ", scansfilel[0])
print("BORK ", tunnelfile.tunnelpath, path)
print("BORK ", mscansdir.group(1), mscansdir.group(2), len(scansfilel))
#assert len(scansfilel) == 1
print(scansfilel, len(scansfilel))
assert len(scansfilel) == 1
scansfile = scansfilel[0]
if scansfolder:
tunnelfile.survexscansfolders.add(scansfolder)
if scansfile:
tunnelfile.survexscans.add(scansfile)
elif path and not re.search(r"\.(?:png|jpg|pdf|jpeg)$(?i)", path):
elif path and not re.search(r"\.(?:png|jpg|jpeg)$(?i)", path):
name = os.path.split(path)[1]
#print("debug-tunnelfileobjects ", tunnelfile.tunnelpath, path, name)
print("ttt", tunnelfile.tunnelpath, path, name)
rtunnelfilel = TunnelFile.objects.filter(tunnelname=name)
if len(rtunnelfilel):
assert len(rtunnelfilel) == 1, ("two paths with name of", path, "need more discrimination coded")
@@ -229,27 +257,25 @@ def FindTunnelScan(tunnelfile, path):
def SetTunnelfileInfo(tunnelfile):
ff = os.path.join(settings.TUNNEL_DATA, tunnelfile.tunnelpath)
tunnelfile.filesize = os.stat(ff)[stat.ST_SIZE]
fin = open(ff,'rb')
fin = open(ff)
ttext = fin.read()
fin.close()
if tunnelfile.filesize <= 0:
print("DEBUG - zero length xml file", ff)
return
mtype = re.search(r"<(fontcolours|sketch)", ttext)
assert mtype, ff
tunnelfile.bfontcolours = (mtype.group(1)=="fontcolours")
tunnelfile.npaths = len(re.findall(r"<skpath", ttext))
mtype = re.search("<(fontcolours|sketch)", ttext)
#assert mtype, ff
if mtype:
tunnelfile.bfontcolours = (mtype.group(1)=="fontcolours")
tunnelfile.npaths = len(re.findall("<skpath", ttext))
tunnelfile.save()
# <tunnelxml tunnelversion="version2009-06-21 Matienzo" tunnelproject="ireby" tunneluser="goatchurch" tunneldate="2009-06-29 23:22:17">
# <pcarea area_signal="frame" sfscaledown="12.282584" sfrotatedeg="-90.76982" sfxtrans="11.676667377221136" sfytrans="-15.677173422877454" sfsketch="204description/scans/plan(38).png" sfstyle="" nodeconnzsetrelative="0.0">
for path, style in re.findall(r'<pcarea area_signal="frame".*?sfsketch="([^"]*)" sfstyle="([^"]*)"', ttext):
for path, style in re.findall('<pcarea area_signal="frame".*?sfsketch="([^"]*)" sfstyle="([^"]*)"', ttext):
FindTunnelScan(tunnelfile, path)
# should also scan and look for survex blocks that might have been included
# and also survex titles as well.
# and also survex titles as well.
tunnelfile.save()
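The pcarea regex above can be exercised against the sample line quoted in the comment; a self-contained check:

import re
ttext = '<pcarea area_signal="frame" sfscaledown="12.282584" sfsketch="204description/scans/plan(38).png" sfstyle="">'
frames = re.findall('<pcarea area_signal="frame".*?sfsketch="([^"]*)" sfstyle="([^"]*)"', ttext)
assert frames == [("204description/scans/plan(38).png", "")]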
@@ -269,6 +295,6 @@ def LoadTunnelFiles():
elif f[-4:] == ".xml":
tunnelfile = TunnelFile(tunnelpath=lf, tunnelname=os.path.split(f[:-4])[1])
tunnelfile.save()
for tunnelfile in TunnelFile.objects.all():
SetTunnelfileInfo(tunnelfile)


@@ -1,60 +0,0 @@
#!/usr/bin/python
from settings import *
import sys
import os
import string
import re
import urlparse
import django
pathsdict={
"ADMIN_MEDIA_PREFIX" : ADMIN_MEDIA_PREFIX,
"ADMIN_MEDIA_PREFIX" : ADMIN_MEDIA_PREFIX,
"CAVEDESCRIPTIONSX" : CAVEDESCRIPTIONS,
"DIR_ROOT" : DIR_ROOT,
#"EMAIL_HOST" : EMAIL_HOST,
#"EMAIL_HOST_USER" : EMAIL_HOST_USER,
"ENTRANCEDESCRIPTIONS" : ENTRANCEDESCRIPTIONS,
"EXPOUSER_EMAIL" : EXPOUSER_EMAIL,
"EXPOUSERPASS" :"<redacted>",
"EXPOUSER" : EXPOUSER,
"EXPOWEB" : EXPOWEB,
"EXPOWEB_URL" : EXPOWEB_URL,
"FILES" : FILES,
"JSLIB_URL" : JSLIB_URL,
"LOGFILE" : LOGFILE,
"LOGIN_REDIRECT_URL" : LOGIN_REDIRECT_URL,
"MEDIA_ADMIN_DIR" : MEDIA_ADMIN_DIR,
"MEDIA_ROOT" : MEDIA_ROOT,
"MEDIA_URL" : MEDIA_URL,
#"PHOTOS_ROOT" : PHOTOS_ROOT,
"PHOTOS_URL" : PHOTOS_URL,
"PYTHON_PATH" : PYTHON_PATH,
"REPOS_ROOT_PATH" : REPOS_ROOT_PATH,
"ROOT_URLCONF" : ROOT_URLCONF,
"STATIC_ROOT" : STATIC_ROOT,
"STATIC_URL" : STATIC_URL,
"SURVEX_DATA" : SURVEX_DATA,
"SURVEY_SCANS" : SURVEY_SCANS,
"SURVEYS" : SURVEYS,
"SURVEYS_URL" : SURVEYS_URL,
"SVX_URL" : SVX_URL,
"TEMPLATE_DIRS" : TEMPLATE_DIRS,
"THREEDCACHEDIR" : THREEDCACHEDIR,
"TINY_MCE_MEDIA_ROOT" : TINY_MCE_MEDIA_ROOT,
"TINY_MCE_MEDIA_URL" : TINY_MCE_MEDIA_URL,
"TUNNEL_DATA" : TUNNEL_DATA,
"URL_ROOT" : URL_ROOT
}
sep="\r\t\t\t" # ugh nasty - terminal output only
sep2="\r\t\t\t\t\t\t\t" # ugh nasty - terminal output only
bycodes = sorted(pathsdict)
for p in bycodes:
print p, sep , pathsdict[p]
byvals = sorted(pathsdict, key=pathsdict.__getitem__)
for p in byvals:
print pathsdict[p] , sep2, p


@@ -27,7 +27,7 @@ from django.conf.urls import *
from profiles import views
urlpatterns = patterns('',
urlpatterns = [
url(r'^select/$',
views.select_profile,
name='profiles_select_profile'),
@@ -43,4 +43,4 @@ urlpatterns = patterns('',
url(r'^$',
views.profile_list,
name='profiles_profile_list'),
)
]


@@ -14,8 +14,7 @@ try:
except ImportError: # django >= 1.7
SiteProfileNotAvailable = type('SiteProfileNotAvailable', (Exception,), {})
from django.db.models import get_model
from django.apps import apps
def get_profile_model():
"""
@@ -28,7 +27,7 @@ def get_profile_model():
if (not hasattr(settings, 'AUTH_PROFILE_MODULE')) or \
(not settings.AUTH_PROFILE_MODULE):
raise SiteProfileNotAvailable
profile_mod = get_model(*settings.AUTH_PROFILE_MODULE.split('.'))
profile_mod = apps.get_model(*settings.AUTH_PROFILE_MODULE.split('.'))
if profile_mod is None:
raise SiteProfileNotAvailable
return profile_mod
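For illustration (the setting value is hypothetical): with AUTH_PROFILE_MODULE = 'core.Person', the lookup above unpacks to apps.get_model('core', 'Person'). Note that apps.get_model raises LookupError for an unknown model rather than returning None, so the is None check above is defensive:

from django.apps import apps
app_label, model_name = 'core.Person'.split('.')    # hypothetical setting value
profile_mod = apps.get_model(app_label, model_name)  # returns the model class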


@@ -1,7 +0,0 @@
Django==1.7
django-extensions==2.2.9
django-registration==2.0
django-tinymce==2.0.1
six==1.14.0
Unidecode==1.1.1
Pillow==7.1.2

settings.py Executable file → Normal file

@@ -1,5 +1,4 @@
from localsettings import *
#initial localsettings call so that urljoins work
from localsettings import * #initial localsettings call so that urljoins work
import os
import urlparse
import django
@@ -9,7 +8,6 @@ BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Django settings for troggle project.
DEBUG = True
TEMPLATE_DEBUG = DEBUG
ALLOWED_HOSTS = [u'expo.survex.com']
@@ -45,7 +43,7 @@ NOTABLECAVESHREFS = [ "161", "204", "258", "76", "107", "264" ]
# trailing slash.
# Examples: "http://foo.com/media/", "/media/".
ADMIN_MEDIA_PREFIX = '/troggle/media-admin/'
#PHOTOS_ROOT = os.path.join(EXPOWEB, 'mugshot-data')
PHOTOS_ROOT = os.path.join(EXPOWEB, 'photos')
CAVEDESCRIPTIONS = os.path.join(EXPOWEB, "cave_data")
ENTRANCEDESCRIPTIONS = os.path.join(EXPOWEB, "entrance_data")
@@ -57,39 +55,40 @@ SVX_URL = urlparse.urljoin(URL_ROOT , '/survex/')
# top-level survex file basename (without .svx)
SURVEX_TOPNAME = "1623"
KAT_AREAS = ['1623', '1624', '1626', '1627']
DEFAULT_LOGBOOK_PARSER = "Parseloghtmltxt"
DEFAULT_LOGBOOK_FILE = "logbook.html"
LOGBOOK_PARSER_SETTINGS = {
"2019": ("2019/logbook.html", "Parseloghtmltxt"),
"2018": ("2018/logbook.html", "Parseloghtmltxt"),
"2017": ("2017/logbook.html", "Parseloghtmltxt"),
"2016": ("2016/logbook.html", "Parseloghtmltxt"),
"2015": ("2015/logbook.html", "Parseloghtmltxt"),
"2014": ("2014/logbook.html", "Parseloghtmltxt"),
"2013": ("2013/logbook.html", "Parseloghtmltxt"),
"2012": ("2012/logbook.html", "Parseloghtmltxt"),
"2011": ("2011/logbook.html", "Parseloghtmltxt"),
"2010": ("2010/logbook.html", "Parseloghtmltxt"),
"2009": ("2009/2009logbook.txt", "Parselogwikitxt"),
"2008": ("2008/2008logbook.txt", "Parselogwikitxt"),
"2007": ("2007/logbook.html", "Parseloghtmltxt"),
"2006": ("2006/logbook/logbook_06.txt", "Parselogwikitxt"),
"2005": ("2005/logbook.html", "Parseloghtmltxt"),
"2004": ("2004/logbook.html", "Parseloghtmltxt"),
"2003": ("2003/logbook.html", "Parseloghtml03"),
"2002": ("2002/logbook.html", "Parseloghtmltxt"),
"2001": ("2001/log.htm", "Parseloghtml01"),
"2000": ("2000/log.htm", "Parseloghtml01"),
"1999": ("1999/log.htm", "Parseloghtml01"),
"1998": ("1998/log.htm", "Parseloghtml01"),
"1997": ("1997/log.htm", "Parseloghtml01"),
"2014": ("2014/logbook.html", "Parseloghtmltxt"),
"2013": ("2013/logbook.html", "Parseloghtmltxt"),
"2012": ("2012/logbook.html", "Parseloghtmltxt"),
"2011": ("2011/logbook.html", "Parseloghtmltxt"),
"2010": ("2010/logbook.html", "Parselogwikitxt"),
"2009": ("2009/2009logbook.txt", "Parselogwikitxt"),
"2008": ("2008/2008logbook.txt", "Parselogwikitxt"),
"2007": ("2007/logbook.html", "Parseloghtmltxt"),
"2006": ("2006/logbook/logbook_06.txt", "Parselogwikitxt"),
"2005": ("2005/logbook.html", "Parseloghtmltxt"),
"2004": ("2004/logbook.html", "Parseloghtmltxt"),
"2003": ("2003/logbook.html", "Parseloghtml03"),
"2002": ("2002/logbook.html", "Parseloghtmltxt"),
"2001": ("2001/log.htm", "Parseloghtml01"),
"2000": ("2000/log.htm", "Parseloghtml01"),
"1999": ("1999/log.htm", "Parseloghtml01"),
"1998": ("1998/log.htm", "Parseloghtml01"),
"1997": ("1997/log.htm", "Parseloghtml01"),
"1996": ("1996/log.htm", "Parseloghtml01"),
"1995": ("1995/log.htm", "Parseloghtml01"),
"1994": ("1994/log.htm", "Parseloghtml01"),
"1993": ("1993/log.htm", "Parseloghtml01"),
"1992": ("1992/log.htm", "Parseloghtml01"),
"1991": ("1991/log.htm", "Parseloghtml01"),
"1995": ("1995/log.htm", "Parseloghtml01"),
"1994": ("1994/log.htm", "Parseloghtml01"),
"1993": ("1993/log.htm", "Parseloghtml01"),
"1992": ("1992/log.htm", "Parseloghtml01"),
"1991": ("1991/log.htm", "Parseloghtml01"),
}
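A hedged sketch of how a consumer might use this table (the helper name is invented): look up the per-year (file, parser-name) pair, falling back to the DEFAULT_* values above:

def logbook_parser_for(year):
    # Year keys are strings; unknown years fall back to the defaults above.
    return LOGBOOK_PARSER_SETTINGS.get(str(year),
               (DEFAULT_LOGBOOK_FILE, DEFAULT_LOGBOOK_PARSER))

# e.g. logbook_parser_for(2009) -> ("2009/2009logbook.txt", "Parselogwikitxt")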
APPEND_SLASH = False
@@ -98,20 +97,34 @@ SMART_APPEND_SLASH = True
# Make this unique, and don't share it with anybody.
SECRET_KEY = 'a#vaeozn0)uz_9t_%v5n#tj)m+%ace6b_0(^fj!355qki*v)j2'
# List of callables that know how to import templates from various sources.
TEMPLATE_LOADERS = (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
# 'django.template.loaders.eggs.load_template_source',
)
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
os.path.join(PYTHON_PATH, 'templates')
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.contrib.auth.context_processors.auth',
'django.template.context_processors.debug',
'django.template.context_processors.i18n',
'django.template.context_processors.media',
'django.template.context_processors.static',
'django.template.context_processors.tz',
'django.contrib.messages.context_processors.messages',
'django.template.context_processors.request',
#'core.context.troggle_context'
]
},
},
]
if django.VERSION[0] == 1 and django.VERSION[1] < 4:
authmodule = 'django.core.context_processors.auth'
else:
authmodule = 'django.contrib.auth.context_processors.auth'
TEMPLATE_CONTEXT_PROCESSORS = ( authmodule, "core.context.troggle_context", )
LOGIN_REDIRECT_URL = '/'
INSTALLED_APPS = (
@@ -124,14 +137,13 @@ INSTALLED_APPS = (
'django.contrib.messages',
'django.contrib.staticfiles',
#'troggle.photologue',
#'troggle.reversion',
#'django_evolution',
'tinymce',
'registration',
'troggle.profiles',
'troggle.core',
'troggle.flatpages',
#'troggle.imagekit',
'imagekit',
'django_extensions',
)
MIDDLEWARE_CLASSES = (


@@ -1,22 +1,22 @@
<!DOCTYPE html>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<link rel="stylesheet" type="text/css" href="{{ settings.MEDIA_URL }}css/main3.css" title="eyeCandy"/>
<link rel="alternate stylesheet" type="text/css" href="{{ settings.MEDIA_URL }}css/mainplain.css" title="plain"/>
<link rel="stylesheet" type="text/css" href="{{ settings.MEDIA_URL }}css/dropdownNavStyle.css" />
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<link rel="stylesheet" type="text/css" href="{{ MEDIA_URL }}css/main3.css" title="eyeCandy"/>
<link rel="alternate stylesheet" type="text/css" href="{{ MEDIA_URL }}css/mainplain.css" title="plain"/>
<link rel="stylesheet" type="text/css" href="{{ MEDIA_URL }}css/dropdownNavStyle.css" />
<title>{% block title %}Troggle{% endblock %}</title>
<!-- <script src="{{ settings.JSLIB_URL }}jquery/jquery.min.js" type="text/javascript"></script> -->
<script src="{{ settings.MEDIA_URL }}js/jquery.quicksearch.js" type="text/javascript"></script>
<script src="{{ settings.MEDIA_URL }}js/base.js" type="text/javascript"></script>
<script src="{{ settings.MEDIA_URL }}js/jquery.dropdownPlain.js" type="text/javascript"></script>
<script src="{{ MEDIA_URL }}js/jquery.quicksearch.js" type="text/javascript"></script>
<script src="{{ MEDIA_URL }}js/base.js" type="text/javascript"></script>
<script src="{{ MEDIA_URL }}js/jquery.dropdownPlain.js" type="text/javascript"></script>
{% block head %}{% endblock %}
</head>
<body onLoad="contentHeight();">
<div id="header">
<h1>CUCC Expeditions to Austria: 1976 - 2020</h1>
<h1>CUCC Expeditions to Austria: 1976 - 2018</h1>
<div id="editLinks"> {% block loginInfo %}
<a href="{{settings.EXPOWEB_URL}}">Website home</a> |
{% if user.username %}
@@ -35,14 +35,14 @@
<a href="{% url "survexcaveslist" %}">All Survex</a> |
<a href="{% url "surveyscansfolders" %}">Scans</a> |
<a href="{% url "tunneldata" %}">Tunneldata</a> |
<a href="{% url "survexcavessingle" "caves-1623/290/290.svx" %}">290</a> |
<a href="{% url "survexcavessingle" "caves-1623/291/291.svx" %}">291</a> |
<a href="{% url "survexcavessingle" "caves-1626/359/359.svx" %}">359</a> |
<a href="{% url "survexcavessingle" "caves-1623/258/258.svx" %}">258</a> |
<a href="{% url "survexcavessingle" "caves-1623/264/264.svx" %}">264</a> |
<a href="{% url "survexcavessingle" 107 %}">107</a> |
<a href="{% url "survexcavessingle" 161 %}">161</a> |
<a href="{% url "survexcavessingle" 204 %}">204</a> |
<a href="{% url "survexcavessingle" 258 %}">258</a> |
<a href="{% url "survexcavessingle" 264 %}">264</a> |
<a href="{% url "expedition" 2016 %}">Expo2016</a> |
<a href="{% url "expedition" 2017 %}">Expo2017</a> |
<a href="{% url "expedition" 2018 %}">Expo2018</a> |
<a href="{% url "expedition" 2019 %}">Expo2019</a> |
<a href="{% url "expedition" 2020 %}">Expo2020</a> |
<a href="/admin/">Django admin</a>
</div>
@@ -64,8 +64,8 @@
<div id="related">
{% block related %}
<script language="javascript">
$('#related').remove()
/*This is a hack to stop a line appearing because of the empty div border*/
$('#related').remove()
/*This is a hack to stop a line appearing because of the empty div border*/
</script>
{% endblock %}
</div>
@@ -81,7 +81,7 @@
<li><a href="#">External links</a>
<ul class="sub_menu">
<li><a id="cuccLink" href="https://camcaving.uk">CUCC website</a></li>
<li><a id="cuccLink" href="http://www.srcf.ucam.org/caving/wiki/Main_Page">CUCC website</a></li>
<li><a id="expoWebsiteLink" href="http://expo.survex.com">Expedition website</a></li>
</ul>
</li>


@@ -17,7 +17,7 @@ div.cv-panel {
}
div.cv-compass, div.cv-ahi {
position: absolute;
position: absolute;
bottom: 95px;
right: 5px;
margin: 0;
@@ -31,7 +31,7 @@ div.cv-compass, div.cv-ahi {
background-color: black;
color: white;
}
div.cv-ahi {
right: 95px;
}
@@ -152,7 +152,7 @@ div.linear-scale-caption {
position: absolute;
top: 64px;
left: 0px;
height: auto;
height: auto;
margin-top:0;
bottom: 44px;
background-color: #222222;
@@ -220,7 +220,7 @@ div.linear-scale-caption {
}
#frame .tab {
position: absolute;
right: 0px;lass="cavedisplay"
right: 0px;
width: 40px;
height: 40px;
box-sizing: border-box;
@@ -408,8 +408,8 @@ div#scene {
</style>
<script type="text/javascript" src="/javascript/CaveView/js/CaveView.js" ></script>
<script type="text/javascript" src="/javascript/CaveView/lib/proj4.js" ></script>
<script type="text/javascript" src="/CaveView/js/CaveView.js" ></script>
<script type="text/javascript" src="/CaveView/lib/proj4.js" ></script>
<script type="text/javascript" >
@@ -421,7 +421,7 @@ div#scene {
CV.UI.init( 'scene', {
home: '/javascript/CaveView/',
surveyDirectory: '/cave/3d/',
terrainDirectory: '/loser/surface/terrain/'
terrainDirectory: '/loser/surface/terrain/'
} );
// load a single survey to display
@@ -516,14 +516,17 @@ div#scene {
{% if ent.entrance.exact_station %}
<dt>Exact Station</dt><dd>{{ ent.entrance.exact_station|safe }} {{ ent.entrance.exact_location.y|safe }}, {{ ent.entrance.exact_location.x|safe }}, {{ ent.entrance.exact_location.z|safe }}m</dd>
{% endif %}
{% if ent.entrance.other_station %}
{% if ent.entrance.find_location %}
<dt>Coordinates</dt><dd>{{ ent.entrance.find_location|safe }}</dd>
{% endif %}
{% if ent.entrance.other_station %}
<dt>Other Station</dt><dd>{{ ent.entrance.other_station|safe }}
{% if ent.entrance.other_description %}
- {{ ent.entrance.other_description|safe }}
{% endif %} {{ ent.entrance.other_location.y|safe }}, {{ ent.entrance.other_location.x|safe }}, {{ ent.entrance.other_location.z|safe }}m
</dd>
{% endif %}
</dl>
</dl>
</li>
{% endfor %}
</ul>


@@ -1,4 +1,4 @@
<html lang="en">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />


@@ -15,20 +15,7 @@
{% endfor %}
</ul>
<h3>1626</h3>
<ul class="searchable">
{% for cave in caves1626 %}
<li> <a href="{{ cave.url }}">{% if cave.kataster_number %}{{ cave.kataster_number }} {{cave.official_name|safe}}</a> {% if cave.unofficial_number %}({{cave.unofficial_number }}){% endif %}{% else %}{{cave.unofficial_number }} {{cave.official_name|safe}}</a> {% endif %}
</li>
{% endfor %}
</ul>
<p style="text-align:right">
<a href="{% url "newcave" %}">New Cave</a>
</p>
<h3>1623</h3>
<h3>1623</h3>
<table class="searchable">
{% for cave in caves1623 %}
@@ -38,7 +25,17 @@
{% endfor %}
</table>
<p style="text-align:right">
<h3>1626</h3>
<ul class="searchable">
{% for cave in caves1626 %}
<li> <a href="{{ cave.url }}">{% if cave.kataster_number %}{{ cave.kataster_number }} {{cave.official_name|safe}}</a> {% if cave.unofficial_number %}({{cave.unofficial_number }}){% endif %}{% else %}{{cave.unofficial_number }} {{cave.official_name|safe}}</a> {% endif %}
</li>
{% endfor %}
</ul>
<a href="{% url "newcave" %}">New Cave</a>
</p>
{% endblock %}


@@ -16,7 +16,7 @@
{% if error %}
<div class="noticeBox">
{{ error }}
{{ error }}
<a href="#" class="closeDiv">dismiss this message</a>
</div>
{% endif %}
@@ -96,44 +96,61 @@
</tr>
<tr>
<td>
surveys to Surveys.csv
<td>
surveys to Surveys.csv
</td>
<td>
<td>
</td>
<td>
<form name="export" method="get" action={% url "downloadlogbook" %}>
<p>Download a logbook file which is dynamically generated by Troggle.</p>
<p>Download a logbook file which is dynamically generated by Troggle.</p>
<p>
Expedition year:
<select name="year">
{% for expedition in expeditions %}
<option value="{{expedition}}"> {{expedition}} </option>
<option value="{{expedition}}"> {{expedition}} </option>
{% endfor %}
</select>
</p>
<p>
Output style:
<select name="extension">
<option value="txt">.txt file with MediaWiki markup - 2008 style</option>
<option value="html">.html file - 2005 style</option>
<select name="extension">
<option value="txt">.txt file with MediaWiki markup - 2008 style</option>
<option value="html">.html file - 2005 style</option>
</select>
</p>
<p>
<input name="download_logbook" type="submit" value="Download logbook" />
</p>
</form>
</td>
</td>
</tr>
<tr>
<td>
surveys to Surveys.csv
</td>
<td>
<form name="export" method="post" action="">
<p>Overwrite the existing Surveys.csv file with one generated by Troggle.</p>
<input disabled name="export_surveys" type="submit" value="Update {{settings.SURVEYS}}noinfo/Surveys.csv" />
</form>
</td>
<td>
<form name="export" method="get" action={% url "downloadsurveys" %}>
<p>Download a Surveys.csv file which is dynamically generated by Troggle.</p>
<input disabled name="download_surveys" type="submit" value="Download Surveys.csv" />
</form>
</td>
</tr>
<tr>
<td>qms to qms.csv</td><td>
<form name="export_qms" method="get" action="downloadqms">
<form name="export_qms" method="get" action="downloadqms">
<!--This is for choosing caves by area (drilldown).
<select id="qmcaveareachooser" class="searchable" >
@@ -141,12 +158,12 @@
-->
Choose a cave.
Choose a cave.
<select name="cave_id" id="qmcavechooser">
{% for cave in caves %}
<option value="{{cave.kataster_number}}">{{cave}}
</option>
</option>
{% endfor %}
</select>
@@ -157,4 +174,4 @@
</table>
</form>
{% endblock %}
{% endblock %}


@@ -1,3 +1,4 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
<!-- Only put one cave in this file -->
<!-- If you edit this file, make sure you update the websites database -->
<html lang="en">


@@ -1,3 +1,4 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
<!-- Only put one entrance in this file -->
<!-- If you edit this file, make sure you update the websites database -->
<html lang="en">


@@ -1,5 +1,4 @@
{% autoescape off %}
<!DOCTYPE html>
<html>
<head>
<style type="text/css">.author {text-decoration:underline}</style>


@@ -2,15 +2,15 @@
{% load wiki_markup %}
{% load link %}
{% block title %}Expedition {{expedition.name}}{% endblock %}
{% block editLink %}<a href={{expedition.get_admin_url}}>Edit expedition {{expedition|wiki_to_html_short}}</a>{% endblock %}
{% block title %}Expedition {{this_expedition.name}}{% endblock %}
{% block editLink %}<a href={{this_expedition.get_admin_url}}>Edit expedition {{expedition|wiki_to_html_short}}</a>{% endblock %}
{% block related %}
{% endblock %}
{% block content %}
<h2>{{expedition.name}}</h2>
<h2>{{this_expedition.name}}</h2>
<p><b>Other years:</b>
{% for otherexpedition in expeditions %}
@@ -29,7 +29,7 @@ an "S" for a survey trip. The colours are the same for people on the same trip.
<table class="expeditionpersonlist">
<tr>
<th>Caver</th>
{% for expeditionday in expedition.expeditionday_set.all %}
{% for expeditionday in this_expedition.expeditionday_set.all %}
<th>
{{expeditionday.date.day}}
</th>
@@ -63,7 +63,7 @@ an "S" for a survey trip. The colours are the same for people on the same trip.
<form action="" method="GET"><input type="submit" name="reload" value="Reload"></form>
<h3>Logbooks and survey trips per day</h3>
<a href="{% url "newLogBookEntry" expeditionyear=expedition.year %}">New logbook entry</a>
<a href="{% url "newLogBookEntry" expeditionyear=this_expedition.year %}">New logbook entry</a>
<table class="expeditionlogbooks">
<tr><th>Date</th><th>Logged trips</th><th>Surveys</th></tr>
{% regroup dateditems|dictsort:"date" by date as dates %}

templates/experimental.html Executable file → Normal file

@@ -8,9 +8,7 @@
<h1>Expo Experimental</h1>
<p>Number of survey legs: {{nsurvexlegs}}<br />
Total length: {{totalsurvexlength}} m on importing survex files.<br />
Total length: {{addupsurvexlength}} m adding up all the years below.</p>
<p>Number of survey legs: {{nsurvexlegs}}, total length: {{totalsurvexlength}}</p>
<table>
<tr><th>Year</th><th>Surveys</th><th>Survey Legs</th><th>Total length</th></tr>


@@ -1,10 +1,11 @@
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
<title>{% block title %}{% endblock %}
</title>
<link rel="stylesheet" type="text/css" href="../css/main2.css" />
</head>
<body>
<div id="mainmenu">
@@ -12,19 +13,17 @@
<li><a href="/index.htm">Expo website home</a></li>
<li><a href="/intro.html">Introduction</a></li>
<li><a href="/infodx.htm">Main index</a></li>
<li><a href="/caves">Cave index</a></li>
<li><a href="/indxal.htm">Cave index</a></li>
{% if cavepage %}
<ul>
<li><a href="{% url "survexcaveslist" %}">All Survex</a></li>
<li><a href="{% url "surveyscansfolders" %}">Scans</a></li>
<li><a href="{% url "tunneldata" %}">Tunneldata</a></li>
<li><a href="{% url "survexcavessingle" "caves-1623/290/290.svx" %}">290</a></li>
<li><a href="{% url "survexcavessingle" "caves-1623/291/291.svx" %}">291</a></li>
<li><a href="{% url "survexcavessingle" "caves-1626/359/359.svx" %}">359</a></li>
<li><a href="{% url "survexcavessingle" "caves-1623/258/258.svx" %}">258</a></li>
<li><a href="{% url "survexcavessingle" "caves-1623/264/264.svx" %}">264</a></li>
<li><a href="{% url "expedition" 2018 %}">Expo2018</a></li>
<li><a href="{% url "expedition" 2019 %}">Expo2019</a></li>
<li><a href="{% url "survexcavessingle" 161 %}">161</a></li>
<li><a href="{% url "survexcavessingle" 204 %}">204</a></li>
<li><a href="{% url "survexcavessingle" 258 %}">258</a></li>
<li><a href="{% url "expedition" 2012 %}">Expo2012</a></li>
<li><a href="{% url "expedition" 2013 %}">Expo2013</a></li>
<li><a href="/admin">Django admin</a></li>
</ul>
{% endif %}


@@ -38,7 +38,7 @@
<div id="col1">
<h3>Welcome</h3>
<p class="indent">
This is Troggle, the online system for Cambridge University Caving Club's Expeditions to Austria.
This is Troggle, the information portal for Cambridge University Caving Club's Expeditions to Austria.
</p>
<p class="indent">
@@ -46,7 +46,7 @@ Here you will find information about the {{expedition.objects.count}} expedition
</p>
<p class="indent">
If you are an expedition member, please sign up using the link to the top right.
If you are an expedition member, please sign up using the link to the top right and begin editing.
</p>
{% endblock content %}


@@ -2,14 +2,11 @@
<ul id="links">
<li><a href="/index.htm">Home</a></li>
<li><a href="/infodx.htm">Main Index</a></li>
<li><a href="/troggle">Troggle</a></li>
<li><a href="/areas.htm">Areas</a></li>
<li><a href="/indxal.htm">Caves</a></li>
<li><a href="/handbook/index.htm">Handbook</a></li>
<li><a href="/pubs.htm">Reports</a></li>
<li><a href="/areas.htm">Areas</a></li>
<li><a href="/caves">Caves</a></li>
<li><a href="/expedition/2019">Troggle</a></li>
<li><form name=P method=get action="/search" target="_top">
<input id="omega-autofocus" type=search name=P value="testing" size=8 autofocus>
<input type=submit value="Search"></li>
{% if editable %}<li><a href="{% url "editflatpage" path %}" class="editlink"><strong>Edit this page</strong></a></li>{% endif %}
{% if cave_editable %}<li><a href="{% url "edit_cave" cave_editable %}" class="editlink"><strong>Edit this cave</strong></a></li>{% endif %}
</ul>


@@ -1,45 +0,0 @@
{% extends "base.html" %}
{% load wiki_markup %}
{% load link %}
{% block title %}Troggle paths report{% endblock %}
{% block content %}
<h1>Expo Troggle paths report</h1>
<p>
<table style="font-family: Consolas, Lucida Console, monospace;">
<tr><th>Code</th><th>Path</th></tr>
{% for c,p in bycodeslist %}
<tr>
<td>
{{c}}
</td>
<td>
{{p}}
</td>
</tr>
{% endfor %}
</table>
<p>
<table style="font-family: Consolas, Lucida Console, monospace;">
<tr><th>Path</th><th>Code</th></tr>
{% for c,p in bypathslist %}
<tr>
<td>
{{p}}
</td>
<td>
{{c}}
</td>
</tr>
{% endfor %}
</table>
<p>
There are {{ ncodes }} different path codes defined.
{% endblock %}


@@ -18,8 +18,8 @@
{% if pic.is_mugshot %}
<div class="figure">
<p> <img src="{{ pic.thumbnail_image.url }}" class="thumbnail" />
<p> {{ pic.caption }}</p>
<p> <a href="{{ pic.get_admin_url }}">edit {{pic}}</a> </>
<p> {{ pic.caption }} </p>
<p> <a href="{{ pic.get_admin_url }}">edit {{pic}}</a>
</p>
</p>
</div>
@@ -32,7 +32,7 @@
<ul>
{% for personexpedition in person.personexpedition_set.all %}
<li> <a href="{{ personexpedition.get_absolute_url }}">{{personexpedition.expedition.year}}</a>
<span style="padding-left:{{personexpedition.persontrip_set.all|length}}0px; background-color:red"></span>
<span style="padding-left:{{ personexpedition.persontrip_set.all|length }}0px; background-color:red"></span>
{{personexpedition.persontrip_set.all|length}} trips
</li>
{% endfor %}


@@ -17,7 +17,6 @@
</tr>
{% endfor %}
</table>
<p>This is based purely on attendance, not on activities, surveying or usefulness of any kind. But as Woody Allen said: "90% of success is just turning up". It should really be called "Notably recent expoers", as the metric is just a geometric "recency" (1/2 for attending last year, 1/3 for the year before, etc., added up; the display cutoff is 1/3).
<h2>All expoers</h2>


@@ -4,9 +4,7 @@
{% block title %} QM: {{qm|wiki_to_html_short}} {% endblock %}
{% block editLink %}| <a href={{qm.get_admin_url}}>Edit QM {{qm|wiki_to_html_short}}</a>{% endblock %}
{% block editLink %}| <a href="{{qm.get_admin_url}}/">Edit QM {{qm|wiki_to_html_short}}</a>{% endblock %}
{% block contentheader %}
<table id="cavepage">


@@ -2,11 +2,11 @@
{% load wiki_markup %}
{% load survex_markup %}
{% block title %}Survey Scans Folder{% endblock %}
{% block title %}Survex Scans Folder{% endblock %}
{% block content %}
<h3>Survey Scans in: {{survexscansfolder.walletname}}</h3>
<h3>Survex Scans in: {{survexscansfolder.walletname}}</h3>
<table>
{% for survexscansingle in survexscansfolder.survexscansingle_set.all %}
<tr>
@@ -20,7 +20,7 @@
{% endfor %}
</table>
<h3>Survex surveys referring to this wallet</h3>
<h3>Surveys referring to this wallet</h3>
<table>
{% for survexblock in survexscansfolder.survexblock_set.all %}


@@ -2,15 +2,11 @@
{% load wiki_markup %}
{% load survex_markup %}
{% block title %}All Survey scans folders (wallets){% endblock %}
{% block title %}All Survex scans folders{% endblock %}
{% block content %}
<h3>All Survey scans folders (wallets)</h3>
<p>Each wallet contains the scanned original in-cave survey notes and sketches of
plans and elevations. It also contains scans of centre-line survex output on which
hand-drawn passage sections are drawn. These hand-drawn passages will eventually be
traced to produce Tunnel or Therion drawings and, ultimately, the final complete cave survey.
<h3>All Survex scans folders</h3>
<table>
<tr><th>Scans folder</th><th>Files</th><th>Survex blocks</th></tr>
{% for survexscansfolder in survexscansfolders %}


@@ -5,7 +5,7 @@
{% block title %}CUCC Virtual Survey Binder: {{ current_expedition }}{{ current_survey }}{%endblock%}
{% block head %}
<link rel="stylesheet" type="text/css" href="{{ settings.MEDIA_URL }}css/nav.css" />
<link rel="stylesheet" type="text/css" href="{{ MEDIA_URL }}css/nav.css" />
<script language="javascript">
blankColor = "rgb(153, 153, 153)"
@@ -164,7 +164,7 @@
</p>
</div>
{% endfor %}
<div class="figure"> <a href="{{ settings.URL_ROOT }}/admin/expo/scannedimage/add/"> <img src="{{ settings.URL_ROOT }}{{ settings.ADMIN_MEDIA_PREFIX }}img/admin/icon_addlink.gif" /> Add a new scanned notes page. </a> </div>
<div class="figure"> <a href="{{ URL_ROOT }}/admin/expo/scannedimage/add/"> <img src="{{ URL_ROOT }}{{ ADMIN_MEDIA_PREFIX }}img/admin/icon_addlink.gif" /> Add a new scanned notes page. </a> </div>
</div>
<br class="clearfloat" />
<div id="survexFileContent" class="behind"> survex file editor, keeping file in original structure <br />


@@ -4,7 +4,7 @@
{% block title %}{{ title }}{% endblock %}
{% block head %}
<script src="{{ settings.MEDIA_URL }}js/base.js" type="text/javascript"></script>
<script src="{{ MEDIA_URL }}js/base.js" type="text/javascript"></script>
<script type="text/javascript" src="{{settings.JSLIB_URL}}jquery-form/jquery.form.min.js"></script>
<script type="text/javascript" src="{{settings.JSLIB_URL}}codemirror/codemirror.min.js"></script>


@@ -34,6 +34,6 @@ add wikilinks
{% endblock content %}
{% block margins %}
<img class="leftMargin eyeCandy fadeIn" src="{{ settings.MEDIA_URL }}eieshole.jpg">
<img class="rightMargin eyeCandy fadeIn" src="{{ settings.MEDIA_URL }}goesser.jpg">
<img class="leftMargin eyeCandy fadeIn" src="{{ MEDIA_URL }}eieshole.jpg">
<img class="rightMargin eyeCandy fadeIn" src="{{ MEDIA_URL }}goesser.jpg">
{% endblock margins %}


@@ -6,13 +6,14 @@
{% block content %}
<h3>All Tunnel files - references to wallets and survey scans</h3>
<h3>All Tunnel files</h3>
<table>
<tr><th>File</th><th>Font</th><th>Size</th><th>Paths</th><th>Scans folder</th><th>Scan files</th><th>Frames</th></tr>
<tr><th>File</th><th>Font</th><th>SurvexBlocks</th><th>Size</th><th>Paths</th><th>Scans folder</th><th>Scan files</th><th>Frames</th></tr>
{% for tunnelfile in tunnelfiles %}
<tr>
<td><a href="{% url "tunnelfile" tunnelfile.tunnelpath %}">{{tunnelfile.tunnelpath}}</a></td>
<td>{{tunnelfile.bfontcolours}}</td>
<td></td>
<td>{{tunnelfile.filesize}}</td>
<td>{{tunnelfile.npaths}}</td>

urls.py Executable file → Normal file

@@ -1,39 +1,45 @@
from django.conf.urls import *
from django.conf import settings
from django.conf.urls.static import static
from django.views.static import serve
from core.views import * # flat import
from core.views_other import *
from core.views_caves import *
from core.views_survex import *
from core.models import *
from flatpages.views import *
from django.views.generic.edit import UpdateView
from django.contrib import admin
from django.views.generic.list import ListView
from django.contrib import admin
admin.autodiscover()
#admin.autodiscover()
# type url probably means it's used.
# HOW DOES THIS WORK:
# url( <regular expression that matches the thing in the web browser>,
# HOW DOES THIS WORK:
# url( <regular expression that matches the thing in the web browser>,
# <reference to python function in 'core' folder>,
# <name optional argument for URL reversing (doesn't do much)>)
# <name optional argument for URL reversing (doesn't do much)>)
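A concrete reading of one entry from the pattern list that follows (this route appears below with name="cave"):

# url(r'^cave/(?P<cave_id>[^/]+)/?$', views_caves.cave, name="cave")
# matches /cave/204, calls views_caves.cave(request, cave_id="204"),
# and reverse("cave", kwargs={"cave_id": "204"}) rebuilds the URL.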
actualurlpatterns = patterns('',
url(r'^troggle$', views_other.frontpage, name="frontpage"),
actualurlpatterns = [
url(r'^testingurl/?$' , views_caves.millenialcaves, name="testing"),
url(r'^millenialcaves/?$', views_caves.millenialcaves, name="millenialcaves"),
url(r'^troggle$', views_other.frontpage, name="frontpage"),
url(r'^todo/$', views_other.todo, name="todo"),
url(r'^caves$', views_caves.caveindex, name="caveindex"),
url(r'^caves/?$', views_caves.caveindex, name="caveindex"),
url(r'^people/?$', views_logbooks.personindex, name="personindex"),
url(r'^newqmnumber/?$', views_other.ajax_QM_number, ),
url(r'^lbo_suggestions/?$', logbook_entry_suggestions),
url(r'^lbo_suggestions/?$', logbook_entry_suggestions),
#(r'^person/(?P<person_id>\d*)/?$', views_logbooks.person),
url(r'^person/(?P<first_name>[A-Z]*[a-z\-\'&;]*)[^a-zA-Z]*(?P<last_name>[a-z\-\']*[^a-zA-Z]*[A-Z]*[a-z\-&;]*)/?', views_logbooks.person, name="person"),
#url(r'^person/(\w+_\w+)$', views_logbooks.person, name="person"),
url(r'^expedition/(\d+)$', views_logbooks.expedition, name="expedition"),
url(r'^expeditions/?$', views_logbooks.ExpeditionListView.as_view(), name="expeditions"),
url(r'^personexpedition/(?P<first_name>[A-Z]*[a-z&;]*)[^a-zA-Z]*(?P<last_name>[A-Z]*[a-zA-Z&;]*)/(?P<year>\d+)/?$', views_logbooks.personexpedition, name="personexpedition"),
@@ -44,17 +50,17 @@ actualurlpatterns = patterns('',
url(r'^newfile', views_other.newFile, name="newFile"),
url(r'^getEntrances/(?P<caveslug>.*)', views_caves.get_entrances, name = "get_entrances"),
url(r'^getQMs/(?P<caveslug>.*)', views_caves.get_qms, name = "get_qms"), # no template "get_qms"?
url(r'^getQMs/(?P<caveslug>.*)', views_caves.get_qms, name = "get_qms"),
url(r'^getPeople/(?P<expeditionslug>.*)', views_logbooks.get_people, name = "get_people"),
url(r'^getLogBookEntries/(?P<expeditionslug>.*)', views_logbooks.get_logbook_entries, name = "get_logbook_entries"),
url(r'^cave/new/$', views_caves.edit_cave, name="newcave"),
url(r'^cave/(?P<cave_id>[^/]+)/?$', views_caves.cave, name="cave"),
url(r'^caveslug/([^/]+)/?$', views_caves.caveSlug, name="caveSlug"),
url(r'^cave/entrance/([^/]+)/?$', views_caves.caveEntrance),
url(r'^cave/description/([^/]+)/?$', views_caves.caveDescription),
url(r'^cave/qms/([^/]+)/?$', views_caves.caveQMs), # blank page
url(r'^cave/qms/([^/]+)/?$', views_caves.caveQMs),
url(r'^cave/logbook/([^/]+)/?$', views_caves.caveLogbook),
url(r'^entrance/(?P<caveslug>[^/]+)/(?P<slug>[^/]+)/edit/', views_caves.editEntrance, name = "editentrance"),
url(r'^entrance/new/(?P<caveslug>[^/]+)/', views_caves.editEntrance, name = "newentrance"),
@@ -70,101 +76,88 @@ actualurlpatterns = patterns('',
url(r'^cave/(?P<slug>[^/]+)/edit/$', views_caves.edit_cave, name="edit_cave"),
#(r'^cavesearch', caveSearch),
url(r'^cave/(?P<cave_id>[^/]+)/(?P<year>\d\d\d\d)-(?P<qm_id>\d*)(?P<grade>[ABCDX]?)?$', views_caves.qm, name="qm"),
url(r'^prospecting_guide/$', views_caves.prospecting),
# url(r'^cave/(?P<cave_id>[^/]+)/(?P<year>\d\d\d\d)-(?P<qm_id>\d*)(?P<grade>[ABCDX]?)?$', views_caves.qm, name="qm"),
url(r'^cave/qm/(?P<qm_id>[^/]+)?$', views_caves.qm, name="qm"),
url(r'^prospecting_guide/$', views_caves.prospecting),
url(r'^logbooksearch/(.*)/?$', views_logbooks.logbookSearch),
url(r'^statistics/?$', views_other.stats, name="stats"),
url(r'^survey/?$', surveyindex, name="survey"),
url(r'^survey/(?P<year>\d\d\d\d)\#(?P<wallet_number>\d*)$', survey, name="survey"),
# Is all this lot out of date ? Maybe the logbooks work?
url(r'^controlpanel/?$', views_other.controlPanel, name="controlpanel"),
url(r'^CAVETAB2\.CSV/?$', views_other.downloadCavetab, name="downloadcavetab"),
url(r'^Surveys\.csv/?$', views_other.downloadSurveys, name="downloadsurveys"),
url(r'^logbook(?P<year>\d\d\d\d)\.(?P<extension>.*)/?$',views_other.downloadLogbook),
url(r'^logbook/?$',views_other.downloadLogbook, name="downloadlogbook"),
url(r'^cave/(?P<cave_id>[^/]+)/qm\.csv/?$', views_other.downloadQMs, name="downloadqms"),
(r'^downloadqms$', views_other.downloadQMs),
url(r'^cave/(?P<cave_id>[^/]+)/qm\.csv/?$', views_other.downloadQMs, name="downloadqms"),
url(r'^downloadqms$', views_other.downloadQMs),
url(r'^eyecandy$', views_other.eyecandy),
(r'^admin/doc/?', include('django.contrib.admindocs.urls')),
url(r'^admin/doc/?', include('django.contrib.admindocs.urls')),
#url(r'^admin/(.*)', admin.site.get_urls, name="admin"),
(r'^admin/', include(admin.site.urls)),
url(r'^admin/', include(admin.site.urls)),
# don't know why this needs troggle/ in here. nice to get it out
url(r'^troggle/media-admin/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ADMIN_DIR, 'show_indexes':True}),
# url(r'^troggle/media-admin/(?P<path>.*)$', static, {'document_root': settings.MEDIA_ADMIN_DIR, 'show_indexes':True}),
(r'^accounts/', include('registration.backends.default.urls')),
(r'^profiles/', include('profiles.urls')),
url(r'^accounts/', include('registration.backends.default.urls')),
url(r'^profiles/', include('profiles.urls')),
# (r'^personform/(.*)$', personForm),
(r'^expofiles/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': settings.EXPOFILES, 'show_indexes': True}),
(r'^static/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': settings.STATIC_ROOT, 'show_indexes': True}),
(r'^site_media/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': settings.MEDIA_ROOT, 'show_indexes': True}),
(r'^tinymce_media/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': settings.TINY_MCE_MEDIA_ROOT, 'show_indexes': True}),
url(r'^site_media/(?P<path>.*)$', serve, {'document_root': settings.MEDIA_ROOT, 'show_indexes': True}),
url(r'^survexblock/(.+)$', views_caves.survexblock, name="survexblock"),
url(r'^survexfile/(?P<survex_file>.*?)\.svx$', views_survex.svx, name="svx"),
url(r'^survexfile/(?P<survex_file>.*?)\.3d$', views_survex.threed, name="threed"),
url(r'^survexfile/(?P<survex_file>.*?)\.log$', views_survex.svxraw),
url(r'^survexfile/(?P<survex_file>.*?)\.err$', views_survex.err),
url(r'^survexfile/caves/$', views_survex.survexcaveslist, name="survexcaveslist"),
url(r'^survexfile/(?P<survex_cave>.*)$', views_survex.survexcavesingle, name="survexcavessingle"),
url(r'^survexfileraw/(?P<survex_file>.*?)\.svx$', views_survex.svxraw, name="svxraw"),
(r'^survey_files/listdir/(?P<path>.*)$', view_surveys.listdir),
(r'^survey_files/download/(?P<path>.*)$', view_surveys.download),
url(r'^survexfile/caves/$', views_survex.survexcaveslist, name="survexcaveslist"),
url(r'^survexfile/caves/(?P<survex_cave>.*)$', views_survex.survexcavesingle, name="survexcavessingle"),
url(r'^survexfileraw/(?P<survex_file>.*?)\.svx$', views_survex.svxraw, name="svxraw"),
url(r'^survey_files/listdir/(?P<path>.*)$', view_surveys.listdir),
url(r'^survey_files/download/(?P<path>.*)$', view_surveys.download),
#(r'^survey_files/upload/(?P<path>.*)$', view_surveys.upload),
#(r'^survey_scans/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.SURVEY_SCANS, 'show_indexes':True}),
url(r'^survey_scans/$', view_surveys.surveyscansfolders, name="surveyscansfolders"),
url(r'^survey_scans/(?P<path>[^/]+)/$', view_surveys.surveyscansfolder, name="surveyscansfolder"),
url(r'^survey_scans/(?P<path>[^/]+)/(?P<file>[^/]+(?:png|jpg|jpeg|pdf|PNG|JPG|JPEG|PDF))$',
view_surveys.surveyscansingle, name="surveyscansingle"),
url(r'^tunneldata/$', view_surveys.tunneldata, name="tunneldata"),
url(r'^tunneldataraw/(?P<path>.+?\.xml)$', view_surveys.tunnelfile, name="tunnelfile"),
url(r'^tunneldataraw/(?P<path>.+?\.xml)/upload$',view_surveys.tunnelfileupload, name="tunnelfileupload"),
#url(r'^tunneldatainfo/(?P<path>.+?\.xml)$', view_surveys.tunnelfileinfo, name="tunnelfileinfo"),
#(r'^photos/(?P<path>.*)$', 'django.views.static.serve',
#{'document_root': settings.PHOTOS_ROOT, 'show_indexes':True}),
url(r'^survey_scans/$', view_surveys.surveyscansfolders, name="surveyscansfolders"),
url(r'^survey_scans/(?P<path>[^/]+)/$', view_surveys.surveyscansfolder, name="surveyscansfolder"),
url(r'^survey_scans/(?P<path>[^/]+)/(?P<file>[^/]+(?:png|jpg|jpeg))$',
view_surveys.surveyscansingle, name="surveyscansingle"),
url(r'^tunneldata/$', view_surveys.tunneldata, name="tunneldata"),
url(r'^tunneldataraw/(?P<path>.+?\.xml)$', view_surveys.tunnelfile, name="tunnelfile"),
url(r'^tunneldataraw/(?P<path>.+?\.xml)/upload$',view_surveys.tunnelfileupload, name="tunnelfileupload"),
#url(r'^tunneldatainfo/(?P<path>.+?\.xml)$', view_surveys.tunnelfileinfo, name="tunnelfileinfo"),
# url(r'^photos/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.PHOTOS_ROOT, 'show_indexes':True}),
url(r'^prospecting/(?P<name>[^.]+).png$', prospecting_image, name="prospecting_image"),
# (r'^gallery/(?P<path>.*)$', 'django.views.static.serve',
# {'document_root': settings.PHOTOS_ROOT, 'show_indexes':True}),
# (r'^gallery/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.PHOTOS_ROOT, 'show_indexes':True}),
# for those silly ideas
url(r'^experimental.*$', views_logbooks.experimental, name="experimental"),
url(r'^pathsreport.*$', views_logbooks.pathsreport, name="pathsreport"),
#url(r'^trip_report/?$',views_other.tripreport,name="trip_report")
url(r'^(.*)_edit$', 'flatpages.views.editflatpage', name="editflatpage"),
url(r'^(.*)$', 'flatpages.views.flatpage', name="flatpage"),
)
url(r'^(.*)_edit$', editflatpage, name="editflatpage"),
url(r'^(.*)$', flatpage, name="flatpage"),
]
#Allow prefix to all urls
urlpatterns = patterns ('',
('^%s' % settings.DIR_ROOT, include(actualurlpatterns))
)
urlpatterns = [
url('^%s' % settings.DIR_ROOT, include(actualurlpatterns))
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)


@@ -23,12 +23,12 @@ def randomLogbookSentence():
#Choose again if there are no sentences (this happens if it is a placeholder entry)
while len(re.findall('[A-Z].*?\.',randSent['entry'].text))==0:
randSent['entry']=LogbookEntry.objects.order_by('?')[0]
#Choose a random sentence from that entry. Store the sentence as randSent['sentence'], and the number of that sentence in the entry as randSent['number']
sentenceList=re.findall('[A-Z].*?\.',randSent['entry'].text)
randSent['number']=random.randrange(0,len(sentenceList))
randSent['sentence']=sentenceList[randSent['number']]
return randSent
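The sentence-splitting regex used twice above grabs capital-letter-to-full-stop spans; a quick self-contained check:

import re
sentenceList = re.findall('[A-Z].*?\.', "Went to 204. Rigged the pitch.")
assert sentenceList == ['Went to 204.', 'Rigged the pitch.']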
@@ -37,29 +37,29 @@ def save_carefully(objectType, lookupAttribs={}, nonLookupAttribs={}):
-if instance does not exist in DB: add instance to DB, return (new instance, True)
-if instance exists in DB and was modified using Troggle: do nothing, return (existing instance, False)
-if instance exists in DB and was not modified using Troggle: overwrite instance, return (instance, False)
The checking is accomplished using Django's get_or_create and the new_since_parsing boolean field
defined in core.models.TroggleModel.
"""
instance, created=objectType.objects.get_or_create(defaults=nonLookupAttribs, **lookupAttribs)
if not created and not instance.new_since_parsing:
for k, v in nonLookupAttribs.items(): #overwrite the existing attributes from the logbook text (except date and title)
for k, v in list(nonLookupAttribs.items()): #overwrite the existing attributes from the logbook text (except date and title)
setattr(instance, k, v)
instance.save()
if created:
logging.info(str(instance) + ' was just added to the database for the first time. \n')
if not created and instance.new_since_parsing:
logging.info(str(instance) + " has been modified using Troggle, so the current script left it as is. \n")
if not created and not instance.new_since_parsing:
logging.info(str(instance) + " existed in the database unchanged since last parse. It was overwritten by the current script. \n")
return (instance, created)
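A minimal usage sketch of that contract; the values are invented, but the field split follows the comment above (date and title as lookup keys, text as the overwritable attribute):

import datetime
lookupAttribs = {"date": datetime.date(2019, 7, 20), "title": "Surveying in 204"}   # illustrative
nonLookupAttribs = {"text": "Went to 204 and surveyed the new lead."}
entry, created = save_carefully(LogbookEntry, lookupAttribs, nonLookupAttribs)
# created is True on first parse; on a re-parse the text is overwritten only
# if entry.new_since_parsing is False.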
re_body = re.compile(r"\<body[^>]*\>(.*)\</body\>", re.DOTALL)
re_title = re.compile(r"\<title[^>]*\>(.*)\</title\>", re.DOTALL)
@@ -80,7 +80,7 @@ def get_single_match(regex, text):
def href_to_wikilinks(matchobj):
"""
Given an html link, checks for possible valid wikilinks.
Returns the first valid wikilink. Valid means the target
object actually exists.
"""
@@ -91,7 +91,7 @@ def href_to_wikilinks(matchobj):
return matchobj.group()
#except:
#print 'fail'
re_subs = [(re.compile(r"\<b[^>]*\>(.*?)\</b\>", re.DOTALL), r"'''\1'''"),
(re.compile(r"\<i\>(.*?)\</i\>", re.DOTALL), r"''\1''"),
@@ -107,12 +107,12 @@ re_subs = [(re.compile(r"\<b[^>]*\>(.*?)\</b\>", re.DOTALL), r"'''\1'''"),
(re.compile(r"\<a\s+href=['\"]#([^'\"]*)['\"]\s*\>(.*?)\</a\>", re.DOTALL), r"[[cavedescription:\1|\2]]"), #assumes that all links with target ids are cave descriptions. Not great.
(re.compile(r"\[\<a\s+href=['\"][^'\"]*['\"]\s+id=['\"][^'\"]*['\"]\s*\>([^\s]*).*?\</a\>\]", re.DOTALL), r"[[qm:\1]]"),
(re.compile(r'<a\shref="?(?P<target>.*)"?>(?P<text>.*)</a>'),href_to_wikilinks),
]
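A self-contained check of the first two rules in the table (bold and italic), applied in order as html_to_wiki below presumably applies the whole list:

text = "<b>bold</b> and <i>italic</i>"
for regex, repl in re_subs[:2]:      # the '''bold''' and ''italic'' rules
    text = regex.sub(repl, text)
assert text == "'''bold''' and ''italic''"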
def html_to_wiki(text, codec = "utf-8"):
if type(text) == str:
text = unicode(text, codec)
text = str(text, codec)
text = re.sub("</p>", r"", text)
text = re.sub("<p>$", r"", text)
text = re.sub("<p>", r"\n\n", text)