<li>I reckon python has another 10-20 years at least.
<li>I reckon SQL databases have another 30 years at least.
</ul>
<p>I don't think writing our own object/SQL code is sensible:
there is such a lot going on that we would create a large volume of software even if we stuck close to the metal.
[I could well be wrong. That is Option 1.]
<h3>Option 2</h3>
<p>
We keep the same architecture as now, and incrementally replace modules that use django/SQL with direct object storage of collections using the standard library pickle, shelve and json modules.
The more modules we replace, the easier it becomes for new people to work on it - but also the easier it becomes to migrate it to newer django versions, or to move entirely from django to Jinja2 [or Mako] + a URL-router
[e.g. <a href="https://werkzeug.palletsprojects.com/en/1.0.x/routing/">werkzeug</a> or routes] + an HTTP request/response system (a minimal sketch of that combination follows below).
[This could be harder than it looks if cross-referencing and pointers between collections become unmaintainable - a risk we need to watch. But other people are using Redis for this sort of thing. ]
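<p>[A minimal sketch, to make that idea concrete rather than to recommend it: the URL map, template and data below are invented for illustration, and only public werkzeug 1.x and Jinja2 APIs are used.]
<pre>
# A tiny "Jinja2 + werkzeug routing" application - no Django at all.
from jinja2 import Environment, DictLoader
from werkzeug.routing import Map, Rule
from werkzeug.wrappers import Request, Response
from werkzeug.exceptions import HTTPException

# Jinja2 does the templating that Django templates do now
templates = Environment(loader=DictLoader({
    "caves.txt": "We have {{ caves|length }} caves: {{ caves|join(', ') }}",
}))

# werkzeug does the URL routing that Django's urls.py does now
url_map = Map([Rule("/caves/", endpoint="caves")])

CAVES = ["204", "258", "264"]          # stand-in data, not the real cave list

@Request.application
def application(request):
    adapter = url_map.bind_to_environ(request.environ)
    try:
        endpoint, args = adapter.match()
    except HTTPException as e:
        return e
    body = templates.get_template("caves.txt").render(caves=CAVES)
    return Response(body, mimetype="text/plain")

if __name__ == "__main__":
    from werkzeug.serving import run_simple
    run_simple("localhost", 8000, application)
</pre>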
We also use a noSQL db with a direct and easy mapping to python collections. The obvious candidates are
<a href="https://www.mongodb.com/">MongoDB</a> or the
<a href="https://en.wikipedia.org/wiki/Zope_Object_Database">Zope Object Database</a> (ZODB). MongoDB is famous and programmers may want to work on it to get the experience, but ZODB is much closer to python. But ZODB is now rather old, and the Django package django-zodb has not been updated for 10 years. And MongoDB has a bad impedance mismatch with Django: <a href="https://daniel.feldroy.com/when-to-use-mongodb-with-django.html">"Short answer is you don't use MongoDB with Django"</a> - which creates a lot of extra pointless work. If we ever need atomic transactions we should use a proper database rather than trying to fudge things ourselves - but not either of those two.
[This needs to be explored, but I suspect we don't gain much compared with the effort of forcing maintainers to learn a new query language. The standard library shelve module is already adequate.]
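<p>[To make "direct object storage of collections" concrete: a minimal sketch using only the standard library shelve module. The Person class and the keys are invented for illustration; this is not troggle code.]
<pre>
# Objects and collections stored directly - no SQL and no ORM.
import shelve
from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    trips: list = field(default_factory=list)

# writeback=True means objects fetched from the shelf can be mutated in
# place; the changes are written out on sync() or close().
with shelve.open("people", writeback=True) as db:
    db["fred"] = Person("Fred")
    db["fred"].trips.append("2019: surface prospecting")
    db.sync()

with shelve.open("people") as db:
    print(db["fred"].trips)        # ['2019: surface prospecting']
</pre>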
<p>We migrate to an <a href="https://www.fullstackpython.com/sanic.html">"improved Django"</a> or a "Django-lite". Django is a massive system, and it is moving with agility towards being more asynchronous, but there are already competing projects which do much the same thing in a cleaner way and (being 15 years younger) without the historical baggage, cutting out a lot of now-unneeded complexity. This looks like being a very hopeful possibility. <a href="https://sanicframework.org/en/">Sanic</a> and <a href="https://pgjones.gitlab.io/quart/">Quart</a> look like being the first of a new generation.
<p>The drivers behind these new Django clones are the
<a href="http://masnun.rocks/2016/11/17/exploring-asyncio-uvloop-sanic-motor/">asynchronous capabilities</a> introduced in python 3.4 and the
<a href="https://www.encode.io/articles/hello-asgi">ASGI interface</a> for web workers which replaces WSGI. These will have much the same effect on python as Node.js had on JavaScript.
<p>But we should not be in too much of a hurry. It will take Sanic (and similar) years to get to a state where things don't
break horribly between versions every 6 months. We lived through
<p>ASGI, which Django also supports from v3.2, has the interesting effect that we no longer need a webserver like Apache or nginx to buffer requests. We can use the very lightweight <a href="https://www.uvicorn.org/">uvicorn</a> instead.
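<p>[For concreteness, a raw ASGI application is just an async callable. This minimal sketch - invented for illustration, not troggle code - is the kind of thing uvicorn serves directly, with no Apache or nginx in front.]
<pre>
# hello_asgi.py - a bare ASGI application, no framework at all.
# Run it with:  uvicorn hello_asgi:app
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({
        "type": "http.response.body",
        "body": b"Hello from ASGI",
    })
</pre>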
<h3>Things that could be a bit sticky 1 - multi-user safety</h3>
<p>Multi-user synchronous use could be a bit tricky without a solid multi-user database sitting behind the python code. So removing all the SQL database use may not be what we want to do after all.
<p>Under all conceivable circumstances we would continue to use WSGI or
<a href="https://asgi.readthedocs.io/en/latest/introduction.html">ASGI</a> to connect our python code to a user-facing
webserver (apache, nginx, gunicorn). Every time a webpage is served, it is done by a separate thread in the webserver and essentially a
new instance of Django is created to serve it. Django relies on its multi-user SQL database (MariaDB, postgresql) to ensure that competing
updates by two instantiations of itself to the same stored object are correctly atomic. But even today, if two people try to update the
same handbook <em>webpage</em>, or the same survex <em>file</em>, at the same time we expect horrible corruption of the data. Even today,
with the SQL database, writing <em>files</em> is not coded in a properly multi-user manner. We should write some file lock/serializer code
to make this safe.
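<p>[A sketch of what that lock/serializer code might look like, using the standard library fcntl module (POSIX only); the helper names are invented for illustration, not existing troggle code.]
<pre>
# Serialize competing writers to the same file with an advisory lock.
import fcntl
from contextlib import contextmanager

@contextmanager
def locked(path):
    # Lock a separate ".lock" file so readers of the real file are unaffected.
    with open(str(path) + ".lock", "w") as lockfile:
        fcntl.flock(lockfile, fcntl.LOCK_EX)      # blocks until we hold the lock
        try:
            yield
        finally:
            fcntl.flock(lockfile, fcntl.LOCK_UN)

def write_locked(path, text):
    """Hypothetical helper: only one writer at a time touches the file."""
    with locked(path):
        with open(path, "w") as f:
            f.write(text)
</pre>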
<p>The move by <a href="https://arunrocks.com/a-guide-to-asgi-in-django-30-and-its-performance/">Django
from single-threaded WSGI to asynchronous ASGI</a> began with v3.0 and was, for 'views', almost complete in 3.2.
This makes the server more responsive,
but doesn't really change anything from the perspective of our need to stop users overwriting each other's work. If we just store
everything in in-memory dictionaries we may need to write our own asyncio python to do that synchronization. That would be a Bad Thing as
<h3>Things that could be a bit sticky 2 - front-end frameworks</h3>
<p>There is not yet a front-end (javascript or <a href="https://en.wikipedia.org/wiki/WebAssembly">WebAssembly</a>) framework on the client, i.e. a phone app or webpage, which is stable enough for us to commit to.
<p>Modern JavaScript frameworks support dynamic 'single-page websites' where all the component parts are fetched and replaced
asynchronously (this used to be called <a href="https://en.wikipedia.org/wiki/Ajax_%28programming%29">AJAX</a>; the underlying technique first appeared in
1999, though the name was only coined in 2005). This is fundamentally different from how Django was originally designed: using public URLs connected to code which produces a
complete webpage based on a single template. Django <a href="https://engineertodeveloper.com/how-to-use-ajax-with-django/">can interoperate</a>
with dynamic systems but support will <a href="https://speakerdeck.com/andrewgodwin/just-add-await-retrofitting-async-into-django?slide=76">become increasingly baroque</a> I imagine.
<h3>Things that could be a bit sticky 3 - GIS</h3>
<p>
New functionality: e.g. making the whole thing GIS-centric is a possibility.
A GIS db could make a lot of sense. Expo has GIS expertise and we have a lot of badly-integrated GPS data, so this needs a lot of thought - and we should get on with that.
<p>We will also need an API now-ish, whatever we do, so that keen kids can write their own special-purpose front-ends using new cool toys. Which will keep them out of our hair. We can do this easily with Django templates that generate JSON, which is <a href="https://www.cuyc.org.uk/committee/events_json_short/">what CUYC do</a>. We already have some of this: <a href="exportjson.html">JSON export</a>.
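<p>[As an alternative to JSON-generating templates, a plain Django view returning JsonResponse works too. A minimal sketch, with the Expedition model and its fields invented for illustration rather than taken from the real troggle models:]
<pre>
# views.py - a JSON API endpoint from a plain Django view (no REST framework).
from django.http import JsonResponse
from troggle.core.models import Expedition     # hypothetical import path

def expeditions_json(request):
    data = [{"year": e.year, "name": e.name} for e in Expedition.objects.all()]
    return JsonResponse(data, safe=False)       # safe=False permits a top-level list

# urls.py:
#   path("api/expeditions.json", expeditions_json),
</pre>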
<p>Andy Waddington, who wrote the first expo website in 1996, mentioned that he could never get the hang of Django at all, and that working with SQL databases would require some serious book-revision.
<p>
So a useful goal, I think, is to make 'troggle2' accessible to a generic python programmer with no specialist skills in any databases or frameworks. Put against that is the argument that doing so might double the volume of code to be maintained, which would be worse. Nevertheless, it is an aim to keep in mind.
But even 'just Python' is not that easy. Python is a much bigger language now than it used to be, with some increasingly esoteric corners, such as the new asyncio framework.