+# The new internal structure of the users system looks like this:
+#
+# The users.v4 file is formatted as a file of lines, each containing: <callsign>\t{json serialised version of user record}\n
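+#
+# For example, a line for G1TLH might look like this (every field other than 'call' is purely illustrative):
+#
+#     G1TLH	{"call":"G1TLH","name":"Dirk","lastin":1577836800}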
+#
+# You can look at it with any text tools or your favourite editor :-)
+#
+# In terms of internal structure, the main user hash remains as %u, keyed on callsign as before.
+#
+# The value is a one or two element array, [position] or [position, ref], depending on whether the record has been
+# "get()ed" [i.e. read from disk] or not. The 'position' is simply the byte offset of the start of that user's line in
+# the file. The function "get()" returns the stored reference in array[1], if present; otherwise it seeks to the position
+# in array[0], reads a line, json_decodes it, stores the resulting reference in array[1] and returns that. That reference
+# is then used from that point onwards.
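+#
+# A minimal sketch of that logic (not the actual code: it assumes a filehandle $fh already open on users.v4 and
+# decode_json() from JSON::XS or similar):
+#
+#     sub get {
+#         my $call = shift;
+#         my $r = $u{$call} or return undef;    # unknown callsign
+#         return $r->[1] if defined $r->[1];    # already loaded, reuse the cached ref
+#         seek $fh, $r->[0], 0;                 # jump to the start of this user's line
+#         my $line = <$fh>;
+#         my (undef, $json) = split /\t/, $line, 2;
+#         return $r->[1] = decode_json($json);  # cache the ref and return it
+#     }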
+#
+# The routine writeoutjson() will (very) lazily write out a copy of %u to users.v4.n WITHOUT STORING ANY EXTRA,
+# CURRENTLY UNREFERENCED, CALLSIGN records. In effect, it does a sort of random access merge of the current user file
+# and any "in memory" versions of user records. This can be done with a spawned command because it will just be reading
+# %u and merging loaded records, not altering the current users.v4 file in any way.
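+#
+# One way that merge could look (again only a sketch: it assumes encode_json() and, for brevity, ignores callsigns
+# that are in %u but not yet in the file):
+#
+#     sub writeoutjson {
+#         open my $in,  '<', 'users.v4'   or die $!;
+#         open my $out, '>', 'users.v4.n' or die $!;
+#         while (my $line = <$in>) {
+#             my ($call) = split /\t/, $line, 2;
+#             if ($u{$call} && defined $u{$call}[1]) {
+#                 print $out "$call\t", encode_json($u{$call}[1]), "\n";   # in-memory copy wins
+#             } else {
+#                 print $out $line;                                        # untouched record passes straight through
+#             }
+#         }
+#         close $out;
+#         close $in;
+#     }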
+#
+# %u -> $u{call} -> [position of json line in users.v4 (, reference -> {call=>'G1TLH', ...} if this record is in use)].
+#
+# On my machine, it takes about 250ms to read the entire users.v4 file of 190,000 records and to create a
+# $u{callsign}->[record position in users.v4] entry for every callsign in the users.v4 file. Loading ~19,000 records
+# (read from disk, decode json, store reference) takes about 110ms (roughly 5.8µs/record).
+#
+# A periodic dump to users.v4.n, with said ~19,000 records in memory, takes about 750ms to write (this could be sped up
+# by at least half if it becomes a problem!). As this periodic dump will be spawned off, it will not interrupt the data
+# stream.
+#
+# This is the first rewrite of DXUsers since inception. In the mojo branch we will no longer use Storable but JSON
+# instead. We will now be storing all the keys in memory and will use opportunistic loading of actual records in
+# "get()". So out of, say, 200,000 known users it is unlikely that we will have more than 10% (more likely fewer) of
+# the user records in memory. This means there will be an increase in memory requirement, but a modest one: I estimate
+# it is unlikely to be more than 30 or so MB.
+#
+# At the moment that means that the working users.v4 is "immutable".
+#
+# In normal operation, when first calling 'init()', the keys and positions will be read from the newer of users.v4.n and
+# users.v4. If there is no users.v4.n, then users.v4 will be used. As time wears on, %u will then accrete active user records.
+# Once an hour the current %u will be saved to users.v4.n.
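+#
+# A sketch of that startup scan (hypothetical code; -M gives a file's age in days, so the smaller value is the newer
+# file):
+#
+#     sub init {
+#         my $fn = (-e 'users.v4.n' && -M 'users.v4.n' < -M 'users.v4') ? 'users.v4.n' : 'users.v4';
+#         open $fh, '<', $fn or die "cannot open $fn: $!";
+#         my $pos = 0;
+#         while (my $line = <$fh>) {
+#             my ($call) = split /\t/, $line, 2;
+#             $u{$call} = [ $pos ];             # key and position only, no json decoding yet
+#             $pos = tell $fh;                  # start of the next line
+#         }
+#     }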
+#
+# If it becomes too much of a problem then we are likely to chuck "close()d" users onto the end of the current
+# users.v4, leaving existing records intact but updating the position pointer (and clearing the now unneeded user ref)
+# to point at the new location. This will be a sort of write-behind log file. The users.v4 file is still immutable for
+# the starting positions, but any chucked off records (or even "updates") will be written to the end of that file. If
+# this has to be reread at any time, then the last entry for any callsign "wins". But this will only happen if I think
+# the memory requirements over time become too much.
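+#
+# Should that ever be needed, the append might look something like this (purely hypothetical, assuming users.v4 was
+# opened read-write; note that the scan in init() above already makes the last entry "win", because a later line simply
+# overwrites $u{$call}):
+#
+#     seek $fh, 0, 2;                               # SEEK_END: we only ever append, the start of the file is untouched
+#     my $newpos = tell $fh;
+#     print $fh "$call\t", encode_json($ref), "\n";
+#     $u{$call} = [ $newpos ];                      # drop the ref, keep only the new position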
+#
+# There is no functional difference between users.v4 and the export_user generated "user_json" file(s), other than the
+# latter being in sorted order with the record elements in "canonical" order. So there will no longer be any code to
+# execute to "restore the users file": simply copy one of the "user_json" files to users.v4, remove users.v4.n and
+# restart.
+#
+# Hopefully though, this will put to rest the need to do all that messing about ever again... Pigs may well be seen flying over
+# your node as well :-)