1. Download the database from http://www.dxcluster.org/download/usdbraw.gz
2. Save it somewhere (/tmp, anywhere).
3. Decompress it using your favorite program (gunzip / winzip); call it
   usdbraw if using winzip. BE WARNED: some browsers may decompress it on
   the fly for you. You can tell which you have: if it is 5Mb it is still
   compressed, if it is 16Mb it is not.
4. Remove any /spider/data/user.v1 files lying around (at least for this
   first time).
5. cd /spider/perl
6. Run create_usdb.pl, giving it the path to the usdbraw file you saved.
7. Wait; I suggest some cups of tea are in order.
8. Wait a bit more.
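The steps above can be sketched as a short shell sequence. This is a dry-run helper that just prints the commands (pipe its output into sh to actually run them); the paths and the exact create_usdb.pl invocation are assumptions based on the text above, so adjust them to your installation.

```shell
#!/bin/sh
# Print the US database update steps described above.
# Assumptions: a standard /spider install, the file saved in /tmp,
# and create_usdb.pl taking the raw file path as its argument.
usdb_steps() {
  cat <<'EOF'
cd /tmp
wget http://www.dxcluster.org/download/usdbraw.gz
gunzip usdbraw.gz              # skip if your browser already decompressed it
rm -f /spider/data/user.v1     # first time only
cd /spider/perl
perl create_usdb.pl /tmp/usdbraw
EOF
}

usdb_steps   # prints the commands; pipe into sh to actually run them
```

Printing rather than executing keeps the sketch safe to inspect first; the real run takes a while (hence the tea).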
You no longer need Compress::Zlib; I had assumed it was universal.

At some point you will be able to do this while the node is running. There is a
planned method of keeping the US DB up to date with smaller (ie < 15Mb) patch
files once a week but you will have to wait a bit for the code to bed down
first. You can filter on routes, spots and announces using 'call_state' or
'by_state'. Once you have run create_usdb.pl you will need to restart the node.
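As a rough sketch of what such filtering might look like (the exact command lines are not shown in this note; these follow the usual accept/reject filter form and should be treated as assumptions, not tested syntax):

```
acc/spot by_state az,nm      # only pass spots sent by stations in AZ or NM
rej/ann call_state md        # drop announces whose call is in MD
```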
If you don't need this, then don't run create_usdb.pl; it would simply be
a waste of time. The run-time version is 30Mb and has 840,000-odd entries
in it. This does not replace or supplant sh/qrz (sorry Charlie [who put me
up to this]).