Compare commits: `1.0` ... `3-sse-endp` (73 commits)
`README.md`:

````diff
@@ -10,17 +10,64 @@ The API is deliberately well-defined with an OpenAPI specification and auto-gene
 Spothole itself is also open source, Public Domain licenced code that anyone can take and modify.
 
-Supported data sources include DX Clusters, the Reverse Beacon Network (RBN), the APRS Internet Service (APRS-IS), POTA, SOTA, WWFF, GMA, WWBOTA, HEMA, Parks 'n' Peaks, ZLOTA, WOTA, BOTA, the UK Packet Repeater Network, and NG3K.
+Supported data sources include DX Clusters, the Reverse Beacon Network (RBN), the APRS Internet Service (APRS-IS), POTA, SOTA, WWFF, GMA, WWBOTA, HEMA, Parks 'n' Peaks, ZLOTA, WOTA, BOTA, the UK Packet Repeater Network, NG3K, and any site based on the xOTA software by nischu.
 
 
 
 
 
-### Accessing the public version
+## Accessing the public version
 
 You can access the public version's web interface at [https://spothole.app](https://spothole.app), and see [https://spothole.app/apidocs](https://spothole.app/apidocs) for the API details.
 
-### Running your own copy
+This is a Progressive Web App, so you can also "install" it to your Android or iOS device by accessing it in Chrome or Safari respectively, and following the menu-driven process for installing PWAs.
+
+## Embedding Spothole in another website
+
+You can embed Spothole in another website, e.g. for use as part of a ham radio custom dashboard.
+
+URL parameters can be used to trigger an "embedded" mode which hides the headers, footers and settings. In this mode, you provide configuration for the various filter and display options via additional URL parameters. Any settings that the user has set for Spothole are ignored. This is so that the embedding site can select, for example, their choice of dark mode or SIG filters, which will not impact how Spothole appears when the user accesses it directly. Effectively, it becomes separate from their normal Spothole settings.
+
+Setting `embedded` to true is important for the rest of the settings to be applied; otherwise, the user's defaults will be used in preference to the URL params.
+
+These are supplied with the URL of the page you want to embed. For example, for an embedded version of the band map in dark mode, use `https://spothole.app/bands?embedded=true&dark-mode=true`. For an embedded version of the main spots/home page in the system light/dark mode, use `https://spothole.app/?embedded=true`. For dark mode showing 70cm TOTA spots only, use `https://spothole.app/?embedded=true&dark-mode=true&sig=TOTA&band=70cm`. Providing no URL params causes the page to be loaded in the normal way it would when accessed directly in the user's browser.
+
+The supported parameters are as follows. Generally these match the equivalent parameters in the real Spothole API, where a mapping exists.
+
+| Name | Allowed Values | Default | Example | Description |
+|---|---|---|---|---|
+| `embedded` | `true`, `false` | `false` | `?embedded=true` | Enables embedded mode. |
+| `dark-mode` | `true`, `false` | `false` | `?dark-mode=true` | Enables dark mode. |
+| `time-zone` | `UTC`, `local` | `UTC` | `?time-zone=local` | Sets times to be in UTC or local time. |
+| `limit` | 10, 25, 50, 100 | 50 | `?limit=50` | Sets the number of spots that will be displayed on the main spots page. |
+| `limit` | 25, 50, 100, 200, 500 | 100 | `?limit=100` | Sets the number of alerts that will be displayed on the alerts page. |
+| `max_age` | 300, 600, 1800, 3600 | 1800 | `?max_age=1800` | Sets the maximum age of spots displayed on the map and bands pages, in seconds. |
+| `band` | Comma-separated list | (all) | `?band=20m,40m` | Sets the list of bands that will be shown on the spots, bands and map pages. Available options match the labels of the buttons in the standard web interface. |
+| `sig` | Comma-separated list | (all) | `?sig=POTA,SOTA,NO_SIG` | Sets the list of SIGs that will be shown on the spots, bands and map pages. Available options match the labels of the buttons in the standard web interface. |
+| `source` | Comma-separated list | (all) | `?source=Cluster` | Sets the list of sources that will be shown on any spot or alert pages. Available options match the labels of the buttons in the standard web interface. |
+| `mode_type` | Comma-separated list | (all) | `?mode_type=PHONE,CW` | Sets the list of mode types that will be shown on the spots, bands and map pages. Available options match the labels of the buttons in the standard web interface. |
+| `dx_continent` | Comma-separated list | (all) | `?dx_continent=NA,SA` | Sets the list of DX Continents that will be shown on any spot or alert pages. Available options match the labels of the buttons in the standard web interface. |
+| `de_continent` | Comma-separated list | (all) | `?de_continent=EU` | Sets the list of DE Continents that will be shown on the spots, bands and map pages. Available options match the labels of the buttons in the standard web interface. |
+
+More will be added soon to allow customisation of filters and other display properties.
````
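Since embed URLs are just ordinary query strings, an embedding site can generate them programmatically. A minimal sketch in Python; the `spothole.app` base URL and parameter names come from the table above, while the `embed_url` helper itself is hypothetical:

```python
from urllib.parse import urlencode

def embed_url(base, params):
    """Build a Spothole embed URL. embedded=true is always set, since the
    other parameters are only honoured in embedded mode."""
    return base + "?" + urlencode({"embedded": "true", **params})

print(embed_url("https://spothole.app/", {"dark-mode": "true", "sig": "POTA,SOTA", "band": "20m,40m"}))
# https://spothole.app/?embedded=true&dark-mode=true&sig=POTA%2CSOTA&band=20m%2C40m
```

Note that `urlencode` percent-encodes the commas in list-valued parameters; the server treats `%2C` and `,` identically.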
````diff
+## Writing your own client
+
+One of the key strengths of Spothole is that the API is well-defined and open to anyone to use. This means you can build your own software that uses data from Spothole.
+
+As well as the main API endpoints to fetch spots and alerts, with various possible query parameters, there are also Server-Sent Events (SSE) API endpoints to receive a live feed, plus various utility lookup endpoints for things like callsign and park data.
+
+Various approaches exist to writing your own client, but in general:
+
+* Refer to the API docs. These are built on an OpenAPI definition file (`/webassets/apidocs/openapi.yml`), which you can automatically use to generate a client skeleton using various software.
+* Call the main "spots" or "alerts" API endpoints to get the data you want. Apply filters if necessary.
+* Call the "options" API to get an idea of which bands, modes etc. the server knows about. You might want to do that first before calling the spots/alerts APIs, to allow you to populate your filters correctly.
+* Refer to the provided HTML/JS interface for a reference on different approaches. For example, the "map" and "bands" pages simply query the main spot API on a timer, whereas the main/spots page combines this approach with using the Server-Sent Events (SSE) endpoint to update live.
+* Let me know if you get stuck, I'm happy to help!
````
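The SSE endpoints deliver a standard `text/event-stream`, so a client only has to read lines and decode the `data:` payloads. A minimal sketch, assuming the events carry JSON spot objects; the exact endpoint path is an assumption here, so check `/apidocs` for the real one:

```python
import json
import urllib.request

def parse_sse_data(line):
    """Return the decoded JSON payload of an SSE `data:` line, or None for
    comments, event-name lines, and blank separator lines."""
    line = line.strip()
    if line.startswith("data:"):
        return json.loads(line[len("data:"):].strip())
    return None

def stream_events(url):
    """Yield each JSON event from an SSE endpoint as it arrives."""
    with urllib.request.urlopen(url) as response:
        for raw in response:
            payload = parse_sse_data(raw.decode("utf-8"))
            if payload is not None:
                yield payload

# Hypothetical usage -- substitute the real SSE endpoint from the API docs:
# for spot in stream_events("https://spothole.app/api/..."):
#     print(spot.get("dx_call"))
```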
````diff
+## Running your own copy
+
+If you want to run a copy of Spothole with different configuration settings than the main instance, you can download it and run it on your own local machine or server.
+
 To download and set up Spothole on a Debian server, run the following commands. Other operating systems will likely be similar.
````
````diff
@@ -34,7 +81,7 @@ deactivate
 cp config-example.yml config.yml
 ```
 
-Then edit `config.yml` in your text editor of choice to set up the software as you like it.
+Then edit `config.yml` in your text editor of choice to set up the software as you like it. Mostly, this will involve enabling or disabling the various providers of spot and alert data.
 
 `config.yml` has some entries for QRZ.com username & password, and Clublog API keys. If provided, these allow Spothole to retrieve more information about DX spots, such as the country their callsign corresponds to. The software will work just fine without them, but you may find a few country flags etc. are less accurate or missing.
````
````diff
@@ -57,6 +104,8 @@ If you see some errors on startup, check your configuration, e.g. in case you ha
 ### systemd configuration
 
+If you want Spothole to run automatically on startup on a Linux distribution that uses `systemd`, follow the instructions here. For distros that don't use `systemd`, or Windows/OSX/etc., you can find generic instructions for your OS online.
+
 Create a file at `/etc/systemd/system/spothole.service`. Give it the following content, adjusting for the user you want to run it as and the directory in which you have installed it:
 
 ```
````
````diff
@@ -87,7 +136,9 @@ Check the service has started up correctly with `sudo journalctl -u spothole -f`
 ### nginx Reverse Proxy configuration
 
-It's best not to serve Spothole directly on port 80, as that requires root privileges and prevents us using HTTPS, amongst other reasons. To set up nginx as a reverse proxy that sits in front of Spothole, first ensure it's installed e.g. `sudo apt install nginx`, and enabled e.g. `sudo systemd enable nginx`.
+Web servers generally serve their pages from port 80. However, it's best not to serve Spothole's web interface directly on port 80, as that requires root privileges on a Linux system. It also prevents us using HTTPS to serve a secure site, since Spothole itself doesn't directly support acting as an HTTPS server. The normal solution to this is a "reverse proxy" setup, where a general web server handles HTTP and HTTPS requests (on ports 80 & 443 respectively), then passes each request on to the back-end application (in this case Spothole). nginx is a common choice for this general web server.
+
+To set up nginx as a reverse proxy that sits in front of Spothole, first ensure it's installed e.g. `sudo apt install nginx`, and enabled e.g. `sudo systemctl enable nginx`.
 
 Create a file at `/etc/nginx/sites-available/` called `spothole`. Give it the following contents, replacing `spothole.app` with the domain name on which you want to run Spothole. If you changed the port on which Spothole runs, update that on the "proxy_pass" line too.
````
````diff
@@ -135,17 +186,11 @@ You should now be able to access the web interface by going to the domain from y
 Once that's working, [install certbot](https://certbot.eff.org/instructions?ws=nginx&os=snap) onto your server. Run it as root, and when prompted pick your domain name from the list. After a few seconds, it should successfully provision a certificate and modify your nginx config files automatically. You should then be able to access the site via HTTPS.
 
-### Writing your own client
+## Modifying the source code
 
-Various approaches exist to writing your own client, but in general:
+Spothole is Public Domain licenced, so you can grab the source code and start modifying it for your own needs. Contributions of code back to the main repository are encouraged, but completely optional.
 
-* Refer to the API docs. These are built on an OpenAPI definition file (`/webassets/apidocs/openapi.yml`), which you can automatically use to generate a client skeleton using various software.
-* Call the main "spots" API to get the data you want. Apply filters if necessary.
-* Call the "options" API to get an idea of which bands, modes etc. the server knows about. You might want to do that first before calling the spots API.
-* Refer to the provided HTML/JS interface for a reference
-* Let me know if you get stuck, I'm happy to help!
-
-### Structure of the source code
+### Code structure
 
 To navigate your way around the source code, this list may help.
````
````diff
@@ -159,7 +204,7 @@ To navigate your way around the source code, this list may help.
 *Templates*
 
-* `/views` - Templates used for constructing Spothole's user-targeted HTML pages
+* `/templates` - Templates used for constructing Spothole's user-targeted HTML pages
 
 *HTML/JS/CSS front-end code*
````
````diff
@@ -178,28 +223,32 @@ To navigate your way around the source code, this list may help.
 ### Extending the server
 
-Spothole is designed to be easily extensible. If you want to write your own provider, simply add a module to the `providers` package containing your class. (Currently, in order to be loaded correctly, the module (file) name should be the same as the class name, but lower case.)
+Spothole is designed to be easily extensible. If you want to write your own spot provider, for example, simply add a module to the `spotproviders` package containing your class. (Currently, in order to be loaded correctly, the module (file) name should be the same as the class name, but lower case.)
 
-Your class should extend "Provider"; if it operates by polling an HTTP server on a timer, it can instead extend "HTTPProvider" where some of the work is done for you.
+Your class should extend "SpotProvider"; if it operates by polling an HTTP server on a timer, it can instead extend "HTTPSpotProvider" where some of the work is done for you.
 
 The class will need to implement a constructor that takes in the `provider_config` and provides it to the superclass constructor, while also taking any other config parameters it needs.
 
-If you're extending the base `Provider` class, you will need to implement `start()` and `stop()` methods that start and stop a separate thread which handles the provider's processing needs. The thread should call `submit()` or `submit_batch()` when it has one or more spots to report.
+If you're extending the base `SpotProvider` class, you will need to implement `start()` and `stop()` methods that start and stop a separate thread which handles the provider's processing needs. The thread should call `submit()` or `submit_batch()` when it has one or more spots to report.
 
-If you're extending the `HTTPProvider` class, you will need to provide a URI to query and an interval to the superclass constructor. You'll then need to implement the `http_response_to_spots()` method which is called when new data is retrieved. Your implementation should then call `submit()` or `submit_batch()` when it has one or more spots to report.
+If you're extending the `HTTPSpotProvider` class, you will need to provide a URI to query and an interval to the superclass constructor. You'll then need to implement the `http_response_to_spots()` method which is called when new data is retrieved. Your implementation should then call `submit()` or `submit_batch()` when it has one or more spots to report.
 
 When constructing spots, use the comments in the Spot class and the existing implementations as an example. All parameters are optional, but you will at least want to provide a `time` (which must be timezone-aware) and a `dx_call`.
 
-Finally, simply add the appropriate config to the `providers` section of `config.yml`, and your provider should be instantiated on startup.
+Finally, simply add the appropriate config to the `spot-providers` section of `config.yml`, and your provider should be instantiated on startup.
````
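The pattern in that README section can be sketched as follows. The stub base classes here only stand in for Spothole's real `HTTPSpotProvider` and `Spot` so the sketch runs in isolation (in a real provider you would import them instead), and the URL and the `"call"` JSON field name are hypothetical:

```python
from datetime import datetime, timezone

# Stand-in stubs so this sketch runs on its own. In Spothole itself you would
# import HTTPSpotProvider and Spot from the real packages instead.
class HTTPSpotProvider:
    def __init__(self, provider_config, uri=None, interval=None):
        self.provider_config, self.uri, self.interval = provider_config, uri, interval
        self.submitted = []

    def submit_batch(self, spots):
        self.submitted.extend(spots)

class Spot:
    def __init__(self, time=None, dx_call=None):
        self.time, self.dx_call = time, dx_call

# Hypothetical provider: polls a JSON endpoint and turns each record into a Spot.
class ExampleProvider(HTTPSpotProvider):
    def __init__(self, provider_config):
        # URI and poll interval are handed to the superclass, which does the polling
        super().__init__(provider_config, uri="https://example.com/spots.json", interval=300)

    def http_response_to_spots(self, records):
        # time must be timezone-aware; "call" is a made-up field name for this sketch
        spots = [Spot(time=datetime.now(timezone.utc), dx_call=r["call"]) for r in records]
        self.submit_batch(spots)
```

Following the loading rule described above, this module would live at `spotproviders/exampleprovider.py` (file name matching the lower-cased class name), with a matching entry under `spot-providers` in `config.yml`.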
````diff
-### Thanks
+The same approach as above is also used for alert providers.
+
+## Thanks
 
 As well as being my work, I have also gratefully received feature patches from Steven, M1SDH.
 
-The project contains a self-hosted copy of Font Awesome's free library, in the `/webasset/fa/` directory. This is subject to Font Awesome's licence and is not covered by the overall licence declared in the `LICENSE` file. This approach was taken in preference to using their hosted kits due to the popularity of this project exceeding the page view limit for their free hosted offering.
+The project contains a self-hosted copy of Font Awesome's free library, in the `/webassets/fa/` directory. This is subject to Font Awesome's licence and is not covered by the overall licence declared in the `LICENSE` file. This approach was taken in preference to using their hosted kits due to the popularity of this project exceeding the page view limit for their free hosted offering.
+
+The project contains a set of flag icons generated using the "Noto Color Emoji" font on a Debian system, in the `/webassets/img/flags/` directory.
 
 The software uses a number of Python libraries as listed in `requirements.txt`, and a number of JavaScript libraries such as jQuery, Leaflet and Bootstrap. This project would not have been possible without these libraries, so many thanks to their developers.
 
-Particular thanks go to QRZCQ country-files.com for providing country lookup data for amateur radio, and to the developers of `pyhamtools` for making it easy to use this data as well as QRZ.com and Clublog lookup.
+Particular thanks go to country-files.com for providing country lookup data for amateur radio, to K0SWE for [this JSON-formatted DXCC data](https://github.com/k0swe/dxcc-json/), and to the developers of `pyhamtools` for making it easy to use country-files.com data as well as QRZ.com and Clublog lookup.
 
 The project's name was suggested by Harm, DK4HAA. Thanks!
````
```diff
@@ -1,9 +1,8 @@
-from datetime import datetime, timedelta
+from datetime import datetime
 
 import pytz
 
-from core.config import SERVER_OWNER_CALLSIGN, MAX_ALERT_AGE
-from core.constants import SOFTWARE_NAME, SOFTWARE_VERSION
+from core.config import MAX_ALERT_AGE
 
 
 # Generic alert provider class. Subclasses of this query the individual APIs for alerts.
@@ -16,10 +15,12 @@ class AlertProvider:
         self.last_update_time = datetime.min.replace(tzinfo=pytz.UTC)
         self.status = "Not Started" if self.enabled else "Disabled"
         self.alerts = None
+        self.web_server = None
 
     # Set up the provider, e.g. giving it the alert list to work from
-    def setup(self, alerts):
+    def setup(self, alerts, web_server):
         self.alerts = alerts
+        self.web_server = web_server
 
     # Start the provider. This should return immediately after spawning threads to access the remote resources
     def start(self):
```
```diff
@@ -29,12 +30,20 @@
     # because alerts could be created at any point for any time in the future. Rely on hashcode-based id matching
     # to deal with duplicates.
     def submit_batch(self, alerts):
+        # Sort the batch so that earliest ones go in first. This helps keep the ordering correct when alerts are fired
+        # off to SSE listeners.
+        alerts = sorted(alerts, key=lambda alert: (alert.start_time if alert and alert.start_time else 0))
         for alert in alerts:
-            # Fill in any blanks
+            # Fill in any blanks and add to the list
             alert.infer_missing()
-            # Add to the list, provided it heas not already expired.
-            if not alert.expired():
-                self.alerts.add(alert.id, alert, expire=MAX_ALERT_AGE)
+            self.add_alert(alert)
+
+    def add_alert(self, alert):
+        if not alert.expired():
+            self.alerts.add(alert.id, alert, expire=MAX_ALERT_AGE)
+            # Ping the web server in case we have any SSE connections that need to see this immediately
+            if self.web_server:
+                self.web_server.notify_new_alert(alert)
 
     # Stop any threads and prepare for application shutdown
     def stop(self):
```
```diff
@@ -2,15 +2,15 @@ from datetime import datetime, timedelta
 
 import pytz
 from bs4 import BeautifulSoup
 
 from alertproviders.http_alert_provider import HTTPAlertProvider
-from core.sig_utils import get_icon_for_sig
 from data.alert import Alert
 from data.sig_ref import SIGRef
 
 
 # Alert provider for Beaches on the Air
 class BOTA(HTTPAlertProvider):
-    POLL_INTERVAL_SEC = 3600
+    POLL_INTERVAL_SEC = 1800
     ALERTS_URL = "https://www.beachesontheair.com/"
 
     def __init__(self, provider_config):
```
```diff
@@ -10,7 +10,7 @@ from data.alert import Alert
 
 # Alert provider NG3K DXpedition list
 class NG3K(HTTPAlertProvider):
-    POLL_INTERVAL_SEC = 3600
+    POLL_INTERVAL_SEC = 1800
     ALERTS_URL = "https://www.ng3k.com/adxo.xml"
     AS_CALL_PATTERN = re.compile("as ([a-z0-9/]+)", re.IGNORECASE)
```
```diff
@@ -4,14 +4,13 @@ from datetime import datetime
 import pytz
 
 from alertproviders.http_alert_provider import HTTPAlertProvider
-from core.sig_utils import get_icon_for_sig
 from data.alert import Alert
 from data.sig_ref import SIGRef
 
 
 # Alert provider for Parks n Peaks
 class ParksNPeaks(HTTPAlertProvider):
-    POLL_INTERVAL_SEC = 3600
+    POLL_INTERVAL_SEC = 1800
     ALERTS_URL = "http://parksnpeaks.org/api/ALERTS/"
 
     def __init__(self, provider_config):
```
```diff
@@ -3,14 +3,13 @@ from datetime import datetime
 import pytz
 
 from alertproviders.http_alert_provider import HTTPAlertProvider
-from core.sig_utils import get_icon_for_sig
 from data.alert import Alert
 from data.sig_ref import SIGRef
 
 
 # Alert provider for Parks on the Air
 class POTA(HTTPAlertProvider):
-    POLL_INTERVAL_SEC = 3600
+    POLL_INTERVAL_SEC = 1800
     ALERTS_URL = "https://api.pota.app/activation"
 
     def __init__(self, provider_config):
```
```diff
@@ -3,14 +3,13 @@ from datetime import datetime
 import pytz
 
 from alertproviders.http_alert_provider import HTTPAlertProvider
-from core.sig_utils import get_icon_for_sig
 from data.alert import Alert
 from data.sig_ref import SIGRef
 
 
 # Alert provider for Summits on the Air
 class SOTA(HTTPAlertProvider):
-    POLL_INTERVAL_SEC = 3600
+    POLL_INTERVAL_SEC = 1800
     ALERTS_URL = "https://api-db2.sota.org.uk/api/alerts/365/all/all"
 
     def __init__(self, provider_config):
```
```diff
@@ -4,14 +4,13 @@ import pytz
 from rss_parser import RSSParser
 
 from alertproviders.http_alert_provider import HTTPAlertProvider
-from core.sig_utils import get_icon_for_sig
 from data.alert import Alert
 from data.sig_ref import SIGRef
 
 
 # Alert provider for Wainwrights on the Air
 class WOTA(HTTPAlertProvider):
-    POLL_INTERVAL_SEC = 3600
+    POLL_INTERVAL_SEC = 1800
     ALERTS_URL = "https://www.wota.org.uk/alerts_rss.php"
     RSS_DATE_TIME_FORMAT = "%a, %d %b %Y %H:%M:%S %z"
```
```diff
@@ -3,14 +3,13 @@ from datetime import datetime
 import pytz
 
 from alertproviders.http_alert_provider import HTTPAlertProvider
-from core.sig_utils import get_icon_for_sig
 from data.alert import Alert
 from data.sig_ref import SIGRef
 
 
 # Alert provider for Worldwide Flora and Fauna
 class WWFF(HTTPAlertProvider):
-    POLL_INTERVAL_SEC = 3600
+    POLL_INTERVAL_SEC = 1800
     ALERTS_URL = "https://spots.wwff.co/static/agendas.json"
 
     def __init__(self, provider_config):
```
```diff
@@ -81,6 +81,18 @@ spot-providers:
     class: "UKPacketNet"
     name: "UK Packet Radio Net"
     enabled: false
+  -
+    class: "XOTA"
+    name: "39C3 TOTA"
+    enabled: false
+    url: "wss://dev.39c3.totawatch.de/api/spot/live"
+    # Fixed SIG/latitude/longitude for all spots from a provider is currently only a feature for the "XOTA" provider,
+    # the software found at https://github.com/nischu/xOTA/. This is because this is a generic backend for xOTA
+    # programmes and so different URLs provide different programmes.
+    sig: "TOTA"
+    latitude: 53.5622678
+    longitude: 9.9855205
 
 # Alert providers to use. Same setup as the spot providers list above.
 alert-providers:
```
@@ -135,4 +147,13 @@ hamqth-password: ""
 clublog-api-key: ""

 # Allow submitting spots to the Spothole API?
 allow-spotting: true
+
+# Options for the web UI.
+web-ui-options:
+  spot-count: [10, 25, 50, 100]
+  spot-count-default: 50
+  max-spot-age: [5, 10, 30, 60]
+  max-spot-age-default: 30
+  alert-count: [25, 50, 100, 200, 500]
+  alert-count-default: 100
@@ -1,5 +1,5 @@
 import logging
-from datetime import datetime, timedelta
+from datetime import datetime
 from threading import Timer
 from time import sleep

@@ -10,9 +10,10 @@ import pytz
 class CleanupTimer:

     # Constructor
-    def __init__(self, spots, alerts, cleanup_interval):
+    def __init__(self, spots, alerts, web_server, cleanup_interval):
         self.spots = spots
         self.alerts = alerts
+        self.web_server = web_server
         self.cleanup_interval = cleanup_interval
         self.cleanup_timer = None
         self.last_cleanup_time = datetime.min.replace(tzinfo=pytz.UTC)
@@ -29,16 +30,30 @@ class CleanupTimer:
     # Perform cleanup and reschedule next timer
     def cleanup(self):
         try:
-            # Perform cleanup
+            # Perform cleanup via letting the data expire
             self.spots.expire()
             self.alerts.expire()

-            # Alerts can persist in the system for a while, so we want to explicitly clean up any alerts that have
-            # expired
+            # Explicitly clean up any spots and alerts that have expired
+            for id in list(self.spots.iterkeys()):
+                try:
+                    spot = self.spots[id]
+                    if spot.expired():
+                        self.spots.delete(id)
+                except KeyError:
+                    # Must have already been deleted, OK with that
+                    pass
             for id in list(self.alerts.iterkeys()):
-                alert = self.alerts[id]
-                if alert.expired():
-                    self.alerts.delete(id)
+                try:
+                    alert = self.alerts[id]
+                    if alert.expired():
+                        self.alerts.delete(id)
+                except KeyError:
+                    # Must have already been deleted, OK with that
+                    pass
+
+            # Clean up web server SSE spot/alert queues
+            self.web_server.clean_up_sse_queues()

             self.status = "OK"
             self.last_cleanup_time = datetime.now(pytz.UTC)
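The rewritten cleanup loop snapshots the keys first and tolerates entries vanishing between `iterkeys()` and the lookup. A minimal sketch of the same pattern, using a plain dict and a stand-in `Entry` class rather than the project's actual spot/alert store:

```python
# Sketch: delete expired entries while tolerating concurrent deletion,
# mirroring the try/except KeyError pattern in the diff. `Entry` and the
# plain-dict store are stand-ins, not Spothole's real classes.
class Entry:
    def __init__(self, expired):
        self._expired = expired

    def expired(self):
        return self._expired


def sweep_expired(store):
    removed = []
    # Snapshot the keys first: the store may shrink while we iterate
    for id in list(store.keys()):
        try:
            entry = store[id]
            if entry.expired():
                del store[id]
                removed.append(id)
        except KeyError:
            # Entry was already deleted by another thread; that's fine
            pass
    return removed


store = {"a": Entry(True), "b": Entry(False), "c": Entry(True)}
print(sweep_expired(store))  # ['a', 'c']
print(sorted(store))         # ['b']
```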
@@ -16,4 +16,5 @@ MAX_SPOT_AGE = config["max-spot-age-sec"]
 MAX_ALERT_AGE = config["max-alert-age-sec"]
 SERVER_OWNER_CALLSIGN = config["server-owner-callsign"]
 WEB_SERVER_PORT = config["web-server-port"]
 ALLOW_SPOTTING = config["allow-spotting"]
+WEB_UI_OPTIONS = config["web-ui-options"]
core/constants.py (1439 changed lines)
@@ -1,5 +1,7 @@
 import gzip
+import json
 import logging
+import re
 import urllib.parse
 from datetime import timedelta

@@ -14,7 +16,7 @@ from requests_cache import CachedSession
 from core.cache_utils import SEMI_STATIC_URL_DATA_CACHE
 from core.config import config
 from core.constants import BANDS, UNKNOWN_BAND, CW_MODES, PHONE_MODES, DATA_MODES, ALL_MODES, \
-    QRZCQ_CALLSIGN_LOOKUP_DATA, HTTP_HEADERS, HAMQTH_PRG
+    HTTP_HEADERS, HAMQTH_PRG


 # Singleton class that provides lookup functionality.
@@ -46,6 +48,8 @@ class LookupHelper:
         self.CALL_INFO_BASIC = None
         self.LOOKUP_LIB_BASIC = None
         self.COUNTRY_FILES_CTY_PLIST_DOWNLOAD_LOCATION = None
+        self.DXCC_JSON_DOWNLOAD_LOCATION = None
+        self.DXCC_DATA = None

     def start(self):
         # Lookup helpers from pyhamtools. We use five (!) of these. The simplest is country-files.com, which downloads
@@ -84,6 +88,19 @@ class LookupHelper:
                                                           filename=self.CLUBLOG_XML_DOWNLOAD_LOCATION)
         self.CLUBLOG_CALLSIGN_DATA_CACHE = Cache('cache/clublog_callsign_lookup_cache')

+        # We also get a lookup of DXCC data from K0SWE to use for additional lookups of e.g. flags.
+        self.DXCC_JSON_DOWNLOAD_LOCATION = "cache/dxcc.json"
+        success = self.download_dxcc_json()
+        if success:
+            with open(self.DXCC_JSON_DOWNLOAD_LOCATION) as f:
+                tmp_dxcc_data = json.load(f)["dxcc"]
+                # Reformat as a map for faster lookup
+                self.DXCC_DATA = {}
+                for dxcc in tmp_dxcc_data:
+                    self.DXCC_DATA[dxcc["entityCode"]] = dxcc
+        else:
+            logging.error("Could not download DXCC data, flags and similar data may be missing!")
+
     # Download the cty.plist file from country-files.com on first startup. The pyhamtools lib can actually download and use
     # this itself, but it's occasionally offline which causes it to throw an error. By downloading it separately, we can
     # catch errors and handle them, falling back to a previous copy of the file in the cache, and we can use the
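The added block reindexes the downloaded DXCC list into a dict keyed by `entityCode`, so later per-entity lookups are O(1) instead of a scan. The same reshaping can be sketched with invented sample records (the real dxcc.json carries many more fields):

```python
# Sketch: reshape a list of DXCC records into a map keyed by entityCode.
# The sample records are invented; the real data comes from K0SWE's dxcc.json.
tmp_dxcc_data = [
    {"entityCode": 223, "name": "England", "flag": "🏴"},
    {"entityCode": 291, "name": "United States of America", "flag": "🇺🇸"},
]

# Equivalent to the diff's explicit loop, as a dict comprehension
dxcc_data = {dxcc["entityCode"]: dxcc for dxcc in tmp_dxcc_data}

print(dxcc_data[223]["name"])  # England
```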
@@ -103,6 +120,22 @@ class LookupHelper:
             logging.error("Exception when downloading Clublog cty.xml", e)
             return False

+    # Download the dxcc.json file on first startup.
+    def download_dxcc_json(self):
+        try:
+            logging.info("Downloading dxcc.json...")
+            response = SEMI_STATIC_URL_DATA_CACHE.get("https://raw.githubusercontent.com/k0swe/dxcc-json/refs/heads/main/dxcc.json",
+                                                      headers=HTTP_HEADERS).text
+
+            with open(self.DXCC_JSON_DOWNLOAD_LOCATION, "w") as f:
+                f.write(response)
+                f.flush()
+                return True
+
+        except Exception as e:
+            logging.error("Exception when downloading dxcc.json", e)
+            return False
+
     # Download the cty.xml (gzipped) file from Clublog on first startup, so we can use it in preference to querying the
     # database live if possible.
     def download_clublog_ctyxml(self):
@@ -175,11 +208,11 @@ class LookupHelper:
         clublog_data = self.get_clublog_api_data_for_callsign(call)
         if clublog_data and "Name" in clublog_data:
             country = clublog_data["Name"]
-        # Couldn't get anything from Clublog database, try QRZCQ data
+        # Couldn't get anything from Clublog database, try DXCC data
         if not country:
-            qrzcq_data = self.get_qrzcq_data_for_callsign(call)
-            if qrzcq_data and "country" in qrzcq_data:
-                country = qrzcq_data["country"]
+            dxcc_data = self.get_dxcc_data_for_callsign(call)
+            if dxcc_data and "name" in dxcc_data:
+                country = dxcc_data["name"]
         return country

     # Infer a DXCC ID from a callsign
@@ -208,11 +241,11 @@ class LookupHelper:
         clublog_data = self.get_clublog_api_data_for_callsign(call)
         if clublog_data and "DXCC" in clublog_data:
             dxcc = clublog_data["DXCC"]
-        # Couldn't get anything from Clublog database, try QRZCQ data
+        # Couldn't get anything from Clublog database, try DXCC data
         if not dxcc:
-            qrzcq_data = self.get_qrzcq_data_for_callsign(call)
-            if qrzcq_data and "dxcc" in qrzcq_data:
-                dxcc = qrzcq_data["dxcc"]
+            dxcc_data = self.get_dxcc_data_for_callsign(call)
+            if dxcc_data and "entityCode" in dxcc_data:
+                dxcc = dxcc_data["entityCode"]
         return dxcc

     # Infer a continent shortcode from a callsign
@@ -236,11 +269,12 @@ class LookupHelper:
         clublog_data = self.get_clublog_api_data_for_callsign(call)
         if clublog_data and "Continent" in clublog_data:
             continent = clublog_data["Continent"]
-        # Couldn't get anything from Clublog database, try QRZCQ data
+        # Couldn't get anything from Clublog database, try DXCC data
         if not continent:
-            qrzcq_data = self.get_qrzcq_data_for_callsign(call)
-            if qrzcq_data and "continent" in qrzcq_data:
-                continent = qrzcq_data["continent"]
+            dxcc_data = self.get_dxcc_data_for_callsign(call)
+            # Some DXCCs are in two continents, if so don't use the continent data as we can't be sure
+            if dxcc_data and "continent" in dxcc_data and len(dxcc_data["continent"]) == 1:
+                continent = dxcc_data["continent"][0]
         return continent

     # Infer a CQ zone from a callsign
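The continent and zone lookups above only trust a list-valued field from the DXCC data when it contains exactly one entry, since an entity spanning two continents or zones gives no safe answer. That guard could be factored as a tiny helper (hypothetical, not a function in the codebase):

```python
# Sketch: only use a list-valued lookup field when it is unambiguous.
# pick_unambiguous is a hypothetical helper, not part of Spothole itself.
def pick_unambiguous(values):
    # Return the single value if the list has exactly one entry, else None
    if values and len(values) == 1:
        return values[0]
    return None


print(pick_unambiguous(["EU"]))        # EU
print(pick_unambiguous(["EU", "AS"]))  # None: spans two continents
```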
@@ -269,11 +303,12 @@ class LookupHelper:
         clublog_data = self.get_clublog_api_data_for_callsign(call)
         if clublog_data and "CQZ" in clublog_data:
             cqz = clublog_data["CQZ"]
-        # Couldn't get anything from Clublog database, try QRZCQ data
+        # Couldn't get anything from Clublog database, try DXCC data
         if not cqz:
-            qrzcq_data = self.get_qrzcq_data_for_callsign(call)
-            if qrzcq_data and "cqz" in qrzcq_data:
-                cqz = qrzcq_data["cqz"]
+            dxcc_data = self.get_dxcc_data_for_callsign(call)
+            # Some DXCCs are in multiple zones, if so don't use the zone data as we can't be sure
+            if dxcc_data and "cq" in dxcc_data and len(dxcc_data["cq"]) == 1:
+                cqz = dxcc_data["cq"][0]
         return cqz

     # Infer a ITU zone from a callsign
@@ -293,13 +328,18 @@ class LookupHelper:
         hamqth_data = self.get_hamqth_data_for_callsign(call)
         if hamqth_data and "itu" in hamqth_data:
             ituz = hamqth_data["itu"]
-        # Couldn't get anything from HamQTH database, Clublog doesn't provide this, so try QRZCQ data
+        # Couldn't get anything from HamQTH database, Clublog doesn't provide this, so try DXCC data
         if not ituz:
-            qrzcq_data = self.get_qrzcq_data_for_callsign(call)
-            if qrzcq_data and "ituz" in qrzcq_data:
-                ituz = qrzcq_data["ituz"]
+            dxcc_data = self.get_dxcc_data_for_callsign(call)
+            # Some DXCCs are in multiple zones, if so don't use the zone data as we can't be sure
+            if dxcc_data and "itu" in dxcc_data and len(dxcc_data["itu"]) == 1:
+                ituz = dxcc_data["itu"]
         return ituz

+    # Get an emoji flag for a given DXCC entity ID
+    def get_flag_for_dxcc(self, dxcc):
+        return self.DXCC_DATA[dxcc]["flag"] if dxcc in self.DXCC_DATA else None
+
     # Infer an operator name from a callsign (requires QRZ.com/HamQTH)
     def infer_name_from_callsign_online_lookup(self, call):
         data = self.get_qrz_data_for_callsign(call)
@@ -318,10 +358,10 @@ class LookupHelper:
     # Coordinates that look default are rejected (apologies if your position really is 0,0, enjoy your voyage)
     def infer_latlon_from_callsign_online_lookup(self, call):
         data = self.get_qrz_data_for_callsign(call)
-        if data and "latitude" in data and "longitude" in data and (data["latitude"] != 0 or data["longitude"] != 0):
+        if data and "latitude" in data and "longitude" in data and (float(data["latitude"]) != 0 or float(data["longitude"]) != 0) and -89.9 < float(data["latitude"]) < 89.9:
             return [data["latitude"], data["longitude"]]
         data = self.get_hamqth_data_for_callsign(call)
-        if data and "latitude" in data and "longitude" in data and (data["latitude"] != 0 or data["longitude"] != 0):
+        if data and "latitude" in data and "longitude" in data and (float(data["latitude"]) != 0 or float(data["longitude"]) != 0) and -89.9 < float(data["latitude"]) < 89.9:
             return [data["latitude"], data["longitude"]]
         else:
             return None
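The tightened condition coerces the looked-up values with `float()` (some sources return them as strings), rejects the default-looking 0,0, and discards near-polar latitudes that usually indicate placeholder data. The check in isolation, as a hypothetical helper:

```python
# Sketch: the coordinate sanity check from the diff, pulled out into a
# hypothetical helper. Rejects 0,0 and near-polar placeholder latitudes.
def latlon_looks_real(lat, lon):
    lat, lon = float(lat), float(lon)
    return (lat != 0 or lon != 0) and -89.9 < lat < 89.9


print(latlon_looks_real("51.5", "-0.1"))  # True
print(latlon_looks_real(0, 0))            # False: looks like a default
print(latlon_looks_real(90, 10))          # False: placeholder pole position
```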
@@ -378,7 +418,20 @@ class LookupHelper:
     # Infer a mode from the frequency (in Hz) according to the band plan. Just a guess really.
     def infer_mode_from_frequency(self, freq):
         try:
-            return freq_to_band(freq / 1000.0)["mode"]
+            khz = freq / 1000.0
+            mode = freq_to_band(khz)["mode"]
+            # Some additional common digimode ranges in addition to what the 3rd-party freq_to_band function returns.
+            # This is mostly here just because freq_to_band is very specific about things like FT8 frequencies, and e.g.
+            # a spot at 7074.5 kHz will be indicated as LSB, even though it's clearly in the FT8 range. Future updates
+            # might include other common digimode centres of activity here, but this achieves the main goal of keeping
+            # large numbers of clearly-FT* spots off the list of people filtering out digimodes.
+            if (7074 <= khz < 7077) or (10136 <= khz < 10139) or (14074 <= khz < 14077) or (18100 <= khz < 18103) or (
+                    21074 <= khz < 21077) or (24915 <= khz < 24918) or (28074 <= khz < 28077):
+                mode = "FT8"
+            if (7047.5 <= khz < 7050.5) or (10140 <= khz < 10143) or (14080 <= khz < 14083) or (
+                    18104 <= khz < 18107) or (21140 <= khz < 21143) or (24919 <= khz < 24922) or (28180 <= khz < 28183):
+                mode = "FT4"
+            return mode
         except KeyError:
             return None

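The new logic first takes pyhamtools' `freq_to_band` guess, then overrides it inside well-known FT8/FT4 sub-bands. The override step can be sketched on its own, with the frequency in kHz and the ranges copied from the diff (the initial guess would normally come from `freq_to_band`):

```python
# Sketch: override a band-plan mode guess inside the common FT8/FT4 ranges.
# Ranges copied from the diff; data-driven rather than one long condition.
FT8_RANGES = [(7074, 7077), (10136, 10139), (14074, 14077), (18100, 18103),
              (21074, 21077), (24915, 24918), (28074, 28077)]
FT4_RANGES = [(7047.5, 7050.5), (10140, 10143), (14080, 14083),
              (18104, 18107), (21140, 21143), (24919, 24922), (28180, 28183)]


def override_digimode(khz, mode):
    # Half-open ranges [lo, hi) match the <= / < comparisons in the diff
    if any(lo <= khz < hi for lo, hi in FT8_RANGES):
        return "FT8"
    if any(lo <= khz < hi for lo, hi in FT4_RANGES):
        return "FT4"
    return mode


print(override_digimode(7074.5, "LSB"))  # FT8
print(override_digimode(7100.0, "LSB"))  # LSB
```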
@@ -402,6 +455,11 @@ class LookupHelper:
                 # QRZ had no info for the call, that's OK. Cache a None so we don't try to look this up again
                 self.QRZ_CALLSIGN_DATA_CACHE.add(call, None, expire=604800)  # 1 week in seconds
                 return None
+        except (Exception):
+            # General exception like a timeout when communicating with QRZ. Return None this time, but don't cache
+            # that, so we can try again next time.
+            logging.error("Exception when looking up QRZ data")
+            return None
         else:
             return None

@@ -488,11 +546,10 @@ class LookupHelper:
         else:
             return None

-    # Utility method to get QRZCQ data from our constants table, if we can find it
-    def get_qrzcq_data_for_callsign(self, call):
-        # Iterate in reverse order - see comments on the data structure itself
-        for entry in reversed(QRZCQ_CALLSIGN_LOOKUP_DATA):
-            if call.startswith(entry["prefix"]):
+    # Utility method to get generic DXCC data from our lookup table, if we can find it
+    def get_dxcc_data_for_callsign(self, call):
+        for entry in self.DXCC_DATA.values():
+            if re.match(entry["prefixRegex"], call):
                 return entry
         return None

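The rewritten lookup drops the ordered literal-prefix table in favour of matching each entity's `prefixRegex` against the callsign; `re.match` anchors at the start of the string, so these behave like prefix patterns. A sketch with invented sample regexes (the real ones come from dxcc.json):

```python
import re

# Sketch: match a callsign against per-entity prefix regexes, as in the
# rewritten get_dxcc_data_for_callsign. The regexes here are invented
# samples; the real ones come from the downloaded dxcc.json.
DXCC_DATA = {
    223: {"entityCode": 223, "name": "England", "prefixRegex": "^(G|M|2E)"},
    291: {"entityCode": 291, "name": "United States of America", "prefixRegex": "^(K|W|N)"},
}


def dxcc_for_callsign(call):
    for entry in DXCC_DATA.values():
        # re.match only matches at the start, so this acts as a prefix test
        if re.match(entry["prefixRegex"], call):
            return entry
    return None


print(dxcc_for_callsign("M0TRT")["name"])  # England
```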
@@ -1,6 +1,4 @@
-from bottle import response
-from prometheus_client import CollectorRegistry, generate_latest, CONTENT_TYPE_LATEST, Counter, disable_created_metrics, \
-    Gauge
+from prometheus_client import CollectorRegistry, generate_latest, Counter, disable_created_metrics, Gauge

 disable_created_metrics()
 # Prometheus metrics registry
@@ -33,8 +31,6 @@ memory_use_gauge = Gauge(
 )


-# Get a Prometheus metrics response for Bottle
+# Get a Prometheus metrics response for the web server
 def get_metrics():
-    response.content_type = CONTENT_TYPE_LATEST
-    response.status = 200
     return generate_latest(registry)

@@ -6,7 +6,6 @@ from pyhamtools.locator import latlong_to_locator
 from core.cache_utils import SEMI_STATIC_URL_DATA_CACHE
 from core.constants import SIGS, HTTP_HEADERS
 from core.geo_utils import wab_wai_square_to_lat_lon
-from data.sig_ref import SIGRef


 # Utility function to get the icon for a named SIG. If no match is found, the "circle-question" icon will be returned.
@@ -25,46 +24,51 @@ def get_ref_regex_for_sig(sig):
     return None


-# Look up details of a SIG reference (e.g. POTA park) such as name, lat/lon, and grid.
+# Look up details of a SIG reference (e.g. POTA park) such as name, lat/lon, and grid. Takes in a sig_ref object which
+# must at minimum have a "sig" and an "id". The rest of the object will be populated and returned.
 # Note there is currently no support for KRMNPA location lookup, see issue #61.
-def get_sig_ref_info(sig, sig_ref_id):
-    sig_ref = SIGRef(id=sig_ref_id, sig=sig)
+def populate_sig_ref_info(sig_ref):
+    if sig_ref.sig is None or sig_ref.id is None:
+        logging.warn("Failed to look up sig_ref info, sig or id were not set.")
+
+    sig = sig_ref.sig
+    ref_id = sig_ref.id
     try:
         if sig.upper() == "POTA":
-            data = SEMI_STATIC_URL_DATA_CACHE.get("https://api.pota.app/park/" + sig_ref_id, headers=HTTP_HEADERS).json()
+            data = SEMI_STATIC_URL_DATA_CACHE.get("https://api.pota.app/park/" + ref_id, headers=HTTP_HEADERS).json()
             if data:
                 fullname = data["name"] if "name" in data else None
                 if fullname and "parktypeDesc" in data and data["parktypeDesc"] != "":
                     fullname = fullname + " " + data["parktypeDesc"]
                 sig_ref.name = fullname
-                sig_ref.url = "https://pota.app/#/park/" + sig_ref_id
+                sig_ref.url = "https://pota.app/#/park/" + ref_id
                 sig_ref.grid = data["grid6"] if "grid6" in data else None
                 sig_ref.latitude = data["latitude"] if "latitude" in data else None
                 sig_ref.longitude = data["longitude"] if "longitude" in data else None
         elif sig.upper() == "SOTA":
-            data = SEMI_STATIC_URL_DATA_CACHE.get("https://api-db2.sota.org.uk/api/summits/" + sig_ref_id,
+            data = SEMI_STATIC_URL_DATA_CACHE.get("https://api-db2.sota.org.uk/api/summits/" + ref_id,
                                                   headers=HTTP_HEADERS).json()
             if data:
                 sig_ref.name = data["name"] if "name" in data else None
-                sig_ref.url = "https://www.sotadata.org.uk/en/summit/" + sig_ref_id
+                sig_ref.url = "https://www.sotadata.org.uk/en/summit/" + ref_id
                 sig_ref.grid = data["locator"] if "locator" in data else None
                 sig_ref.latitude = data["latitude"] if "latitude" in data else None
                 sig_ref.longitude = data["longitude"] if "longitude" in data else None
         elif sig.upper() == "WWBOTA":
-            data = SEMI_STATIC_URL_DATA_CACHE.get("https://api.wwbota.org/bunkers/" + sig_ref_id,
+            data = SEMI_STATIC_URL_DATA_CACHE.get("https://api.wwbota.org/bunkers/" + ref_id,
                                                   headers=HTTP_HEADERS).json()
             if data:
                 sig_ref.name = data["name"] if "name" in data else None
-                sig_ref.url = "https://bunkerwiki.org/?s=" + sig_ref_id if sig_ref_id.startswith("B/G") else None
+                sig_ref.url = "https://bunkerwiki.org/?s=" + ref_id if ref_id.startswith("B/G") else None
                 sig_ref.grid = data["locator"] if "locator" in data else None
                 sig_ref.latitude = data["lat"] if "lat" in data else None
                 sig_ref.longitude = data["long"] if "long" in data else None
         elif sig.upper() == "GMA" or sig.upper() == "ARLHS" or sig.upper() == "ILLW" or sig.upper() == "WCA" or sig.upper() == "MOTA" or sig.upper() == "IOTA":
-            data = SEMI_STATIC_URL_DATA_CACHE.get("https://www.cqgma.org/api/ref/?" + sig_ref_id,
+            data = SEMI_STATIC_URL_DATA_CACHE.get("https://www.cqgma.org/api/ref/?" + ref_id,
                                                   headers=HTTP_HEADERS).json()
             if data:
                 sig_ref.name = data["name"] if "name" in data else None
-                sig_ref.url = "https://www.cqgma.org/zinfo.php?ref=" + sig_ref_id
+                sig_ref.url = "https://www.cqgma.org/zinfo.php?ref=" + ref_id
                 sig_ref.grid = data["locator"] if "locator" in data else None
                 sig_ref.latitude = data["latitude"] if "latitude" in data else None
                 sig_ref.longitude = data["longitude"] if "longitude" in data else None
@@ -73,9 +77,9 @@ def get_sig_ref_info(sig, sig_ref_id):
                                                       headers=HTTP_HEADERS)
             wwff_dr = csv.DictReader(wwff_csv_data.content.decode().splitlines())
             for row in wwff_dr:
-                if row["reference"] == sig_ref_id:
+                if row["reference"] == ref_id:
                     sig_ref.name = row["name"] if "name" in row else None
-                    sig_ref.url = "https://wwff.co/directory/?showRef=" + sig_ref_id
+                    sig_ref.url = "https://wwff.co/directory/?showRef=" + ref_id
                     sig_ref.grid = row["iaruLocator"] if "iaruLocator" in row else None
                     sig_ref.latitude = float(row["latitude"]) if "latitude" in row else None
                     sig_ref.longitude = float(row["longitude"]) if "longitude" in row else None
@@ -85,7 +89,7 @@ def get_sig_ref_info(sig, sig_ref_id):
                                                        headers=HTTP_HEADERS)
             siota_dr = csv.DictReader(siota_csv_data.content.decode().splitlines())
             for row in siota_dr:
-                if row["SILO_CODE"] == sig_ref_id:
+                if row["SILO_CODE"] == ref_id:
                     sig_ref.name = row["NAME"] if "NAME" in row else None
                     sig_ref.grid = row["LOCATOR"] if "LOCATOR" in row else None
                     sig_ref.latitude = float(row["LAT"]) if "LAT" in row else None
@@ -96,9 +100,14 @@ def get_sig_ref_info(sig, sig_ref_id):
                                                   headers=HTTP_HEADERS).json()
             if data:
                 for feature in data["features"]:
-                    if feature["properties"]["wotaId"] == sig_ref_id:
+                    if feature["properties"]["wotaId"] == ref_id:
                         sig_ref.name = feature["properties"]["title"]
-                        sig_ref.url = "https://www.wota.org.uk/MM_" + sig_ref_id
+                        # Fudge WOTA URLs. Outlying fell (LDO) URLs don't match their ID numbers but require 214 to be
+                        # added to them
+                        sig_ref.url = "https://www.wota.org.uk/MM_" + ref_id
+                        if ref_id.upper().startswith("LDO-"):
+                            number = int(ref_id.upper().replace("LDO-", ""))
+                            sig_ref.url = "https://www.wota.org.uk/MM_LDO-" + str(number + 214)
                         sig_ref.grid = feature["properties"]["qthLocator"]
                         sig_ref.latitude = feature["geometry"]["coordinates"][1]
                         sig_ref.longitude = feature["geometry"]["coordinates"][0]
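The WOTA URL fudge adds 214 to the numeric part of outlying-fell (LDO) references before building the page URL. In isolation (a hypothetical helper; the zero-padding of the result follows the diff's plain `str()` conversion):

```python
# Sketch: the WOTA "LDO" URL fudge from the diff as a hypothetical helper.
# Outlying fell page numbers are offset by 214 from their reference numbers.
def wota_url(ref_id):
    url = "https://www.wota.org.uk/MM_" + ref_id
    if ref_id.upper().startswith("LDO-"):
        number = int(ref_id.upper().replace("LDO-", ""))
        url = "https://www.wota.org.uk/MM_LDO-" + str(number + 214)
    return url


print(wota_url("LDO-001"))  # https://www.wota.org.uk/MM_LDO-215
print(wota_url("LDW-051"))  # https://www.wota.org.uk/MM_LDW-051
```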
@@ -107,9 +116,9 @@ def get_sig_ref_info(sig, sig_ref_id):
             data = SEMI_STATIC_URL_DATA_CACHE.get("https://ontheair.nz/assets/assets.json", headers=HTTP_HEADERS).json()
             if data:
                 for asset in data:
-                    if asset["code"] == sig_ref_id:
+                    if asset["code"] == ref_id:
                         sig_ref.name = asset["name"]
-                        sig_ref.url = "https://ontheair.nz/assets/ZLI_OT-030" + sig_ref_id.replace("/", "_")
+                        sig_ref.url = "https://ontheair.nz/assets/ZLI_OT-030" + ref_id.replace("/", "_")
                         sig_ref.grid = latlong_to_locator(asset["y"], asset["x"], 6)
                         sig_ref.latitude = asset["y"]
                         sig_ref.longitude = asset["x"]
@@ -119,14 +128,14 @@ def get_sig_ref_info(sig, sig_ref_id):
             sig_ref.name = sig_ref.id
             sig_ref.url = "https://www.beachesontheair.com/beaches/" + sig_ref.name.lower().replace(" ", "-")
         elif sig.upper() == "WAB" or sig.upper() == "WAI":
-            ll = wab_wai_square_to_lat_lon(sig_ref_id)
+            ll = wab_wai_square_to_lat_lon(ref_id)
             if ll:
-                sig_ref.name = sig_ref_id
+                sig_ref.name = ref_id
                 sig_ref.grid = latlong_to_locator(ll[0], ll[1], 6)
                 sig_ref.latitude = ll[0]
                 sig_ref.longitude = ll[1]
     except:
-        logging.warn("Failed to look up sig_ref info for " + sig + " ref " + sig_ref_id + ".")
+        logging.warn("Failed to look up sig_ref info for " + sig + " ref " + ref_id + ".")
     return sig_ref

@@ -58,13 +58,17 @@ class StatusReporter:
         self.status_data["cleanup"] = {"status": self.cleanup_timer.status,
                                        "last_ran": self.cleanup_timer.last_cleanup_time.replace(
                                            tzinfo=pytz.UTC).timestamp() if self.cleanup_timer.last_cleanup_time else 0}
-        self.status_data["webserver"] = {"status": self.web_server.status,
-                                         "last_api_access": self.web_server.last_api_access_time.replace(
-                                             tzinfo=pytz.UTC).timestamp() if self.web_server.last_api_access_time else 0,
-                                         "api_access_count": self.web_server.api_access_counter,
-                                         "last_page_access": self.web_server.last_page_access_time.replace(
-                                             tzinfo=pytz.UTC).timestamp() if self.web_server.last_page_access_time else 0,
-                                         "page_access_count": self.web_server.page_access_counter}
+        self.status_data["webserver"] = {"status": self.web_server.web_server_metrics["status"],
+                                         "last_api_access": self.web_server.web_server_metrics[
+                                             "last_api_access_time"].replace(
+                                             tzinfo=pytz.UTC).timestamp() if self.web_server.web_server_metrics[
+                                             "last_api_access_time"] else 0,
+                                         "api_access_count": self.web_server.web_server_metrics["api_access_counter"],
+                                         "last_page_access": self.web_server.web_server_metrics[
+                                             "last_page_access_time"].replace(
+                                             tzinfo=pytz.UTC).timestamp() if self.web_server.web_server_metrics[
+                                             "last_page_access_time"] else 0,
+                                         "page_access_count": self.web_server.web_server_metrics["page_access_counter"]}

         # Update Prometheus metrics
         memory_use_gauge.set(psutil.Process(os.getpid()).memory_info().rss * 1024)
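The refactor above moves the web server's individual metric attributes into a single shared `web_server_metrics` dict that the request handlers mutate and `StatusReporter` reads back out. A sketch of its assumed shape (the dict itself is defined elsewhere in the web server; this mirrors only the keys the diff touches, and uses the stdlib timezone rather than pytz):

```python
from datetime import datetime, timezone

# Assumed keys, based on the accesses shown in the diff above.
web_server_metrics = {
    "status": "UNKNOWN",
    "last_api_access_time": None,
    "api_access_counter": 0,
    "last_page_access_time": None,
    "page_access_counter": 0,
}

# A handler records an API access...
web_server_metrics["last_api_access_time"] = datetime.now(timezone.utc)
web_server_metrics["api_access_counter"] += 1
web_server_metrics["status"] = "OK"

# ...and the reporter converts the timestamp to epoch seconds, defaulting to 0.
t = web_server_metrics["last_api_access_time"]
last_api_access = t.timestamp() if t else 0
print(web_server_metrics["api_access_counter"], last_api_access > 0)  # → 1 True
```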
core/utils.py  (5 lines, new file)
@@ -0,0 +1,5 @@
+# Convert objects to serialisable things. Used by JSON serialiser as a default when it encounters unserializable things.
+# Just converts objects to dict. Try to avoid doing anything clever here when serialising spots, because we also need
+# to receive spots without complex handling.
+def serialize_everything(obj):
+    return obj.__dict__
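A minimal sketch of how `serialize_everything` plugs into `json.dumps` as the `default` hook. `SIGRef` here is a stand-in class (the real one lives in `data/sig_ref.py`); any attribute-bearing object behaves the same way:

```python
import json

# Fall back to the object's __dict__ for anything json cannot natively encode.
def serialize_everything(obj):
    return obj.__dict__

class SIGRef:
    def __init__(self, id, sig):
        self.id = id
        self.sig = sig

ref = SIGRef(id="GB-0001", sig="POTA")
# json.dumps calls serialize_everything for each unserialisable object it meets.
print(json.dumps(ref, default=serialize_everything, sort_keys=True))
# → {"id": "GB-0001", "sig": "POTA"}
```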
@@ -6,9 +6,8 @@ from datetime import datetime, timedelta
 import pytz

-from core.constants import DXCC_FLAGS
 from core.lookup_helper import lookup_helper
-from core.sig_utils import get_icon_for_sig, get_sig_ref_info
+from core.sig_utils import get_icon_for_sig, populate_sig_ref_info


 # Data class that defines an alert.
@@ -95,18 +94,15 @@ class Alert:
             self.dx_itu_zone = lookup_helper.infer_itu_zone_from_callsign(self.dx_calls[0])
         if self.dx_calls and self.dx_calls[0] and not self.dx_dxcc_id:
             self.dx_dxcc_id = lookup_helper.infer_dxcc_id_from_callsign(self.dx_calls[0])
-        if self.dx_dxcc_id and self.dx_dxcc_id in DXCC_FLAGS and not self.dx_flag:
-            self.dx_flag = DXCC_FLAGS[self.dx_dxcc_id]
+        if self.dx_dxcc_id and not self.dx_flag:
+            self.dx_flag = lookup_helper.get_flag_for_dxcc(self.dx_dxcc_id)

         # Fetch SIG data. In case a particular API doesn't provide a full set of name, lat, lon & grid for a reference
         # in its initial call, we use this code to populate the rest of the data. This includes working out grid refs
         # from WAB and WAI, which count as a SIG even though there's no real lookup, just maths
         if self.sig_refs and len(self.sig_refs) > 0:
             for sig_ref in self.sig_refs:
-                lookup_data = get_sig_ref_info(sig_ref.sig, sig_ref.id)
-                if lookup_data:
-                    # Update the sig_ref data from the lookup
-                    sig_ref.__dict__.update(lookup_data.__dict__)
+                populate_sig_ref_info(sig_ref)

         # If the spot itself doesn't have a SIG yet, but we have at least one SIG reference, take that reference's SIG
         # and apply it to the whole spot.
@@ -138,7 +134,7 @@ class Alert:
         return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True)

     # Decide if this alert has expired (in which case it should not be added to the system in the first place, and not
-    # returned by the web server if later requested, and removed by the cleanup functions. "Expired" is defined as
+    # returned by the web server if later requested, and removed by the cleanup functions). "Expired" is defined as
     # either having an end_time in the past, or if it only has a start_time, then that start time was more than 3 hours
     # ago. If it somehow doesn't have a start_time either, it is considered to be expired.
     def expired(self):
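The expiry rule described in the comment above can be sketched as a standalone function (names here are illustrative; the real logic lives on the `Alert` class and works on epoch timestamps):

```python
from datetime import datetime, timedelta, timezone

# Expired if the end_time has passed; with only a start_time, expired once that
# start was more than 3 hours ago; with no times at all, treated as expired.
def alert_expired(start_time, end_time, now=None):
    now = now if now is not None else datetime.now(timezone.utc).timestamp()
    if end_time:
        return end_time < now
    if start_time:
        return start_time < now - timedelta(hours=3).total_seconds()
    return True

now = 1_000_000
assert alert_expired(None, None, now)                    # no times at all
assert not alert_expired(now - 60, None, now)            # started a minute ago
assert alert_expired(now - 4 * 3600, None, now)          # started 4 h ago, no end
assert not alert_expired(now - 4 * 3600, now + 60, now)  # end still in the future
```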
data/spot.py  (79 lines changed)
@@ -4,14 +4,14 @@ import json
 import logging
 import re
 from dataclasses import dataclass
-from datetime import datetime
+from datetime import datetime, timedelta

 import pytz
 from pyhamtools.locator import locator_to_latlong, latlong_to_locator

-from core.constants import DXCC_FLAGS
+from core.config import MAX_SPOT_AGE
 from core.lookup_helper import lookup_helper
-from core.sig_utils import get_icon_for_sig, get_sig_ref_info, ANY_SIG_REGEX, get_ref_regex_for_sig
+from core.sig_utils import get_icon_for_sig, populate_sig_ref_info, ANY_SIG_REGEX, get_ref_regex_for_sig
 from data.sig_ref import SIGRef


@@ -174,8 +174,8 @@ class Spot:
             self.dx_itu_zone = lookup_helper.infer_itu_zone_from_callsign(self.dx_call)
         if self.dx_call and not self.dx_dxcc_id:
             self.dx_dxcc_id = lookup_helper.infer_dxcc_id_from_callsign(self.dx_call)
-        if self.dx_dxcc_id and self.dx_dxcc_id in DXCC_FLAGS and not self.dx_flag:
-            self.dx_flag = DXCC_FLAGS[self.dx_dxcc_id]
+        if self.dx_dxcc_id and not self.dx_flag:
+            self.dx_flag = lookup_helper.get_flag_for_dxcc(self.dx_dxcc_id)

         # Clean up spotter call if it has an SSID or -# from RBN
         if self.de_call and "-" in self.de_call:
@@ -207,8 +207,8 @@ class Spot:
             self.de_continent = lookup_helper.infer_continent_from_callsign(self.de_call)
         if not self.de_dxcc_id:
             self.de_dxcc_id = lookup_helper.infer_dxcc_id_from_callsign(self.de_call)
-        if self.de_dxcc_id and self.de_dxcc_id in DXCC_FLAGS and not self.de_flag:
-            self.de_flag = DXCC_FLAGS[self.de_dxcc_id]
+        if self.de_dxcc_id and not self.de_flag:
+            self.de_flag = lookup_helper.get_flag_for_dxcc(self.de_dxcc_id)

         # Band from frequency
         if self.freq and not self.band:
@@ -240,13 +240,19 @@ class Spot:
         if self.dx_latitude:
             self.dx_location_source = "SPOT"

+        # Set the top-level "SIG" if it is missing but we have at least one SIG ref.
+        if not self.sig and self.sig_refs and len(self.sig_refs) > 0:
+            self.sig = self.sig_refs[0].sig.upper()
+
         # See if we already have a SIG reference, but the comment looks like it contains more for the same SIG. This
         # should catch e.g. POTA comments like "2-fer: GB-0001 GB-0002".
-        if self.comment and self.sig_refs and len(self.sig_refs) > 0:
+        if self.comment and self.sig_refs and len(self.sig_refs) > 0 and self.sig_refs[0].sig:
             sig = self.sig_refs[0].sig.upper()
-            all_comment_refs = re.findall(get_ref_regex_for_sig(sig), self.comment)
-            for ref in all_comment_refs:
-                self.append_sig_ref_if_missing(SIGRef(id=ref.upper(), sig=sig))
+            regex = get_ref_regex_for_sig(sig)
+            if regex:
+                all_comment_ref_matches = re.finditer(r"(^|\W)(" + regex + r")(^|\W)", self.comment, re.IGNORECASE)
+                for ref_match in all_comment_ref_matches:
+                    self.append_sig_ref_if_missing(SIGRef(id=ref_match.group(2).upper(), sig=sig))

         # See if the comment looks like it contains any SIGs (and optionally SIG references) that we can
         # add to the spot. This should catch cluster spot comments like "POTA GB-0001 WWFF GFF-0001" and e.g. POTA
@@ -273,20 +279,17 @@ class Spot:
         # from WAB and WAI, which count as a SIG even though there's no real lookup, just maths
         if self.sig_refs and len(self.sig_refs) > 0:
             for sig_ref in self.sig_refs:
-                lookup_data = get_sig_ref_info(sig_ref.sig, sig_ref.id)
-                if lookup_data:
-                    # Update the sig_ref data from the lookup
-                    sig_ref.__dict__.update(lookup_data.__dict__)
-                    # If the spot itself doesn't have location yet, but the SIG ref does, extract it
-                    if lookup_data.grid and not self.dx_grid:
-                        self.dx_grid = lookup_data.grid
-                    if lookup_data.latitude and not self.dx_latitude:
-                        self.dx_latitude = lookup_data.latitude
-                        self.dx_longitude = lookup_data.longitude
-                        if self.sig == "WAB" or self.sig == "WAI":
-                            self.dx_location_source = "WAB/WAI GRID"
-                        else:
-                            self.dx_location_source = "SIG REF LOOKUP"
+                sig_ref = populate_sig_ref_info(sig_ref)
+                # If the spot itself doesn't have location yet, but the SIG ref does, extract it
+                if sig_ref.grid and not self.dx_grid:
+                    self.dx_grid = sig_ref.grid
+                if sig_ref.latitude and not self.dx_latitude:
+                    self.dx_latitude = sig_ref.latitude
+                    self.dx_longitude = sig_ref.longitude
+                    if self.sig == "WAB" or self.sig == "WAI":
+                        self.dx_location_source = "WAB/WAI GRID"
+                    else:
+                        self.dx_location_source = "SIG REF LOOKUP"

         # If the spot itself doesn't have a SIG yet, but we have at least one SIG reference, take that reference's SIG
         # and apply it to the whole spot.
@@ -297,6 +300,10 @@ class Spot:
         if self.sig:
             self.icon = get_icon_for_sig(self.sig)

+        # Default "radio" icon if nothing else has set it
+        if not self.icon:
+            self.icon = "tower-cell"
+
         # DX Grid to lat/lon and vice versa in case one is missing
         if self.dx_grid and not self.dx_latitude:
             ll = locator_to_latlong(self.dx_grid)
@@ -345,9 +352,10 @@ class Spot:

         # DX Location is "good" if it is from a spot, or from QRZ if the callsign doesn't contain a slash, so the operator
         # is likely at home.
-        self.dx_location_good = (self.dx_location_source == "SPOT" or self.dx_location_source == "SIG REF LOOKUP"
-                                 or self.dx_location_source == "WAB/WAI GRID"
-                                 or (self.dx_location_source == "HOME QTH" and not "/" in self.dx_call))
+        self.dx_location_good = self.dx_latitude and self.dx_longitude and (
+                self.dx_location_source == "SPOT" or self.dx_location_source == "SIG REF LOOKUP"
+                or self.dx_location_source == "WAB/WAI GRID"
+                or (self.dx_location_source == "HOME QTH" and not "/" in self.dx_call))

         # DE with no digits and APRS servers starting "T2" are not things we can look up location for
         if self.de_call and any(char.isdigit() for char in self.de_call) and not (self.de_call.startswith("T2") and self.source == "APRS-IS"):
@@ -377,7 +385,7 @@ class Spot:
         self_copy.received_time_iso = ""
         self.id = hashlib.sha256(str(self_copy).encode("utf-8")).hexdigest()

-    # JSON serialise
+    # JSON sspoterialise
     def to_json(self):
         return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True)

@@ -385,7 +393,18 @@ class Spot:
     def append_sig_ref_if_missing(self, new_sig_ref):
         if not self.sig_refs:
             self.sig_refs = []
+        new_sig_ref.id = new_sig_ref.id.strip().upper()
+        new_sig_ref.sig = new_sig_ref.sig.strip().upper()
+        if new_sig_ref.id == "":
+            return
         for sig_ref in self.sig_refs:
-            if sig_ref.id.upper() == new_sig_ref.id.upper() and sig_ref.sig.upper() == new_sig_ref.sig.upper():
+            if sig_ref.id == new_sig_ref.id and sig_ref.sig == new_sig_ref.sig:
                 return
         self.sig_refs.append(new_sig_ref)

+    # Decide if this spot has expired (in which case it should not be added to the system in the first place, and not
+    # returned by the web server if later requested, and removed by the cleanup functions). "Expired" is defined as
+    # either having a time further ago than the server's MAX_SPOT_AGE. If it somehow doesn't have a time either, it is
+    # considered to be expired.
+    def expired(self):
+        return not self.time or self.time < (datetime.now(pytz.UTC) - timedelta(seconds=MAX_SPOT_AGE)).timestamp()
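The comment-scanning change above wraps the per-SIG reference regex in a `(^|\W)(...)(^|\W)` guard so references are only matched on word boundaries, and pulls the reference out of group 2. An illustrative standalone version (the reference pattern here is an assumed POTA-like shape, not the project's exact regex from `get_ref_regex_for_sig()`):

```python
import re

# Assumed POTA-like reference shape, e.g. "GB-0001".
REF_REGEX = r"[A-Z]{1,2}-[0-9]{4}"

def refs_in_comment(comment):
    # Group 2 is the reference itself; groups 1 and 3 are the boundary guards.
    matches = re.finditer(r"(^|\W)(" + REF_REGEX + r")(^|\W)", comment, re.IGNORECASE)
    return [m.group(2).upper() for m in matches]

print(refs_in_comment("POTA GB-0001, GB-0002!"))  # → ['GB-0001', 'GB-0002']
print(refs_in_comment("XGB-0001X"))               # → [] (no word boundary)
```

Note that because each match consumes its trailing boundary character, two references separated by a single space can shadow each other; punctuation-separated lists like the one above match cleanly.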
requirements.txt
@@ -1,5 +1,4 @@
 pyyaml~=6.0.3
-bottle~=0.13.4
 requests-cache~=1.2.1
 pyhamtools~=0.12.0
 telnetlib3~=2.0.8
@@ -12,4 +11,7 @@ requests-sse~=0.5.2
 rss-parser~=2.1.1
 pyproj~=3.7.2
 prometheus_client~=0.23.1
 beautifulsoup4~=4.14.2
+websocket-client~=1.9.0
+tornado~=6.5.4
+tornado_eventsource~=3.0.0
server/handlers/api/addspot.py  (142 lines, new file)
@@ -0,0 +1,142 @@
+import json
+import logging
+import re
+from datetime import datetime
+
+import pytz
+import tornado
+
+from core.config import ALLOW_SPOTTING, MAX_SPOT_AGE
+from core.constants import UNKNOWN_BAND
+from core.lookup_helper import lookup_helper
+from core.prometheus_metrics_handler import api_requests_counter
+from core.sig_utils import get_ref_regex_for_sig
+from core.utils import serialize_everything
+from data.sig_ref import SIGRef
+from data.spot import Spot
+
+
+# API request handler for /api/v1/spot (POST)
+class APISpotHandler(tornado.web.RequestHandler):
+    def initialize(self, spots, web_server_metrics):
+        self.spots = spots
+        self.web_server_metrics = web_server_metrics
+
+    def post(self):
+        try:
+            # Metrics
+            self.web_server_metrics["last_api_access_time"] = datetime.now(pytz.UTC)
+            self.web_server_metrics["api_access_counter"] += 1
+            self.web_server_metrics["status"] = "OK"
+            api_requests_counter.inc()
+
+            # Reject if not allowed
+            if not ALLOW_SPOTTING:
+                self.set_status(401)
+                self.write(json.dumps("Error - this server does not allow new spots to be added via the API.",
+                                      default=serialize_everything))
+                self.set_header("Cache-Control", "no-store")
+                self.set_header("Content-Type", "application/json")
+                return
+
+            # Reject if format not json
+            if 'Content-Type' not in self.request.headers or self.request.headers.get('Content-Type') != "application/json":
+                self.set_status(415)
+                self.write(json.dumps("Error - request Content-Type must be application/json", default=serialize_everything))
+                self.set_header("Cache-Control", "no-store")
+                self.set_header("Content-Type", "application/json")
+                return
+
+            # Reject if request body is empty
+            post_data = self.request.body
+            if not post_data:
+                self.set_status(422)
+                self.write(json.dumps("Error - request body is empty", default=serialize_everything))
+                self.set_header("Cache-Control", "no-store")
+                self.set_header("Content-Type", "application/json")
+                return
+
+            # Read in the request body as JSON then convert to a Spot object
+            json_spot = tornado.escape.json_decode(post_data)
+            spot = Spot(**json_spot)
+
+            # Converting to a spot object this way won't have coped with sig_ref objects, so fix that. (Would be nice to
+            # redo this in a functional style)
+            if spot.sig_refs:
+                real_sig_refs = []
+                for dict_obj in spot.sig_refs:
+                    real_sig_refs.append(json.loads(json.dumps(dict_obj), object_hook=lambda d: SIGRef(**d)))
+                spot.sig_refs = real_sig_refs
+
+            # Reject if no timestamp, frequency, dx_call or de_call
+            if not spot.time or not spot.dx_call or not spot.freq or not spot.de_call:
+                self.set_status(422)
+                self.write(json.dumps("Error - 'time', 'dx_call', 'freq' and 'de_call' must be provided as a minimum.",
+                                      default=serialize_everything))
+                self.set_header("Cache-Control", "no-store")
+                self.set_header("Content-Type", "application/json")
+                return
+
+            # Reject invalid-looking callsigns
+            if not re.match(r"^[A-Za-z0-9/\-]*$", spot.dx_call):
+                self.set_status(422)
+                self.write(json.dumps("Error - '" + spot.dx_call + "' does not look like a valid callsign.",
+                                      default=serialize_everything))
+                self.set_header("Cache-Control", "no-store")
+                self.set_header("Content-Type", "application/json")
+                return
+            if not re.match(r"^[A-Za-z0-9/\-]*$", spot.de_call):
+                self.set_status(422)
+                self.write(json.dumps("Error - '" + spot.de_call + "' does not look like a valid callsign.",
+                                      default=serialize_everything))
+                self.set_header("Cache-Control", "no-store")
+                self.set_header("Content-Type", "application/json")
+                return
+
+            # Reject if frequency not in a known band
+            if lookup_helper.infer_band_from_freq(spot.freq) == UNKNOWN_BAND:
+                self.set_status(422)
+                self.write(json.dumps("Error - Frequency of " + str(spot.freq / 1000.0) + "kHz is not in a known band.",
+                                      default=serialize_everything))
+                self.set_header("Cache-Control", "no-store")
+                self.set_header("Content-Type", "application/json")
+                return
+
+            # Reject if grid formatting incorrect
+            if spot.dx_grid and not re.match(
+                    r"^([A-R]{2}[0-9]{2}[A-X]{2}[0-9]{2}[A-X]{2}|[A-R]{2}[0-9]{2}[A-X]{2}[0-9]{2}|[A-R]{2}[0-9]{2}[A-X]{2}|[A-R]{2}[0-9]{2})$",
+                    spot.dx_grid.upper()):
+                self.set_status(422)
+                self.write(json.dumps("Error - '" + spot.dx_grid + "' does not look like a valid Maidenhead grid.",
+                                      default=serialize_everything))
+                self.set_header("Cache-Control", "no-store")
+                self.set_header("Content-Type", "application/json")
+                return
+
+            # Reject if sig_ref format incorrect for sig
+            if spot.sig and spot.sig_refs and len(spot.sig_refs) > 0 and spot.sig_refs[0].id and get_ref_regex_for_sig(
+                    spot.sig) and not re.match(get_ref_regex_for_sig(spot.sig), spot.sig_refs[0].id):
+                self.set_status(422)
+                self.write(json.dumps(
+                    "Error - '" + spot.sig_refs[0].id + "' does not look like a valid reference for " + spot.sig + ".",
+                    default=serialize_everything))
+                self.set_header("Cache-Control", "no-store")
+                self.set_header("Content-Type", "application/json")
+                return
+
+            # infer missing data, and add it to our database.
+            spot.source = "API"
+            spot.infer_missing()
+            self.spots.add(spot.id, spot, expire=MAX_SPOT_AGE)
+
+            self.write(json.dumps("OK", default=serialize_everything))
+            self.set_status(201)
+            self.set_header("Cache-Control", "no-store")
+            self.set_header("Content-Type", "application/json")
+
+        except Exception as e:
+            logging.error(e)
+            self.write(json.dumps("Error - " + str(e), default=serialize_everything))
+            self.set_status(500)
+            self.set_header("Cache-Control", "no-store")
+            self.set_header("Content-Type", "application/json")
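The validation chain in the handler above can be distilled into a standalone payload check (the regexes are copied from the handler; the function itself is an illustration, not part of the repository):

```python
import re

CALL_REGEX = r"^[A-Za-z0-9/\-]*$"
GRID_REGEX = (r"^([A-R]{2}[0-9]{2}[A-X]{2}[0-9]{2}[A-X]{2}|[A-R]{2}[0-9]{2}[A-X]{2}[0-9]{2}"
              r"|[A-R]{2}[0-9]{2}[A-X]{2}|[A-R]{2}[0-9]{2})$")

def validate_spot(payload):
    """Return an error string in the handler's style, or None if the payload passes."""
    if not all(payload.get(f) for f in ("time", "dx_call", "freq", "de_call")):
        return "Error - 'time', 'dx_call', 'freq' and 'de_call' must be provided as a minimum."
    for field in ("dx_call", "de_call"):
        if not re.match(CALL_REGEX, payload[field]):
            return "Error - '" + payload[field] + "' does not look like a valid callsign."
    if payload.get("dx_grid") and not re.match(GRID_REGEX, payload["dx_grid"].upper()):
        return "Error - '" + payload["dx_grid"] + "' does not look like a valid Maidenhead grid."
    return None

spot = {"time": 1700000000, "dx_call": "M0ABC/P", "freq": 14285000,
        "de_call": "G4XYZ", "dx_grid": "IO91"}
assert validate_spot(spot) is None
assert validate_spot({**spot, "dx_grid": "ZZ99"}) is not None  # "Z" is outside A-R
```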
174
server/handlers/api/alerts.py
Normal file
@@ -0,0 +1,174 @@
|
|||||||
|
import json
|
||||||
|
import logging
|
||||||
|
from datetime import datetime
|
||||||
|
from queue import Queue
|
||||||
|
|
||||||
|
import pytz
|
||||||
|
import tornado
|
||||||
|
import tornado_eventsource.handler
|
||||||
|
|
||||||
|
from core.prometheus_metrics_handler import api_requests_counter
|
||||||
|
from core.utils import serialize_everything
|
||||||
|
|
||||||
|
SSE_HANDLER_MAX_QUEUE_SIZE = 100
|
||||||
|
SSE_HANDLER_QUEUE_CHECK_INTERVAL = 5000
|
||||||
|
|
||||||
|
|
||||||
|
# API request handler for /api/v1/alerts
|
||||||
|
class APIAlertsHandler(tornado.web.RequestHandler):
|
||||||
|
def initialize(self, alerts, web_server_metrics):
|
||||||
|
self.alerts = alerts
|
||||||
|
self.web_server_metrics = web_server_metrics
|
||||||
|
|
||||||
|
def get(self):
|
||||||
|
try:
|
||||||
|
# Metrics
|
||||||
|
self.web_server_metrics["last_api_access_time"] = datetime.now(pytz.UTC)
|
||||||
|
self.web_server_metrics["api_access_counter"] += 1
|
||||||
|
self.web_server_metrics["status"] = "OK"
|
||||||
|
api_requests_counter.inc()
|
||||||
|
|
||||||
|
# request.arguments contains lists for each param key because technically the client can supply multiple,
|
||||||
|
# reduce that to just the first entry, and convert bytes to string
|
||||||
|
query_params = {k: v[0].decode("utf-8") for k, v in self.request.arguments.items()}
|
||||||
|
|
||||||
|
# Fetch all alerts matching the query
|
||||||
|
data = get_alert_list_with_filters(self.alerts, query_params)
|
||||||
|
self.write(json.dumps(data, default=serialize_everything))
|
||||||
|
self.set_status(200)
|
||||||
|
except ValueError as e:
|
||||||
|
logging.error(e)
|
||||||
|
self.write(json.dumps("Bad request - " + str(e), default=serialize_everything))
|
||||||
|
self.set_status(400)
|
||||||
|
except Exception as e:
|
||||||
|
logging.error(e)
|
||||||
|
self.write(json.dumps("Error - " + str(e), default=serialize_everything))
|
||||||
|
self.set_status(500)
|
||||||
|
self.set_header("Cache-Control", "no-store")
|
||||||
|
self.set_header("Content-Type", "application/json")
|
||||||
|
|
||||||
|
# API request handler for /api/v1/alerts/stream
|
||||||
|
class APIAlertsStreamHandler(tornado_eventsource.handler.EventSourceHandler):
|
||||||
|
def initialize(self, sse_alert_queues, web_server_metrics):
|
||||||
|
self.sse_alert_queues = sse_alert_queues
|
||||||
|
self.web_server_metrics = web_server_metrics
|
||||||
|
|
||||||
|
def open(self):
|
||||||
|
try:
|
||||||
|
# Metrics
|
||||||
|
self.web_server_metrics["last_api_access_time"] = datetime.now(pytz.UTC)
|
||||||
|
self.web_server_metrics["api_access_counter"] += 1
|
||||||
|
self.web_server_metrics["status"] = "OK"
|
||||||
|
api_requests_counter.inc()
|
||||||
|
|
||||||
|
# request.arguments contains lists for each param key because technically the client can supply multiple,
|
||||||
|
# reduce that to just the first entry, and convert bytes to string
|
||||||
|
self.query_params = {k: v[0].decode("utf-8") for k, v in self.request.arguments.items()}
|
||||||
|
|
||||||
|
# Create a alert queue and add it to the web server's list. The web server will fill this when alerts arrive
|
||||||
|
self.alert_queue = Queue(maxsize=SSE_HANDLER_MAX_QUEUE_SIZE)
|
||||||
|
self.sse_alert_queues.append(self.alert_queue)
|
||||||
|
|
||||||
|
# Set up a timed callback to check if anything is in the queue
|
||||||
|
self.heartbeat = tornado.ioloop.PeriodicCallback(self._callback, SSE_HANDLER_QUEUE_CHECK_INTERVAL)
|
||||||
|
self.heartbeat.start()
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logging.warn("Exception when serving SSE socket", e)
|
||||||
|
|
||||||
|
# When the user closes the socket, empty our queue and remove it from the list so the server no longer fills it
|
||||||
|
def close(self):
|
||||||
|
try:
|
||||||
|
if self.alert_queue in self.sse_alert_queues:
|
||||||
|
self.sse_alert_queues.remove(self.alert_queue)
|
||||||
|
self.alert_queue.empty()
|
||||||
|
except:
|
||||||
|
pass
|
||||||
|
self.alert_queue = None
|
||||||
|
super().close()
|
||||||
|
|
||||||
|
# Callback to check if anything has arrived in the queue, and if so send it to the client
|
||||||
|
def _callback(self):
|
||||||
|
try:
|
||||||
|
if self.alert_queue:
|
||||||
|
while not self.alert_queue.empty():
|
||||||
|
alert = self.alert_queue.get()
|
||||||
|
# If the new alert matches our param filters, send it to the client. If not, ignore it.
|
||||||
|
if alert_allowed_by_query(alert, self.query_params):
|
||||||
|
self.write_message(msg=json.dumps(alert, default=serialize_everything))
|
||||||
|
|
||||||
|
if self.alert_queue not in self.sse_alert_queues:
|
||||||
|
logging.error("Web server cleared up a queue of an active connection!")
|
||||||
|
self.close()
|
||||||
|
except:
|
||||||
|
logging.warn("Exception in SSE callback, connection will be closed.")
|
||||||
|
self.close()
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
# Utility method to apply filters to the overall alert list and return only a subset. Enables query parameters in
|
||||||
|
# the main "alerts" GET call.
|
||||||
|
def get_alert_list_with_filters(all_alerts, query):
|
||||||
|
# Create a shallow copy of the alert list ordered by start time, then filter the list to reduce it only to alerts
|
||||||
|
# that match the filter parameters in the query string. Finally, apply a limit to the number of alerts returned.
|
||||||
|
# The list of query string filters is defined in the API docs.
|
||||||
|
alert_ids = list(all_alerts.iterkeys())
|
||||||
|
alerts = []
|
||||||
|
for k in alert_ids:
|
||||||
|
a = all_alerts.get(k)
|
||||||
|
if a is not None:
|
||||||
|
alerts.append(a)
|
||||||
|
alerts = sorted(alerts, key=lambda alert: (alert.start_time if alert and alert.start_time else 0))
|
||||||
|
alerts = list(filter(lambda alert: alert_allowed_by_query(alert, query), alerts))
|
||||||
|
if "limit" in query.keys():
|
||||||
|
alerts = alerts[:int(query.get("limit"))]
|
||||||
|
return alerts
|
||||||
|
|
||||||
|
# Given URL query params and an alert, figure out if the alert "passes" the requested filters or is rejected. The list
|
||||||
|
# of query parameters and their function is defined in the API docs.
def alert_allowed_by_query(alert, query):
    for k in query.keys():
        match k:
            case "received_since":
                since = datetime.fromtimestamp(int(query.get(k)), pytz.UTC)
                if not alert.received_time or alert.received_time <= since:
                    return False
            case "max_duration":
                max_duration = int(query.get(k))
                # Check the duration if end_time is provided. If end_time is not provided, assume the activation is
                # "short", i.e. it always passes this check. If dxpeditions_skip_max_duration_check is true and
                # the alert is a dxpedition, it also always passes the check.
                if alert.is_dxpedition and (bool(query.get(
                        "dxpeditions_skip_max_duration_check")) if "dxpeditions_skip_max_duration_check" in query.keys() else False):
                    continue
                if alert.end_time and alert.start_time and alert.end_time - alert.start_time > max_duration:
                    return False
            case "source":
                sources = query.get(k).split(",")
                if not alert.source or alert.source not in sources:
                    return False
            case "sig":
                # If a list of sigs is provided, the alert must have a sig and it must match one of them.
                # The special "sig" "NO_SIG", when supplied in the list, matches alerts with no sig.
                sigs = query.get(k).split(",")
                include_no_sig = "NO_SIG" in sigs
                if not alert.sig and not include_no_sig:
                    return False
                if alert.sig and alert.sig not in sigs:
                    return False
            case "dx_continent":
                dxconts = query.get(k).split(",")
                if not alert.dx_continent or alert.dx_continent not in dxconts:
                    return False
            case "dx_call_includes":
                dx_call_includes = query.get(k).strip()
                if not alert.dx_call or dx_call_includes.upper() not in alert.dx_call.upper():
                    return False
            case "text_includes":
                text_includes = query.get(k).strip()
                if (not alert.dx_call or text_includes.upper() not in alert.dx_call.upper()) \
                        and (not alert.comment or text_includes.upper() not in alert.comment.upper()) \
                        and (not alert.freqs_modes or text_includes.upper() not in alert.freqs_modes.upper()):
                    return False
    return True
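As a standalone illustration of the filter contract above (not part of the codebase — `sig_filter_passes` is a hypothetical helper and the alert is a stand-in `SimpleNamespace`), the "sig"/"NO_SIG" branch behaves like this:

```python
from types import SimpleNamespace

def sig_filter_passes(alert, query):
    # Mirrors the "sig" branch above: the alert passes if its sig is in the
    # comma-separated list, or if it has no sig and "NO_SIG" was supplied.
    sigs = query["sig"].split(",")
    include_no_sig = "NO_SIG" in sigs
    if not alert.sig and not include_no_sig:
        return False
    if alert.sig and alert.sig not in sigs:
        return False
    return True

assert sig_filter_passes(SimpleNamespace(sig="POTA"), {"sig": "POTA,SOTA"})
assert not sig_filter_passes(SimpleNamespace(sig="WWFF"), {"sig": "POTA,SOTA"})
assert sig_filter_passes(SimpleNamespace(sig=None), {"sig": "NO_SIG,POTA"})
assert not sig_filter_passes(SimpleNamespace(sig=None), {"sig": "POTA"})
```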
server/handlers/api/lookups.py (new file, +121 lines)
@@ -0,0 +1,121 @@

import json
import logging
import re
from datetime import datetime

import pytz
import tornado

from core.constants import SIGS
from core.prometheus_metrics_handler import api_requests_counter
from core.sig_utils import get_ref_regex_for_sig, populate_sig_ref_info
from core.utils import serialize_everything
from data.sig_ref import SIGRef
from data.spot import Spot


# API request handler for /api/v1/lookup/call
class APILookupCallHandler(tornado.web.RequestHandler):
    def initialize(self, web_server_metrics):
        self.web_server_metrics = web_server_metrics

    def get(self):
        try:
            # Metrics
            self.web_server_metrics["last_api_access_time"] = datetime.now(pytz.UTC)
            self.web_server_metrics["api_access_counter"] += 1
            self.web_server_metrics["status"] = "OK"
            api_requests_counter.inc()

            # request.arguments contains lists for each param key because technically the client can supply multiple,
            # reduce that to just the first entry, and convert bytes to string
            query_params = {k: v[0].decode("utf-8") for k, v in self.request.arguments.items()}

            # The "call" query param must exist and look like a callsign
            if "call" in query_params.keys():
                call = query_params.get("call").upper()
                if re.match(r"^[A-Z0-9/\-]*$", call):
                    # Take the callsign, make a "fake spot" so we can run infer_missing() on it, then repack the
                    # resulting data in the correct way for the API response.
                    fake_spot = Spot(dx_call=call)
                    fake_spot.infer_missing()
                    data = {
                        "call": call,
                        "name": fake_spot.dx_name,
                        "qth": fake_spot.dx_qth,
                        "country": fake_spot.dx_country,
                        "flag": fake_spot.dx_flag,
                        "continent": fake_spot.dx_continent,
                        "dxcc_id": fake_spot.dx_dxcc_id,
                        "cq_zone": fake_spot.dx_cq_zone,
                        "itu_zone": fake_spot.dx_itu_zone,
                        "grid": fake_spot.dx_grid,
                        "latitude": fake_spot.dx_latitude,
                        "longitude": fake_spot.dx_longitude,
                        "location_source": fake_spot.dx_location_source
                    }
                    self.write(json.dumps(data, default=serialize_everything))

                else:
                    self.write(json.dumps("Error - '" + call + "' does not look like a valid callsign.",
                                          default=serialize_everything))
                    self.set_status(422)
            else:
                self.write(json.dumps("Error - call must be provided", default=serialize_everything))
                self.set_status(422)

        except Exception as e:
            logging.error(e)
            self.write(json.dumps("Error - " + str(e), default=serialize_everything))
            self.set_status(500)

        self.set_header("Cache-Control", "no-store")
        self.set_header("Content-Type", "application/json")


# API request handler for /api/v1/lookup/sigref
class APILookupSIGRefHandler(tornado.web.RequestHandler):
    def initialize(self, web_server_metrics):
        self.web_server_metrics = web_server_metrics

    def get(self):
        try:
            # Metrics
            self.web_server_metrics["last_api_access_time"] = datetime.now(pytz.UTC)
            self.web_server_metrics["api_access_counter"] += 1
            self.web_server_metrics["status"] = "OK"
            api_requests_counter.inc()

            # request.arguments contains lists for each param key because technically the client can supply multiple,
            # reduce that to just the first entry, and convert bytes to string
            query_params = {k: v[0].decode("utf-8") for k, v in self.request.arguments.items()}

            # "sig" and "id" query params must exist, the SIG must be known, and if we have a reference regex for
            # that SIG, the provided id must match it.
            if "sig" in query_params.keys() and "id" in query_params.keys():
                sig = query_params.get("sig").upper()
                id = query_params.get("id").upper()
                if sig in list(map(lambda p: p.name, SIGS)):
                    if not get_ref_regex_for_sig(sig) or re.match(get_ref_regex_for_sig(sig), id):
                        data = populate_sig_ref_info(SIGRef(id=id, sig=sig))
                        self.write(json.dumps(data, default=serialize_everything))

                    else:
                        self.write(
                            json.dumps("Error - '" + id + "' does not look like a valid reference ID for " + sig + ".",
                                       default=serialize_everything))
                        self.set_status(422)
                else:
                    self.write(json.dumps("Error - sig '" + sig + "' is not known.", default=serialize_everything))
                    self.set_status(422)
            else:
                self.write(json.dumps("Error - sig and id must be provided", default=serialize_everything))
                self.set_status(422)

        except Exception as e:
            logging.error(e)
            self.write(json.dumps("Error - " + str(e), default=serialize_everything))
            self.set_status(500)

        self.set_header("Cache-Control", "no-store")
        self.set_header("Content-Type", "application/json")
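The callsign validation in the call-lookup handler is just an upper-case-then-regex check. A standalone sketch of that check (the `looks_like_callsign` helper name is invented for illustration; note the `*` quantifier means an empty string would also pass):

```python
import re

# Same pattern the handler applies after upper-casing the input.
CALL_PATTERN = r"^[A-Z0-9/\-]*$"

def looks_like_callsign(call: str) -> bool:
    # Accepts letters, digits, "/" and "-" only, case-insensitively.
    return re.match(CALL_PATTERN, call.upper()) is not None

assert looks_like_callsign("M0TRT")
assert looks_like_callsign("ea8/m0trt/p")
assert not looks_like_callsign("not a call!")
```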
server/handlers/api/options.py (new file, +47 lines)
@@ -0,0 +1,47 @@

import json
from datetime import datetime

import pytz
import tornado

from core.config import MAX_SPOT_AGE, ALLOW_SPOTTING, WEB_UI_OPTIONS
from core.constants import BANDS, ALL_MODES, MODE_TYPES, SIGS, CONTINENTS
from core.prometheus_metrics_handler import api_requests_counter
from core.utils import serialize_everything


# API request handler for /api/v1/options
class APIOptionsHandler(tornado.web.RequestHandler):
    def initialize(self, status_data, web_server_metrics):
        self.status_data = status_data
        self.web_server_metrics = web_server_metrics

    def get(self):
        # Metrics
        self.web_server_metrics["last_api_access_time"] = datetime.now(pytz.UTC)
        self.web_server_metrics["api_access_counter"] += 1
        self.web_server_metrics["status"] = "OK"
        api_requests_counter.inc()

        options = {"bands": BANDS,
                   "modes": ALL_MODES,
                   "mode_types": MODE_TYPES,
                   "sigs": SIGS,
                   # Spot/alert sources are filtered to only the ones that are enabled in config; no point
                   # letting the user toggle things that aren't even available.
                   "spot_sources": list(
                       map(lambda p: p["name"], filter(lambda p: p["enabled"], self.status_data["spot_providers"]))),
                   "alert_sources": list(
                       map(lambda p: p["name"], filter(lambda p: p["enabled"], self.status_data["alert_providers"]))),
                   "continents": CONTINENTS,
                   "max_spot_age": MAX_SPOT_AGE,
                   "spot_allowed": ALLOW_SPOTTING,
                   "web-ui-options": WEB_UI_OPTIONS}
        # If spotting to this server is enabled, "API" is another valid spot source even though it does not come from
        # one of our providers.
        if ALLOW_SPOTTING:
            options["spot_sources"].append("API")

        self.write(json.dumps(options, default=serialize_everything))
        self.set_status(200)
        self.set_header("Cache-Control", "no-store")
        self.set_header("Content-Type", "application/json")
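The map/filter chain above reduces the provider status list to just the names of enabled providers. A minimal sketch with a hypothetical `status_data` shaped like the structure the handler reads:

```python
# Hypothetical provider status data, for illustration only.
status_data = {"spot_providers": [
    {"name": "POTA", "enabled": True},
    {"name": "SOTA", "enabled": True},
    {"name": "RBN", "enabled": False},
]}

# Same map/filter chain as the handler uses.
spot_sources = list(
    map(lambda p: p["name"],
        filter(lambda p: p["enabled"], status_data["spot_providers"])))

assert spot_sources == ["POTA", "SOTA"]
```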
server/handlers/api/spots.py (new file, +234 lines)
@@ -0,0 +1,234 @@

import json
import logging
from datetime import datetime, timedelta
from queue import Queue

import pytz
import tornado
import tornado_eventsource.handler

from core.prometheus_metrics_handler import api_requests_counter
from core.utils import serialize_everything

SSE_HANDLER_MAX_QUEUE_SIZE = 1000
SSE_HANDLER_QUEUE_CHECK_INTERVAL = 5000


# API request handler for /api/v1/spots
class APISpotsHandler(tornado.web.RequestHandler):
    def initialize(self, spots, web_server_metrics):
        self.spots = spots
        self.web_server_metrics = web_server_metrics

    def get(self):
        try:
            # Metrics
            self.web_server_metrics["last_api_access_time"] = datetime.now(pytz.UTC)
            self.web_server_metrics["api_access_counter"] += 1
            self.web_server_metrics["status"] = "OK"
            api_requests_counter.inc()

            # request.arguments contains lists for each param key because technically the client can supply multiple,
            # reduce that to just the first entry, and convert bytes to string
            query_params = {k: v[0].decode("utf-8") for k, v in self.request.arguments.items()}

            # Fetch all spots matching the query
            data = get_spot_list_with_filters(self.spots, query_params)
            self.write(json.dumps(data, default=serialize_everything))
            self.set_status(200)
        except ValueError as e:
            logging.error(e)
            self.write(json.dumps("Bad request - " + str(e), default=serialize_everything))
            self.set_status(400)
        except Exception as e:
            logging.error(e)
            self.write(json.dumps("Error - " + str(e), default=serialize_everything))
            self.set_status(500)
        self.set_header("Cache-Control", "no-store")
        self.set_header("Content-Type", "application/json")


# API request handler for /api/v1/spots/stream
class APISpotsStreamHandler(tornado_eventsource.handler.EventSourceHandler):
    def initialize(self, sse_spot_queues, web_server_metrics):
        self.sse_spot_queues = sse_spot_queues
        self.web_server_metrics = web_server_metrics

    # Called once on the client opening a connection, set things up
    def open(self):
        try:
            # Metrics
            self.web_server_metrics["last_api_access_time"] = datetime.now(pytz.UTC)
            self.web_server_metrics["api_access_counter"] += 1
            self.web_server_metrics["status"] = "OK"
            api_requests_counter.inc()

            # request.arguments contains lists for each param key because technically the client can supply multiple,
            # reduce that to just the first entry, and convert bytes to string
            self.query_params = {k: v[0].decode("utf-8") for k, v in self.request.arguments.items()}

            # Create a spot queue and add it to the web server's list. The web server will fill this when spots arrive
            self.spot_queue = Queue(maxsize=SSE_HANDLER_MAX_QUEUE_SIZE)
            self.sse_spot_queues.append(self.spot_queue)

            # Set up a timed callback to check if anything is in the queue
            self.heartbeat = tornado.ioloop.PeriodicCallback(self._callback, SSE_HANDLER_QUEUE_CHECK_INTERVAL)
            self.heartbeat.start()

        except Exception as e:
            logging.warn("Exception when serving SSE socket", e)

    # When the user closes the socket, empty our queue and remove it from the list so the server no longer fills it
    def close(self):
        try:
            if self.spot_queue in self.sse_spot_queues:
                self.sse_spot_queues.remove(self.spot_queue)
            self.spot_queue.empty()
        except:
            pass
        self.spot_queue = None
        super().close()

    # Callback to check if anything has arrived in the queue, and if so send it to the client
    def _callback(self):
        try:
            if self.spot_queue:
                while not self.spot_queue.empty():
                    spot = self.spot_queue.get()
                    # If the new spot matches our param filters, send it to the client. If not, ignore it.
                    if spot_allowed_by_query(spot, self.query_params):
                        self.write_message(msg=json.dumps(spot, default=serialize_everything))

            if self.spot_queue not in self.sse_spot_queues:
                logging.error("Web server cleared up a queue of an active connection!")
                self.close()
        except:
            logging.warn("Exception in SSE callback, connection will be closed.")
            self.close()

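The per-client bounded `Queue` is what protects the server from a stalled SSE consumer: once a client stops draining, its queue fills up and the cleanup pass can discard it. A standalone sketch of that idea (the `offer` helper is invented for illustration; the server itself uses a blocking `put` plus a separate cleanup thread):

```python
from queue import Queue, Full

def offer(q: Queue, item) -> bool:
    # Non-blocking put; False means the queue is full, which for an SSE
    # client queue suggests the consumer has stalled or disconnected.
    try:
        q.put_nowait(item)
        return True
    except Full:
        return False

q = Queue(maxsize=2)
assert offer(q, "spot 1")
assert offer(q, "spot 2")
assert not offer(q, "spot 3")  # full: candidate for cleanup
```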
# Utility method to apply filters to the overall spot list and return only a subset. Enables query parameters in
# the main "spots" GET call.
def get_spot_list_with_filters(all_spots, query):
    # Create a shallow copy of the spot list, ordered by spot time, then filter the list to reduce it only to spots
    # that match the filter parameters in the query string. Finally, apply a limit to the number of spots returned.
    # The list of query string filters is defined in the API docs.
    spot_ids = list(all_spots.iterkeys())
    spots = []
    for k in spot_ids:
        s = all_spots.get(k)
        if s is not None:
            spots.append(s)
    spots = sorted(spots, key=lambda spot: (spot.time if spot and spot.time else 0), reverse=True)
    spots = list(filter(lambda spot: spot_allowed_by_query(spot, query), spots))
    if "limit" in query.keys():
        spots = spots[:int(query.get("limit"))]

    # Ensure only the latest spot of each callsign-SSID combo is present in the list. This relies on the
    # list being in reverse time order, so if any future change allows re-ordering the list, that should
    # be done *after* this. SSIDs are deliberately included here (see issue #68) because e.g. M0TRT-7
    # and M0TRT-9 APRS transponders could well be in different locations, on different frequencies etc.
    # This is a special consideration for the geo map and band map views (and Field Spotter) because while
    # duplicates are fine in the main spot list (e.g. different cluster spots of the same DX) this doesn't
    # work well for the other views.
    if "dedupe" in query.keys():
        dedupe = query.get("dedupe").upper() == "TRUE"
        if dedupe:
            spots_temp = []
            already_seen = []
            for s in spots:
                call_plus_ssid = s.dx_call + (s.dx_ssid if s.dx_ssid else "")
                if call_plus_ssid not in already_seen:
                    spots_temp.append(s)
                    already_seen.append(call_plus_ssid)
            spots = spots_temp

    return spots
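The dedupe pass keeps only the first (i.e. newest, given reverse time order) spot per call+SSID. A standalone sketch of the same logic using stand-in `SimpleNamespace` spots and a set for O(1) membership checks (the codebase itself uses a list):

```python
from types import SimpleNamespace

def dedupe_latest(spots):
    # Keeps the first occurrence of each call+SSID combination; because the
    # input is in reverse time order, that is the most recent spot.
    seen = set()
    out = []
    for s in spots:
        key = s.dx_call + (s.dx_ssid if s.dx_ssid else "")
        if key not in seen:
            seen.add(key)
            out.append(s)
    return out

spots = [
    SimpleNamespace(dx_call="M0TRT", dx_ssid="-7"),  # newest -7 spot, kept
    SimpleNamespace(dx_call="M0TRT", dx_ssid="-9"),  # different SSID, kept
    SimpleNamespace(dx_call="M0TRT", dx_ssid="-7"),  # older duplicate, dropped
]
result = dedupe_latest(spots)
assert len(result) == 2
assert result[0] is spots[0]
```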

# Given URL query params and a spot, figure out if the spot "passes" the requested filters or is rejected. The list
# of query parameters and their function is defined in the API docs.
def spot_allowed_by_query(spot, query):
    for k in query.keys():
        match k:
            case "since":
                since = datetime.fromtimestamp(int(query.get(k)), pytz.UTC).timestamp()
                if not spot.time or spot.time <= since:
                    return False
            case "max_age":
                max_age = int(query.get(k))
                since = (datetime.now(pytz.UTC) - timedelta(seconds=max_age)).timestamp()
                if not spot.time or spot.time <= since:
                    return False
            case "received_since":
                since = datetime.fromtimestamp(int(query.get(k)), pytz.UTC).timestamp()
                if not spot.received_time or spot.received_time <= since:
                    return False
            case "source":
                sources = query.get(k).split(",")
                if not spot.source or spot.source not in sources:
                    return False
            case "sig":
                # If a list of sigs is provided, the spot must have a sig and it must match one of them.
                # The special "sig" "NO_SIG", when supplied in the list, matches spots with no sig.
                sigs = query.get(k).split(",")
                include_no_sig = "NO_SIG" in sigs
                if not spot.sig and not include_no_sig:
                    return False
                if spot.sig and spot.sig not in sigs:
                    return False
            case "needs_sig":
                # If true, a sig is required, regardless of what it is, it just can't be missing. Mutually
                # exclusive with supplying the special "NO_SIG" parameter to the "sig" query param.
                needs_sig = query.get(k).upper() == "TRUE"
                if needs_sig and not spot.sig:
                    return False
            case "needs_sig_ref":
                # If true, at least one sig ref is required, regardless of what it is, it just can't be missing.
                needs_sig_ref = query.get(k).upper() == "TRUE"
                if needs_sig_ref and (not spot.sig_refs or len(spot.sig_refs) == 0):
                    return False
            case "band":
                bands = query.get(k).split(",")
                if not spot.band or spot.band not in bands:
                    return False
            case "mode":
                modes = query.get(k).split(",")
                if not spot.mode or spot.mode not in modes:
                    return False
            case "mode_type":
                mode_types = query.get(k).split(",")
                if not spot.mode_type or spot.mode_type not in mode_types:
                    return False
            case "dx_continent":
                dxconts = query.get(k).split(",")
                if not spot.dx_continent or spot.dx_continent not in dxconts:
                    return False
            case "de_continent":
                deconts = query.get(k).split(",")
                if not spot.de_continent or spot.de_continent not in deconts:
                    return False
            case "comment_includes":
                comment_includes = query.get(k).strip()
                if not spot.comment or comment_includes.upper() not in spot.comment.upper():
                    return False
            case "dx_call_includes":
                dx_call_includes = query.get(k).strip()
                if not spot.dx_call or dx_call_includes.upper() not in spot.dx_call.upper():
                    return False
            case "text_includes":
                text_includes = query.get(k).strip()
                if (not spot.dx_call or text_includes.upper() not in spot.dx_call.upper()) \
                        and (not spot.comment or text_includes.upper() not in spot.comment.upper()):
                    return False
            case "allow_qrt":
                # If false, spots that are flagged as QRT are not returned.
                prevent_qrt = query.get(k).upper() == "FALSE"
                if prevent_qrt and spot.qrt and spot.qrt == True:
                    return False
            case "needs_good_location":
                # If true, spots require a "good" location to be returned
                needs_good_location = query.get(k).upper() == "TRUE"
                if needs_good_location and not spot.dx_location_good:
                    return False
    return True
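Both the "since" and "max_age" branches reduce to comparing the spot's epoch timestamp against a cut-off. A standalone sketch of the "max_age" calculation (the `passes_max_age` helper is invented for illustration, and it uses the stdlib `timezone.utc` rather than the codebase's `pytz.UTC`):

```python
from datetime import datetime, timedelta, timezone

def passes_max_age(spot_time: float, max_age_s: int, now=None) -> bool:
    # Equivalent of the "max_age" branch above: the spot's epoch timestamp
    # must be newer than "now" minus the allowed age in seconds.
    now = now or datetime.now(timezone.utc)
    since = (now - timedelta(seconds=max_age_s)).timestamp()
    return spot_time > since

now = datetime.now(timezone.utc)
assert passes_max_age((now - timedelta(minutes=5)).timestamp(), 3600, now)
assert not passes_max_age((now - timedelta(hours=2)).timestamp(), 3600, now)
```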
server/handlers/api/status.py (new file, +27 lines)
@@ -0,0 +1,27 @@

import json
from datetime import datetime

import pytz
import tornado

from core.prometheus_metrics_handler import api_requests_counter
from core.utils import serialize_everything


# API request handler for /api/v1/status
class APIStatusHandler(tornado.web.RequestHandler):
    def initialize(self, status_data, web_server_metrics):
        self.status_data = status_data
        self.web_server_metrics = web_server_metrics

    def get(self):
        # Metrics
        self.web_server_metrics["last_api_access_time"] = datetime.now(pytz.UTC)
        self.web_server_metrics["api_access_counter"] += 1
        self.web_server_metrics["status"] = "OK"
        api_requests_counter.inc()

        self.write(json.dumps(self.status_data, default=serialize_everything))
        self.set_status(200)
        self.set_header("Cache-Control", "no-store")
        self.set_header("Content-Type", "application/json")
server/handlers/metrics.py (new file, +12 lines)
@@ -0,0 +1,12 @@

import tornado
from prometheus_client.openmetrics.exposition import CONTENT_TYPE_LATEST

from core.prometheus_metrics_handler import get_metrics


# Handler for Prometheus metrics endpoint
class PrometheusMetricsHandler(tornado.web.RequestHandler):
    def get(self):
        self.write(get_metrics())
        self.set_status(200)
        self.set_header('Content-Type', CONTENT_TYPE_LATEST)
server/handlers/pagetemplate.py (new file, +26 lines)
@@ -0,0 +1,26 @@

from datetime import datetime

import pytz
import tornado

from core.config import ALLOW_SPOTTING
from core.constants import SOFTWARE_VERSION
from core.prometheus_metrics_handler import page_requests_counter


# Handler for all HTML pages generated from templates
class PageTemplateHandler(tornado.web.RequestHandler):
    def initialize(self, template_name, web_server_metrics):
        self.template_name = template_name
        self.web_server_metrics = web_server_metrics

    def get(self):
        # Metrics
        self.web_server_metrics["last_page_access_time"] = datetime.now(pytz.UTC)
        self.web_server_metrics["page_access_counter"] += 1
        self.web_server_metrics["status"] = "OK"
        page_requests_counter.inc()

        # Load named template, and provide variables used in templates
        self.render(self.template_name + ".html", software_version=SOFTWARE_VERSION, allow_spotting=ALLOW_SPOTTING)
@@ -1,485 +1,120 @@
|
|||||||
import json
|
import asyncio
|
||||||
import logging
|
import logging
|
||||||
import re
|
import os
|
||||||
from datetime import datetime, timedelta
|
|
||||||
from threading import Thread
|
|
||||||
|
|
||||||
import bottle
|
import tornado
|
||||||
import pytz
|
from tornado.web import StaticFileHandler
|
||||||
from bottle import run, request, response, template
|
|
||||||
|
|
||||||
from core.config import MAX_SPOT_AGE, ALLOW_SPOTTING
|
from server.handlers.api.addspot import APISpotHandler
|
||||||
from core.constants import BANDS, ALL_MODES, MODE_TYPES, SIGS, CONTINENTS, SOFTWARE_VERSION, UNKNOWN_BAND
|
from server.handlers.api.alerts import APIAlertsHandler, APIAlertsStreamHandler
|
||||||
from core.lookup_helper import lookup_helper
|
from server.handlers.api.lookups import APILookupCallHandler, APILookupSIGRefHandler
|
||||||
from core.prometheus_metrics_handler import page_requests_counter, get_metrics, api_requests_counter
|
from server.handlers.api.options import APIOptionsHandler
|
||||||
from core.sig_utils import get_ref_regex_for_sig, get_sig_ref_info
|
from server.handlers.api.spots import APISpotsHandler, APISpotsStreamHandler
|
||||||
from data.sig_ref import SIGRef
|
from server.handlers.api.status import APIStatusHandler
|
||||||
from data.spot import Spot
|
from server.handlers.metrics import PrometheusMetricsHandler
|
||||||
|
from server.handlers.pagetemplate import PageTemplateHandler
|
||||||
|
|
||||||
|
|
||||||
# Provides the public-facing web server.
|
# Provides the public-facing web server.
|
||||||
class WebServer:
|
class WebServer:
|
||||||
|
|
||||||
# Constructor
|
# Constructor
|
||||||
def __init__(self, spots, alerts, status_data, port):
|
def __init__(self, spots, alerts, status_data, port):
|
||||||
self.last_page_access_time = None
|
|
||||||
self.last_api_access_time = None
|
|
||||||
self.page_access_counter = 0
|
|
||||||
self.api_access_counter = 0
|
|
||||||
self.spots = spots
|
self.spots = spots
|
||||||
self.alerts = alerts
|
self.alerts = alerts
|
||||||
|
self.sse_spot_queues = []
|
||||||
|
self.sse_alert_queues = []
|
||||||
self.status_data = status_data
|
self.status_data = status_data
|
||||||
self.port = port
|
self.port = port
|
||||||
self.thread = Thread(target=self.run)
|
self.shutdown_event = asyncio.Event()
|
||||||
self.thread.daemon = True
|
self.web_server_metrics = {
|
||||||
self.status = "Starting"
|
"last_page_access_time": None,
|
||||||
|
"last_api_access_time": None,
|
||||||
# Base template data
|
"page_access_counter": 0,
|
||||||
bottle.BaseTemplate.defaults['software_version'] = SOFTWARE_VERSION
|
"api_access_counter": 0,
|
||||||
bottle.BaseTemplate.defaults['allow_spotting'] = ALLOW_SPOTTING
|
"status": "Starting"
|
||||||
|
}
|
||||||
# Routes for API calls
|
|
||||||
bottle.get("/api/v1/spots")(lambda: self.serve_spots_api())
|
|
||||||
bottle.get("/api/v1/alerts")(lambda: self.serve_alerts_api())
|
|
||||||
bottle.get("/api/v1/options")(lambda: self.serve_api(self.get_options()))
|
|
||||||
bottle.get("/api/v1/status")(lambda: self.serve_api(self.status_data))
|
|
||||||
bottle.get("/api/v1/lookup/call")(lambda: self.serve_call_lookup_api())
|
|
||||||
bottle.get("/api/v1/lookup/sigref")(lambda: self.serve_sig_ref_lookup_api())
|
|
||||||
bottle.post("/api/v1/spot")(lambda: self.accept_spot())
|
|
||||||
# Routes for templated pages
|
|
||||||
bottle.get("/")(lambda: self.serve_template('webpage_spots'))
|
|
||||||
bottle.get("/map")(lambda: self.serve_template('webpage_map'))
|
|
||||||
bottle.get("/bands")(lambda: self.serve_template('webpage_bands'))
|
|
||||||
bottle.get("/alerts")(lambda: self.serve_template('webpage_alerts'))
|
|
||||||
bottle.get("/add-spot")(lambda: self.serve_template('webpage_add_spot'))
|
|
||||||
bottle.get("/status")(lambda: self.serve_template('webpage_status'))
|
|
||||||
bottle.get("/about")(lambda: self.serve_template('webpage_about'))
|
|
||||||
bottle.get("/apidocs")(lambda: self.serve_template('webpage_apidocs'))
|
|
||||||
# Route for Prometheus metrics
|
|
||||||
bottle.get("/metrics")(lambda: self.serve_prometheus_metrics())
|
|
||||||
# Default route to serve from "webassets"
|
|
||||||
bottle.get("/<filepath:path>")(self.serve_static_file)
|
|
||||||
|
|
||||||
# Start the web server
|
# Start the web server
|
||||||
def start(self):
|
def start(self):
|
||||||
self.thread.start()
|
asyncio.run(self.start_inner())
|
||||||
|
|
||||||
# Run the web server itself. This blocks until the server is shut down, so it runs in a separate thread.
|
# Stop the web server
|
||||||
def run(self):
|
def stop(self):
|
||||||
logging.info("Starting web server on port " + str(self.port) + "...")
|
self.shutdown_event.set()
|
||||||
self.status = "Waiting"
|
|
||||||
run(host='localhost', port=self.port)
|
|
||||||
|
|
||||||
# Serve the JSON API /spots endpoint
|
# Start method (async). Sets up the Tornado application.
|
||||||
def serve_spots_api(self):
|
async def start_inner(self):
|
||||||
try:
|
app = tornado.web.Application([
|
||||||
data = self.get_spot_list_with_filters()
|
# Routes for API calls
|
||||||
return self.serve_api(data)
|
(r"/api/v1/spots", APISpotsHandler, {"spots": self.spots, "web_server_metrics": self.web_server_metrics}),
|
||||||
except ValueError as e:
|
(r"/api/v1/alerts", APIAlertsHandler, {"alerts": self.alerts, "web_server_metrics": self.web_server_metrics}),
|
||||||
logging.error(e)
|
(r"/api/v1/spots/stream", APISpotsStreamHandler, {"sse_spot_queues": self.sse_spot_queues, "web_server_metrics": self.web_server_metrics}),
|
||||||
response.content_type = 'application/json'
|
(r"/api/v1/alerts/stream", APIAlertsStreamHandler, {"sse_alert_queues": self.sse_alert_queues, "web_server_metrics": self.web_server_metrics}),
|
||||||
response.status = 400
|
(r"/api/v1/options", APIOptionsHandler, {"status_data": self.status_data, "web_server_metrics": self.web_server_metrics}),
|
||||||
return json.dumps("Bad request - " + str(e), default=serialize_everything)
|
(r"/api/v1/status", APIStatusHandler, {"status_data": self.status_data, "web_server_metrics": self.web_server_metrics}),
|
||||||
except Exception as e:
|
(r"/api/v1/lookup/call", APILookupCallHandler, {"web_server_metrics": self.web_server_metrics}),
|
||||||
logging.error(e)
|
(r"/api/v1/lookup/sigref", APILookupSIGRefHandler, {"web_server_metrics": self.web_server_metrics}),
|
||||||
response.content_type = 'application/json'
|
(r"/api/v1/spot", APISpotHandler, {"spots": self.spots, "web_server_metrics": self.web_server_metrics}),
|
||||||
response.status = 500
|
# Routes for templated pages
|
||||||
return json.dumps("Error - " + str(e), default=serialize_everything)
|
(r"/", PageTemplateHandler, {"template_name": "spots", "web_server_metrics": self.web_server_metrics}),
|
||||||
|
(r"/map", PageTemplateHandler, {"template_name": "map", "web_server_metrics": self.web_server_metrics}),
|
||||||
|
(r"/bands", PageTemplateHandler, {"template_name": "bands", "web_server_metrics": self.web_server_metrics}),
|
||||||
|
(r"/alerts", PageTemplateHandler, {"template_name": "alerts", "web_server_metrics": self.web_server_metrics}),
|
||||||
|
(r"/add-spot", PageTemplateHandler, {"template_name": "add_spot", "web_server_metrics": self.web_server_metrics}),
|
||||||
|
(r"/status", PageTemplateHandler, {"template_name": "status", "web_server_metrics": self.web_server_metrics}),
|
||||||
|
(r"/about", PageTemplateHandler, {"template_name": "about", "web_server_metrics": self.web_server_metrics}),
|
||||||
|
(r"/apidocs", PageTemplateHandler, {"template_name": "apidocs", "web_server_metrics": self.web_server_metrics}),
|
||||||
|
# Route for Prometheus metrics
|
||||||
|
(r"/metrics", PrometheusMetricsHandler),
|
||||||
|
# Default route to serve from "webassets"
|
||||||
|
(r"/(.*)", StaticFileHandler, {"path": os.path.join(os.path.dirname(__file__), "../webassets")}),
|
||||||
|
],
|
||||||
|
template_path=os.path.join(os.path.dirname(__file__), "../templates"),
|
||||||
|
debug=False)
|
||||||
|
app.listen(self.port)
|
||||||
|
await self.shutdown_event.wait()
|
||||||
|
|
||||||
+    # Internal method called when a new spot is added to the system. This is used to ping any SSE clients that are
+    # awaiting a server-sent message with new spots.
+    def notify_new_spot(self, spot):
+        for queue in self.sse_spot_queues:
+            try:
+                queue.put(spot)
+            except:
+                # Cleanup thread was probably deleting the queue, that's fine
+                pass
+            pass
+
+    # Internal method called when a new alert is added to the system. This is used to ping any SSE clients that are
+    # awaiting a server-sent message with new alerts.
+    def notify_new_alert(self, alert):
+        for queue in self.sse_alert_queues:
+            try:
+                queue.put(alert)
+            except:
+                # Cleanup thread was probably deleting the queue, that's fine
+                pass
+            pass
+
+    # Clean up any SSE queues that are growing too large; probably their client disconnected and we didn't catch it
+    # properly for some reason.
+    def clean_up_sse_queues(self):
+        for q in self.sse_spot_queues:
+            try:
+                if q.full():
+                    logging.warn("A full SSE spot queue was found, presumably because the client disconnected strangely. It has been removed.")
+                    self.sse_spot_queues.remove(q)
+                    q.empty()
+            except:
+                # Probably got deleted already on another thread
+                pass
+        for q in self.sse_alert_queues:
+            try:
+                if q.full():
+                    logging.warn("A full SSE alert queue was found, presumably because the client disconnected strangely. It has been removed.")
+                    self.sse_alert_queues.remove(q)
+                    q.empty()
+            except:
+                # Probably got deleted already on another thread
+                pass
+        pass

-    # Serve the JSON API /alerts endpoint
-    def serve_alerts_api(self):
-        try:
-            data = self.get_alert_list_with_filters()
-            return self.serve_api(data)
-        except ValueError as e:
-            logging.error(e)
-            response.content_type = 'application/json'
-            response.status = 400
-            return json.dumps("Bad request - " + str(e), default=serialize_everything)
-        except Exception as e:
-            logging.error(e)
-            response.content_type = 'application/json'
-            response.status = 500
-            return json.dumps("Error - " + str(e), default=serialize_everything)
-
-    # Look up data for a callsign
-    def serve_call_lookup_api(self):
-        try:
-            # Reject if no callsign
-            query = bottle.request.query
-            if not "call" in query.keys():
-                response.content_type = 'application/json'
-                response.status = 422
-                return json.dumps("Error - call must be provided", default=serialize_everything)
-            call = query.get("call").upper()
-
-            # Reject badly formatted callsigns
-            if not re.match(r"^[A-Za-z0-9/\-]*$", call):
-                response.content_type = 'application/json'
-                response.status = 422
-                return json.dumps("Error - '" + call + "' does not look like a valid callsign.",
-                                  default=serialize_everything)
-
-            # Take the callsign, make a "fake spot" so we can run infer_missing() on it, then repack the resulting data
-            # in the correct way for the API response.
-            fake_spot = Spot(dx_call=call)
-            fake_spot.infer_missing()
-            return self.serve_api({
-                "call": call,
-                "name": fake_spot.dx_name,
-                "qth": fake_spot.dx_qth,
-                "country": fake_spot.dx_country,
-                "flag": fake_spot.dx_flag,
-                "continent": fake_spot.dx_continent,
-                "dxcc_id": fake_spot.dx_dxcc_id,
-                "cq_zone": fake_spot.dx_cq_zone,
-                "itu_zone": fake_spot.dx_itu_zone,
-                "grid": fake_spot.dx_grid,
-                "latitude": fake_spot.dx_latitude,
-                "longitude": fake_spot.dx_longitude,
-                "location_source": fake_spot.dx_location_source
-            })
-        except Exception as e:
-            logging.error(e)
-            response.content_type = 'application/json'
-            response.status = 500
-            return json.dumps("Error - " + str(e), default=serialize_everything)
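The `notify_new_spot` / `clean_up_sse_queues` pair added above implements a queue-per-client fan-out. A standalone sketch of the same idea (class and field names here are illustrative, not Spothole's actual API), using bounded `queue.Queue` objects and dropping any client whose queue has filled up:

```python
import queue

class SpotFanout:
    # One bounded queue per connected SSE client; a full queue is treated
    # as a dead client and dropped, mirroring the cleanup logic above.
    def __init__(self, maxsize=100):
        self.maxsize = maxsize
        self.client_queues = []

    def register_client(self):
        q = queue.Queue(maxsize=self.maxsize)
        self.client_queues.append(q)
        return q

    def notify_new_spot(self, spot):
        # Iterate over a copy so we can safely remove dead queues mid-loop
        for q in list(self.client_queues):
            try:
                q.put_nowait(spot)
            except queue.Full:
                self.client_queues.remove(q)

fanout = SpotFanout(maxsize=2)
client = fanout.register_client()
fanout.notify_new_spot({"dx_call": "M0TRT", "freq": 14285000})
print(client.get_nowait()["dx_call"])  # M0TRT
```

The real code uses a blocking `put()` plus a periodic cleanup thread rather than `put_nowait()`; the failure handling amounts to the same thing either way.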
-    # Look up data for a SIG reference
-    def serve_sig_ref_lookup_api(self):
-        try:
-            # Reject if no sig or sig_ref
-            query = bottle.request.query
-            if not "sig" in query.keys() or not "id" in query.keys():
-                response.content_type = 'application/json'
-                response.status = 422
-                return json.dumps("Error - sig and id must be provided", default=serialize_everything)
-            sig = query.get("sig").upper()
-            id = query.get("id").upper()
-
-            # Reject if sig unknown
-            if not sig in list(map(lambda p: p.name, SIGS)):
-                response.content_type = 'application/json'
-                response.status = 422
-                return json.dumps("Error - sig '" + sig + "' is not known.", default=serialize_everything)
-
-            # Reject if sig_ref format incorrect for sig
-            if get_ref_regex_for_sig(sig) and not re.match(get_ref_regex_for_sig(sig), id):
-                response.content_type = 'application/json'
-                response.status = 422
-                return json.dumps("Error - '" + id + "' does not look like a valid reference ID for " + sig + ".", default=serialize_everything)
-
-            data = get_sig_ref_info(sig, id)
-            return self.serve_api(data)
-
-        except Exception as e:
-            logging.error(e)
-            response.content_type = 'application/json'
-            response.status = 500
-            return json.dumps("Error - " + str(e), default=serialize_everything)
-    # Serve a JSON API endpoint
-    def serve_api(self, data):
-        self.last_api_access_time = datetime.now(pytz.UTC)
-        self.api_access_counter += 1
-        api_requests_counter.inc()
-        self.status = "OK"
-        response.content_type = 'application/json'
-        response.set_header('Cache-Control', 'no-store')
-        return json.dumps(data, default=serialize_everything)
-    # Accept a spot
-    def accept_spot(self):
-        self.last_api_access_time = datetime.now(pytz.UTC)
-        self.api_access_counter += 1
-        api_requests_counter.inc()
-        self.status = "OK"
-
-        try:
-            # Reject if not allowed
-            if not ALLOW_SPOTTING:
-                response.content_type = 'application/json'
-                response.status = 401
-                return json.dumps("Error - this server does not allow new spots to be added via the API.",
-                                  default=serialize_everything)
-
-            # Reject if format not json
-            if not request.get_header('Content-Type') or request.get_header('Content-Type') != "application/json":
-                response.content_type = 'application/json'
-                response.status = 415
-                return json.dumps("Error - request Content-Type must be application/json", default=serialize_everything)
-
-            # Reject if request body is empty
-            post_data = request.body.read()
-            if not post_data:
-                response.content_type = 'application/json'
-                response.status = 422
-                return json.dumps("Error - request body is empty", default=serialize_everything)
-
-            # Read in the request body as JSON then convert to a Spot object
-            json_spot = json.loads(post_data)
-            spot = Spot(**json_spot)
-
-            # Converting to a spot object this way won't have coped with sig_ref objects, so fix that. (Would be nice to
-            # redo this in a functional style)
-            if spot.sig_refs:
-                real_sig_refs = []
-                for dict_obj in spot.sig_refs:
-                    real_sig_refs.append(json.loads(json.dumps(dict_obj), object_hook=lambda d: SIGRef(**d)))
-                spot.sig_refs = real_sig_refs
-
-            # Reject if no timestamp, frequency, dx_call or de_call
-            if not spot.time or not spot.dx_call or not spot.freq or not spot.de_call:
-                response.content_type = 'application/json'
-                response.status = 422
-                return json.dumps("Error - 'time', 'dx_call', 'freq' and 'de_call' must be provided as a minimum.",
-                                  default=serialize_everything)
-
-            # Reject invalid-looking callsigns
-            if not re.match(r"^[A-Za-z0-9/\-]*$", spot.dx_call):
-                response.content_type = 'application/json'
-                response.status = 422
-                return json.dumps("Error - '" + spot.dx_call + "' does not look like a valid callsign.",
-                                  default=serialize_everything)
-            if not re.match(r"^[A-Za-z0-9/\-]*$", spot.de_call):
-                response.content_type = 'application/json'
-                response.status = 422
-                return json.dumps("Error - '" + spot.de_call + "' does not look like a valid callsign.",
-                                  default=serialize_everything)
-
-            # Reject if frequency not in a known band
-            if lookup_helper.infer_band_from_freq(spot.freq) == UNKNOWN_BAND:
-                response.content_type = 'application/json'
-                response.status = 422
-                return json.dumps("Error - Frequency of " + str(spot.freq / 1000.0) + "kHz is not in a known band.", default=serialize_everything)
-
-            # Reject if grid formatting incorrect
-            if spot.dx_grid and not re.match(r"^([A-R]{2}[0-9]{2}[A-X]{2}[0-9]{2}[A-X]{2}|[A-R]{2}[0-9]{2}[A-X]{2}[0-9]{2}|[A-R]{2}[0-9]{2}[A-X]{2}|[A-R]{2}[0-9]{2})$", spot.dx_grid.upper()):
-                response.content_type = 'application/json'
-                response.status = 422
-                return json.dumps("Error - '" + spot.dx_grid + "' does not look like a valid Maidenhead grid.", default=serialize_everything)
-
-            # Reject if sig_ref format incorrect for sig
-            if spot.sig and spot.sig_refs and len(spot.sig_refs) > 0 and spot.sig_refs[0].id and get_ref_regex_for_sig(spot.sig) and not re.match(get_ref_regex_for_sig(spot.sig), spot.sig_refs[0].id):
-                response.content_type = 'application/json'
-                response.status = 422
-                return json.dumps("Error - '" + spot.sig_refs[0].id + "' does not look like a valid reference for " + spot.sig + ".", default=serialize_everything)
-
-            # infer missing data, and add it to our database.
-            spot.source = "API"
-            if not spot.sig:
-                spot.icon = "desktop"
-            spot.infer_missing()
-            self.spots.add(spot.id, spot, expire=MAX_SPOT_AGE)
-
-            response.content_type = 'application/json'
-            response.set_header('Cache-Control', 'no-store')
-            response.status = 201
-            return json.dumps("OK", default=serialize_everything)
-        except Exception as e:
-            logging.error(e)
-            response.content_type = 'application/json'
-            response.status = 500
-            return json.dumps("Error - " + str(e), default=serialize_everything)
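The rejection logic in `accept_spot` is driven by two regular expressions. Copied verbatim into standalone predicates so they can be exercised in isolation (the function names are illustrative, not part of Spothole):

```python
import re

# Same patterns as used in accept_spot above
CALL_RE = re.compile(r"^[A-Za-z0-9/\-]*$")
GRID_RE = re.compile(r"^([A-R]{2}[0-9]{2}[A-X]{2}[0-9]{2}[A-X]{2}"
                     r"|[A-R]{2}[0-9]{2}[A-X]{2}[0-9]{2}"
                     r"|[A-R]{2}[0-9]{2}[A-X]{2}"
                     r"|[A-R]{2}[0-9]{2})$")

def looks_like_callsign(call):
    # Letters, digits, "/" for prefixes/suffixes and "-" for SSIDs
    return bool(CALL_RE.match(call))

def looks_like_grid(grid):
    # 4, 6, 8 or 10 character Maidenhead locators, case-insensitive
    return bool(GRID_RE.match(grid.upper()))

print(looks_like_callsign("M0TRT/P"), looks_like_grid("IO90bs"))  # True True
```

Note that, as in the server, the callsign pattern also matches the empty string; the earlier "must be provided" checks are what rule that out.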
-    # Serve a templated page
-    def serve_template(self, template_name):
-        self.last_page_access_time = datetime.now(pytz.UTC)
-        self.page_access_counter += 1
-        page_requests_counter.inc()
-        self.status = "OK"
-        return template(template_name)
-
-    # Serve general static files from "webassets" directory.
-    def serve_static_file(self, filepath):
-        return bottle.static_file(filepath, root="webassets")
-
-    # Serve Prometheus metrics
-    def serve_prometheus_metrics(self):
-        return get_metrics()
-    # Utility method to apply filters to the overall spot list and return only a subset. Enables query parameters in
-    # the main "spots" GET call.
-    def get_spot_list_with_filters(self):
-        # Get the query (and the right one, with Bottle magic. This is a MultiDict object)
-        query = bottle.request.query
-
-        # Create a shallow copy of the spot list, ordered by spot time. We'll then filter it accordingly.
-        # We can filter by spot time and received time with "since" and "received_since", which take a UNIX timestamp
-        # in seconds UTC.
-        # We can also filter by source, sig, band, mode, dx_continent and de_continent. Each of these accepts a single
-        # value or a comma-separated list.
-        # We can filter by comments, accepting a single string, where the API will only return spots where the comment
-        # contains the provided value (case-insensitive).
-        # We can "de-dupe" spots, so only the latest spot will be sent for each callsign.
-        # We can provide a "limit" number as well. Spots are always returned newest-first; "limit" limits to only the
-        # most recent X spots.
-        spot_ids = list(self.spots.iterkeys())
-        spots = []
-        for k in spot_ids:
-            s = self.spots.get(k)
-            if s is not None:
-                spots.append(s)
-        spots = sorted(spots, key=lambda spot: (spot.time if spot and spot.time else 0), reverse=True)
-        for k in query.keys():
-            match k:
-                case "since":
-                    since = datetime.fromtimestamp(int(query.get(k)), pytz.UTC).timestamp()
-                    spots = [s for s in spots if s.time and s.time > since]
-                case "max_age":
-                    max_age = int(query.get(k))
-                    since = (datetime.now(pytz.UTC) - timedelta(seconds=max_age)).timestamp()
-                    spots = [s for s in spots if s.time and s.time > since]
-                case "received_since":
-                    since = datetime.fromtimestamp(int(query.get(k)), pytz.UTC).timestamp()
-                    spots = [s for s in spots if s.received_time and s.received_time > since]
-                case "source":
-                    sources = query.get(k).split(",")
-                    spots = [s for s in spots if s.source and s.source in sources]
-                case "sig":
-                    # If a list of sigs is provided, the spot must have a sig and it must match one of them
-                    sigs = query.get(k).split(",")
-                    spots = [s for s in spots if s.sig and s.sig in sigs]
-                case "needs_sig":
-                    # If true, a sig is required, regardless of what it is, it just can't be missing.
-                    needs_sig = query.get(k).upper() == "TRUE"
-                    if needs_sig:
-                        spots = [s for s in spots if s.sig]
-                case "needs_sig_ref":
-                    # If true, at least one sig ref is required, regardless of what it is, it just can't be missing.
-                    needs_sig_ref = query.get(k).upper() == "TRUE"
-                    if needs_sig_ref:
-                        spots = [s for s in spots if s.sig_refs and len(s.sig_refs) > 0]
-                case "band":
-                    bands = query.get(k).split(",")
-                    spots = [s for s in spots if s.band and s.band in bands]
-                case "mode":
-                    modes = query.get(k).split(",")
-                    spots = [s for s in spots if s.mode in modes]
-                case "mode_type":
-                    mode_families = query.get(k).split(",")
-                    spots = [s for s in spots if s.mode_type and s.mode_type in mode_families]
-                case "dx_continent":
-                    dxconts = query.get(k).split(",")
-                    spots = [s for s in spots if s.dx_continent and s.dx_continent in dxconts]
-                case "de_continent":
-                    deconts = query.get(k).split(",")
-                    spots = [s for s in spots if s.de_continent and s.de_continent in deconts]
-                case "comment_includes":
-                    comment_includes = query.get(k).strip()
-                    spots = [s for s in spots if s.comment and comment_includes.upper() in s.comment.upper()]
-                case "dx_call_includes":
-                    dx_call_includes = query.get(k).strip()
-                    spots = [s for s in spots if s.dx_call and dx_call_includes.upper() in s.dx_call.upper()]
-                case "allow_qrt":
-                    # If false, spots that are flagged as QRT are not returned.
-                    prevent_qrt = query.get(k).upper() == "FALSE"
-                    if prevent_qrt:
-                        spots = [s for s in spots if not s.qrt or s.qrt == False]
-                case "needs_good_location":
-                    # If true, spots require a "good" location to be returned
-                    needs_good_location = query.get(k).upper() == "TRUE"
-                    if needs_good_location:
-                        spots = [s for s in spots if s.dx_location_good]
-                case "dedupe":
-                    # Ensure only the latest spot of each callsign-SSID combo is present in the list. This relies on the
-                    # list being in reverse time order, so if any future change allows re-ordering the list, that should
-                    # be done *after* this. SSIDs are deliberately included here (see issue #68) because e.g. M0TRT-7
-                    # and M0TRT-9 APRS transponders could well be in different locations, on different frequencies etc.
-                    dedupe = query.get(k).upper() == "TRUE"
-                    if dedupe:
-                        spots_temp = []
-                        already_seen = []
-                        for s in spots:
-                            call_plus_ssid = s.dx_call + (s.dx_ssid if s.dx_ssid else "")
-                            if call_plus_ssid not in already_seen:
-                                spots_temp.append(s)
-                                already_seen.append(call_plus_ssid)
-                        spots = spots_temp
-        # If we have a "limit" parameter, we apply that last, regardless of where it appeared in the list of keys.
-        if "limit" in query.keys():
-            spots = spots[:int(query.get("limit"))]
-        return spots
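The dedupe and limit branches above are order-sensitive: the list is already newest-first, dedupe keeps the first (newest) spot per callsign+SSID combination, and limit is applied last. That core logic, extracted into a plain function over dicts purely for illustration:

```python
def dedupe_and_limit(spots, limit=None):
    # 'spots' must already be sorted newest-first, as in the server code.
    # Keep only the first (newest) spot per callsign+SSID, then apply the limit.
    seen = set()
    deduped = []
    for s in spots:
        key = s["dx_call"] + (s.get("dx_ssid") or "")
        if key not in seen:
            deduped.append(s)
            seen.add(key)
    return deduped[:limit] if limit is not None else deduped

spots = [
    {"dx_call": "M0TRT", "dx_ssid": "-7", "time": 300},
    {"dx_call": "G4ABC", "time": 200},
    {"dx_call": "M0TRT", "dx_ssid": "-7", "time": 100},  # older duplicate, dropped
]
print([s["time"] for s in dedupe_and_limit(spots)])  # [300, 200]
```

A set replaces the `already_seen` list used in the original; the behaviour is the same, with O(1) membership checks.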
-    # Utility method to apply filters to the overall alert list and return only a subset. Enables query parameters in
-    # the main "alerts" GET call.
-    def get_alert_list_with_filters(self):
-        # Get the query (and the right one, with Bottle magic. This is a MultiDict object)
-        query = bottle.request.query
-
-        # Create a shallow copy of the alert list, ordered by start time. We'll then filter it accordingly.
-        # We can filter by received time with "received_since", which takes a UNIX timestamp in seconds UTC.
-        # We can also filter by source, sig, and dx_continent. Each of these accepts a single
-        # value or a comma-separated list.
-        # We can provide a "limit" number as well. Alerts are always returned newest-first; "limit" limits to only the
-        # most recent X alerts.
-        alert_ids = list(self.alerts.iterkeys())
-        alerts = []
-        for k in alert_ids:
-            a = self.alerts.get(k)
-            if a is not None:
-                alerts.append(a)
-        # We never want alerts that seem to be in the past
-        alerts = list(filter(lambda alert: not alert.expired(), alerts))
-        alerts = sorted(alerts, key=lambda alert: (alert.start_time if alert and alert.start_time else 0))
-        for k in query.keys():
-            match k:
-                case "received_since":
-                    since = datetime.fromtimestamp(int(query.get(k)), pytz.UTC)
-                    alerts = [a for a in alerts if a.received_time and a.received_time > since]
-                case "max_duration":
-                    max_duration = int(query.get(k))
-                    # Check the duration if end_time is provided. If end_time is not provided, assume the activation is
-                    # "short", i.e. it always passes this check. If dxpeditions_skip_max_duration_check is true and
-                    # the alert is a dxpedition, it also always passes the check.
-                    dxpeditions_skip_check = bool(query.get(
-                        "dxpeditions_skip_max_duration_check")) if "dxpeditions_skip_max_duration_check" in query.keys() else False
-                    alerts = [a for a in alerts if (a.end_time and a.end_time - a.start_time <= max_duration) or
-                              not a.end_time or (dxpeditions_skip_check and a.is_dxpedition)]
-                case "source":
-                    sources = query.get(k).split(",")
-                    alerts = [a for a in alerts if a.source and a.source in sources]
-                case "sig":
-                    sigs = query.get(k).split(",")
-                    alerts = [a for a in alerts if a.sig and a.sig in sigs]
-                case "dx_continent":
-                    dxconts = query.get(k).split(",")
-                    alerts = [a for a in alerts if a.dx_continent and a.dx_continent in dxconts]
-                case "dx_call_includes":
-                    dx_call_includes = query.get(k).strip()
-                    alerts = [a for a in alerts if a.dx_call and dx_call_includes.upper() in a.dx_call.upper()]
-        # If we have a "limit" parameter, we apply that last, regardless of where it appeared in the list of keys.
-        if "limit" in query.keys():
-            alerts = alerts[:int(query.get("limit"))]
-        return alerts
-    # Return all the "options" for various things that the server is aware of. This can be fetched with an API call.
-    # The idea is that this will include most of the things that can be provided as queries to the main spots call,
-    # and thus a client can use this data to configure its filter controls.
-    def get_options(self):
-        options = {"bands": BANDS,
-                   "modes": ALL_MODES,
-                   "mode_types": MODE_TYPES,
-                   "sigs": SIGS,
-                   # Spot/alert sources are filtered for only ones that are enabled in config, no point letting the user toggle things that aren't even available.
-                   "spot_sources": list(
-                       map(lambda p: p["name"], filter(lambda p: p["enabled"], self.status_data["spot_providers"]))),
-                   "alert_sources": list(
-                       map(lambda p: p["name"], filter(lambda p: p["enabled"], self.status_data["alert_providers"]))),
-                   "continents": CONTINENTS,
-                   "max_spot_age": MAX_SPOT_AGE,
-                   "spot_allowed": ALLOW_SPOTTING}
-        # If spotting to this server is enabled, "API" is another valid spot source even though it does not come from
-        # one of our providers.
-        if ALLOW_SPOTTING:
-            options["spot_sources"].append("API")
-
-        return options
-# Convert objects to serialisable things. Used by JSON serialiser as a default when it encounters unserializable things.
-# Just converts objects to dict. Try to avoid doing anything clever here when serialising spots, because we also need
-# to receive spots without complex handling.
-def serialize_everything(obj):
-    return obj.__dict__
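`serialize_everything` is the `default=` hook for `json.dumps`: any object the serialiser cannot handle on its own is reduced to its attribute dictionary. For example (the `Spot` class here is a two-field stand-in, not the real one):

```python
import json

def serialize_everything(obj):
    # Fallback used by json.dumps when it meets a non-serialisable object
    return obj.__dict__

class Spot:
    def __init__(self, dx_call, freq):
        self.dx_call = dx_call
        self.freq = freq

print(json.dumps(Spot("M0TRT", 14285000), default=serialize_everything))
# {"dx_call": "M0TRT", "freq": 14285000}
```

Keeping the hook this dumb is deliberate: because spots are also received over the API as plain JSON, a symmetric, attribute-for-attribute mapping avoids any special-case decode logic.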
spothole.py
@@ -1,6 +1,7 @@
 # Main script
 import importlib
 import logging
+import os
 import signal
 import sys

@@ -16,15 +17,20 @@ from server.webserver import WebServer
 # Globals
 spots = Cache('cache/spots_cache')
 alerts = Cache('cache/alerts_cache')
+web_server = None
 status_data = {}
 spot_providers = []
 alert_providers = []
 cleanup_timer = None
+run = True


 # Shutdown function
 def shutdown(sig, frame):
-    logging.info("Stopping program, this may take a few seconds...")
+    global run
+    logging.info("Stopping program...")
+    web_server.stop()
     for p in spot_providers:
         if p.enabled:
             p.stop()
@@ -35,6 +41,7 @@ def shutdown(sig, frame):
     lookup_helper.stop()
     spots.close()
     alerts.close()
+    os._exit(0)


 # Utility method to get a spot provider based on the class specified in its config entry.
@@ -72,11 +79,14 @@ if __name__ == '__main__':
     # Set up lookup helper
     lookup_helper.start()

+    # Set up web server
+    web_server = WebServer(spots=spots, alerts=alerts, status_data=status_data, port=WEB_SERVER_PORT)
+
     # Fetch, set up and start spot providers
     for entry in config["spot-providers"]:
         spot_providers.append(get_spot_provider_from_config(entry))
     for p in spot_providers:
-        p.setup(spots=spots)
+        p.setup(spots=spots, web_server=web_server)
         if p.enabled:
             p.start()

@@ -84,18 +94,14 @@ if __name__ == '__main__':
     for entry in config["alert-providers"]:
         alert_providers.append(get_alert_provider_from_config(entry))
     for p in alert_providers:
-        p.setup(alerts=alerts)
+        p.setup(alerts=alerts, web_server=web_server)
         if p.enabled:
             p.start()

     # Set up timer to clear spot list of old data
-    cleanup_timer = CleanupTimer(spots=spots, alerts=alerts, cleanup_interval=60)
+    cleanup_timer = CleanupTimer(spots=spots, alerts=alerts, web_server=web_server, cleanup_interval=60)
     cleanup_timer.start()

-    # Set up web server
-    web_server = WebServer(spots=spots, alerts=alerts, status_data=status_data, port=WEB_SERVER_PORT)
-    web_server.start()
-
     # Set up status reporter
     status_reporter = StatusReporter(status_data=status_data, spots=spots, alerts=alerts, web_server=web_server,
                                      cleanup_timer=cleanup_timer, spot_providers=spot_providers,
@@ -103,3 +109,8 @@ if __name__ == '__main__':
     status_reporter.start()

     logging.info("Startup complete.")

+    # Run the web server. This is the blocking call that keeps the application running in the main thread, so this must
+    # be the last thing we do. web_server.stop() triggers an await condition in the web server which finishes the main
+    # thread.
+    web_server.start()
@@ -72,7 +72,7 @@ class DXCluster(SpotProvider):
     de_call=match.group(1),
     freq=float(match.group(2)) * 1000,
     comment=match.group(4).strip(),
-    icon="desktop",
+    icon="tower-cell",
     time=spot_datetime.timestamp())

     # Add to our list
@@ -5,7 +5,6 @@ import pytz

 from core.cache_utils import SEMI_STATIC_URL_DATA_CACHE
 from core.constants import HTTP_HEADERS
-from core.sig_utils import get_icon_for_sig
 from data.sig_ref import SIGRef
 from data.spot import Spot
 from spotproviders.http_spot_provider import HTTPSpotProvider
@@ -42,35 +41,47 @@ class GMA(HTTPSpotProvider):
     dx_longitude=float(source_spot["LON"]) if (source_spot["LON"] and source_spot["LON"] != "") else None)

 # GMA doesn't give what programme (SIG) the reference is for until we separately look it up.
-ref_response = SEMI_STATIC_URL_DATA_CACHE.get(self.REF_INFO_URL_ROOT + source_spot["REF"],
-                                              headers=HTTP_HEADERS)
-# Sometimes this is blank, so handle that
-if ref_response.text is not None and ref_response.text != "":
-    ref_info = ref_response.json()
-    # If this is POTA, SOTA or WWFF data we already have it through other means, so ignore. POTA and WWFF
-    # spots come through with reftype=POTA or reftype=WWFF. SOTA is harder to figure out because both SOTA
-    # and GMA summits come through with reftype=Summit, so we must check for the presence of a "sota" entry
-    # to determine if it's a SOTA summit.
-    if ref_info["reftype"] not in ["POTA", "WWFF"] and (ref_info["reftype"] != "Summit" or ref_info["sota"] == ""):
-        match ref_info["reftype"]:
-            case "Summit":
-                spot.sig_refs[0].sig = "GMA"
-            case "IOTA Island":
-                spot.sig_refs[0].sig = "IOTA"
-            case "Lighthouse (ILLW)":
-                spot.sig_refs[0].sig = "ILLW"
-            case "Lighthouse (ARLHS)":
-                spot.sig_refs[0].sig = "ARLHS"
-            case "Castle":
-                spot.sig_refs[0].sig = "WCA"
-            case "Mill":
-                spot.sig_refs[0].sig = "MOTA"
-            case _:
-                logging.warn("GMA spot found with ref type " + ref_info[
-                    "reftype"] + ", developer needs to add support for this!")
-                spot.sig_refs[0].sig = ref_info["reftype"]
+if "REF" in source_spot:
+    try:
+        ref_response = SEMI_STATIC_URL_DATA_CACHE.get(self.REF_INFO_URL_ROOT + source_spot["REF"],
+                                                      headers=HTTP_HEADERS)
+        # Sometimes this is blank, so handle that
+        if ref_response.text is not None and ref_response.text != "":
+            ref_info = ref_response.json()
+            # If this is POTA, SOTA or WWFF data we already have it through other means, so ignore. POTA and WWFF
+            # spots come through with reftype=POTA or reftype=WWFF. SOTA is harder to figure out because both SOTA
+            # and GMA summits come through with reftype=Summit, so we must check for the presence of a "sota" entry
+            # to determine if it's a SOTA summit.
+            if "reftype" in ref_info and ref_info["reftype"] not in ["POTA", "WWFF"] and (
+                    ref_info["reftype"] != "Summit" or "sota" not in ref_info or ref_info["sota"] == ""):
+                match ref_info["reftype"]:
+                    case "Summit":
+                        spot.sig_refs[0].sig = "GMA"
+                        spot.sig = "GMA"
+                    case "IOTA Island":
+                        spot.sig_refs[0].sig = "IOTA"
+                        spot.sig = "IOTA"
+                    case "Lighthouse (ILLW)":
+                        spot.sig_refs[0].sig = "ILLW"
+                        spot.sig = "ILLW"
+                    case "Lighthouse (ARLHS)":
+                        spot.sig_refs[0].sig = "ARLHS"
+                        spot.sig = "ARLHS"
+                    case "Castle":
+                        spot.sig_refs[0].sig = "WCA"
+                        spot.sig = "WCA"
+                    case "Mill":
+                        spot.sig_refs[0].sig = "MOTA"
+                        spot.sig = "MOTA"
+                    case _:
+                        logging.warn("GMA spot found with ref type " + ref_info[
+                            "reftype"] + ", developer needs to add support for this!")
+                        spot.sig_refs[0].sig = ref_info["reftype"]
+                        spot.sig = ref_info["reftype"]

         # Add to our list. Don't worry about de-duping, removing old spots etc. at this point; other code will do
         # that for us.
         new_spots.append(spot)
+    except:
+        logging.warn("Exception when looking up " + self.REF_INFO_URL_ROOT + source_spot["REF"] + ", ignoring this spot for now")
 return new_spots
@@ -5,7 +5,6 @@
 import requests

 from core.constants import HTTP_HEADERS
-from core.sig_utils import get_icon_for_sig
 from data.sig_ref import SIGRef
 from data.spot import Spot
 from spotproviders.http_spot_provider import HTTPSpotProvider
@@ -53,6 +52,7 @@ class HEMA(HTTPSpotProvider):
     freq=float(freq_mode_match.group(1)) * 1000000,
     mode=freq_mode_match.group(2).upper(),
     comment=spotter_comment_match.group(2),
+    sig="HEMA",
     sig_refs=[SIGRef(id=spot_items[3].upper(), sig="HEMA", name=spot_items[4])],
     time=datetime.strptime(spot_items[0], "%d/%m/%Y %H:%M").replace(tzinfo=pytz.UTC).timestamp(),
     dx_latitude=float(spot_items[7]),
@@ -4,7 +4,6 @@ from datetime import datetime
 
 import pytz
 
-from core.sig_utils import get_icon_for_sig
 from data.sig_ref import SIGRef
 from data.spot import Spot
 from spotproviders.http_spot_provider import HTTPSpotProvider
@@ -33,6 +32,7 @@ class ParksNPeaks(HTTPSpotProvider):
                         # Seen PNP spots with empty frequency, and with comma-separated thousands digits
                         mode=source_spot["actMode"].upper(),
                         comment=source_spot["actComments"],
+                        sig=source_spot["actClass"].upper(),
                         sig_refs=[SIGRef(id=source_spot["actSiteID"], sig=source_spot["actClass"].upper())],
                         time=datetime.strptime(source_spot["actTime"], "%Y-%m-%d %H:%M:%S").replace(
                             tzinfo=pytz.UTC).timestamp())
@@ -1,9 +1,7 @@
-import re
 from datetime import datetime
 
 import pytz
 
-from core.sig_utils import get_icon_for_sig, get_ref_regex_for_sig
 from data.sig_ref import SIGRef
 from data.spot import Spot
 from spotproviders.http_spot_provider import HTTPSpotProvider
@@ -31,6 +29,7 @@ class POTA(HTTPSpotProvider):
                         freq=float(source_spot["frequency"]) * 1000,
                         mode=source_spot["mode"].upper(),
                         comment=source_spot["comments"],
+                        sig="POTA",
                         sig_refs=[SIGRef(id=source_spot["reference"], sig="POTA", name=source_spot["name"])],
                         time=datetime.strptime(source_spot["spotTime"], "%Y-%m-%dT%H:%M:%S").replace(
                             tzinfo=pytz.UTC).timestamp(),
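The provider diffs above and below all make the same change: each Spot now carries a top-level `sig` (special interest group, e.g. "POTA") alongside its existing `sig_refs` list, so clients can filter by programme without inspecting individual references. A simplified stand-in for that data shape — the real Spot and SIGRef classes have many more fields, and the example values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class SIGRef:
    id: str            # reference identifier within the programme
    sig: str           # programme this reference belongs to
    name: str = None   # human-readable name, where the source provides one

@dataclass
class Spot:
    dx_call: str
    sig: str = None                                # top-level programme tag, e.g. "POTA"
    sig_refs: list = field(default_factory=list)   # individual references being activated

# Hypothetical example values, not taken from a live feed
spot = Spot(dx_call="M0TRT", sig="POTA",
            sig_refs=[SIGRef(id="GB-0001", sig="POTA", name="Example Park")])
```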
@@ -3,7 +3,6 @@ from datetime import datetime
 import requests
 
 from core.constants import HTTP_HEADERS
-from core.sig_utils import get_icon_for_sig
 from data.sig_ref import SIGRef
 from data.spot import Spot
 from spotproviders.http_spot_provider import HTTPSpotProvider
@@ -45,6 +44,7 @@ class SOTA(HTTPSpotProvider):
                         freq=(float(source_spot["frequency"]) * 1000000) if (source_spot["frequency"] is not None) else None,  # Seen SOTA spots with no frequency!
                         mode=source_spot["mode"].upper(),
                         comment=source_spot["comments"],
+                        sig="SOTA",
                         sig_refs=[SIGRef(id=source_spot["summitCode"], sig="SOTA", name=source_spot["summitName"])],
                         time=datetime.fromisoformat(source_spot["timeStamp"]).timestamp(),
                         activation_score=source_spot["points"])
@@ -2,8 +2,7 @@ from datetime import datetime
 
 import pytz
 
-from core.constants import SOFTWARE_NAME, SOFTWARE_VERSION
-from core.config import SERVER_OWNER_CALLSIGN, MAX_SPOT_AGE
+from core.config import MAX_SPOT_AGE
 
 
 # Generic spot provider class. Subclasses of this query the individual APIs for data.
@@ -17,10 +16,12 @@ class SpotProvider:
         self.last_spot_time = datetime.min.replace(tzinfo=pytz.UTC)
         self.status = "Not Started" if self.enabled else "Disabled"
         self.spots = None
+        self.web_server = None
 
     # Set up the provider, e.g. giving it the spot list to work from
-    def setup(self, spots):
+    def setup(self, spots, web_server):
         self.spots = spots
+        self.web_server = web_server
 
     # Start the provider. This should return immediately after spawning threads to access the remote resources
     def start(self):
@@ -31,24 +32,32 @@ class SpotProvider:
     # their infer_missing() method called to complete their data set. This is called by the API-querying
     # subclasses on receiving spots.
     def submit_batch(self, spots):
+        # Sort the batch so that earliest ones go in first. This helps keep the ordering correct when spots are fired
+        # off to SSE listeners.
+        spots = sorted(spots, key=lambda spot: (spot.time if spot and spot.time else 0))
         for spot in spots:
             if datetime.fromtimestamp(spot.time, pytz.UTC) > self.last_spot_time:
-                # Fill in any blanks
+                # Fill in any blanks and add to the list
                 spot.infer_missing()
-                # Add to the list
-                self.spots.add(spot.id, spot, expire=MAX_SPOT_AGE)
+                self.add_spot(spot)
         self.last_spot_time = datetime.fromtimestamp(max(map(lambda s: s.time, spots)), pytz.UTC)
 
     # Submit a single spot retrieved from the provider. This will be added to the list regardless of its age. Spots
     # passing the check will also have their infer_missing() method called to complete their data set. This is called by
     # the data streaming subclasses, which can be relied upon not to re-provide old spots.
     def submit(self, spot):
-        # Fill in any blanks
+        # Fill in any blanks and add to the list
        spot.infer_missing()
-        # Add to the list
-        self.spots.add(spot.id, spot, expire=MAX_SPOT_AGE)
+        self.add_spot(spot)
         self.last_spot_time = datetime.fromtimestamp(spot.time, pytz.UTC)
 
+    def add_spot(self, spot):
+        if not spot.expired():
+            self.spots.add(spot.id, spot, expire=MAX_SPOT_AGE)
+            # Ping the web server in case we have any SSE connections that need to see this immediately
+            if self.web_server:
+                self.web_server.notify_new_spot(spot)
+
     # Stop any threads and prepare for application shutdown
     def stop(self):
         raise NotImplementedError("Subclasses must implement this method")
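The submit_batch() change sorts each batch oldest-first before insertion, so SSE listeners receive spots in chronological order and the provider's high-water mark only ever advances. A minimal sketch of that pattern, using plain dicts and a callback in place of Spothole's Spot objects and web server (the names here are illustrative, not the real API):

```python
def submit_batch(spots, last_spot_time, deliver):
    # Sort the batch earliest-first so downstream listeners see spots in
    # chronological order; spots with a missing timestamp sort to the front.
    spots = sorted(spots, key=lambda s: s.get("time") or 0)
    for spot in spots:
        # Only deliver spots newer than the provider's high-water mark,
        # so re-polled old spots are not re-announced
        if (spot.get("time") or 0) > last_spot_time:
            deliver(spot)
    # Advance the high-water mark to the newest time seen in the batch
    return max(s.get("time") or 0 for s in spots)
```

Sorting before the high-water-mark check matters: if the newest spot in an unsorted batch were processed first, later (older-but-still-new) spots would reach listeners out of order.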
@@ -9,6 +9,7 @@ from requests_sse import EventSource
 from core.constants import HTTP_HEADERS
 from spotproviders.spot_provider import SpotProvider
 
 
 # Spot provider using Server-Sent Events.
 class SSESpotProvider(SpotProvider):
 
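The branch name (3-sse-endp) and the notify_new_spot() hook suggest these changes feed a new SSE endpoint. For reference, a Server-Sent Events stream is just `text/event-stream` framing: optional `event:` and `id:` fields, then one `data:` line per payload line, terminated by a blank line. A small formatter illustrating the framing — this is not Spothole's actual implementation:

```python
def format_sse(data, event=None, event_id=None):
    # Build one SSE message per the text/event-stream format
    lines = []
    if event is not None:
        lines.append("event: " + event)
    if event_id is not None:
        lines.append("id: " + str(event_id))
    # Multi-line payloads become multiple "data:" lines; the client
    # reassembles them with newlines
    for chunk in (data.splitlines() or [""]):
        lines.append("data: " + chunk)
    return "\n".join(lines) + "\n\n"
```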
@@ -1,11 +1,8 @@
 import re
-from datetime import datetime, timedelta
+from datetime import datetime
 
 import pytz
-from requests_cache import CachedSession
 
-from core.constants import HTTP_HEADERS
-from core.sig_utils import get_icon_for_sig, get_ref_regex_for_sig
 from data.spot import Spot
 from spotproviders.http_spot_provider import HTTPSpotProvider
 
spotproviders/websocket_spot_provider.py (new file, +75)
@@ -0,0 +1,75 @@
+import logging
+from datetime import datetime
+from threading import Thread
+from time import sleep
+
+import pytz
+from websocket import create_connection
+
+from core.constants import HTTP_HEADERS
+from spotproviders.spot_provider import SpotProvider
+
+
+# Spot provider using websockets.
+class WebsocketSpotProvider(SpotProvider):
+
+    def __init__(self, provider_config, url):
+        super().__init__(provider_config)
+        self.url = url
+        self.ws = None
+        self.thread = None
+        self.stopped = False
+        self.last_event_id = None
+
+    def start(self):
+        logging.info("Set up websocket connection to " + self.name + " spot API.")
+        self.stopped = False
+        self.thread = Thread(target=self.run)
+        self.thread.daemon = True
+        self.thread.start()
+
+    def stop(self):
+        self.stopped = True
+        if self.ws:
+            self.ws.close()
+        if self.thread:
+            self.thread.join()
+
+    def _on_open(self):
+        self.status = "Waiting for Data"
+
+    def _on_error(self):
+        self.status = "Connecting"
+
+    def run(self):
+        while not self.stopped:
+            try:
+                logging.debug("Connecting to " + self.name + " spot API...")
+                self.status = "Connecting"
+                self.ws = create_connection(self.url, header=HTTP_HEADERS)
+                self.status = "Connected"
+                data = self.ws.recv()
+                if data:
+                    try:
+                        new_spot = self.ws_message_to_spot(data)
+                        if new_spot:
+                            self.submit(new_spot)
+
+                        self.status = "OK"
+                        self.last_update_time = datetime.now(pytz.UTC)
+                        logging.debug("Received data from " + self.name + " spot API.")
+
+                    except Exception as e:
+                        logging.exception("Exception processing message from Websocket Spot Provider (" + self.name + ")")
+
+            except Exception as e:
+                self.status = "Error"
+                logging.exception("Exception in Websocket Spot Provider (" + self.name + ")", e)
+            else:
+                self.status = "Disconnected"
+            sleep(5)  # Wait before trying to reconnect
+
+    # Convert a WS message received from the API into a spot. The exact message data (in bytes) is provided here so the
+    # subclass implementations can handle the message as string, JSON, XML, whatever the API actually provides.
+    def ws_message_to_spot(self, bytes):
+        raise NotImplementedError("Subclasses must implement this method")
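The run() loop reconnects after a pause whenever the socket drops or errors. That retry logic can be isolated and tested without a network by injecting a fake connection factory; the function and parameter names below are illustrative, not part of Spothole:

```python
from time import sleep

def read_with_reconnect(connect, handle, max_attempts=5, delay=0):
    # Try to connect and read one message; on failure, wait and retry,
    # mirroring the provider's reconnect-after-sleep behaviour.
    for _ in range(max_attempts):
        try:
            ws = connect()
            data = ws.recv()
            if data:
                handle(data)
            return data
        except Exception:
            sleep(delay)  # wait before trying to reconnect
    return None
```

Making the connection factory a parameter (rather than calling create_connection directly) is what lets the reconnect behaviour be unit-tested with a stub that fails a set number of times.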
@@ -1,9 +1,10 @@
+import logging
+import re
 from datetime import datetime
 
 import pytz
 from rss_parser import RSSParser
 
-from core.sig_utils import get_icon_for_sig
 from data.sig_ref import SIGRef
 from data.spot import Spot
 from spotproviders.http_spot_provider import HTTPSpotProvider
@@ -25,47 +26,52 @@ class WOTA(HTTPSpotProvider):
         # Iterate through source data
         for source_spot in rss.channel.items:
+            try:
                 # Reject GUID missing or zero
                 if not source_spot.guid or not source_spot.guid.content or source_spot.guid.content == "http://www.wota.org.uk/spots/0":
                     continue
 
                 # Pick apart the title
                 title_split = source_spot.title.split(" on ")
                 dx_call = title_split[0]
                 ref = None
                 ref_name = None
                 if len(title_split) > 1:
                     ref_split = title_split[1].split(" - ")
                     ref = ref_split[0]
                     if len(ref_split) > 1:
                         ref_name = ref_split[1]
 
                 # Pick apart the description
                 desc_split = source_spot.description.split(". ")
                 freq_mode = desc_split[0].replace("Frequencies/modes:", "").strip()
-                freq_mode_split = freq_mode.split("-")
+                freq_mode_split = re.split(r'[\-\s]+', freq_mode)
                 freq_hz = float(freq_mode_split[0]) * 1000000
-                mode = freq_mode_split[1]
+                if len(freq_mode_split) > 1:
+                    mode = freq_mode_split[1].upper()
                 comment = None
                 if len(desc_split) > 1:
                     comment = desc_split[1].strip()
                 spotter = None
                 if len(desc_split) > 2:
-                    spotter = desc_split[2].replace("Spotted by ", "").replace(".", "").strip()
+                    spotter = desc_split[2].replace("Spotted by ", "").replace(".", "").upper().strip()
                 time = datetime.strptime(source_spot.pub_date.content, self.RSS_DATE_TIME_FORMAT).astimezone(pytz.UTC)
 
                 # Convert to our spot format
                 spot = Spot(source=self.name,
                             source_id=source_spot.guid.content,
                             dx_call=dx_call,
                             de_call=spotter,
                             freq=freq_hz,
                             mode=mode,
                             comment=comment,
+                            sig="WOTA",
                             sig_refs=[SIGRef(id=ref, sig="WOTA", name=ref_name)] if ref else [],
                             time=time.timestamp())
                 new_spots.append(spot)
+            except Exception as e:
+                logging.error("Exception parsing WOTA spot", e)
         return new_spots
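The WOTA change swaps a plain split("-") for re.split(r'[\-\s]+', ...), so a "Frequencies/modes:" field like "145.500-FM" or "7.25 SSB" parses either way, and a bare frequency with no mode no longer raises IndexError. Extracted as a standalone function for illustration (not the provider's exact code):

```python
import re

def parse_freq_mode(freq_mode):
    # Split on hyphens and/or runs of whitespace, tolerating "145.500-FM",
    # "145.500 FM", and a bare frequency with no mode at all
    parts = re.split(r'[\-\s]+', freq_mode)
    freq_hz = float(parts[0]) * 1000000   # MHz -> Hz
    mode = parts[1].upper() if len(parts) > 1 else None
    return freq_hz, mode
```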
@@ -1,7 +1,6 @@
 import json
 from datetime import datetime
 
-from core.sig_utils import get_icon_for_sig
 from data.sig_ref import SIGRef
 from data.spot import Spot
 from spotproviders.sse_spot_provider import SSESpotProvider
@@ -29,6 +28,7 @@ class WWBOTA(SSESpotProvider):
                     freq=float(source_spot["freq"]) * 1000000,
                     mode=source_spot["mode"].upper(),
                     comment=source_spot["comment"],
+                    sig="WWBOTA",
                     sig_refs=refs,
                     time=datetime.fromisoformat(source_spot["time"]).timestamp(),
                     # WWBOTA spots can contain multiple references for bunkers being activated simultaneously. For
@@ -2,7 +2,6 @@ from datetime import datetime
 
 import pytz
 
-from core.sig_utils import get_icon_for_sig
 from data.sig_ref import SIGRef
 from data.spot import Spot
 from spotproviders.http_spot_provider import HTTPSpotProvider
@@ -28,6 +27,7 @@ class WWFF(HTTPSpotProvider):
                         freq=float(source_spot["frequency_khz"]) * 1000,
                         mode=source_spot["mode"].upper(),
                         comment=source_spot["remarks"],
+                        sig="WWFF",
                         sig_refs=[SIGRef(id=source_spot["reference"], sig="WWFF", name=source_spot["reference_name"])],
                         time=datetime.fromtimestamp(source_spot["spot_time"], tz=pytz.UTC).timestamp(),
                         dx_latitude=source_spot["latitude"],
spotproviders/xota.py (new file, +39)
@@ -0,0 +1,39 @@
+import json
+from datetime import datetime
+
+import pytz
+
+from data.sig_ref import SIGRef
+from data.spot import Spot
+from spotproviders.websocket_spot_provider import WebsocketSpotProvider
+
+
+# Spot provider for servers based on the "xOTA" software at https://github.com/nischu/xOTA/
+# The provider typically doesn't give us a lat/lon or SIG explicitly, so our own config provides this information. This
+# functionality is implemented for TOTA events.
+class XOTA(WebsocketSpotProvider):
+    FIXED_LATITUDE = None
+    FIXED_LONGITUDE = None
+    SIG = None
+
+    def __init__(self, provider_config):
+        super().__init__(provider_config, provider_config["url"])
+        self.FIXED_LATITUDE = provider_config["latitude"] if "latitude" in provider_config else None
+        self.FIXED_LONGITUDE = provider_config["longitude"] if "longitude" in provider_config else None
+        self.SIG = provider_config["sig"] if "sig" in provider_config else None
+
+    def ws_message_to_spot(self, bytes):
+        string = bytes.decode("utf-8")
+        source_spot = json.loads(string)
+        spot = Spot(source=self.name,
+                    source_id=source_spot["id"],
+                    dx_call=source_spot["stationCallSign"].upper(),
+                    freq=float(source_spot["freq"]) * 1000,
+                    mode=source_spot["mode"].upper(),
+                    sig=self.SIG,
+                    sig_refs=[SIGRef(id=source_spot["reference"]["title"], sig=self.SIG, url=source_spot["reference"]["website"])],
+                    time=datetime.now(pytz.UTC).timestamp(),
+                    dx_latitude=self.FIXED_LATITUDE,
+                    dx_longitude=self.FIXED_LONGITUDE,
+                    qrt=source_spot["state"] != "active")
+        return spot
@@ -2,7 +2,6 @@ from datetime import datetime
 
 import pytz
 
-from core.sig_utils import get_icon_for_sig
 from data.sig_ref import SIGRef
 from data.spot import Spot
 from spotproviders.http_spot_provider import HTTPSpotProvider
@@ -34,6 +33,7 @@ class ZLOTA(HTTPSpotProvider):
                         freq=freq_hz,
                         mode=source_spot["mode"].upper().strip(),
                         comment=source_spot["comments"],
+                        sig="ZLOTA",
                         sig_refs=[SIGRef(id=source_spot["reference"], sig="ZLOTA", name=source_spot["name"])],
                         time=datetime.fromisoformat(source_spot["referenced_time"]).astimezone(pytz.UTC).timestamp())
 
69
templates/about.html
Normal file
@@ -0,0 +1,69 @@
|
|||||||
|
{% extends "base.html" %}
|
||||||
|
{% block content %}
|
||||||
|
|
||||||
|
<div id="info-container" class="mt-4">
|
||||||
|
<h2 class="mt-4 mb-4">About Spothole</h2>
|
||||||
|
<p>Spothole is a utility to aggregate "spots" from amateur radio DX clusters and xOTA spotting sites, and provide an open JSON API as well as a website to browse the data.</p>
|
||||||
|
<p>While there are several other web-based interfaces to DX clusters, and sites that aggregate spots from various outdoor activity programmes for amateur radio, Spothole differentiates itself by supporting a larger number of data sources, and by being "API first" rather than just providing a web front-end. This allows other software to be built on top of it.</p>
|
||||||
|
<p>The API is deliberately well-defined with an <a href="/apidocs/openapi.yml">OpenAPI specification</a> and <a href="/apidocs">API documentation</a>. The API delivers spots in a consistent format regardless of the data source, freeing developers from needing to know how each individual data source presents its data.</p>
|
||||||
|
<p>Spothole itself is also open source, Public Domain licenced code that anyone can take and modify. <a href="https://git.ianrenton.com/ian/metaspot/">The source code is here</a>.</p>
|
||||||
|
<p>The software was written by <a href="https://ianrenton.com">Ian Renton, MØTRT</a> and other contributors. Full details are available in the <a href="https://git.ianrenton.com/ian/spothole/src/branch/main/README.md">README file</a>.</p>
|
||||||
|
<p>This server is running Spothole version {{software_version}}.</p>
|
||||||
|
<h2 class="mt-4 mb-4">Using Spothole</h2>
|
||||||
|
<p>There are a number of different ways to use Spothole, depending on what you want to do with it and your level of technical skill:</p>
|
||||||
|
<ol><li>You can <b>use it on the web</b>, like you are (probably) doing right now. This is how most people use it, to look up spots and alerts, and make interesting QSOs.</li>
|
||||||
|
<li>If you are using an Android or iOS device, you can <b>"install" it on your device</b>. Spothole is a Progressive Web App, meaning it's not delivered through app stores, but if you open the page on Chrome (Android) or Safari (iOS) there will be an option in the menu to install it. It will then appear in your main app menu.</li>
|
||||||
|
<li>You can <b>embed the web interface in another website</b> to show its spots in a custom dashboard or the like. The usage is explained in more detail in the <a href="https://git.ianrenton.com/ian/spothole/src/branch/main/README.md">README file</a>.</li>
|
||||||
|
<li>You can <b>write your own client using the Spothole API</b>, using the main Spothole instance to provide data, and do whatever you like with it. The README contains guidance on how to do this, and the full API docs are linked above. You can also find reference implementations in the form of Spothole's own web-based front end, plus my other two tools built on Spothole: <a href="https://fieldspotter.radio">Field Spotter</a> and the <a href="https://qsomap.m0trt.radio">QSO Map Tool</a>.</li>
|
||||||
|
<li>If you want to <b>run your own version of Spothole</b> so you can customise the configuration, such as enabling sources that I disable on the main instance, you can do that too. The README contains not only advice on how to set up Spothole but how to get it auto-starting with systemd, using an nginx reverse proxy, and setting up HTTPS support with certbot.</li>
|
||||||
|
<li>Finally, you can of course download the source code and <b>develop Spothole to meet your needs</b>. Whether you contribute your changes back to the main repository is up to you. As usual, the README file contains some advice on the structure of the repository, and how to get started writing your own spot provider.</li></ol>
|
||||||
|
<h2 id="faq" class="mt-4">FAQ</h2>
|
||||||
|
<h4 class="mt-4">"Spots"? "DX Clusters"? What does any of this mean?</h4>
|
||||||
|
<p>This is a tool for amateur ("ham") radio users. Many amateur radio operators like to make contacts with others who are doing something more interesting than sitting in their home "shack", such as people in rarely-seen countries, remote islands, or on mountaintops. Such operators are often "spotted", i.e. when someone speaks to them, they will put the details such as their operating frequency into an online system, to let others know where to find them. A DX Cluster is one type of those systems. Most outdoor radio awards programmes, such as "Parks on the Air" (POTA) have their own websites for posting spots.</p>
|
||||||
|
<p>Spothole is an "aggregator" for those spots, so it checks lots of different services for data, and brings it all together in one place. So no matter what kinds of interesting spots you are looking for, you can find them here.</p>
|
||||||
|
<p>As well as spots, it also provides a similar feed of "alerts". This is where amateur radio users who are going to interesting places soon will announce their intentions.</p>
|
||||||
|
<h4 class="mt-4">What are "DX", "DE" and modes?</h4>
|
||||||
|
<p>In amateur radio terminology, the "DX" contact is the "interesting" one that is using the frequency shown and looking for callers. They might be on a remote island or just in a local park, but either way it's interesting enough that someone has "spotted" them. The callsign listed under "DE" is the person who entered the spot of the "DX" operator. "Modes" are the type of communication they are using. For example you might see "CW" which is Morse Code, or voice "modes" like SSB or FM, or more exotic "data" modes which are used for computer-to-computer communication.</p>
|
||||||
|
<h4 class="mt-4">What data sources are supported?</h4>
|
||||||
|
<p>Spothole can retrieve spots from: <a href="https://www.dxcluster.info/telnet/">Telnet-based DX clusters</a>, the <a href="https://www.reversebeacon.net/">Reverse Beacon Network (RBN)</a>, the <a href="https://www.aprs-is.net/">APRS Internet Service (APRS-IS)</a>, <a href="https://pota.app">POTA</a>, <a href="https://www.sota.org.uk/">SOTA</a>, <a href="https://wwff.co/">WWFF</a>, <a href="https://www.cqgma.org/">GMA</a>, <a href="https://wwbota.net/">WWBOTA</a>, <a href="http://www.hema.org.uk/">HEMA</a>, <a href="https://www.parksnpeaks.org/">Parks 'n' Peaks</a>, <a href="https://ontheair.nz">ZLOTA</a>, <a href="https://www.wota.org.uk/">WOTA</a>, the <a href="https://ukpacketradio.network/">UK Packet Repeater Network</a>, and any site based on the <a href="https://github.com/nischu/xOTA">xOTA software by nischu</a>.</p>
|
||||||
|
<p>Spothole can retrieve alerts from: <a href="https://www.ng3k.com/">NG3K</a>, <a href="https://pota.app">POTA</a>, <a href="https://www.sota.org.uk/">SOTA</a>, <a href="https://wwff.co/">WWFF</a>, <a href="https://www.parksnpeaks.org/">Parks 'n' Peaks</a>, <a href="https://www.wota.org.uk/">WOTA</a> and <a href="https://www.beachesontheair.com/">BOTA</a>.</p>
|
||||||
|
<p>Note that the server owner has not necessarily enabled all these data sources. In particular it is common to disable RBN, to avoid the server being swamped with FT8 traffic, and to disable APRS-IS and UK Packet Net so that the server only displays stations where there is likely to be an operator physically present for a QSO.</p>
|
||||||
|
<p>Between the various data sources, the following Special Interest Groups (SIGs) are supported: Parks on the Air (POTA), Summits on the Air (SOTA), Worldwide Flora & Fauna (WWFF), Global Mountain Activity (GMA), Worldwide Bunkers on the Air (WWBOTA), HuMPs Excluding Marilyns Award (HEMA), Islands on the Air (IOTA), Mills on the Air (MOTA), the Amateur Radio Lighthouse Socirty (ARLHS), International Lighthouse Lightship Weekend (ILLW), Silos on the Air (SIOTA), World Castles Award (WCA), New Zealand on the Air (ZLOTA), Keith Roget Memorial National Parks Award (KRMNPA), Wainwrights on the Air (WOTA), Beaches on the Air (BOTA), Worked All Britain (WAB), Worked All Ireland (WAI), and Toilets on the Air (TOTA).</p>
|
||||||
|
<p>As of the time of writing in November 2025, I think Spothole captures essentially all outdoor radio programmes that have a defined reference list, and almost certainly all of those that have a spotting/alerting API. If you know of one I've missed, please let me know!</p>

<h4 class="mt-4">Why can I filter spots by both SIG and Source? Isn't that basically the same thing?</h4>

<p>Mostly, but not quite. While POTA spots generally come from the POTA source and so on, there are a few exceptions:</p>

<ol><li>Sources like GMA and Parks 'n' Peaks provide spots for multiple different programmes (SIGs).</li>

<li>Cluster spots may name SIGs in their comment, in which case the source remains the Cluster, but a SIG is assigned.</li>

<li>Some SIGs, such as Worked All Britain (WAB), don't have their own spotting site and can <em>only</em> be identified through comments on spots retrieved from other sources.</li>

<li>SIGs have well-defined names, whereas the server owner may name the sources as they see fit.</li></ol>

<p>Spothole's web interface exists not just for the end user, but also as a reference implementation for the API, so I have chosen to demonstrate both methods of filtering.</p>

<h4 class="mt-4">How is this better than DXheat, DXsummit, POTA's own website, etc?</h4>

<p>It's probably not? But it's nice to have choice.</p>

<p>I think it's got three key advantages over those sites:</p>

<ol><li>It provides a public, <a href="/apidocs">well-documented API</a> with an <a href="/apidocs/openapi.yml">OpenAPI specification</a>. Other sites don't have official APIs or don't bother documenting them publicly, because they want people to use their web page. I like Spothole's web page, but you don't have to use it; if you're a programmer, you can build your own software on Spothole's API. Spothole does the hard work of taking all the various data sources and providing a consistent, well-documented data set. You can then do the fun bit of writing your own application.</li>

<li>It grabs data from a lot more sources. I've seen other sites that pull in DX Cluster and POTA spots together, but nothing on the scale of what Spothole supports.</li>

<li>Spothole is open source, so anyone can contribute code to support a new data source or add new features, and share them with the community.</li></ol>

<h4 class="mt-4">Why does this website ask me if I want to install it?</h4>

<p>Spothole is a Progressive Web App, which means you can install it on an Android or iOS device by opening the site in Chrome or Safari respectively, and clicking "Install" on the pop-up panel. It'll only prompt you once, so if you dismiss the prompt and change your mind, you'll find an Install / Add to Home Screen option on your browser's menu.</p>

<p>Installing Spothole on your phone is completely optional; the website works exactly the same way as the "app" does.</p>

<h4 class="mt-4">Why hasn't my spot/alert shown up yet?</h4>

<p>To avoid putting too much load on the various servers that Spothole connects to, the Spothole server only polls them once every two minutes for spots, and once every 30 minutes for alerts. (Some sources, such as DX clusters, RBN, APRS-IS and WWBOTA, use a non-polling mechanism, and their updates will therefore arrive more quickly.) If you are using the web interface, it also has its own rate at which it fetches data from Spothole. This is instant for the main spots list, with new spots appearing immediately at the top of the list, while the map and bands displays update once a minute, and the alerts display updates once every 5 minutes. So you could be waiting around three minutes to see a newly added spot, or 40 minutes to see a newly added alert.</p>

<h4 class="mt-4">What licence does Spothole use?</h4>

<p>Spothole's source code is released into the Public Domain. You can write a Spothole client, run your own server, modify it however you like, you can claim you wrote it and charge people £1000 for a copy, I don't really mind. (Please don't do the last one. But if you're using my code for something cool, it would be nice to hear from you!)</p>

<h2 class="mt-4">Data Accuracy</h2>

<p>Please note that the data coming out of Spothole is only as good as the data going in. People mis-hear and make typos when spotting callsigns all the time. There are also plenty of cases where Spothole's data, particularly location data, may be inaccurate. For example, there are POTA parks that span multiple US states, countries that span multiple CQ zones, portable operators with no requirement to sign /P, etc. If you are doing something where accuracy is important, such as contesting, you should not rely on Spothole's data to fill in any gaps in your log.</p>

<h2 id="privacy" class="mt-4">Privacy</h2>

<p>Spothole collects no data about you, and there is no way to enter personally identifying information into the site other than by spotting and alerting through Spothole or the various services it connects to. All spots and alerts are "timed out" and deleted from the system after a set interval, which by default is one hour for spots and one week for alerts.</p>

<p>Settings you select from Spothole's menus are sent to the server, in order to provide the data with the requested filters. They are also stored in your browser's local storage, so that your preferences are remembered between sessions.</p>

<p>There are no trackers, no ads, and no cookies.</p>

<p>Spothole is open source, so you can audit <a href="https://git.ianrenton.com/ian/spothole">the code</a> if you like.</p>

<h2 class="mt-4">Thanks</h2>

<p>This project would not have been possible without those volunteers who have taken it upon themselves to run DX clusters, xOTA programmes, DXpedition lists, callsign lookup databases, and other online tools on which Spothole's data is based.</p>

<p>Spothole is also dependent on a number of Python libraries, in particular pyhamtools, and many JavaScript libraries, as well as the Font Awesome icon set and flag icons from the Noto Color Emoji set.</p>

<p>This software is dedicated to the memory of Tom G1PJB, SK, a friend and colleague who sadly passed away around the time I started writing it in Autumn 2025. I was looking forward to showing it to you when it was done.</p>

</div>

<script src="/js/common.js?v=2"></script>
<script>$(document).ready(function() { $("#nav-link-about").addClass("active"); }); <!-- highlight active page in nav --></script>

{% end %}
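<p>As a rough illustration of the first point, the sketch below shows the kind of client you could build on the API. The endpoint path <code>/api/spots</code>, the <code>limit</code> parameter, and the field names are assumptions for illustration only; check the <a href="/apidocs/openapi.yml">OpenAPI specification</a> for the real contract.</p>

```python
import json
import urllib.request

# Hypothetical Spothole server and endpoint -- consult /apidocs/openapi.yml
# on a real server for the actual paths and parameters.
SERVER = "https://spothole.example.com"

def fetch_spots(limit=10):
    """Fetch up to `limit` recent spots as a list of dicts (assumed schema)."""
    with urllib.request.urlopen(f"{SERVER}/api/spots?limit={limit}") as resp:
        return json.load(resp)

def summarise(spot):
    """One-line summary of a spot; the field names here are illustrative."""
    return f"{spot['dx_call']} on {spot['freq']} kHz ({spot['mode']})"

# Demonstrate with a canned record, so the snippet runs without a live server:
sample = {"dx_call": "M0TRT", "freq": 14285.0, "mode": "SSB", "sig": "POTA"}
print(summarise(sample))
```

In a real client you would call <code>fetch_spots()</code> against a running server and feed each record through whatever display or filtering logic your application needs.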
@@ -1,4 +1,5 @@
-% rebase('webpage_base.tpl')
+{% extends "base.html" %}
+{% block content %}
 
 <div id="add-spot-intro-box" class="permanently-dismissible-box mt-3">
 <div class="alert alert-primary alert-dismissible fade show" role="alert">
@@ -68,6 +69,8 @@
 
 </div>
 
-<script src="/js/common.js"></script>
+<script src="/js/common.js?v=2"></script>
-<script src="/js/add-spot.js"></script>
+<script src="/js/add-spot.js?v=2"></script>
 <script>$(document).ready(function() { $("#nav-link-add-spot").addClass("active"); }); <!-- highlight active page in nav --></script>
+
+{% end %}
@@ -1,7 +1,8 @@
-% rebase('webpage_base.tpl')
+{% extends "base.html" %}
+{% block content %}
 
 <div class="mt-3">
-<div class="row">
+<div id="settingsButtonRow" class="row">
 <div class="col-auto me-auto pt-3">
 <p id="timing-container">Loading...</p>
 </div>
@@ -101,17 +102,25 @@
 <h5 class="card-title">Number of Alerts</h5>
 <p class="card-text spothole-card-text">Show up to
 <select id="alerts-to-fetch" class="storeable-select form-select ms-2" oninput="filtersUpdated();" style="width: 5em;display: inline-block;">
-<option value="25">25</option>
-<option value="50">50</option>
-<option value="100" selected>100</option>
-<option value="200">200</option>
-<option value="500">500</option>
 </select>
 alerts
 </p>
 </div>
 </div>
 </div>
+<div class="col">
+<div class="card">
+<div class="card-body">
+<h5 class="card-title">Theme</h5>
+<div class="form-group">
+<div class="form-check form-check-inline">
+<input class="form-check-input storeable-checkbox" type="checkbox" id="darkMode" value="darkMode" oninput="toggleDarkMode();">
+<label class="form-check-label" for="darkMode">Dark mode</label>
+</div>
+</div>
+</div>
+</div>
+</div>
 <div class="col">
 <div class="card">
 <div class="card-body">
@@ -153,10 +162,14 @@
 </div>
 </div>
 
-<div id="table-container"></div>
+<div id="table-container">
+<table id="table" class="table"><thead><tr class="table-primary"></tr></thead><tbody></tbody></table>
+</div>
+
 </div>
 
-<script src="/js/common.js"></script>
+<script src="/js/common.js?v=2"></script>
-<script src="/js/alerts.js"></script>
+<script src="/js/alerts.js?v=2"></script>
 <script>$(document).ready(function() { $("#nav-link-alerts").addClass("active"); }); <!-- highlight active page in nav --></script>
 
+{% end %}
@@ -1,5 +1,8 @@
-% rebase('webpage_base.tpl')
+{% extends "base.html" %}
+{% block content %}
 
 <redoc spec-url="/apidocs/openapi.yml"></redoc>
 <script src="https://cdn.redoc.ly/redoc/latest/bundles/redoc.standalone.js"> </script>
 <script>$(document).ready(function() { $("#nav-link-api").addClass("active"); }); <!-- highlight active page in nav --></script>
+
+{% end %}
@@ -1,7 +1,8 @@
-% rebase('webpage_base.tpl')
+{% extends "base.html" %}
+{% block content %}
 
 <div class="mt-3">
-<div class="row">
+<div id="settingsButtonRow" class="row">
 <div class="col-auto me-auto pt-3">
 <p id="timing-container">Loading...</p>
 </div>
@@ -26,7 +27,7 @@
 
 </div>
 <div class="card-body">
-<div class="row row-cols-1 g-4 mb-4">
+<div class="row row-cols-1 g-4 mb-4 row-cols-md-3">
 <div class="col">
 <div class="card">
 <div class="card-body">
@@ -35,8 +36,24 @@
 </div>
 </div>
 </div>
+<div class="col">
+<div class="card">
+<div class="card-body">
+<h5 class="card-title">SIGs</h5>
+<p id="sig-options" class="card-text spothole-card-text"></p>
+</div>
+</div>
+</div>
+<div class="col">
+<div class="card">
+<div class="card-body">
+<h5 class="card-title">Sources</h5>
+<p id="source-options" class="card-text spothole-card-text"></p>
+</div>
+</div>
+</div>
 </div>
-<div class="row row-cols-1 row-cols-md-4 g-4">
+<div class="row row-cols-1 row-cols-md-3 g-4">
 <div class="col">
 <div class="card">
 <div class="card-body">
@@ -61,14 +78,6 @@
 </div>
 </div>
 </div>
-<div class="col">
-<div class="card">
-<div class="card-body">
-<h5 class="card-title">Sources</h5>
-<p id="source-options" class="card-text spothole-card-text"></p>
-</div>
-</div>
-</div>
 </div>
 </div>
 </div>
@@ -93,16 +102,25 @@
 <h5 class="card-title">Spot Age</h5>
 <p class="card-text spothole-card-text">Last
 <select id="max-spot-age" class="storeable-select form-select ms-2 me-2 d-inline-block" oninput="filtersUpdated();" style="width: 5em; display: inline-block;">
-<option value="300">5</option>
-<option value="600">10</option>
-<option value="1800" selected>30</option>
-<option value="3600">60</option>
 </select>
 minutes
 </p>
 </div>
 </div>
 </div>
+<div class="col">
+<div class="card">
+<div class="card-body">
+<h5 class="card-title">Theme</h5>
+<div class="form-group">
+<div class="form-check form-check-inline">
+<input class="form-check-input storeable-checkbox" type="checkbox" id="darkMode" value="darkMode" oninput="toggleDarkMode();">
+<label class="form-check-label" for="darkMode">Dark mode</label>
+</div>
+</div>
+</div>
+</div>
+</div>
 </div>
 </div>
 </div>
@@ -111,7 +129,9 @@
 
 </div>
 
-<script src="/js/common.js"></script>
+<script src="/js/common.js?v=2"></script>
-<script src="/js/spotandmap.js"></script>
+<script src="/js/spotsbandsandmap.js?v=2"></script>
-<script src="/js/bands.js"></script>
+<script src="/js/bands.js?v=2"></script>
 <script>$(document).ready(function() { $("#nav-link-bands").addClass("active"); }); <!-- highlight active page in nav --></script>
+
+{% end %}
@@ -3,7 +3,7 @@
 <head>
 <meta charset="utf-8"/>
 <meta name="viewport" content="width=device-width, initial-scale=1.0, viewport-fit=cover">
-<meta name="color-scheme" content="light">
+<meta name="color-scheme" content="light dark">
 <meta name="theme-color" content="white"/>
 <meta name="mobile-web-app-capable" content="yes">
 <meta name="apple-mobile-web-app-capable" content="yes">
@@ -35,7 +35,7 @@
 <link rel="alternate icon" type="image/png" href="/img/icon-192.png">
 <link rel="alternate icon" type="image/png" href="/img/icon-32.png">
 <link rel="alternate icon" type="image/png" href="/img/icon-16.png">
-<link rel="alternate icon" type="image/x-icon" href="/img/favicon.ico">
+<link rel="alternate icon" type="image/x-icon" href="/favicon.ico">
 
 <link rel="manifest" href="manifest.webmanifest">
 
@@ -48,23 +48,23 @@
 </head>
 <body>
 <div class="container">
-<nav class="navbar navbar-expand-lg bg-body p-0 border-bottom">
+<nav id="header" class="navbar navbar-expand-lg bg-body p-0 border-bottom">
 <div class="container-fluid p-0">
 <a class="navbar-brand" href="/">
-<img src="/img/logo.png" width="192" height="60" alt="Spothole">
+<img src="/img/logo.png" class="logo" width="192" height="60" alt="Spothole">
 </a>
-<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarTogglerDemo02" aria-controls="navbarTogglerDemo02" aria-expanded="false" aria-label="Toggle navigation">
+<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbar-toggler-content" aria-controls="navbar-toggler-content" aria-expanded="false" aria-label="Toggle navigation">
 <span class="navbar-toggler-icon"></span>
 </button>
-<div class="collapse navbar-collapse" id="navbarTogglerDemo02">
+<div class="collapse navbar-collapse" id="navbar-toggler-content">
 <ul class="navbar-nav me-auto mb-2 mb-lg-0">
 <li class="nav-item ms-4"><a href="/" class="nav-link" id="nav-link-spots"><i class="fa-solid fa-tower-cell"></i> Spots</a></li>
 <li class="nav-item ms-4"><a href="/map" class="nav-link" id="nav-link-map"><i class="fa-solid fa-map"></i> Map</a></li>
 <li class="nav-item ms-4"><a href="/bands" class="nav-link" id="nav-link-bands"><i class="fa-solid fa-ruler-vertical"></i> Bands</a></li>
 <li class="nav-item ms-4"><a href="/alerts" class="nav-link" id="nav-link-alerts"><i class="fa-solid fa-bell"></i> Alerts</a></li>
-% if allow_spotting:
+{% if allow_spotting %}
 <li class="nav-item ms-4"><a href="/add-spot" class="nav-link" id="nav-link-add-spot"><i class="fa-solid fa-comment"></i> Add Spot</a></li>
-% end
+{% end %}
 <li class="nav-item ms-4"><a href="/status" class="nav-link" id="nav-link-status"><i class="fa-solid fa-chart-simple"></i> Status</a></li>
 <li class="nav-item ms-4"><a href="/about" class="nav-link" id="nav-link-about"><i class="fa-solid fa-circle-info"></i> About</a></li>
 <li class="nav-item ms-4"><a href="/apidocs" class="nav-link" id="nav-link-api"><i class="fa-solid fa-gear"></i> API</a></li>
@@ -75,11 +75,11 @@
 
 <main>
 
-{{!base}}
+{% block content %}{% end %}
 
 </main>
 
-<div class="hideonmobile hideonmap">
+<div id="footer" class="hideonmobile hideonmap">
 <footer class="d-flex flex-wrap justify-content-between align-items-center py-3 my-4 border-top">
 <p class="col-md-4 mb-0 text-body-secondary">Made with love by <a href="https://ianrenton.com" class="text-body-secondary">Ian, MØTRT</a> and other contributors.</p>
 <p class="col-md-4 mb-0 justify-content-center text-body-secondary" style="text-align: center;">Spothole v{{software_version}}</p>
@@ -101,5 +101,7 @@
 </div>
 </div>
 
+<div id="embeddedModeFooter" class="text-body-secondary pt-2 px-3 pb-1">Powered by <img src="/img/logo.png" class="logo" width="96" height="30" alt="Spothole"></div>
+
 </body>
 </html>
@@ -1,7 +1,8 @@
-% rebase('webpage_base.tpl')
+{% extends "base.html" %}
+{% block content %}
 
 <div id="map">
-<div id="maptools" class="mt-3 px-3" style="z-index: 1002; position: relative;">
+<div id="settingsButtonRowMap" class="mt-3 px-3" style="z-index: 1002; position: relative;">
 <div class="row">
 <div class="col-auto me-auto pt-3"></div>
 <div class="col-auto">
@@ -25,7 +26,7 @@
 
 </div>
 <div class="card-body">
-<div class="row row-cols-1 g-4 mb-4">
+<div class="row row-cols-1 g-4 mb-4 row-cols-md-3">
 <div class="col">
 <div class="card">
 <div class="card-body">
@@ -34,8 +35,24 @@
 </div>
 </div>
 </div>
+<div class="col">
+<div class="card">
+<div class="card-body">
+<h5 class="card-title">SIGs</h5>
+<p id="sig-options" class="card-text spothole-card-text"></p>
+</div>
+</div>
+</div>
+<div class="col">
+<div class="card">
+<div class="card-body">
+<h5 class="card-title">Sources</h5>
+<p id="source-options" class="card-text spothole-card-text"></p>
+</div>
+</div>
+</div>
 </div>
-<div class="row row-cols-1 row-cols-md-4 g-4">
+<div class="row row-cols-1 row-cols-md-3 g-4">
 <div class="col">
 <div class="card">
 <div class="card-body">
@@ -60,14 +77,6 @@
 </div>
 </div>
 </div>
-<div class="col">
-<div class="card">
-<div class="card-body">
-<h5 class="card-title">Sources</h5>
-<p id="source-options" class="card-text spothole-card-text"></p>
-</div>
-</div>
-</div>
 </div>
 </div>
 </div>
@@ -92,10 +101,6 @@
 <h5 class="card-title">Spot Age</h5>
 <p class="card-text spothole-card-text">Last
 <select id="max-spot-age" class="storeable-select form-select ms-2 me-2 d-inline-block" oninput="filtersUpdated();" style="width: 5em; display: inline-block;">
-<option value="300">5</option>
-<option value="600">10</option>
-<option value="1800" selected>30</option>
-<option value="3600">60</option>
 </select>
 minutes
 </p>
@@ -115,6 +120,19 @@
 </div>
 </div>
 </div>
+<div class="col">
+<div class="card">
+<div class="card-body">
+<h5 class="card-title">Theme</h5>
+<div class="form-group">
+<div class="form-check form-check-inline">
+<input class="form-check-input storeable-checkbox" type="checkbox" id="darkMode" value="darkMode" oninput="toggleDarkMode();">
+<label class="form-check-label" for="darkMode">Dark mode</label>
+</div>
+</div>
+</div>
+</div>
+</div>
 </div>
 </div>
 </div>
@@ -129,7 +147,9 @@
 <script src="https://cdn.jsdelivr.net/npm/leaflet.geodesic"></script>
 <script src="https://cdn.jsdelivr.net/npm/@joergdietrich/leaflet.terminator@1.1.0/L.Terminator.min.js"></script>
 
-<script src="/js/common.js"></script>
+<script src="/js/common.js?v=2"></script>
-<script src="/js/spotandmap.js"></script>
+<script src="/js/spotsbandsandmap.js?v=2"></script>
-<script src="/js/map.js"></script>
+<script src="/js/map.js?v=2"></script>
 <script>$(document).ready(function() { $("#nav-link-map").addClass("active"); }); <!-- highlight active page in nav --></script>
+
+{% end %}
@@ -1,4 +1,5 @@
|
|||||||
% rebase('webpage_base.tpl')
|
{% extends "base.html" %}
|
||||||
|
{% block content %}
|
||||||
|
|
||||||
<div id="intro-box" class="permanently-dismissible-box mt-3">
|
<div id="intro-box" class="permanently-dismissible-box mt-3">
|
||||||
<div class="alert alert-primary alert-dismissible fade show" role="alert">
|
<div class="alert alert-primary alert-dismissible fade show" role="alert">
|
||||||
@@ -8,15 +9,22 @@
|
|||||||
</div>
|
</div>
|
||||||
|
|
||||||
<div class="mt-3">
|
<div class="mt-3">
|
||||||
<div class="row">
|
<div id="settingsButtonRow" class="row">
|
||||||
<div class="col-auto me-auto pt-3">
|
<div class="col-lg-6 me-auto pt-3 hideonmobile">
|
||||||
<p id="timing-container">Loading...</p>
|
<p id="timing-container">Loading...</p>
|
||||||
</div>
|
</div>
|
||||||
<div class="col-auto">
|
<div class="col-lg-6 text-end">
|
||||||
<p class="d-inline-flex gap-1">
|
<p class="d-inline-flex gap-1">
|
||||||
<span style="position: relative;">
|
<span class="btn-group" role="group">
|
||||||
<i class="fa-solid fa-magnifying-glass" style="position: absolute; left: 0px; top: 2px; padding: 10px; pointer-events: none;"></i>
|
<input type="radio" class="btn-check" name="runPause" id="runButton" autocomplete="off" checked>
|
||||||
<input id="filter-dx-call" type="text" class="form-control" oninput="filtersUpdated();" placeholder="Search for call">
|
<label class="btn btn-outline-primary" for="runButton"><i class="fa-solid fa-play"></i> Run</label>
|
||||||
|
|
||||||
|
<input type="radio" class="btn-check" name="runPause" id="pauseButton" autocomplete="off">
|
||||||
|
<label class="btn btn-outline-primary" for="pauseButton"><i class="fa-solid fa-pause"></i> Pause</label>
|
||||||
|
</span>
|
||||||
|
<span class="hideonmobile" style="position: relative;">
|
||||||
|
<i id="searchicon" class="fa-solid fa-magnifying-glass"></i>
|
||||||
|
<input id="search" type="search" class="form-control" oninput="filtersUpdated();" placeholder="Search">
|
||||||
</span>
|
</span>
|
||||||
<button id="filters-button" type="button" class="btn btn-outline-primary" data-bs-toggle="button" onclick="toggleFiltersPanel();"><i class="fa-solid fa-filter"></i> Filters</button>
|
<button id="filters-button" type="button" class="btn btn-outline-primary" data-bs-toggle="button" onclick="toggleFiltersPanel();"><i class="fa-solid fa-filter"></i> Filters</button>
|
||||||
<button id="display-button" type="button" class="btn btn-outline-primary" data-bs-toggle="button" onclick="toggleDisplayPanel();"><i class="fa-solid fa-desktop"></i> Display</button>
|
<button id="display-button" type="button" class="btn btn-outline-primary" data-bs-toggle="button" onclick="toggleDisplayPanel();"><i class="fa-solid fa-desktop"></i> Display</button>
|
||||||
@@ -37,7 +45,7 @@
|
|||||||
|
|
||||||
</div>
|
</div>
|
||||||
<div class="card-body">
|
<div class="card-body">
|
||||||
<div class="row row-cols-1 g-4 mb-4">
|
<div class="row row-cols-1 g-4 mb-4 row-cols-md-3">
|
||||||
<div class="col">
|
<div class="col">
|
||||||
<div class="card">
|
<div class="card">
|
||||||
<div class="card-body">
|
<div class="card-body">
|
||||||
@@ -46,8 +54,24 @@
|
|||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
<div class="col">
|
||||||
|
<div class="card">
|
||||||
|
<div class="card-body">
|
||||||
|
<h5 class="card-title">SIGs</h5>
|
||||||
|
<p id="sig-options" class="card-text spothole-card-text"></p>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<div class="col">
|
||||||
|
<div class="card">
|
||||||
|
<div class="card-body">
|
||||||
|
<h5 class="card-title">Sources</h5>
|
||||||
|
<p id="source-options" class="card-text spothole-card-text"></p>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
</div>
|
</div>
|
||||||
<div class="row row-cols-1 row-cols-md-4 g-4">
|
<div class="row row-cols-1 row-cols-md-3 g-4">
|
||||||
<div class="col">
|
<div class="col">
|
||||||
<div class="card">
|
<div class="card">
|
||||||
<div class="card-body">
|
<div class="card-body">
|
||||||
@@ -72,14 +96,6 @@
|
|||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
<div class="col">
|
|
||||||
<div class="card">
|
|
||||||
<div class="card-body">
|
|
||||||
<h5 class="card-title">Sources</h5>
|
|
||||||
<p id="source-options" class="card-text spothole-card-text"></p>
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
@@ -117,16 +133,36 @@
|
|||||||
<h5 class="card-title">Number of Spots</h5>
|
<h5 class="card-title">Number of Spots</h5>
|
||||||
<p class="card-text spothole-card-text">Show up to
|
<p class="card-text spothole-card-text">Show up to
|
||||||
<select id="spots-to-fetch" class="storeable-select form-select ms-2 me-2 d-inline-block" oninput="filtersUpdated();" style="width: 5em; display: inline-block;">
|
<select id="spots-to-fetch" class="storeable-select form-select ms-2 me-2 d-inline-block" oninput="filtersUpdated();" style="width: 5em; display: inline-block;">
|
||||||
<option value="10">10</option>
|
|
||||||
<option value="25">25</option>
|
|
||||||
<option value="50" selected>50</option>
|
|
||||||
<option value="100">100</option>
|
|
||||||
</select>
|
</select>
|
||||||
spots
|
spots
|
||||||
</p>
|
</p>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
<div class="col">
|
||||||
|
<div class="card">
|
||||||
|
<div class="card-body">
|
||||||
|
<h5 class="card-title">Location</h5>
|
||||||
|
<div class="form-group spothole-card-text">
|
||||||
|
<label for="userGrid">Your grid:</label>
|
||||||
|
<input type="text" class="storeable-text form-control" id="userGrid" placeholder="AA00aa" oninput="userGridUpdated();" style="width: 10em; display: inline-block;">
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<div class="col">
|
||||||
|
<div class="card">
|
||||||
|
<div class="card-body">
|
||||||
|
<h5 class="card-title">Theme</h5>
|
||||||
|
<div class="form-group">
|
||||||
|
<div class="form-check form-check-inline">
|
||||||
|
<input class="form-check-input storeable-checkbox" type="checkbox" id="darkMode" value="darkMode" oninput="toggleDarkMode();">
|
||||||
|
<label class="form-check-label" for="darkMode">Dark mode</label>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
<div class="col">
|
<div class="col">
|
||||||
<div class="card">
|
<div class="card">
|
||||||
<div class="card-body">
|
<div class="card-body">
|
||||||
@@ -172,26 +208,19 @@
 </div>
 </div>
 </div>
-<div class="col">
-<div class="card">
-<div class="card-body">
-<h5 class="card-title">Location</h5>
-<div class="form-group spothole-card-text">
-<label for="userGrid">Your grid:</label>
-<input type="text" class="storeable-text form-control" id="userGrid" placeholder="AA00aa" oninput="userGridUpdated();" style="width: 10em; display: inline-block;">
-</div>
-</div>
-</div>
-</div>
 </div>
 </div>
 </div>
 
-<div id="table-container"></div>
+<div id="table-container">
+<table id="table" class="table"><thead><tr class="table-primary"></tr></thead><tbody></tbody></table>
+</div>
+
 </div>
 
-<script src="/js/common.js"></script>
-<script src="/js/spotandmap.js"></script>
-<script src="/js/spots.js"></script>
+<script src="/js/common.js?v=2"></script>
+<script src="/js/spotsbandsandmap.js?v=2"></script>
+<script src="/js/spots.js?v=2"></script>
 <script>$(document).ready(function() { $("#nav-link-spots").addClass("active"); }); <!-- highlight active page in nav --></script>
 
+{% end %}
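The `?v=2` suffixes added to the script URLs in this commit are a cache-busting pattern: changing the query string gives each asset a new URL, so browsers fetch the updated file instead of serving a stale cached copy. A minimal sketch of the idea (the helper name is illustrative, not part of Spothole, which simply hard-codes the version in its templates):

```javascript
// Cache-busting sketch: append a version to an asset path so that bumping
// the version invalidates previously cached copies of the file.
function versionedAssetUrl(path, version) {
  return `${path}?v=${encodeURIComponent(String(version))}`;
}

// Example: versionedAssetUrl("/js/spots.js", 2) produces "/js/spots.js?v=2",
// matching the URLs introduced in the diff above.
```

The trade-off is that the version must be bumped by hand on every change; build systems often use a content hash instead, which updates automatically.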
@@ -1,7 +1,10 @@
-% rebase('webpage_base.tpl')
+{% extends "base.html" %}
+{% block content %}
 
 <div id="status-container" class="row row-cols-1 row-cols-md-4 g-4 mt-4"></div>
 
-<script src="/js/common.js"></script>
-<script src="/js/status.js"></script>
+<script src="/js/common.js?v=2"></script>
+<script src="/js/status.js?v=2"></script>
 <script>$(document).ready(function() { $("#nav-link-status").addClass("active"); }); <!-- highlight active page in nav --></script>
+
+{% end %}
@@ -1,47 +0,0 @@
-% rebase('webpage_base.tpl')
-
-<div id="info-container" class="mt-4">
-<h2 class="mt-4 mb-4">About Spothole</h2>
-<p>Spothole is a utility to aggregate "spots" from amateur radio DX clusters and xOTA spotting sites, and provide an open JSON API as well as a website to browse the data.</p>
-<p>While there are several other web-based interfaces to DX clusters, and sites that aggregate spots from various outdoor activity programmes for amateur radio, Spothole differentiates itself by supporting a large number of data sources, and by being "API first" rather than just providing a web front-end. This allows other software to be built on top of it.</p>
-<p>The API is deliberately well-defined with an <a href="/apidocs/openapi.yml">OpenAPI specification</a> and <a href="/apidocs">API documentation</a>. The API delivers spots in a consistent format regardless of the data source, freeing developers from needing to know how each individual data source presents its data.</p>
-<p>Spothole itself is also open source, Public Domain licenced code that anyone can take and modify. <a href="https://git.ianrenton.com/ian/metaspot/">The source code is here</a>. If you want to run your own copy of Spothole, or start modifying it for your own purposes, the <a href="https://git.ianrenton.com/ian/spothole/src/branch/main/README.md">README file</a> contains a description of how the software works and how it's laid out, as well as instructions for configuring systemd, nginx and anything else you might need to run your own server.</p>
-<p>The software was written by <a href="https://ianrenton.com">Ian Renton, MØTRT</a> and other contributors. Full details are available in the README.</p>
-<p>This server is running Spothole version {{software_version}}.</p>
-<h2 id="faq" class="mt-4">FAQ</h2>
-<h4 class="mt-4">"Spots"? "DX Clusters"? What does any of this mean?</h4>
-<p>This is a tool for amateur ("ham") radio users. Many amateur radio operators like to make contacts with others who are doing something more interesting than sitting in their home "shack", such as people in rarely-seen countries, remote islands, or on mountaintops. Such operators are often "spotted", i.e. when someone speaks to them, they will put the details such as their operating frequency into an online system, to let others know where to find them. A DX Cluster is one type of those systems. Most outdoor radio awards programmes, such as "Parks on the Air" (POTA) have their own websites for posting spots.</p>
-<p>Spothole is an "aggregator" for those spots, so it checks lots of different services for data, and brings it all together in one place. So no matter what kinds of interesting spots you are looking for, you can find them here.</p>
-<p>As well as spots, it also provides a similar feed of "alerts". This is where amateur radio users who are going to interesting places soon will announce their intentions.</p>
-<h4 class="mt-4">What are "DX", "DE" and modes?</h4>
-<p>In amateur radio terminology, the "DX" contact is the "interesting" one that is using the frequency shown. They might be on a remote island or just in a local park, but either way it's interesting enough that someone has "spotted" them. The callsign listed under "DE" is the person who spotted the "DX" operator. "Modes" are the type of communication they are using. You might see "CW" which is Morse Code, or voice "modes" like SSB or FM, or more exotic "data" modes which are used for computer-to-computer communication.</p>
-<h4 class="mt-4">What data sources are supported?</h4>
-<p>Spothole can retrieve spots from: Telnet-based DX clusters, the Reverse Beacon Network (RBN), the APRS Internet Service (APRS-IS), POTA, SOTA, WWFF, GMA, WWBOTA, HEMA, Parks 'n' Peaks, ZLOTA, WOTA, and the UK Packet Repeater Network.</p>
-<p>Spothole can retrieve alerts from: NG3K, POTA, SOTA, WWFF, Parks 'n' Peaks, WOTA and BOTA.</p>
-<p>Note that the server owner has not necessarily enabled all these data sources. In particular it is common to disable RBN, to avoid the server being swamped with FT8 traffic, and to disable APRS-IS and UK Packet Net so that the server only displays stations where there is likely to be an operator physically present for a QSO.</p>
-<p>Between the various data sources, the following Special Interest Groups (SIGs) are supported: POTA, SOTA, WWFF, GMA, WWBOTA, HEMA, IOTA, MOTS, ARLHS, ILLW, SIOTA, WCA, ZLOTA, KRMNPA, WOTA, BOTA, WAB & WAI.</p>
-<h4 class="mt-4">How is this better than DXheat, DXsummit, POTA's own website, etc?</h4>
-<p>It's probably not? But it's nice to have choice.</p>
-<p>I think it's got two key advantages over those sites:</p>
-<ol><li>It provides a public, <a href="/apidocs">well-documented API</a> with an <a href="/apidocs/openapi.yml">OpenAPI specification</a>. Other sites don't have official APIs or don't bother documenting them publicly, because they want people to use their web page. I like Spothole's web page, but you don't have to use it—if you're a programmer, you can build your own software on Spothole's API. Spothole does the hard work of taking all the various data sources and providing a consistent, well-documented data set. You can then do the fun bit of writing your own application.</li>
-<li>It grabs data from a lot more sources, and it's easy to add more. Since it's open source, anyone can contribute a new data source and share it with the community.</li></ol>
-<h4 class="mt-4">Why does this website ask me if I want to install it?</h4>
-<p>Spothole is a Progressive Web App, which means you can install it on an Android or iOS device by opening the site in Chrome or Safari respectively, and clicking "Install" on the pop-up panel. It'll only prompt you once, so if you dismiss the prompt and change your mind, you'll find an Install / Add to Home Screen option on your browser's menu.</p>
-<p>Installing Spothole on your phone is completely optional, the website works exactly the same way as the "app" does.</p>
-<h4 class="mt-4">Why hasn't my spot/alert shown up yet?</h4>
-<p>To avoid putting too much load on the various servers that Spothole connects to, the Spothole server only polls them once every two minutes for spots, and once every hour for alerts. (Some sources, such as DX clusters, RBN, APRS-IS and WWBOTA use a non-polling mechanism, and their updates will therefore arrive more quickly.) Then if you are using the web interface, that has its own rate at which it reloads the data from Spothole, which is once a minute for spots or 30 minutes for alerts. So you could be waiting around three minutes to see a newly added spot, or 90 minutes to see a newly added alert.</p>
-<h4 class="mt-4">What licence does Spothole use?</h4>
-<p>Spothole's source code is licenced under the Public Domain. You can write a Spothole client, run your own server, modify it however you like, you can claim you wrote it and charge people £1000 for a copy, I don't really mind. (Please don't do the last one. But if you're using my code for something cool, it would be nice to hear from you!)</p>
-<h2 class="mt-4">Data Accuracy</h2>
-<p>Please note that the data coming out of Spothole is only as good as the data going in. People mis-hear and make typos when spotting callsigns all the time. There are also plenty of cases where Spothole's data, particularly location data, may be inaccurate. For example, there are POTA parks that span multiple US states, countries that span multiple CQ zones, portable operators with no requirement to sign /P, etc. If you are doing something where accuracy is important, such as contesting, you should not rely on Spothole's data to fill in any gaps in your log.</p>
-<h2 id="privacy" class="mt-4">Privacy</h2>
-<p>Spothole collects no data about you, and there is no way to enter personally identifying information into the site apart from by spotting and alerting through Spothole or the various services it connects to. All spots and alerts are "timed out" and deleted from the system after a set interval, which by default is one hour for spots and one week for alerts.</p>
-<p>Settings you select from Spothole's menus are sent to the server, in order to provide the data with the requested filters. They are also stored in your browser's local storage, so that your preferences are remembered between sessions.</p>
-<p>There are no trackers, no ads, and no cookies.</p>
-<p>Spothole is open source, so you can audit <a href="https://git.ianrenton.com/ian/spothole">the code</a> if you like.</p>
-<h2 class="mt-4">Thanks</h2>
-<p>This project would not have been possible without those volunteers who have taken it upon themselves to run DX clusters, xOTA programmes, DXpedition lists, callsign lookup databases, and other online tools on which Spothole's data is based.</p>
-<p>Spothole is also dependent on a number of Python libraries, in particular pyhamtools, and many JavaScript libraries, as well as the Font Awesome icon set.</p>
-</div>
-
-<script>$(document).ready(function() { $("#nav-link-about").addClass("active"); }); <!-- highlight active page in nav --></script>
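The FAQ text deleted above explains worst-case freshness as the sum of the server's polling interval and the web UI's refresh interval. That arithmetic can be sketched as follows (the function name is illustrative, not part of Spothole's code):

```javascript
// Worst-case end-to-end delay for a two-stage polling chain, per the FAQ's
// reasoning: a spot posted just after a server poll waits up to a full
// server interval, then up to a full client refresh interval before display.
function worstCaseLatencyMinutes(serverPollMinutes, clientRefreshMinutes) {
  return serverPollMinutes + clientRefreshMinutes;
}

// Spots: polled every 2 minutes, UI refresh every 1 minute, so about 3 minutes.
// Alerts: polled every 60 minutes, UI refresh every 30 minutes, so about 90 minutes.
```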
@@ -4,6 +4,38 @@
 font-weight: bold;
 }
+
+/* In embedded mode, hide header/footer/settings. "#header div" is kind of janky but for some reason if we hide the
+whole of #header, the map vertical sizing breaks. */
+[embedded-mode=true] #header div, [embedded-mode=true] #footer,
+[embedded-mode=true] #settingsButtonRow, [embedded-mode=true] #settingsButtonRowMap {
+display: none;
+}
+
+/* Display floating footer in embedded mode only */
+#embeddedModeFooter {
+display: none;
+position: fixed;
+bottom: 0;
+right: 0;
+background: var(--bs-body-bg);
+border-radius: 1em 0 0 0;
+font-size: 0.9em;
+border-top: 1px solid grey;
+border-left: 1px solid grey;
+}
+[embedded-mode=true] #embeddedModeFooter {
+display: block;
+}
+#embeddedModeFooter img.logo {
+position: relative;
+top: -2px;
+}
+
+/* Invert logo colours in dark mode */
+[data-bs-theme=dark] .logo {
+filter: invert(100%) hue-rotate(180deg) brightness(80%);
+}
 
 
 /* INTRO/WARNING BOXES */
 
@@ -26,8 +58,13 @@ div.container {
 min-height:100svh;
 }
 
+[embedded-mode=true] div.container {
+width: 100% !important;
+max-width: 100% !important;
+}
 
-/* ABOUT PAGE*/
+
+/* ABOUT PAGE */
 
 #info-container{
 width: 100%;
@@ -43,17 +80,22 @@ div.container {
 
 /* SPOTS/ALERTS PAGES, SETTINGS/STATUS AREAS */
 
-input#filter-dx-call {
-max-width: 10em;
+input#search {
+max-width: 12em;
+margin-left: 1rem;
 margin-right: 1rem;
 padding-left: 2em;
 }
 
-div.appearing-panel {
-display: none;
+i#searchicon {
+position: absolute;
+left: 1rem;
+top: 2px;
+padding: 10px;
+pointer-events: none;
 }
 
-button#add-spot-button {
+div.appearing-panel {
 display: none;
 }
 
@@ -71,11 +113,16 @@ td.nowrap, span.nowrap {
 
 span.flag-wrapper {
 display: inline-block;
-width: 1.7em;
+width: 1.8em;
 text-align: center;
 cursor: default;
 }
 
+img.flag {
+position: relative;
+top: -2px;
+}
+
 span.band-bullet {
 display: inline-block;
 cursor: default;
@@ -132,6 +179,31 @@ tr.table-faded td span {
 text-decoration: line-through !important;
 }
+
+/* New spot styles */
+tr.new td {
+animation: 2s linear newspotanim;
+}
+@keyframes newspotanim {
+0% {
+background-color: var(--bs-success-border-subtle);
+}
+100% {
+background-color: initial;
+}
+}
+
+/* Fudge apply our own "dark primary" and "dark danger" backgrounds as Bootstrap doesn't do this itself */
+[data-bs-theme=dark] tr.table-primary {
+--bs-table-bg: #053680;
+--bs-table-border-color: #021b42;
+--bs-table-color: white;
+}
+[data-bs-theme=dark] tr.table-danger {
+--bs-table-bg: #74272e;
+--bs-table-border-color: #530208;
+--bs-table-color: white;
+}
 
 
 /* MAP */
 div#map {
@@ -147,6 +219,12 @@ div#map {
 font-family: var(--bs-body-font-family) !important;
 }
+
+[data-bs-theme=dark] .leaflet-layer,
+[data-bs-theme=dark] .leaflet-control-attribution {
+filter: invert(100%) hue-rotate(180deg) brightness(95%) contrast(90%);
+}
+
 
 
 /* BANDS PANEL */
 
@@ -222,6 +300,10 @@ div.band-spot {
 cursor: default;
 }
+
+[data-bs-theme=dark] div.band-spot {
+background-color: black;
+}
 
 div.band-spot:hover {
 z-index: 999;
 }
@@ -256,18 +338,13 @@ div.band-spot:hover span.band-spot-info {
 margin-right: -1em;
 }
 /* Avoid map page filters panel being larger than the map itself */
-#maptools .appearing-panel {
+#settingsButtonRowMap .appearing-panel {
 max-height: 30em;
 }
-#maptools .appearing-panel .card-body {
+#settingsButtonRowMap .appearing-panel .card-body {
 max-height: 26em;
 overflow: scroll;
 }
-/* Filter/search DX Call field should be smaller on mobile */
-input#filter-dx-call {
-max-width: 6em;
-margin-right: 0;
-}
 }
 
 @media (min-width: 992px) {
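The new CSS above keys embedded mode off an `embedded-mode=true` attribute on an ancestor element. One plausible way such an attribute could be driven from the page URL is sketched below; the `embedded` query-parameter name and the helper are assumptions for illustration, not Spothole's actual code:

```javascript
// Hedged sketch: set the attribute that the new [embedded-mode=true] CSS
// selectors match, based on a query-string parameter. The "embedded"
// parameter name is an assumption, not taken from the Spothole source.
function applyEmbeddedMode(queryString, rootElement) {
  const params = new URLSearchParams(queryString);
  if (params.get("embedded") === "true") {
    rootElement.setAttribute("embedded-mode", "true");
  }
  return rootElement.getAttribute("embedded-mode") === "true";
}
```

In a browser this would typically be called as `applyEmbeddedMode(window.location.search, document.documentElement)` during page load, so the CSS can hide the header, footer, and settings rows without any further script involvement.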
Modified binary image: Before: 129 KiB, After: 129 KiB.

New binary files added:
BIN webassets/img/flags/1.png (3.9 KiB)
BIN webassets/img/flags/10.png (7.0 KiB)
BIN webassets/img/flags/100.png (6.5 KiB)
BIN webassets/img/flags/101.png (348 B)
BIN webassets/img/flags/102.png (348 B)
BIN webassets/img/flags/103.png (7.2 KiB)
BIN webassets/img/flags/104.png (4.8 KiB)
BIN webassets/img/flags/105.png (14 KiB)
BIN webassets/img/flags/106.png (5.9 KiB)
BIN webassets/img/flags/107.png (4.1 KiB)
BIN webassets/img/flags/108.png (8.7 KiB)
BIN webassets/img/flags/109.png (4.9 KiB)
BIN webassets/img/flags/11.png (6.4 KiB)
BIN webassets/img/flags/110.png (14 KiB)
BIN webassets/img/flags/111.png (9.1 KiB)
BIN webassets/img/flags/112.png (4.8 KiB)
BIN webassets/img/flags/113.png (348 B)
BIN webassets/img/flags/114.png (7.3 KiB)
BIN webassets/img/flags/115.png (348 B)
BIN webassets/img/flags/116.png (4.9 KiB)
BIN webassets/img/flags/117.png (9.5 KiB)
BIN webassets/img/flags/118.png (6.5 KiB)
BIN webassets/img/flags/119.png (348 B)
BIN webassets/img/flags/12.png (9.8 KiB)
BIN webassets/img/flags/120.png (8.2 KiB)
BIN webassets/img/flags/122.png (8.9 KiB)
BIN webassets/img/flags/123.png (14 KiB)
BIN webassets/img/flags/124.png (7.0 KiB)
BIN webassets/img/flags/125.png (4.8 KiB)
BIN webassets/img/flags/126.png (4.4 KiB)
BIN webassets/img/flags/127.png (348 B)
BIN webassets/img/flags/128.png (348 B)
BIN webassets/img/flags/129.png (7.6 KiB)
BIN webassets/img/flags/13.png (6.4 KiB)
BIN webassets/img/flags/130.png (9.8 KiB)
BIN webassets/img/flags/131.png (7.0 KiB)
BIN webassets/img/flags/132.png (6.1 KiB)
BIN webassets/img/flags/133.png (9.3 KiB)
BIN webassets/img/flags/134.png (348 B)
BIN webassets/img/flags/135.png (8.4 KiB)
BIN webassets/img/flags/136.png (3.3 KiB)