Compare commits

...

16 Commits

Author SHA1 Message Date
nearxos
10c200f994 Enhance network boot provisioning with support for extra LAN IPs and VLAN configuration
Update documentation and scripts to include configuration for extra LAN IPs on eth1 and VLAN interface eth1.40, allowing the LXC to serve multiple subnets and provide NAT for internet access. Modify nftables NAT configuration to accommodate these changes and ensure proper DHCP and DNS setup on eth1. This improves the overall network boot functionality and user experience for the CM4 eMMC provisioning service.
2026-03-04 19:28:53 +02:00
nearxos
031e1c3415 Enhance provisioning documentation and scripts for improved network boot and DNS management
Add new documentation files for device DNS management via DHCP and dnsmasq configuration. Update cloud-init scripts to ensure proper handling of /etc/resolv.conf and DNS settings, allowing for seamless integration with file.server. Modify existing scripts to support dynamic LAN subnet configuration and improve overall network boot functionality. These changes enhance user experience and streamline the setup process for the CM4 eMMC provisioning service.
2026-03-04 19:15:38 +02:00
nearxos
b5134098c0 Exclude large files: expand .gitignore, drop backup data from tracking 2026-03-03 09:03:38 +02:00
nearxos
c5e418eabc Update provisioning documentation and scripts for improved Proxmox deployment
Add a new step-by-step guide for deploying the CM4 eMMC provisioning service on a new Proxmox instance, enhancing clarity for users. Update existing documentation to reflect changes in network configuration options, including the introduction of LAN subnet settings for DHCP and TFTP. Modify cloud-init scripts to ensure proper management of DNS settings and improve the handling of network interfaces. Additionally, enhance the toggle script for network boot to dynamically read the LAN gateway from configuration files, streamlining the setup process and improving user experience.
2026-03-03 08:24:18 +02:00
nearxos
fe72619931 Update GNSS bootstrap image to the latest version, ensuring compatibility and improved performance. This change replaces the previous image file with an updated binary, enhancing the overall provisioning process. 2026-02-24 08:52:51 +02:00
nearxos
16bfc1e0e1 Enhance cloud-init scripts and dashboard for improved USB boot functionality
Update the bootstrap script to ensure hostname resolution by adding entries to /etc/hosts, preventing "sudo: unable to resolve host" errors. Modify user-data.bootstrap to include the same hostname resolution logic. Revise dashboard templates to reflect the new project name "GNSS Guard Provisioning" and improve user interface elements related to USB boot operations, including clearer instructions and status messages. These changes enhance the overall user experience and streamline the provisioning process.
2026-02-24 08:50:32 +02:00
nearxos
59f8ebe61d Remove obsolete bootstrap script and update example script for clarity
Delete the existing bootstrap.sh script used for cloud-init first boot, as it is no longer needed. Update the bootstrap.sh.example script to provide clearer instructions for users on how to customize and deploy their own bootstrap script, ensuring better guidance for cloud-init integration. These changes streamline the provisioning process and enhance user experience.
2026-02-24 00:26:55 +02:00
nearxos
808fbf5c7c Refactor golden image handling in backup upload process
Update the _set_golden_from_path function to improve the handling of existing golden image files. Replace the existing unlink logic with a more robust method that safely removes files or broken symlinks using the missing_ok parameter. This change enhances the reliability of the backup upload process by ensuring that stale references are properly cleared before setting a new golden image path.
2026-02-24 00:19:40 +02:00
nearxos
df180120aa Update TODO and README files to reflect enhancements in kiosk functionality and provisioning scripts
Revise the TODO list to mark completed tasks related to taskbar icon changes, dark theme fixes, and script optimizations for kiosk mode. Update the README files to clarify the structure of the cloud-init fileserver, including new touch-friendly Chromium flags and the addition of a no-select extension for kiosk use. Remove the obsolete touchscreen quirks file to streamline the project. These changes improve documentation clarity and reflect the latest enhancements in the provisioning process.
2026-02-23 22:49:58 +02:00
nearxos
c91cf6dd05 Update first-boot configuration and scripts for enhanced kiosk functionality
Modify the first-boot configuration to include the gir1.2-gtklayershell-0.1 package for improved GTK layer shell support. Update the first-boot script to enhance the portal status reporting with connection timeouts. Additionally, implement a restart mechanism for the kanshi service in rotation scripts to ensure immediate application of configuration changes. Introduce a Chromium kiosk extension to disable text selection, improving user experience in kiosk mode. These changes streamline the setup process and enhance the overall functionality of the kiosk environment.
2026-02-23 18:07:14 +02:00
nearxos
25bf710c67 Remove deprecated one-shot scripts and update first-boot configuration for improved provisioning
Delete obsolete one-shot scripts for setting screen rotation and wallpaper, as well as related Python and shell scripts. Update the first-boot configuration to streamline the provisioning process by removing references to these scripts. This cleanup enhances maintainability and focuses on the essential steps required for the first boot experience, ensuring a more efficient setup for users.
2026-02-23 16:15:47 +02:00
nearxos
2d6e5aa009 Enhance GTK theme configuration and taskbar setup in cloud-init scripts
Update the cloud-init scripts to improve GTK theme settings by enforcing dark mode through gsettings and preserving the icon theme for a cohesive user experience. Additionally, enhance the first-boot script to install a Chromium kiosk launcher icon on the desktop and in the application menu, along with a five-tap close functionality for Chromium. These changes streamline the user interface and ensure a consistent dark theme across applications and the taskbar.
2026-02-23 15:07:31 +02:00
nearxos
f42700848a Enhance first-boot script to support dynamic dark theme selection and taskbar configuration
Update the first-boot.sh script to dynamically select a dark theme based on the availability of PiXnoir or Adwaita-dark. Implement functionality to deploy a dark-themed taskbar configuration for wf-panel-pi, ensuring a cohesive user interface. Additionally, improve logging for theme settings and taskbar installations, enhancing the overall user experience during the first boot process.
2026-02-23 11:16:02 +02:00
nearxos
ca27727137 Refactor dashboard to remove network boot support and update related UI elements
Eliminate network boot options from the dashboard, including API endpoints and UI elements, to streamline the provisioning process for USB boot only. Update messages and documentation to reflect the removal of network boot functionality, ensuring clarity for users. Adjust the cloud-init build process and related templates to focus solely on USB boot mode, enhancing the overall user experience and simplifying the workflow.
2026-02-23 11:08:52 +02:00
nearxos
55b8661a2e Update documentation and scripts for revision tracking and cloud-init enhancements
Introduce a revision tracking system across project files, allowing for easier identification of changes. Update the README files to include instructions for bumping revisions and auto-bumping on commits. Additionally, enhance cloud-init scripts with revision comments for better version control. Modify the dashboard API to improve build status management, including a forced clear option for stuck statuses, enhancing user experience and operational reliability.
2026-02-23 10:38:24 +02:00
nearxos
5f05663706 Implement graceful cancellation for cloud-init image compression
Add a cleanup function to handle cancellation of the xz compression process in the build-cloudinit-image.sh script. This enhancement allows for a more robust response to cancellation requests, ensuring that resources are properly released and status messages are updated accordingly. The script now traps termination signals and cleans up temporary files, improving the overall reliability of the cloud-init image building workflow.
2026-02-23 10:32:07 +02:00
216 changed files with 24601 additions and 1576 deletions

.gitignore (vendored, new file, 12 lines)

@@ -0,0 +1,12 @@
# Large binary / image files (do not commit)
*.img.xz
*.img.xz.bak
*.img
!emmc-provisioning/network-boot-initramfs/*.img
# Backup/data from devices (large DBs and logs)
backup-from-device/**/data/*.db
backup-from-device/**/logs/
**/*.db
*.sqlite
*.sqlite3


@@ -1,7 +1,18 @@
<!-- Revision: 2 -->
# reTerminal DM4
Project for **reTerminal DM4** (Seeed) with CM4: Chromium kiosk, eMMC provisioning (USB + network boot), and first-boot configuration via cloud-init.
## Revisions
A single **revision number** is kept in `REVISION` and in a comment line in tracked files (`# Revision: N` or `<!-- Revision: N -->`) so you can see what changed across hosts and deploys.
- **Bump revision (update all files):** from repo root run
`./emmc-provisioning/scripts/bump-revision.sh`
- **Auto-bump on every commit:** install the pre-commit hook
`cp emmc-provisioning/scripts/pre-commit-revision.sh .git/hooks/pre-commit && chmod +x .git/hooks/pre-commit`
Then every commit will bump the revision and update the revision line in all tracked files.
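The bump script itself is not shown in this diff; as a rough sketch of the logic described above (hypothetical Python helper — the real `bump-revision.sh` is a shell script and may differ), it reads `REVISION`, increments it, and rewrites the `# Revision: N` / `<!-- Revision: N -->` comment lines in tracked files:

```python
import re
from pathlib import Path

# Matches both comment styles described above: "# Revision: N" and "<!-- Revision: N -->"
REV_RE = re.compile(r"(#\s*Revision:\s*|<!--\s*Revision:\s*)(\d+)")

def bump_revision(repo_root: str) -> int:
    """Increment the REVISION file and rewrite revision comment lines in all files."""
    root = Path(repo_root)
    rev_file = root / "REVISION"
    new_rev = int(rev_file.read_text().strip()) + 1
    rev_file.write_text(f"{new_rev}\n")
    for path in root.rglob("*"):
        if not path.is_file() or path.name == "REVISION":
            continue
        try:
            text = path.read_text()
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        updated = REV_RE.sub(lambda m: f"{m.group(1)}{new_rev}", text)
        if updated != text:
            path.write_text(updated)
    return new_rev
```

Installed as a pre-commit hook (as the README suggests), this would run before each commit, so the revision line stays in sync with the commit history.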
## Repository structure
| Path | Purpose |

REVISION (new file, 1 line)

@@ -0,0 +1 @@
2

TODO.MD (15 lines changed)

@@ -1,5 +1,12 @@
- change icon on taskbar.
- fix dark theme.
- check for duplicate commands in all scripts and cloud init during deployment.
- [x] change icon on taskbar (PiXtrix icon theme + icon cache rebuild).
- [x] fix dark theme (Adwaita-dark, gtk-3.0/settings.ini, gsettings at login).
- [x] check for duplicate commands in all scripts and cloud init during deployment.
- [x] fix rotation race (kanshi config pre-created in step 11, restart in login/oneshot scripts).
- [x] fix five-tap overlay for Wayland (layer-shell, gir1.2-gtklayershell-0.1).
- [x] add VNC (wayvnc) to provisioning (step 06).
- [x] add touch-friendly Chromium flags (start-chromium.sh).
- [x] add no-select extension to prevent text selection in kiosk (chromium-kiosk-no-select/).
- [x] fix curl timeout in report_status (first-boot.sh).
- [ ] test text selection fix on different websites.
- [ ] verify five-tap overlay works on device after full provision.


@@ -0,0 +1,19 @@
[Unit]
Description=TM GNSS Guard - GPS Spoofing and Jamming Monitor
After=network.target
[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/tm-gnss-guard
ExecStart=/home/pi/tm-gnss-guard/.venv/bin/python /home/pi/tm-gnss-guard/main.py
Restart=always
RestartSec=10
StandardOutput=append:/home/pi/tm-gnss-guard/gnss_guard.log
StandardError=append:/home/pi/tm-gnss-guard/gnss_guard.log
# Environment
Environment=PYTHONUNBUFFERED=1
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,46 @@
#!/bin/bash
# Disable keyring prompts
export GNOME_KEYRING_CONTROL=""
export DISPLAY=:0
# Force X11 instead of Wayland for better fullscreen support
export GDK_BACKEND=x11
unset WAYLAND_DISPLAY
# Wait for display and desktop environment to be ready
# Check if DISPLAY is accessible (wait up to 30 seconds)
for i in {1..60}; do
if xset q >/dev/null 2>&1 || [ -n "$DISPLAY" ]; then
# Wait for desktop environment to be fully loaded
if pgrep -x pcmanfm >/dev/null 2>&1 || pgrep -x lxsession >/dev/null 2>&1 || pgrep -x xfdesktop >/dev/null 2>&1; then
break
fi
fi
sleep 0.5
done
# Additional delay to ensure window manager is fully ready
sleep 5
# Start Chromium with flags to avoid keyring and ensure proper fullscreen
# Force X11 platform and add fullscreen-related flags
# Fullscreen mode (current active)
/usr/bin/chromium --start-fullscreen --noerrdialogs --disable-infobars --disable-session-crashed-bubble --disable-restore-session-state --no-first-run --password-store=basic --use-mock-keychain --ozone-platform=x11 --disable-features=UseChromeOSDirectVideoDecoder --app=http://127.0.0.1:8080 &
# Wait for Chromium window to appear and then force fullscreen
sleep 3
# Try to find Chromium window and force it to fullscreen
for i in {1..10}; do
WINDOW_ID=$(wmctrl -l 2>/dev/null | grep -i chromium | head -1 | awk '{print $1}')
if [ -n "$WINDOW_ID" ]; then
wmctrl -i -r "$WINDOW_ID" -b add,fullscreen 2>/dev/null
break
fi
sleep 0.5
done
# Keep script running
wait
# Kiosk mode (commented out - uncomment to use instead of fullscreen)
# /usr/bin/chromium --kiosk --noerrdialogs --disable-infobars --disable-session-crashed-bubble --disable-restore-session-state --no-first-run --password-store=basic --use-mock-keychain --ozone-platform=x11 --app=http://127.0.0.1:8080


@@ -0,0 +1,116 @@
#!/usr/bin/env python3
"""
Buzzer Test Script for reTerminal DM4
Tests various buzzer patterns and functions
"""
import subprocess
import time
import sys

BUZZER_PATH = '/sys/class/leds/usr-buzzer/brightness'

def buzzer_on():
    """Turn buzzer ON"""
    subprocess.run(['sudo', 'tee', BUZZER_PATH],
                   input='1', text=True,
                   stdout=subprocess.DEVNULL,
                   stderr=subprocess.DEVNULL)

def buzzer_off():
    """Turn buzzer OFF"""
    subprocess.run(['sudo', 'tee', BUZZER_PATH],
                   input='0', text=True,
                   stdout=subprocess.DEVNULL,
                   stderr=subprocess.DEVNULL)

def beep(duration=0.2):
    """Play a single beep"""
    buzzer_on()
    time.sleep(duration)
    buzzer_off()

def blink(count=3, on_time=0.1, off_time=0.1):
    """Blink buzzer multiple times"""
    for _ in range(count):
        buzzer_on()
        time.sleep(on_time)
        buzzer_off()
        time.sleep(off_time)

def get_status():
    """Get current buzzer status"""
    try:
        result = subprocess.run(['cat', BUZZER_PATH],
                                capture_output=True, text=True, check=True)
        return 'ON' if result.stdout.strip() in ['1', '255'] else 'OFF'
    except Exception:
        return 'UNKNOWN'

def main():
    print("=" * 50)
    print(" reTerminal DM4 Buzzer Test Script (Python)")
    print("=" * 50)
    print()
    # Test 1: Single beep
    print("Test 1: Single beep (0.2s)")
    beep(0.2)
    time.sleep(0.5)
    # Test 2: Double beep
    print("Test 2: Double beep")
    blink(2, 0.1, 0.1)
    time.sleep(0.5)
    # Test 3: Triple beep
    print("Test 3: Triple beep")
    blink(3, 0.1, 0.1)
    time.sleep(0.5)
    # Test 4: Long beep
    print("Test 4: Long beep (0.5s)")
    beep(0.5)
    time.sleep(0.5)
    # Test 5: Rapid beeps
    print("Test 5: Rapid beeps (5x)")
    blink(5, 0.05, 0.05)
    time.sleep(0.5)
    # Test 6: Slow beeps
    print("Test 6: Slow beeps (3x)")
    blink(3, 0.3, 0.3)
    time.sleep(0.5)
    # Test 7: Success pattern
    print("Test 7: Success pattern (2 short)")
    blink(2, 0.1, 0.1)
    time.sleep(0.5)
    # Test 8: Error pattern
    print("Test 8: Error pattern (3 fast)")
    blink(3, 0.05, 0.05)
    time.sleep(0.5)
    # Ensure buzzer is off
    buzzer_off()
    print()
    print("=" * 50)
    print(" Buzzer test complete!")
    print("=" * 50)
    print()
    print(f"Current buzzer status: {get_status()}")

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print("\n\nTest interrupted by user")
        buzzer_off()
        sys.exit(0)
    except Exception as e:
        print(f"\n\nError: {e}")
        buzzer_off()
        sys.exit(1)


@@ -0,0 +1,82 @@
#!/bin/bash
# Buzzer Test Script for reTerminal DM4
# Tests various buzzer patterns and functions
BUZZER_PATH='/sys/class/leds/usr-buzzer/brightness'
echo "=========================================="
echo " reTerminal DM4 Buzzer Test Script"
echo "=========================================="
echo ""
# Function to play a beep
beep() {
local duration=${1:-0.2}
echo 1 | sudo tee $BUZZER_PATH > /dev/null 2>&1
sleep $duration
echo 0 | sudo tee $BUZZER_PATH > /dev/null 2>&1
}
# Function to blink buzzer
blink() {
local count=${1:-3}
local on_time=${2:-0.1}
local off_time=${3:-0.1}
for i in $(seq 1 $count); do
echo 1 | sudo tee $BUZZER_PATH > /dev/null 2>&1
sleep $on_time
echo 0 | sudo tee $BUZZER_PATH > /dev/null 2>&1
sleep $off_time
done
}
# Test 1: Single beep
echo "Test 1: Single beep (0.2s)"
beep 0.2
sleep 0.5
# Test 2: Double beep
echo "Test 2: Double beep"
blink 2 0.1 0.1
sleep 0.5
# Test 3: Triple beep
echo "Test 3: Triple beep"
blink 3 0.1 0.1
sleep 0.5
# Test 4: Long beep
echo "Test 4: Long beep (0.5s)"
beep 0.5
sleep 0.5
# Test 5: Rapid beeps
echo "Test 5: Rapid beeps (5x)"
blink 5 0.05 0.05
sleep 0.5
# Test 6: Slow beeps
echo "Test 6: Slow beeps (3x)"
blink 3 0.3 0.3
sleep 0.5
# Test 7: Success pattern (2 short)
echo "Test 7: Success pattern"
blink 2 0.1 0.1
sleep 0.5
# Test 8: Error pattern (3 fast)
echo "Test 8: Error pattern"
blink 3 0.05 0.05
sleep 0.5
# Ensure buzzer is off
echo 0 | sudo tee $BUZZER_PATH > /dev/null 2>&1
echo ""
echo "=========================================="
echo " Buzzer test complete!"
echo "=========================================="
echo ""
echo "Current buzzer status: $(cat $BUZZER_PATH) (0=OFF, 1=ON)"


@@ -0,0 +1,10 @@
---
alwaysApply: true
---
## Jira & Confluence
- When creating a Jira ticket, don't go deep into technical implementation or reference code files; leave developers some agility. Even if the change was already implemented, write the task as requirements that need to be done rather than as work already completed (past tense).
- Sometimes a basic request for Atlassian MCP resources fails with code 401; in this case, retry a few times before giving up, to allow tokens to refresh.
## Documentation Files
Do not create a documentation file unless the user explicitly requested one. Only update existing documentation files where necessary, i.e. when a major update was introduced or the file's context is insufficient without an amendment.


@@ -0,0 +1,121 @@
# ============================================================================
# GNSS Guard Configuration
# ============================================================================
# =============================================================================
# ASSET NAME
# =============================================================================
ASSET_NAME=OFFICE_LAB
# =============================================================================
# DEPLOYMENT TARGET (used by deploy_client.sh)
# =============================================================================
DEPLOY_USER=pi
# DEPLOY_HOST=10.130.60.253
DEPLOY_HOST=10.15.80.161
DEPLOY_PORT=22
DEPLOY_PASSWORD=sh1pb0x1
DEPLOY_INJECTED_POSITIONS=.configs/injected_positions_office_lab.json
# ============================================================================
# Timing configuration
# ============================================================================
ITERATION_PERIOD_SECONDS=30
STALE_THRESHOLD_SECONDS=60
VALIDATION_THRESHOLD_METERS=200
STARTUP_WARMUP_SECONDS=5
# ============================================================================
# TM AIS GPS Configuration
# ============================================================================
TM_AIS_ENABLED=true
TM_AIS_URL=https://localhost:8443/location
TM_AIS_TOKEN=xuNg8eewohcieru1Noto
TM_AIS_MAX_RETRIES=1
# ============================================================================
# Starlink Terminal Configuration
# ============================================================================
STARLINK_ENABLED=true
STARLINK_IP=10.130.60.70
STARLINK_PORT=9200
STARLINK_MAX_RETRIES=1
# ============================================================================
# NMEA Primary Vessel GPS Configuration
# ============================================================================
NMEA_PRIMARY_ENABLED=true
NMEA_PRIMARY_IP=10.130.60.61
NMEA_PRIMARY_PORT=4001
# ============================================================================
# NMEA Secondary Vessel GPS Configuration
# ============================================================================
NMEA_SECONDARY_ENABLED=true
NMEA_SECONDARY_IP=10.130.60.61
NMEA_SECONDARY_PORT=4002
# ============================================================================
# Storage Configuration
# ============================================================================
DATABASE_PATH=data/gnss_guard.db
LOGS_BASE_PATH=logs
# ============================================================================
# Web Server Configuration
# ============================================================================
# Enable/disable web dashboard
WEB_ENABLED=true
# Web server host (0.0.0.0 = all interfaces, 127.0.0.1 = localhost only)
WEB_HOST=0.0.0.0
# Web server port
WEB_PORT=8080
# Show 24h route on map (requires local historical data)
WEB_SHOW_ROUTE=true
# Demo mode: for demo units with pre-loaded historical data
# When enabled:
# - Data collection continues normally (real or injected)
# - Dashboard shows live status (current validation is stored)
# - Recent "live" records auto-deleted to preserve historical data
# - NO server sync (validation not sent to cloud)
# - Route shows last 24h of historical data (excludes live session)
DEMO_UNIT=true
# Access the dashboard at:
# - http://localhost:8080
# - http://<server-ip>:8080
# - http://guard.lan:8080 (if guard.lan is configured in DNS/hosts)
# ============================================================================
# Data Retention Configuration
# ============================================================================
POSITIONS_RAW_RETENTION_DAYS=5
POSITIONS_VALIDATION_RETENTION_DAYS=5
LOG_RETENTION_DAYS=14
# ============================================================================
# Server Sync
# ============================================================================
SERVER_ENABLED=true
SERVER_URL=https://gnss.tototheo.com
SERVER_TOKEN=a25dee6101b944495a98f2a2c529b926ea01f36807ccb06b18240c7134ea467e
SERVER_SYNC_BATCH_SIZE=100
SERVER_SYNC_MAX_QUEUE=1000
# ssh -p 22 pi@10.130.60.253
# ssh -p 22 -L 8080:localhost:8080 pi@10.130.60.253
# Download gnss_guard.db
# scp pi@10.130.60.253:~/tm-gnss-guard/data/gnss_guard.db ./data/gnss_guard.db
# Upload gnss_guard.db
# scp ./data/gnss_guard.db pi@10.130.60.253:~/tm-gnss-guard/data/gnss_guard.db


@@ -0,0 +1,6 @@
"""
GNSS Guard - Multi-source GPS coordinate validation system
"""
__version__ = "1.0.0"


@@ -0,0 +1,151 @@
#!/usr/bin/env python3
"""
Configuration management for GNSS Guard
Loads configuration from .env or .env.prod files
"""
import os
from pathlib import Path
from typing import Dict, Any
from dotenv import load_dotenv

class Config:
    """Configuration manager for GNSS Guard"""

    @staticmethod
    def _get_int_env(key: str, default: int) -> int:
        """Get integer environment variable, handling empty strings"""
        value = os.getenv(key, "")
        if not value or value.strip() == "":
            return default
        try:
            return int(value)
        except ValueError:
            return default

    def __init__(self):
        # Determine environment file to load
        # Priority: 1) ENV=prod -> .env.prod, 2) .env.prod exists -> .env.prod, 3) .env
        base_path = Path(__file__).parent
        if os.getenv("ENV") == "prod":
            env_file = ".env.prod"
        elif (base_path / ".env.prod").exists():
            env_file = ".env.prod"
        else:
            env_file = ".env"
        # Load environment variables
        env_path = base_path / env_file
        if env_path.exists():
            load_dotenv(env_path)
        else:
            # Try loading from current directory as fallback
            load_dotenv()
        # Asset configuration
        self.asset_name = os.getenv("ASSET_NAME", "unknown")
        # Timing configuration
        self.iteration_period_seconds = self._get_int_env("ITERATION_PERIOD_SECONDS", 10)
        self.stale_threshold_seconds = self._get_int_env("STALE_THRESHOLD_SECONDS", 60)
        self.validation_threshold_meters = float(os.getenv("VALIDATION_THRESHOLD_METERS", "200"))
        self.startup_warmup_seconds = self._get_int_env("STARTUP_WARMUP_SECONDS", 5)
        # Data retention configuration
        self.positions_raw_retention_days = self._get_int_env("POSITIONS_RAW_RETENTION_DAYS", 14)
        self.positions_validation_retention_days = self._get_int_env("POSITIONS_VALIDATION_RETENTION_DAYS", 31)
        self.log_retention_days = self._get_int_env("LOG_RETENTION_DAYS", 14)
        # TM AIS GPS configuration
        self.tm_ais_url = os.getenv("TM_AIS_URL", "https://localhost:8443/location")
        # Trim whitespace from token (common issue with .env files)
        self.tm_ais_token = os.getenv("TM_AIS_TOKEN", "").strip()
        self.tm_ais_max_retries = self._get_int_env("TM_AIS_MAX_RETRIES", 3)
        # Starlink configuration
        self.starlink_ip = os.getenv("STARLINK_IP", "10.130.60.70")
        self.starlink_port = self._get_int_env("STARLINK_PORT", 9200)
        self.starlink_max_retries = self._get_int_env("STARLINK_MAX_RETRIES", 3)
        # NMEA Primary GPS configuration
        self.nmea_primary_ip = os.getenv("NMEA_PRIMARY_IP", "")
        self.nmea_primary_port = self._get_int_env("NMEA_PRIMARY_PORT", 0)
        # NMEA Secondary GPS configuration
        self.nmea_secondary_ip = os.getenv("NMEA_SECONDARY_IP", "")
        self.nmea_secondary_port = self._get_int_env("NMEA_SECONDARY_PORT", 0)
        # Database configuration
        self.database_path = Path(os.getenv("DATABASE_PATH", "data/gnss_guard.db"))
        # Logs configuration
        self.logs_base_path = Path(os.getenv("LOGS_BASE_PATH", "logs"))
        # Web server configuration
        self.web_enabled = os.getenv("WEB_ENABLED", "true").lower() in ("true", "1", "yes")
        self.web_host = os.getenv("WEB_HOST", "0.0.0.0")
        self.web_port = self._get_int_env("WEB_PORT", 8080)
        self.web_show_route = os.getenv("WEB_SHOW_ROUTE", "false").lower() in ("true", "1", "yes")
        # Demo mode - when enabled, route shows last 24h of available data instead of current time
        self.demo_unit = os.getenv("DEMO_UNIT", "false").lower() in ("true", "1", "yes")
        # Source enablement flags
        self.tm_ais_enabled = os.getenv("TM_AIS_ENABLED", "true").lower() in ("true", "1", "yes")
        self.starlink_enabled = os.getenv("STARLINK_ENABLED", "true").lower() in ("true", "1", "yes")
        self.nmea_primary_enabled = os.getenv("NMEA_PRIMARY_ENABLED", "false").lower() in ("true", "1", "yes")
        self.nmea_secondary_enabled = os.getenv("NMEA_SECONDARY_ENABLED", "false").lower() in ("true", "1", "yes")
        # NMEA verbose logging (log all NMEA sentences, not just GGA)
        self.nmea_verbose_logging = os.getenv("NMEA_VERBOSE_LOGGING", "false").lower() in ("true", "1", "yes")
        # Server sync configuration
        self.server_enabled = os.getenv("SERVER_ENABLED", "false").lower() in ("true", "1", "yes")
        self.server_url = os.getenv("SERVER_URL", "").strip()
        self.server_token = os.getenv("SERVER_TOKEN", "").strip()
        self.server_sync_batch_size = self._get_int_env("SERVER_SYNC_BATCH_SIZE", 100)
        self.server_sync_max_queue = self._get_int_env("SERVER_SYNC_MAX_QUEUE", 1000)

    def get_enabled_sources(self) -> list:
        """Get list of enabled source names"""
        sources = []
        if self.tm_ais_enabled:
            sources.append("tm_ais")
        if self.starlink_enabled:
            sources.extend(["starlink_location", "starlink_gps"])
        if self.nmea_primary_enabled:
            sources.append("nmea_primary")
        if self.nmea_secondary_enabled:
            sources.append("nmea_secondary")
        return sources

    def to_dict(self) -> Dict[str, Any]:
        """Convert configuration to dictionary"""
        return {
            "asset_name": self.asset_name,
            "iteration_period_seconds": self.iteration_period_seconds,
            "stale_threshold_seconds": self.stale_threshold_seconds,
            "validation_threshold_meters": self.validation_threshold_meters,
            "startup_warmup_seconds": self.startup_warmup_seconds,
            "positions_raw_retention_days": self.positions_raw_retention_days,
            "positions_validation_retention_days": self.positions_validation_retention_days,
            "log_retention_days": self.log_retention_days,
            "tm_ais_url": self.tm_ais_url,
            "tm_ais_enabled": self.tm_ais_enabled,
            "tm_ais_max_retries": self.tm_ais_max_retries,
            "starlink_ip": self.starlink_ip,
            "starlink_port": self.starlink_port,
            "starlink_enabled": self.starlink_enabled,
            "starlink_max_retries": self.starlink_max_retries,
            "nmea_primary_enabled": self.nmea_primary_enabled,
            "nmea_secondary_enabled": self.nmea_secondary_enabled,
            "database_path": str(self.database_path),
            "logs_base_path": str(self.logs_base_path),
            "web_enabled": self.web_enabled,
            "web_host": self.web_host,
            "web_port": self.web_port,
            "web_show_route": self.web_show_route,
        }

File diff suppressed because it is too large.


@@ -0,0 +1,46 @@
{
"_comment": "Injected positions file for GNSS Guard - Office Lab",
"_instructions": [
"1. Set position values for sources you want to inject (only those sources will use injected data)",
"2. Sources NOT in this file will be fetched from real sources normally",
"3. Set a source to 'null' to simulate its absence (skip fetching for that source)",
"4. Prefix a source key with '//' to comment it out (same as not including it)"
],
"_fields": {
"latitude": "REQUIRED - Latitude in decimal degrees (used for distance validation)",
"longitude": "REQUIRED - Longitude in decimal degrees (used for distance validation)",
"timestamp_unix": "OPTIONAL - Unix timestamp in seconds (defaults to current time if absent)",
"altitude": "OPTIONAL - Altitude in meters (stored but NOT used for validation)",
"position_uncertainty_m": "OPTIONAL - Position uncertainty in meters (stored but NOT used for validation, Starlink only)"
},
"nmea_primary": {
"latitude": 36.11063,
"longitude": 22.972875,
"//timestamp_unix": 1768308542.0,
"altitude": 14.0
},
"nmea_secondary": {
"latitude": 36.11085833333333,
"longitude": 22.572023333333334,
"//timestamp_unix": 1732461600.0,
"altitude": 13.2
},
"tm_ais": {
"latitude": 36.110657,
"longitude": 22.572672,
"//timestamp_unix": 1732461600.0
},
"starlink_gps": {
"latitude": 36.11055287599966,
"longitude": 22.57289200819445,
"//timestamp_unix": 1732461600.0,
"altitude": 54.29000515150101
},
"starlink_location": {
"latitude": 36.11055187009735,
"longitude": 22.57289484169309,
"//timestamp_unix": 1732461600.0,
"altitude": 54.29000515150101,
"position_uncertainty_m": 2.5
}
}
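The loading rules spelled out in `_instructions` could be implemented along these lines. This is a hypothetical sketch, not the actual GNSS Guard loader (`load_injected_positions` is an assumed name): metadata keys (`_`-prefixed) and `//`-commented sources are dropped, a `null` source is kept as an explicit "absent" marker, `//`-commented fields are stripped, and a missing `timestamp_unix` defaults to the current time.

```python
import json
import time

def load_injected_positions(path):
    """Parse an injected-positions file per its stated conventions:
    '_'-prefixed keys are metadata, '//' comments out a source or field,
    and null means 'simulate absence' for that source."""
    with open(path) as f:
        raw = json.load(f)
    positions = {}
    for key, value in raw.items():
        if key.startswith("_") or key.startswith("//"):
            continue  # metadata or commented-out source
        if value is None:
            positions[key] = None  # source explicitly absent
            continue
        # Drop commented-out fields such as "//timestamp_unix"
        fix = {k: v for k, v in value.items() if not k.startswith("//")}
        # Per _fields: timestamp_unix defaults to current time if absent
        fix.setdefault("timestamp_unix", time.time())
        positions[key] = fix
    return positions
```

Keeping `None` distinct from "key missing" matters here: a missing source falls back to real fetching, while an explicit `null` skips fetching for that source.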


@@ -0,0 +1,678 @@
#!/usr/bin/env python3
"""
GNSS Guard - Main orchestrator
Coordinates data collection from multiple GPS sources and validation
"""
import asyncio
import json
import logging
import os
import signal
import sys
import threading
import time
from datetime import datetime, timezone
from typing import Dict, Any, Optional

from config import Config
from sources.tm_ais_gps import TMAISGPSFetcher
from sources.starlink_gps import StarlinkGPSFetcher
from sources.nmea_gps import NMEAGPSCollector
from storage.database import Database
from storage.logger import StructuredLogger
from storage.cleanup import CleanupManager
from validation.coordinate_validator import CoordinateValidator
from web.server import WebServer
from services.server_sync import ServerSync
from services.buzzer import get_buzzer_service

logger = logging.getLogger("gnss_guard.main")

class GNSSGuard:
    """Main orchestrator for GNSS Guard system"""

    def __init__(self, config: Config):
        """Initialize GNSS Guard"""
        self.config = config
        self.running = False
        # Initialize components
        self.database = Database(config.database_path)
        self.structured_logger = StructuredLogger(
            config.logs_base_path,
            config.log_retention_days
        )
        # Path to injected positions file (in same directory as main.py)
        script_dir = os.path.dirname(os.path.abspath(__file__))
        self.injected_positions_path = os.path.join(script_dir, "injected_positions.json")
        # Initialize data sources
        self.tm_ais_fetcher = TMAISGPSFetcher(config) if config.tm_ais_enabled else None
        self.starlink_fetcher = StarlinkGPSFetcher(config) if config.starlink_enabled else None
        # Initialize NMEA collectors
        self.nmea_primary_collector = None
        if config.nmea_primary_enabled and config.nmea_primary_ip and config.nmea_primary_port > 0:
            self.nmea_primary_collector = NMEAGPSCollector(
                config,
                "nmea_primary",
                config.nmea_primary_ip,
                config.nmea_primary_port,
                structured_logger=self.structured_logger
            )
        self.nmea_secondary_collector = None
        if config.nmea_secondary_enabled and config.nmea_secondary_ip and config.nmea_secondary_port > 0:
            self.nmea_secondary_collector = NMEAGPSCollector(
                config,
                "nmea_secondary",
                config.nmea_secondary_ip,
                config.nmea_secondary_port,
                structured_logger=self.structured_logger
            )
        # Initialize validator
        expected_sources = config.get_enabled_sources()
        self.validator = CoordinateValidator(
            config.validation_threshold_meters,
            config.stale_threshold_seconds,
            expected_sources
        )
        # Initialize buzzer service for hardware alarm (must be before web server)
        # Buzzer sounds with 1 second on / 1 second off pattern during GNSS alerts
        self.buzzer_service = get_buzzer_service(on_duration=1.0, off_duration=1.0)
        # Track previous alert level to detect status changes
        # Alert levels: "healthy", "degraded", "at_risk"
        self._previous_alert_level = "healthy"
        # Initialize web server (if enabled)
        self.web_server = None
        self.web_thread = None
        if config.web_enabled:
            try:
                self.web_server = WebServer(config, self.database, self.buzzer_service)
                logger.info("Web server initialized")
            except Exception as e:
                logger.warning(f"Failed to initialize web server: {e}")
                self.web_server = None
        # Initialize cleanup manager
        # In demo mode, skip database cleanup since data isn't growing
        # (demo mode creates and deletes records, maintaining a fixed dataset)
        self.cleanup_manager = CleanupManager(
            database_path=config.database_path,
            logs_base_path=config.logs_base_path,
            positions_raw_retention_days=config.positions_raw_retention_days,
            positions_validation_retention_days=config.positions_validation_retention_days,
            logs_retention_days=config.log_retention_days,
            demo_mode=config.demo_unit
        )
        if config.demo_unit:
            logger.info(
                f"Cleanup manager initialized in DEMO mode (logs only: {config.log_retention_days}d)"
            )
        else:
            logger.info(
                f"Cleanup manager initialized (raw: {config.positions_raw_retention_days}d, "
                f"validation: {config.positions_validation_retention_days}d, logs: {config.log_retention_days}d)"
            )
        # Initialize server sync (if enabled)
        self.server_sync = None
        if config.server_enabled and config.server_url and config.server_token:
try:
self.server_sync = ServerSync(
database_path=config.database_path,
server_url=config.server_url,
server_token=config.server_token,
asset_name=config.asset_name,
batch_size=config.server_sync_batch_size,
max_queue_size=config.server_sync_max_queue
)
logger.info(f"Server sync enabled -> {config.server_url}")
except Exception as e:
logger.warning(f"Failed to initialize server sync: {e}")
self.server_sync = None
# Setup signal handlers
signal.signal(signal.SIGINT, self._signal_handler)
signal.signal(signal.SIGTERM, self._signal_handler)
def _signal_handler(self, signum, frame):
"""Handle shutdown signals"""
logger.info(f"Received signal {signum}, shutting down gracefully...")
self.running = False
def _load_injected_positions(self) -> Optional[Dict[str, Dict[str, Any]]]:
"""
Load injected positions from JSON file if it exists
Returns:
Dictionary mapping source names to position dictionaries, or None if file doesn't exist
"""
if not os.path.exists(self.injected_positions_path):
return None
try:
with open(self.injected_positions_path, 'r') as f:
data = json.load(f)
# Validate and normalize positions
injected = {}
for source, position in data.items():
# Skip metadata fields (those starting with underscore)
if source.startswith("_"):
continue
# Skip commented-out sources (those starting with //)
if source.startswith("//"):
continue
if position is None:
# Null value means this source should be absent
# Store it as None so we know to skip fetching for this source
injected[source] = None
continue
# Ensure required fields are present
if not isinstance(position, dict):
logger.warning(f"Invalid position format for {source} in injected_positions.json")
continue
# Ensure source field matches the key
position["source"] = source
# Ensure timestamp_unix is set if timestamp is provided
if "timestamp" in position and "timestamp_unix" not in position:
try:
ts = datetime.fromisoformat(position["timestamp"].replace("Z", "+00:00"))
if ts.tzinfo is None:
ts = ts.replace(tzinfo=timezone.utc)
position["timestamp_unix"] = ts.timestamp()
except Exception as e:
logger.warning(f"Failed to parse timestamp for {source}: {e}")
# Use current time as fallback
now = datetime.now(timezone.utc)
position["timestamp"] = now.isoformat()
position["timestamp_unix"] = now.timestamp()
# Ensure timestamp is set if timestamp_unix is provided
if "timestamp_unix" in position and "timestamp" not in position:
try:
ts = datetime.fromtimestamp(position["timestamp_unix"], tz=timezone.utc)
position["timestamp"] = ts.isoformat()
except Exception as e:
logger.warning(f"Failed to convert timestamp_unix for {source}: {e}")
position["timestamp"] = datetime.now(timezone.utc).isoformat()
# Ensure both exist (use current time if neither provided)
if "timestamp_unix" not in position:
now = datetime.now(timezone.utc)
position["timestamp"] = now.isoformat()
position["timestamp_unix"] = now.timestamp()
injected[source] = position
logger.info(f"Loaded {len(injected)} injected position(s) from {self.injected_positions_path}")
return injected
except json.JSONDecodeError as e:
logger.error(f"Failed to parse injected_positions.json: {e}")
return None
except Exception as e:
logger.error(f"Error loading injected positions: {e}")
return None
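The loader's filtering rules above — underscore-prefixed keys are metadata, `//`-prefixed keys are commented out, and `null` marks a source as deliberately absent — can be exercised in isolation. The sample file content below is illustrative only, not taken from the repo:

```python
import json

# Made-up injected_positions.json content demonstrating each rule
raw = json.loads("""
{
    "_comment": "metadata, ignored by the loader",
    "//tm_ais": {"latitude": 1.0, "longitude": 2.0},
    "starlink_gps": null,
    "nmea_primary": {"latitude": 59.43, "longitude": 24.75}
}
""")

injected = {}
for source, position in raw.items():
    if source.startswith("_") or source.startswith("//"):
        continue                 # metadata / commented-out entries are skipped
    injected[source] = position  # None means "treat this source as absent"

print(sorted(injected))  # ['nmea_primary', 'starlink_gps']
```

Note that a `null` entry is kept (as `None`) rather than dropped, so the iteration loop knows to skip fetching for that source instead of falling back to the live fetcher.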
def _store_demo_validation(self, validation_result: Dict[str, Any]):
"""
Store validation in DEMO_UNIT mode.
Keeps only the latest validation record to show live status on dashboard,
while preserving historical data for route display.
Deletes any validation records from the last 5 minutes before inserting new one.
"""
import sqlite3 as sqlite3_module
try:
conn = sqlite3_module.connect(str(self.database.database_path), timeout=5.0)
cursor = conn.cursor()
# Delete recent "live" records (last 5 minutes) to prevent accumulation
# This keeps historical data intact while allowing fresh dashboard display
five_minutes_ago = time.time() - 300
cursor.execute(
"DELETE FROM positions_validation WHERE validation_timestamp_unix > ?",
(five_minutes_ago,)
)
conn.commit()
conn.close()
# Now store the new validation record
self.database.store_validation(validation_result)
except Exception as e:
logger.error(f"Error storing demo validation: {e}")
def _handle_buzzer_alarm(self, is_valid: bool, missing_sources: list, stale_sources: list, distance_exceeded: bool):
"""
Handle buzzer alarm based on validation status.
Buzzer triggers when GNSS status is:
- "at risk" (GPS jamming/spoofing detected - distance exceeds threshold)
- "degraded" (sources missing or stale)
- "no connection" (all sources missing)
Buzzer stops when:
- Status returns to healthy (validation passes)
- User acknowledges the alarm via the dashboard button
Buzzer restarts when:
- Alert level changes (e.g., degraded → at_risk or vice versa)
Args:
is_valid: Whether validation passed
missing_sources: List of missing source names
stale_sources: List of stale source names
distance_exceeded: Whether coordinate distance exceeded threshold
"""
try:
# Determine current alert level
# "at_risk" = GPS spoofing/jamming (distance exceeded)
# "degraded" = sources missing or stale but no distance issue
# "healthy" = validation passed
if is_valid:
current_alert_level = "healthy"
elif distance_exceeded:
current_alert_level = "at_risk"
else:
current_alert_level = "degraded"
# Check if alert level changed
alert_level_changed = current_alert_level != self._previous_alert_level
if alert_level_changed:
logger.info(f"Alert level changed: {self._previous_alert_level} → {current_alert_level}")
# Reset acknowledged state when alert level changes
# This allows buzzer to restart even if previously acknowledged
if self.buzzer_service.is_alarm_acknowledged():
logger.info("Resetting alarm acknowledged state (alert level changed)")
self.buzzer_service.reset_acknowledged()
# Stop current alarm if running (will restart below if needed)
if self.buzzer_service.is_alarm_active():
self.buzzer_service.stop_alarm()
# Handle alarm based on current alert level
if current_alert_level != "healthy":
# Status is degraded or at risk
# Start alarm if not already active and not acknowledged
if not self.buzzer_service.is_alarm_active():
if not self.buzzer_service.is_alarm_acknowledged():
# Determine alarm reason for logging
if current_alert_level == "at_risk":
reason = "GPS jamming/spoofing detected (distance exceeded threshold)"
elif missing_sources:
reason = f"Sources missing: {', '.join(missing_sources)}"
elif stale_sources:
reason = f"Sources stale: {', '.join(stale_sources)}"
else:
reason = "Validation failed"
logger.warning(f"Starting buzzer alarm: {reason}")
self.structured_logger.warning("buzzer", f"Alarm started: {reason}")
self.buzzer_service.start_alarm()
else:
logger.debug("Alarm acknowledged, not restarting until alert level changes")
else:
# Status is healthy
# Stop alarm if active
if self.buzzer_service.is_alarm_active():
logger.info("Status returned to healthy, stopping buzzer alarm")
self.structured_logger.info("buzzer", "Alarm stopped (status healthy)")
self.buzzer_service.stop_alarm()
# Reset acknowledged state when healthy
if self.buzzer_service.is_alarm_acknowledged():
logger.debug("Resetting alarm acknowledged state (status healthy)")
self.buzzer_service.reset_acknowledged()
# Track alert level for next iteration
self._previous_alert_level = current_alert_level
except Exception as e:
logger.error(f"Error handling buzzer alarm: {e}")
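The alert-level precedence used above (a passing validation always wins; a distance breach outranks missing or stale sources) reduces to a small pure function — a sketch, separate from the buzzer side effects:

```python
def alert_level(is_valid: bool, distance_exceeded: bool) -> str:
    # Mirrors the precedence in _handle_buzzer_alarm:
    # healthy > at_risk (distance breach) > degraded (missing/stale only)
    if is_valid:
        return "healthy"
    if distance_exceeded:
        return "at_risk"
    return "degraded"

print(alert_level(False, True))   # at_risk
print(alert_level(False, False))  # degraded
print(alert_level(True, False))   # healthy
```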
async def start(self):
"""Start GNSS Guard system"""
logger.info("Starting GNSS Guard system")
self.structured_logger.info("system", "GNSS Guard starting", {"config": self.config.to_dict()})
# Start web server in separate thread
if self.web_server:
self.web_thread = threading.Thread(
target=self.web_server.run,
kwargs={
'host': self.config.web_host,
'port': self.config.web_port,
'debug': False
},
daemon=True
)
self.web_thread.start()
logger.info(f"Web server started on {self.config.web_host}:{self.config.web_port}")
# Log DEMO_UNIT mode if enabled
if self.config.demo_unit:
logger.info("DEMO_UNIT mode enabled - data collection active but database writes disabled")
self.structured_logger.info("system", "DEMO_UNIT mode - no database writes")
# Start NMEA collectors
if self.nmea_primary_collector:
await self.nmea_primary_collector.start()
logger.info("Started NMEA primary collector")
if self.nmea_secondary_collector:
await self.nmea_secondary_collector.start()
logger.info("Started NMEA secondary collector")
# Startup warm-up period: wait for data sources to connect and receive initial data
# This prevents false "missing" alerts on first validation after restart/deploy
if self.config.startup_warmup_seconds > 0:
logger.info(f"Waiting {self.config.startup_warmup_seconds}s for data sources to initialize...")
self.structured_logger.info(
"system",
"Startup warm-up period",
{"warmup_seconds": self.config.startup_warmup_seconds}
)
await asyncio.sleep(self.config.startup_warmup_seconds)
logger.info("Warm-up complete, starting validation cycle")
self.running = True
# Main collection loop - ensure iterations start at regular intervals
while self.running:
iteration_start = time.time()
try:
await self._iteration()
except Exception as e:
logger.error(f"Error in main loop: {e}")
self.structured_logger.error("system", f"Error in main loop: {e}")
# Calculate how long the iteration took
iteration_duration = time.time() - iteration_start
# Sleep for the remaining time to maintain the iteration period
sleep_time = self.config.iteration_period_seconds - iteration_duration
if sleep_time > 0:
logger.debug(f"Iteration took {iteration_duration:.2f}s, sleeping for {sleep_time:.2f}s")
await asyncio.sleep(sleep_time)
else:
logger.warning(
f"Iteration took {iteration_duration:.2f}s, which exceeds the configured period "
f"of {self.config.iteration_period_seconds}s. Starting next iteration immediately."
)
# No sleep, start next iteration immediately
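The loop above implements fixed-period scheduling: each iteration starts on a regular interval, with the sleep skipped entirely when the work overruns the period. A minimal standalone sketch of the same pattern:

```python
import asyncio
import time

async def run_fixed_period(work, period_s: float, iterations: int):
    """Start each iteration on a fixed period; skip the sleep on overrun."""
    for _ in range(iterations):
        start = time.time()
        await work()
        remaining = period_s - (time.time() - start)
        if remaining > 0:
            await asyncio.sleep(remaining)  # pad out to the full period
        # else: iteration overran the period — begin the next one immediately

async def demo():
    ticks = []
    async def work():
        ticks.append(time.time())
    await run_fixed_period(work, period_s=0.05, iterations=3)
    return ticks

ticks = asyncio.run(demo())
print(len(ticks))  # 3
```

Because the sleep is computed from the measured iteration duration rather than being a fixed delay, slow iterations don't accumulate drift across the run.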
async def _iteration(self):
"""Execute one iteration of data collection and validation"""
# Run daily cleanup if needed (runs once per day)
self.cleanup_manager.run_cleanup_if_needed()
logger.info("Starting data collection iteration")
positions = {}
# Check for injected positions (per-source injection)
injected_positions = self._load_injected_positions() or {}
# Add injected positions (if any)
if injected_positions:
injected_sources = [s for s, p in injected_positions.items() if p is not None]
if injected_sources:
logger.info(f"Using injected positions for: {', '.join(injected_sources)}")
# DEMO_UNIT mode: skip database writes
skip_db_writes = self.config.demo_unit
# Fetch from TM AIS GPS (skip if injected)
if "tm_ais" not in injected_positions and self.tm_ais_fetcher:
try:
position = self.tm_ais_fetcher.fetch()
if position:
positions[position["source"]] = position
if not skip_db_writes:
self.database.store_position(position)
self.structured_logger.info("tm_ais", "Fetched position", {"position": position})
except Exception as e:
logger.error(f"Error fetching TM AIS GPS: {e}")
self.structured_logger.error("tm_ais", f"Fetch error: {e}")
elif "tm_ais" in injected_positions:
# Use injected position for tm_ais
if injected_positions["tm_ais"] is not None:
position = injected_positions["tm_ais"]
positions[position["source"]] = position
if not skip_db_writes:
self.database.store_position(position)
self.structured_logger.info("tm_ais", "Injected position", {"position": position})
# Fetch from Starlink GPS (always fetch, then override with injected if present)
if self.starlink_fetcher:
# Only fetch if at least one Starlink source is not injected
if "starlink_location" not in injected_positions or "starlink_gps" not in injected_positions:
logger.info("Fetching from Starlink GPS...")
try:
starlink_positions = self.starlink_fetcher.fetch()
for position in starlink_positions:
# Only use fetched position if this source is not injected
if position["source"] not in injected_positions:
positions[position["source"]] = position
if not skip_db_writes:
self.database.store_position(position)
self.structured_logger.info(
position["source"],
"Fetched position",
{"position": position}
)
except Exception as e:
logger.error(f"Error fetching Starlink GPS: {e}")
self.structured_logger.error("starlink", f"Fetch error: {e}")
# Use injected positions for Starlink sources (if any)
for starlink_source in ["starlink_location", "starlink_gps"]:
if starlink_source in injected_positions and injected_positions[starlink_source] is not None:
position = injected_positions[starlink_source]
positions[position["source"]] = position
if not skip_db_writes:
self.database.store_position(position)
self.structured_logger.info(starlink_source, "Injected position", {"position": position})
# Get latest positions from NMEA collectors (skip if injected)
if "nmea_primary" not in injected_positions and self.nmea_primary_collector:
try:
position = await self.nmea_primary_collector.get_latest_position()
if position:
positions[position["source"]] = position
if not skip_db_writes:
self.database.store_position(position)
self.structured_logger.info("nmea_primary", "Updated position", {"position": position})
except Exception as e:
logger.error(f"Error getting NMEA primary position: {e}")
self.structured_logger.error("nmea_primary", f"Position error: {e}")
elif "nmea_primary" in injected_positions:
# Use injected position for nmea_primary
if injected_positions["nmea_primary"] is not None:
position = injected_positions["nmea_primary"]
positions[position["source"]] = position
if not skip_db_writes:
self.database.store_position(position)
self.structured_logger.info("nmea_primary", "Injected position", {"position": position})
if "nmea_secondary" not in injected_positions and self.nmea_secondary_collector:
try:
position = await self.nmea_secondary_collector.get_latest_position()
if position:
positions[position["source"]] = position
if not skip_db_writes:
self.database.store_position(position)
self.structured_logger.info("nmea_secondary", "Updated position", {"position": position})
except Exception as e:
logger.error(f"Error getting NMEA secondary position: {e}")
self.structured_logger.error("nmea_secondary", f"Position error: {e}")
elif "nmea_secondary" in injected_positions:
# Use injected position for nmea_secondary
if injected_positions["nmea_secondary"] is not None:
position = injected_positions["nmea_secondary"]
positions[position["source"]] = position
if not skip_db_writes:
self.database.store_position(position)
self.structured_logger.info("nmea_secondary", "Injected position", {"position": position})
# Run validation
logger.info(f"Collected {len(positions)} positions, running validation")
try:
validation_result = self.validator.validate_positions(positions)
if skip_db_writes:
# DEMO_UNIT mode: store validation for live dashboard display
# but delete recent "live" records to prevent accumulation
# (keeps only last few minutes of live data, historical data untouched)
self._store_demo_validation(validation_result)
else:
self.database.store_validation(validation_result)
# Sync to server if enabled (only when not in DEMO_UNIT mode)
if self.server_sync:
try:
if self.server_sync.sync_validation(validation_result):
logger.debug("Validation synced to server")
else:
logger.debug("Validation queued for later sync")
except Exception as e:
logger.warning(f"Server sync error: {e}")
# Log validation result to terminal
is_valid = validation_result["is_valid"]
missing_sources = validation_result.get("sources_missing", [])
stale_sources = validation_result.get("sources_stale", [])
coordinate_differences = validation_result.get("coordinate_differences", {})
validation_details = validation_result.get("validation_details", {})
max_distance = validation_details.get("max_distance_meters", 0.0)
if is_valid:
logger.info("✓ Validation PASSED")
if missing_sources:
logger.info(f" Missing sources: {', '.join(missing_sources)}")
if stale_sources:
logger.info(f" Stale sources: {', '.join(stale_sources)}")
if coordinate_differences:
logger.info(f" Max distance difference: {max_distance:.2f}m")
else:
logger.info(" All sources within threshold")
else:
logger.warning("✗ Validation FAILED")
# Check if failure is due to distance (GPS jamming/spoofing alert)
threshold = validation_details.get('threshold_meters', 0)
if max_distance > threshold:
distance_km = max_distance / 1000.0
logger.warning("")
logger.warning("=" * 60)
logger.warning("🚨 GPS Jamming or Spoofing Alert! 🚨")
logger.warning(f" Location Distance: {distance_km:.1f} km")
logger.warning("=" * 60)
logger.warning("")
if missing_sources:
logger.warning(f" Missing sources: {', '.join(missing_sources)}")
if stale_sources:
logger.warning(f" Stale sources: {', '.join(stale_sources)}")
if coordinate_differences:
logger.warning(f" Max distance difference: {max_distance:.2f}m (threshold: {threshold}m)")
# Log individual differences if there are any
for pair, diff_info in coordinate_differences.items():
logger.warning(f" {pair}: {diff_info.get('distance_meters', 0):.2f}m")
# Log to structured logger
if is_valid:
self.structured_logger.info(
"validation",
"Validation passed",
{"validation": validation_result}
)
else:
self.structured_logger.warning(
"validation",
"Validation failed",
{"validation": validation_result}
)
# Handle buzzer alarm based on validation status
# Alarm triggers when: degraded, at risk, or no connection (any validation failure)
# Status changes:
# - "at risk" (crit): has_alert AND distance exceeds threshold
# - "degraded" (warn): validation failed but no distance alert
# - "healthy": validation passed
self._handle_buzzer_alarm(is_valid, missing_sources, stale_sources, max_distance > validation_details.get('threshold_meters', 0))
except Exception as e:
logger.error(f"Error during validation: {e}")
self.structured_logger.error("validation", f"Validation error: {e}")
logger.info("Iteration complete")
async def stop(self):
"""Stop GNSS Guard system"""
logger.info("Stopping GNSS Guard system")
self.running = False
# Stop buzzer service
if self.buzzer_service:
self.buzzer_service.shutdown()
# Stop NMEA collectors
if self.nmea_primary_collector:
await self.nmea_primary_collector.stop()
if self.nmea_secondary_collector:
await self.nmea_secondary_collector.stop()
# Log shutdown before closing logger
self.structured_logger.info("system", "GNSS Guard stopped")
# Close logger
self.structured_logger.close()
async def main():
"""Main entry point"""
# Setup logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
# Load configuration
config = Config()
# Create and start GNSS Guard
guard = GNSSGuard(config)
try:
await guard.start()
except KeyboardInterrupt:
logger.info("Received keyboard interrupt")
finally:
await guard.stop()
if __name__ == "__main__":
asyncio.run(main())


@@ -0,0 +1,16 @@
grpcio>=1.12.0
grpcio-tools>=1.20.0
protobuf>=3.6.0
yagrc>=1.1.1
typing-extensions>=4.3.0
requests>=2.25.0
python-dotenv>=0.19.0
# Web server dependencies
Flask>=2.3.0
# Visualization dependencies
pandas>=1.3.0
numpy>=1.21.0
folium>=0.12.0


@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAuFwehtR5QVRr/HAxmcrUvaMfj31HBhThtze/L7nwLLcpWwOo
VugvCkVD/GgOUBPagnUjlfZ+MTR35k70pOybw+TjDHtqMdu2RuM67Ns3u0sx2mIr
V5WZcc2zvsKyREd/uIVX8pe0VEvRpNoq420zdtY9J9Coy34grOLZlGsOELjnP+Hf
0jcsw1rMgfvoKWffuOJk4qqGVq0a7cta3JURsUS4YqSDqybobRP+fArWfxOBitqS
aNL78tMpnGr+wLykRkAbjulvZbibjr6N8/HjQKSYfxOlUNAci4K9QZaxGCdifgcz
MZwnhu96XDm1gIFXeAN5nNKHjRo1fI8R53wSHwIDAQABAoIBAHXqTYgVS/zR/0N1
ivP/vDQSqnP/P7cPEhM6r6jZ91jSSbwxybDUTon2JXbCIy1qlV7Nh1Y6UxoroeiH
ZYg64aHYurPYF+MN0TbjzWODDtFXVeqE0Y3yXDNiyu1e3+A2DuW5O7go+ajU2aDj
/Xx68ui2PGVD20JUSJfrfBimpFdipedFYw0obKEQ6L8c/AYWXSkCp9RXa+VAfJvB
epO5Fi0eciaB+rblH/r36gYRY+ebMU3upvBgZXtL52MYj8aHhUlR8P+iwoDyBm2l
eMJc5nH2M1iEfZ6I3PbPYL58oMwdxVw3Y/ZlxnidFQS9HRcBWYfOCnqZWPTxAf54
Rh0N1zECgYEA53q0qzEsUtEY04n3bl20D4emZM2c1Gojm5suOWT8RTqcsgZb2Yrl
bU5zy+EQjDUUXGbjUbgCYOHHg6JInI3R79rh6te+dg2w8aMTFG4NDeJ5p7WatpwT
ynqsVSj0B4Z3XwZhTpyoxnLr9vtsPKjA5UDEotBTxRfZHUHmfUnongcCgYEAy+Oe
pyf0vPOyHCWS0vSyySRnb7xtx6MvnfF5/kzRNmZME+NxoYo2Yn0ArMOLx1SAKZka
sCYcGVlonA8O6g4t9zW7b0mV/2LDax1zev1iq2rnVK+aU4y5RR06J2VwSZ5mRWCk
sExo4nWIJdiHi18ixtHDUSkxY4rnp01W0YWOZSkCgYA1M//IhSHR2xtgq4pCRKk5
FI2LB7MvI0IR5sXmDS7qXoFbbZi41HLM/8YfqxgZka2fW0qOIsPxLpOjzq3vxazl
+yIHzxSIn7b2ouuku3KmqVIa2OO5awAlfrKTVDlabW6MWbQN1HX6Prm7Z6hF/Odx
CcToQwet+kA9uELYsx8TCwKBgDuMdnjxtYw+TMXlv3U3nMQcis1apmGJas3hijTY
sL4HsK6aXkTE/k9TnQ/YaQnFx0ze96l85/YLY/84cq2viINMQTsmrdWSPesaBfFk
8h2IspnMU/GVB0OFXsfE27/UsKAQsuj+2B9UHniXPjdZiOmyuC4LLu6Y0kHN186I
CGfJAoGBAMqAMCMpfC8QZT5zQtzjOWV5iUvpsLwf5HikXw/U19uSW59jajGdiz7B
Y3Wt2jslrYS/BmMVDOfgQfXTFfNuZFR1a9fB93rY14zhQ33ChzBaQUp83qRmy6Ae
60aBUd+vBL/gV5sxdeOtCZSxZ+uPL4imk2L89efhPW7QiBXI6OQE
-----END RSA PRIVATE KEY-----


@@ -0,0 +1,49 @@
# Git
.git
.gitignore
# Python
__pycache__
*.py[cod]
*$py.class
*.so
.Python
*.egg-info
dist
build
.venv
venv
# Environment files (uploaded separately)
.env
.env.*
env.example
# Docker
Dockerfile
docker-compose*.yml
.dockerignore
# IDE
.vscode
.idea
*.swp
*.swo
*~
# OS
.DS_Store
Thumbs.db
# SSH keys
.cert/
*.pem
# Logs
*.log
logs/
# Data
data/
*.db


@@ -0,0 +1,34 @@
# Local server configuration (auto-generated)
# SQLite database for local testing
GNSS_SERVER_DATABASE_URL=sqlite:////Users/alexandershulman/projects2/tm-gnss-guard/server/data/server_local.db
GNSS_SERVER_WEB_USERNAME=test
GNSS_SERVER_WEB_PASSWORD=Tototheo.25!
GNSS_SERVER_SECRET_KEY=local-dev-secret-key-change-in-production
GNSS_SERVER_DEBUG=true
GNSS_SERVER_HOST=127.0.0.1
GNSS_SERVER_PORT=8000
# ============================================================================
# Telegram Bot Configuration (Optional)
# ============================================================================
# 1. Create bot: Open Telegram → Search @BotFather → /newbot
# 2. Get chat ID:
# - Start chat with your bot, send any message
# - Visit: https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates
# - Find "chat":{"id":123456789} (positive for DM, negative for groups)
# 3. Fill in values below
#
# Each asset can override the chat_id to send to a different chat/group.
# ============================================================================
GNSS_SERVER_TELEGRAM_BOT_TOKEN=8319259186:AAGfg2tHPlnHduAPvsnODLPA1kaRDIsbx0A
GNSS_SERVER_TELEGRAM_CHAT_ID=-4863784324
# =============================================================================
# ASSET OFFLINE DETECTION
# =============================================================================
# Seconds without updates before an asset is considered offline (default: 120)
# Triggers Telegram notification when asset goes offline/online
GNSS_SERVER_ASSET_OFFLINE_SECONDS=120
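The `getUpdates` lookup described in the comments above returns JSON in the Telegram Bot API shape; extracting the chat ID from it is a one-liner. The response below is a trimmed, made-up sample (placeholder IDs), not real API output:

```python
import json

# Trimmed, illustrative getUpdates response (placeholder values)
sample = json.loads("""
{
  "ok": true,
  "result": [
    {"update_id": 1,
     "message": {"chat": {"id": -100200300, "type": "group"}, "text": "hi"}}
  ]
}
""")

# Collect chat ids from every update that carries a message
chat_ids = {u["message"]["chat"]["id"] for u in sample["result"] if "message" in u}
print(chat_ids)  # {-100200300}
```

As the comments note: a positive ID means a direct message, a negative ID a group chat.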


@@ -0,0 +1,93 @@
# =============================================================================
# GNSS Guard Server Configuration
# =============================================================================
# =============================================================================
# SERVER SETTINGS
# =============================================================================
# Host to bind to (127.0.0.1 when behind Nginx proxy)
GNSS_SERVER_HOST=127.0.0.1
# Port to bind to
GNSS_SERVER_PORT=8000
# Enable debug mode (set to false in production)
GNSS_SERVER_DEBUG=false
# =============================================================================
# DATABASE (PostgreSQL RDS)
# =============================================================================
# Full database connection URL
# Format: postgresql://USER:PASSWORD@HOST:PORT/DATABASE
GNSS_SERVER_DATABASE_URL=postgresql://postgres:!ks-hUe8@gnss-guard.cn06uuuk8ttq.eu-west-1.rds.amazonaws.com:5432/gnss_guard
# =============================================================================
# SECURITY
# =============================================================================
# Secret key for session encryption (generate with: python -c "import secrets; print(secrets.token_urlsafe(32))")
GNSS_SERVER_SECRET_KEY=e0QnYxAvisgbOqzTIl-rlLyczsNOpP7hEc26ea22ikI
# Session expiration in minutes (default: 24 hours)
GNSS_SERVER_SESSION_EXPIRE_MINUTES=1440
# =============================================================================
# WEB UI AUTHENTICATION
# =============================================================================
# Username for web dashboard login
GNSS_SERVER_WEB_USERNAME=test
# Password for web dashboard login
GNSS_SERVER_WEB_PASSWORD=Tototheo.25!
# =============================================================================
# DOMAIN (for SSL/HTTPS)
# =============================================================================
# Server domain name (for Let's Encrypt SSL)
GNSS_SERVER_DOMAIN=gnss.tototheo.com
# =============================================================================
# VALIDATION
# =============================================================================
# Staleness threshold in seconds (data older than this is considered stale)
GNSS_SERVER_STALE_THRESHOLD_SECONDS=60
# =============================================================================
# DATA RETENTION
# =============================================================================
# Days to keep validation history (default: 90)
GNSS_SERVER_VALIDATION_HISTORY_DAYS=90
# Email for Let's Encrypt certificate notifications
LETSENCRYPT_EMAIL=alexander.s@tototheo.com
# ============================================================================
# Telegram Bot Configuration (Optional)
# ============================================================================
# 1. Create bot: Open Telegram → Search @BotFather → /newbot
# 2. Get chat ID:
# - Start chat with your bot, send any message
# - Visit: https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates
# - Find "chat":{"id":123456789} (positive for DM, negative for groups)
# 3. Fill in values below
#
# Each asset can override the chat_id to send to a different chat/group.
# ============================================================================
GNSS_SERVER_TELEGRAM_BOT_TOKEN=8319259186:AAGfg2tHPlnHduAPvsnODLPA1kaRDIsbx0A
GNSS_SERVER_TELEGRAM_CHAT_ID=-4863784324
# =============================================================================
# ASSET OFFLINE DETECTION
# =============================================================================
# Seconds without updates before an asset is considered offline (default: 120)
# Triggers Telegram notification when asset goes offline/online
GNSS_SERVER_ASSET_OFFLINE_SECONDS=120


@@ -0,0 +1,40 @@
# GNSS Guard Server - Dockerfile
FROM python:3.11-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Set working directory
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements first (for better caching)
COPY requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Create non-root user for security
RUN useradd --create-home --shell /bin/bash appuser && \
chown -R appuser:appuser /app
USER appuser
# Expose port
EXPOSE 8000
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD python -c "import requests; requests.get('http://localhost:8000/auth/check', timeout=5).raise_for_status()" || exit 1
# Run uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]


@@ -0,0 +1,4 @@
"""
GNSS Guard Server - Centralized monitoring server for multiple assets
"""


@@ -0,0 +1,75 @@
#!/usr/bin/env python3
"""
Server configuration management for GNSS Guard Server
Loads configuration from environment variables
"""
import os
import sys
from pathlib import Path
from typing import Optional
from pydantic_settings import BaseSettings
from pydantic import field_validator
class ServerConfig(BaseSettings):
"""Server configuration loaded from environment variables"""
# Server settings
server_host: str = "0.0.0.0"
server_port: int = 8000
debug: bool = False
# Database settings (PostgreSQL) - REQUIRED, no insecure default
database_url: str
# Security settings
secret_key: str = "change-this-in-production-to-a-random-secret-key"
session_expire_minutes: int = 1440 # 24 hours
# Web UI authentication - REQUIRED, no insecure defaults
# Must be set via environment variables GNSS_SERVER_WEB_USERNAME and GNSS_SERVER_WEB_PASSWORD
web_username: str
web_password: str
@field_validator('web_password')
@classmethod
def password_strength(cls, v: str) -> str:
"""Ensure password meets minimum security requirements"""
if len(v) < 10:
raise ValueError('Password must be at least 10 characters long')
if v.lower() in ['password', 'admin', 'test', '123456', 'tototheo']:
raise ValueError('Password is too common/weak')
return v
# Validation settings
stale_threshold_seconds: int = 60 # Data older than this is considered stale
# Asset offline detection
asset_offline_seconds: int = 120 # Consider asset offline after this many seconds without updates
# Data retention
validation_history_days: int = 90 # Keep 90 days of validation history
# Domain for SSL (optional)
server_domain: Optional[str] = None
# Telegram notification settings (optional)
telegram_bot_token: Optional[str] = None
telegram_chat_id: Optional[str] = None # Default chat ID for all assets
@property
def telegram_enabled(self) -> bool:
"""Check if Telegram notifications are configured"""
return bool(self.telegram_bot_token and self.telegram_chat_id)
class Config:
env_file = ".env"
env_prefix = "GNSS_SERVER_"
case_sensitive = False
def get_config() -> ServerConfig:
"""Get server configuration instance"""
return ServerConfig()
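The `password_strength` rules can be exercised without pulling in pydantic — a plain re-statement of the same two checks (length and a small deny-list):

```python
COMMON = {'password', 'admin', 'test', '123456', 'tototheo'}

def check_password(v: str) -> str:
    # Same rules as ServerConfig.password_strength: at least 10 characters,
    # and not in the deny-list of common passwords.
    if len(v) < 10:
        raise ValueError('Password must be at least 10 characters long')
    if v.lower() in COMMON:
        raise ValueError('Password is too common/weak')
    return v

print(check_password("s0meth1ng-l0ng"))  # accepted, returned unchanged
```

In the real class, pydantic invokes the validator automatically whenever `ServerConfig` is instantiated from the `GNSS_SERVER_`-prefixed environment variables.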


@@ -0,0 +1,105 @@
#!/usr/bin/env python3
"""
Database connection and session management for GNSS Guard Server
"""
import logging
from contextlib import contextmanager
from typing import Generator

from sqlalchemy import create_engine, event
from sqlalchemy.orm import sessionmaker, Session
from sqlalchemy.pool import QueuePool

from config import get_config
from models import Base

logger = logging.getLogger("gnss_guard.server.database")

# Global engine and session factory
_engine = None
_SessionLocal = None


def get_engine():
    """Get or create the database engine"""
    global _engine
    if _engine is None:
        config = get_config()
        # Check if using SQLite (local development)
        is_sqlite = config.database_url.startswith("sqlite")
        if is_sqlite:
            # SQLite-specific settings
            from sqlalchemy.pool import StaticPool
            _engine = create_engine(
                config.database_url,
                connect_args={"check_same_thread": False},
                poolclass=StaticPool,
                echo=config.debug,
            )
            logger.info(f"SQLite database engine created: {config.database_url}")
        else:
            # PostgreSQL with connection pooling
            _engine = create_engine(
                config.database_url,
                poolclass=QueuePool,
                pool_size=5,
                max_overflow=10,
                pool_pre_ping=True,  # Verify connections before using
                echo=config.debug,
            )
            logger.info(f"Database engine created for: {config.database_url.split('@')[-1]}")
    return _engine


def get_session_factory():
    """Get or create the session factory"""
    global _SessionLocal
    if _SessionLocal is None:
        engine = get_engine()
        _SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
    return _SessionLocal


def init_db():
    """Initialize database - create all tables"""
    engine = get_engine()
    Base.metadata.create_all(bind=engine)
    logger.info("Database tables created/verified")


def get_db() -> Generator[Session, None, None]:
    """
    Dependency for FastAPI to get database session.
    Yields a session and ensures it's closed after use.
    """
    SessionLocal = get_session_factory()
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


@contextmanager
def get_db_session() -> Generator[Session, None, None]:
    """
    Context manager for database sessions (for use outside FastAPI dependencies).
    """
    SessionLocal = get_session_factory()
    db = SessionLocal()
    try:
        yield db
        db.commit()
    except Exception:
        db.rollback()
        raise
    finally:
        db.close()
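The commit/rollback discipline in `get_db_session` (commit on success, roll back on any exception, always close) is a general pattern. The sketch below shows the same shape with stdlib `sqlite3` instead of SQLAlchemy, so it is self-contained and runnable; the `db_session` name is illustrative only.

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def db_session(path: str = ":memory:"):
    """Commit on clean exit, roll back on exception, always close."""
    conn = sqlite3.connect(path)
    try:
        yield conn
        conn.commit()   # only reached if the with-block raised nothing
    except Exception:
        conn.rollback() # undo partial work, then re-raise to the caller
        raise
    finally:
        conn.close()
```

Because the rollback re-raises, callers still see the original exception; only the transaction cleanup is handled here.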


@@ -0,0 +1,34 @@
# GNSS Guard Server - Development Docker Compose
# No nginx, no SSL - direct access to FastAPI on port 8000
#
# Usage:
#   cp env.example .env.dev
#   # Edit .env.dev (can use SQLite for dev: sqlite:///./data/gnss_guard.db)
#   docker compose -f docker-compose.dev.yml up -d
version: '3.8'

services:
  gnss-server:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: gnss-guard-server-dev
    restart: unless-stopped
    env_file:
      - .env.dev
    ports:
      - "8000:8000"
    volumes:
      # Mount source code for live reload (development only)
      - .:/app
    environment:
      - GNSS_SERVER_DEBUG=true
    command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/auth/check"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s


@@ -0,0 +1,76 @@
# GNSS Guard Server - Docker Compose with Nginx + SSL
#
# Usage:
#   1. cp env.example .env.prod
#   2. Edit .env.prod with your configuration
#   3. docker compose up -d
#   4. Run SSL setup: docker compose exec certbot certbot certonly ...
#
# For development (no SSL): use docker-compose.dev.yml
services:
  # ==========================================================================
  # GNSS Guard Server (FastAPI/Uvicorn)
  # ==========================================================================
  gnss-server:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: gnss-guard-server
    restart: unless-stopped
    env_file:
      - .env.prod
    expose:
      - "8000"
    networks:
      - gnss-network
    healthcheck:
      test: ["CMD", "python", "-c", "import requests; requests.get('http://localhost:8000/auth/check', timeout=5)"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

  # ==========================================================================
  # Nginx Reverse Proxy
  # ==========================================================================
  nginx:
    image: nginx:alpine
    container_name: gnss-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - certbot-etc:/etc/letsencrypt:ro
      - certbot-var:/var/lib/letsencrypt
      - certbot-webroot:/var/www/certbot
      # Mount nginx logs to host for fail2ban monitoring
      - /var/log/nginx:/var/log/nginx
    depends_on:
      - gnss-server
    networks:
      - gnss-network

  # ==========================================================================
  # Certbot (SSL Certificate Management)
  # ==========================================================================
  certbot:
    image: certbot/certbot
    container_name: gnss-certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - certbot-webroot:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

networks:
  gnss-network:
    driver: bridge

volumes:
  certbot-etc:
  certbot-var:
  certbot-webroot:


@@ -0,0 +1,103 @@
# =============================================================================
# GNSS Guard Server Configuration
# =============================================================================
# Copy this file to .env.prod and configure for your environment
# Example: cp env.example .env.prod
# =============================================================================
# SERVER SETTINGS
# =============================================================================
# Host to bind to (127.0.0.1 when behind Nginx proxy)
GNSS_SERVER_HOST=127.0.0.1
# Port to bind to
GNSS_SERVER_PORT=8000
# Enable debug mode (set to false in production)
GNSS_SERVER_DEBUG=false
# =============================================================================
# DATABASE (PostgreSQL RDS) - REQUIRED!
# =============================================================================
# The server will NOT start without a valid database URL!
# Full database connection URL
# Format: postgresql://USER:PASSWORD@HOST:PORT/DATABASE
GNSS_SERVER_DATABASE_URL=postgresql://gnss_admin:your-password@your-rds-endpoint.rds.amazonaws.com:5432/gnss_guard
# =============================================================================
# SECURITY
# =============================================================================
# Secret key for session encryption (generate with: python -c "import secrets; print(secrets.token_urlsafe(32))")
GNSS_SERVER_SECRET_KEY=change-this-to-a-random-secret-key
# Session expiration in minutes (default: 24 hours)
GNSS_SERVER_SESSION_EXPIRE_MINUTES=1440
# =============================================================================
# WEB UI AUTHENTICATION (REQUIRED - no defaults!)
# =============================================================================
# These credentials are used to login to the web dashboard.
# The server will NOT start without these being set!
# Username for web dashboard login (REQUIRED)
GNSS_SERVER_WEB_USERNAME=your_username_here
# Password for web dashboard login (REQUIRED)
# Requirements:
# - At least 10 characters long (enforced by the server's password validator)
# - Cannot be common passwords like 'password', 'admin', 'test'
# Generate a secure password: python -c "import secrets; print(secrets.token_urlsafe(16))"
GNSS_SERVER_WEB_PASSWORD=your_secure_password_here
# =============================================================================
# DOMAIN (for SSL/HTTPS)
# =============================================================================
# Server domain name (for Let's Encrypt SSL)
GNSS_SERVER_DOMAIN=gnss.yourdomain.com
# =============================================================================
# VALIDATION
# =============================================================================
# Staleness threshold in seconds (data older than this is considered stale)
GNSS_SERVER_STALE_THRESHOLD_SECONDS=60
# =============================================================================
# ASSET OFFLINE DETECTION
# =============================================================================
# Seconds without updates before an asset is considered offline (default: 120)
# Triggers Telegram notification when asset goes offline/online
GNSS_SERVER_ASSET_OFFLINE_SECONDS=120
# =============================================================================
# DATA RETENTION
# =============================================================================
# Days to keep validation history (default: 90)
GNSS_SERVER_VALIDATION_HISTORY_DAYS=90
# =============================================================================
# TELEGRAM NOTIFICATIONS (Optional)
# =============================================================================
# Server-side Telegram notifications for all assets.
# Each asset can override the chat_id to send to a different chat/group.
# Telegram bot token (from @BotFather)
GNSS_SERVER_TELEGRAM_BOT_TOKEN=
# Default Telegram chat ID (negative for groups)
# Individual assets can override this in the database
GNSS_SERVER_TELEGRAM_CHAT_ID=
# =============================================================================
# SSL (for Docker deployment with Traefik)
# =============================================================================
# Email for Let's Encrypt certificate notifications
LETSENCRYPT_EMAIL=admin@yourdomain.com


@@ -0,0 +1,3 @@
# Keep this directory for importing client database files
# Place .db files here with format: {id}_{name}.db
# Example: 2_msc_charlotte.db


@@ -0,0 +1,408 @@
#!/usr/bin/env python3
"""
FastAPI main application for GNSS Guard Server
Centralized monitoring server for multiple GNSS Guard assets
"""
import asyncio
import logging
import json
import random
from contextlib import asynccontextmanager
from datetime import datetime, timedelta
from pathlib import Path
from typing import Optional

from fastapi import FastAPI, Request, Depends, HTTPException
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from fastapi.responses import HTMLResponse, RedirectResponse, JSONResponse
from fastapi.middleware.cors import CORSMiddleware
from sqlalchemy.orm import Session
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
from slowapi.errors import RateLimitExceeded

from config import get_config
from database import init_db, get_db, get_session_factory
from routes import api, auth
from routes.auth import get_optional_user, get_current_user
from services.asset_service import AssetService
from services.telegram_service import get_telegram_service
from models import Asset, AssetNotificationState

# Initialize rate limiter
limiter = Limiter(key_func=get_remote_address)

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger("gnss_guard.server")

# Create FastAPI app
app = FastAPI(
    title="GNSS Guard Server",
    description="Centralized monitoring server for GNSS Guard assets",
    version="1.0.0"
)

# Setup rate limiting
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

# Add CORS middleware - restricted to same-origin only.
# Since the dashboard is served from the same domain, we only need
# to allow requests from the same origin. This prevents CSRF attacks.
config = get_config()
allowed_origins = []
if config.server_domain:
    allowed_origins = [
        f"https://{config.server_domain}",
        f"http://{config.server_domain}",  # For initial setup before SSL
    ]
app.add_middleware(
    CORSMiddleware,
    allow_origins=allowed_origins,
    allow_credentials=True,
    allow_methods=["GET", "POST", "DELETE"],
    allow_headers=["Content-Type", "Authorization", "Cookie"],
)

# Setup static files and templates
static_path = Path(__file__).parent / "static"
templates_path = Path(__file__).parent / "templates"
if static_path.exists():
    app.mount("/static", StaticFiles(directory=str(static_path)), name="static")
templates = Jinja2Templates(directory=str(templates_path)) if templates_path.exists() else None

# Include routers
app.include_router(api.router)
app.include_router(auth.router)


# =============================================================================
# Health Check Endpoint (public, no auth required)
# =============================================================================
@app.get("/health")
async def health_check():
    """Health check endpoint - always accessible"""
    return {"status": "ok", "timestamp": datetime.utcnow().isoformat()}


async def check_offline_assets():
    """Background task to check for assets that have gone offline"""
    config = get_config()
    telegram_service = get_telegram_service()
    if not telegram_service.enabled:
        return
    threshold = datetime.utcnow() - timedelta(seconds=config.asset_offline_seconds)
    SessionLocal = get_session_factory()
    db = SessionLocal()
    try:
        # Find assets that are marked online but haven't reported recently
        states = db.query(AssetNotificationState).join(Asset).filter(
            AssetNotificationState.is_online == True,
            AssetNotificationState.last_validation_at != None,
            AssetNotificationState.last_validation_at < threshold,
            Asset.is_active == True,
            Asset.telegram_enabled == True
        ).all()
        for state in states:
            chat_id = state.asset.telegram_chat_id or telegram_service.default_chat_id
            if chat_id:
                logger.info(f"Asset '{state.asset.name}' detected as offline (last seen: {state.last_validation_at})")
                telegram_service.send_asset_offline_alert(
                    chat_id=chat_id,
                    asset_name=state.asset.name,
                    last_seen=state.last_validation_at,
                    offline_threshold_seconds=config.asset_offline_seconds
                )
            state.is_online = False
        if states:
            db.commit()
    except Exception as e:
        logger.error(f"Error checking offline assets: {e}")
        db.rollback()
    finally:
        db.close()
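Stripped of the database query and Telegram call, the offline decision in `check_offline_assets` is a pure time comparison. A testable sketch (the function name is illustrative) under the same semantics:

```python
from datetime import datetime, timedelta

def is_offline(last_validation_at: datetime, now: datetime, offline_seconds: int) -> bool:
    """An asset is offline when its last validation predates now - offline_seconds."""
    return last_validation_at < now - timedelta(seconds=offline_seconds)
```

With the default `GNSS_SERVER_ASSET_OFFLINE_SECONDS=120` and a 30-second check loop, an asset is flagged between 120 and roughly 150 seconds after its last report.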
async def offline_checker_loop():
    """Background loop that periodically checks for offline assets"""
    while True:
        await asyncio.sleep(30)  # Check every 30 seconds
        try:
            await check_offline_assets()
        except Exception as e:
            logger.error(f"Error in offline checker loop: {e}")


@app.on_event("startup")
async def startup_event():
    """Initialize database and background tasks on startup"""
    logger.info("Starting GNSS Guard Server...")
    init_db()
    logger.info("Database initialized")
    # Start background task for offline detection
    asyncio.create_task(offline_checker_loop())
    logger.info("Offline asset checker started")


# =============================================================================
# Web UI Routes
# =============================================================================
@app.get("/", response_class=HTMLResponse)
async def index(request: Request, user: Optional[str] = Depends(get_optional_user)):
    """Main dashboard page"""
    if not user:
        return RedirectResponse(url="/login", status_code=302)
    if not templates:
        return HTMLResponse("<h1>GNSS Guard Server</h1><p>Templates not configured</p>")
    return templates.TemplateResponse("dashboard.html", {
        "request": request,
        "username": user,
        "cache_buster": random.randint(100000, 999999)
    })


@app.get("/login", response_class=HTMLResponse)
async def login_page(request: Request, user: Optional[str] = Depends(get_optional_user)):
    """Login page"""
    if user:
        return RedirectResponse(url="/", status_code=302)
    if not templates:
        return HTMLResponse("""
            <h1>GNSS Guard Server - Login</h1>
            <form method="post" action="/login">
                <input name="username" placeholder="Username"><br>
                <input name="password" type="password" placeholder="Password"><br>
                <button type="submit">Login</button>
            </form>
        """)
    return templates.TemplateResponse("login.html", {
        "request": request,
        "cache_buster": random.randint(100000, 999999)
    })


@app.get("/api/dashboard/assets")
async def dashboard_assets(
    user: str = Depends(get_current_user),
    db: Session = Depends(get_db)
):
    """Get all assets status for dashboard"""
    service = AssetService(db)
    return service.get_all_assets_status()


@app.get("/api/dashboard/asset/{asset_name}/status")
async def dashboard_asset_status(
    asset_name: str,
    at: Optional[float] = None,
    user: str = Depends(get_current_user),
    db: Session = Depends(get_db)
):
    """
    Get detailed status for a specific asset (for dashboard display).
    Matches the format expected by the client dashboard.

    Args:
        at: Optional Unix timestamp to get historical data at that time.
            If not provided, returns the latest data.
    """
    service = AssetService(db)
    asset = service.get_asset_by_name(asset_name)
    if not asset:
        raise HTTPException(status_code=404, detail=f"Asset '{asset_name}' not found")
    if at is not None:
        # Get historical validation at specified timestamp
        latest = service.get_validation_at_timestamp(asset.id, at)
    else:
        latest = service.get_latest_validation(asset.id)
    if not latest:
        return {
            "error": "No validation data available",
            "timestamp": datetime.utcnow().isoformat()
        }

    # Parse JSON fields
    sources_missing = json.loads(latest.sources_missing or "[]")
    sources_stale = json.loads(latest.sources_stale or "[]")
    coordinate_differences = json.loads(latest.coordinate_differences or "{}")
    source_coordinates = json.loads(latest.source_coordinates or "{}")
    validation_details = json.loads(latest.validation_details or "{}")

    # Get enabled sources from validation_details
    expected_sources = validation_details.get("expected_sources", [])

    # Build sources status (matching client format)
    source_display_names = {
        "nmea_primary": "Primary GPS",
        "nmea_secondary": "Secondary GPS",
        "tm_ais": "TM AIS GPS",
        "starlink_gps": "Starlink GPS",
        "starlink_location": "Starlink Location"
    }
    sources = {}
    all_source_names = ["nmea_primary", "nmea_secondary", "tm_ais", "starlink_gps", "starlink_location"]
    for source_name in all_source_names:
        display_name = source_display_names.get(source_name, source_name)
        if source_name not in expected_sources:
            sources[source_name] = {
                "display_name": display_name,
                "enabled": False,
                "status": "not_configured",
                "is_stale": False,
                "coordinates": None,
                "last_update": None,
                "last_update_unix": None
            }
            continue
        source_data = source_coordinates.get(source_name)
        is_stale = source_name in sources_stale
        if not source_data:
            sources[source_name] = {
                "display_name": display_name,
                "enabled": True,
                "status": "missing",
                "is_stale": is_stale,
                "coordinates": None,
                "last_update": None,
                "last_update_unix": None
            }
        else:
            status = "stale" if is_stale else "ok"
            sources[source_name] = {
                "display_name": display_name,
                "enabled": True,
                "status": status,
                "is_stale": is_stale,
                "coordinates": {
                    "latitude": source_data.get("latitude"),
                    "longitude": source_data.get("longitude")
                },
                "last_update": source_data.get("timestamp"),
                "last_update_unix": source_data.get("timestamp_unix")
            }

    # Calculate max distance
    threshold_meters = validation_details.get("threshold_meters", 200.0)
    max_distance_km = None
    max_distance_m = 0.0
    if not latest.is_valid and coordinate_differences:
        for diff_data in coordinate_differences.values():
            if isinstance(diff_data, dict):
                distance = diff_data.get("distance_meters", diff_data.get("distance_m", 0))
                if distance > max_distance_m:
                    max_distance_m = distance
        if max_distance_m > threshold_meters:
            max_distance_km = max_distance_m / 1000.0
    has_alert = (not latest.is_valid and max_distance_km is not None) or len(sources_missing) > 0

    # Find map center
    map_center = None
    for priority_source in ["nmea_primary", "tm_ais", "starlink_location"]:
        if sources.get(priority_source, {}).get("coordinates"):
            coords = sources[priority_source]["coordinates"]
            if coords.get("latitude") and coords.get("longitude"):
                map_center = coords
                break
    if not map_center:
        for source_data in sources.values():
            if source_data.get("coordinates"):
                coords = source_data["coordinates"]
                if coords.get("latitude") and coords.get("longitude"):
                    map_center = coords
                    break

    return {
        "timestamp": datetime.utcnow().isoformat(),
        "validation_timestamp": latest.validation_timestamp,
        "validation_timestamp_unix": latest.validation_timestamp_unix,
        "is_valid": latest.is_valid,
        "has_alert": has_alert,
        "max_distance_km": max_distance_km,
        "threshold_meters": threshold_meters,
        "sources": sources,
        "sources_stale": sources_stale,
        "map_center": map_center,
        "asset_name": asset_name
    }
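The "calculate max distance" step above reduces to a pure function: scan the per-pair distance records, keep the maximum, and report kilometres only when the threshold is breached. A self-contained sketch (the function name is illustrative; the `distance_meters`/`distance_m` fallback mirrors the two key spellings the endpoint tolerates):

```python
def max_breach_km(coordinate_differences: dict, threshold_meters: float = 200.0):
    """Return the largest pairwise distance in km if it exceeds the
    threshold, otherwise None (no alert-worthy divergence)."""
    max_m = 0.0
    for diff in coordinate_differences.values():
        if isinstance(diff, dict):
            d = diff.get("distance_meters", diff.get("distance_m", 0))
            if d > max_m:
                max_m = d
    return max_m / 1000.0 if max_m > threshold_meters else None
```

Returning `None` below the threshold is what lets the dashboard distinguish "sources disagree slightly" from a genuine spoofing-style divergence.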
@app.get("/api/dashboard/asset/{asset_name}/route")
async def dashboard_asset_route(
    asset_name: str,
    hours: int = 72,
    until: Optional[float] = None,
    user: str = Depends(get_current_user),
    db: Session = Depends(get_db)
):
    """
    Get route data for map visualization.

    Args:
        hours: Number of hours of history (default 72)
        until: Optional Unix timestamp to show route up to this time.
               If not provided, shows route up to current time.
    """
    service = AssetService(db)
    asset = service.get_asset_by_name(asset_name)
    if not asset:
        raise HTTPException(status_code=404, detail=f"Asset '{asset_name}' not found")
    return service.get_route_data(asset.id, hours, until_timestamp=until)


# =============================================================================
# Main entry point
# =============================================================================
def run_server():
    """Run the server using uvicorn"""
    import uvicorn
    config = get_config()
    uvicorn.run(
        "server.main:app",
        host=config.server_host,
        port=config.server_port,
        reload=config.debug,
        log_level="info"
    )


if __name__ == "__main__":
    run_server()


@@ -0,0 +1,211 @@
#!/usr/bin/env python3
"""
SQLAlchemy and Pydantic models for GNSS Guard Server
"""
from datetime import datetime
from typing import Dict, Any, List, Optional

from sqlalchemy import Column, Integer, String, Float, Boolean, DateTime, ForeignKey, Text, Index
from sqlalchemy.orm import relationship, declarative_base
from pydantic import BaseModel, Field
import hashlib
import secrets

Base = declarative_base()


# =============================================================================
# SQLAlchemy Database Models
# =============================================================================
class Asset(Base):
    """Asset (client device) registered with the server"""
    __tablename__ = "assets"

    id = Column(Integer, primary_key=True, index=True)
    name = Column(String(255), unique=True, nullable=False, index=True)
    token_hash = Column(String(64), nullable=False)  # SHA-256 hash of token
    created_at = Column(DateTime, default=datetime.utcnow)
    is_active = Column(Boolean, default=True)
    description = Column(String(500), nullable=True)
    # Telegram notification settings (optional override for this asset)
    telegram_chat_id = Column(String(100), nullable=True)  # Override default chat ID
    telegram_enabled = Column(Boolean, default=True)  # Enable/disable notifications for this asset

    # Relationship to validation history
    validations = relationship("ValidationHistory", back_populates="asset", cascade="all, delete-orphan")
    # Relationship to notification state
    notification_state = relationship("AssetNotificationState", back_populates="asset", uselist=False, cascade="all, delete-orphan")

    @staticmethod
    def hash_token(token: str) -> str:
        """Hash a token using SHA-256"""
        return hashlib.sha256(token.encode()).hexdigest()

    @staticmethod
    def generate_token() -> str:
        """Generate a secure random token"""
        return secrets.token_urlsafe(32)

    def verify_token(self, token: str) -> bool:
        """Verify if provided token matches stored hash"""
        return self.token_hash == self.hash_token(token)


class AssetNotificationState(Base):
    """Tracks the previous notification state for each asset to detect changes"""
    __tablename__ = "asset_notification_state"

    id = Column(Integer, primary_key=True, index=True)
    asset_id = Column(Integer, ForeignKey("assets.id", ondelete="CASCADE"), unique=True, nullable=False)
    # Previous state (JSON arrays stored as text)
    prev_sources_missing = Column(Text, nullable=True)  # JSON array
    prev_sources_stale = Column(Text, nullable=True)  # JSON array
    prev_threshold_breached = Column(Boolean, default=False)
    # Last notification timestamp
    last_notification_at = Column(DateTime, nullable=True)
    # Asset online/offline tracking
    is_online = Column(Boolean, default=True)  # Whether asset is currently reporting
    last_validation_at = Column(DateTime, nullable=True)  # Last time we received validation data

    # Relationship
    asset = relationship("Asset", back_populates="notification_state")


class ValidationHistory(Base):
    """Historical validation records from assets"""
    __tablename__ = "validation_history"

    id = Column(Integer, primary_key=True, index=True)
    asset_id = Column(Integer, ForeignKey("assets.id", ondelete="CASCADE"), nullable=False)
    # Validation timestamps
    validation_timestamp = Column(String(50), nullable=False)  # ISO format
    validation_timestamp_unix = Column(Float, nullable=False, index=True)
    # Validation result
    is_valid = Column(Boolean, nullable=False)
    # JSON fields stored as text
    sources_missing = Column(Text, nullable=True)  # JSON array
    sources_stale = Column(Text, nullable=True)  # JSON array
    coordinate_differences = Column(Text, nullable=True)  # JSON object
    source_coordinates = Column(Text, nullable=True)  # JSON object
    validation_details = Column(Text, nullable=True)  # JSON object
    # Server-side metadata
    received_at = Column(DateTime, default=datetime.utcnow, index=True)

    # Relationship
    asset = relationship("Asset", back_populates="validations")

    # Indexes for common queries
    __table_args__ = (
        Index('ix_validation_asset_timestamp', 'asset_id', 'validation_timestamp_unix'),
    )


# =============================================================================
# Pydantic Request/Response Models
# =============================================================================
class AssetCreate(BaseModel):
    """Request model for creating a new asset"""
    name: str = Field(..., min_length=1, max_length=255)
    description: Optional[str] = Field(None, max_length=500)
    telegram_chat_id: Optional[str] = Field(None, max_length=100)  # Override default chat ID
    telegram_enabled: bool = True  # Enable notifications for this asset


class AssetResponse(BaseModel):
    """Response model for asset data"""
    id: int
    name: str
    is_active: bool
    created_at: datetime
    description: Optional[str] = None
    telegram_chat_id: Optional[str] = None
    telegram_enabled: bool = True

    class Config:
        from_attributes = True


class AssetWithToken(AssetResponse):
    """Response model for newly created asset (includes token)"""
    token: str  # Only returned when asset is created


class AssetImport(BaseModel):
    """Request model for importing an asset with a specific token"""
    name: str = Field(..., min_length=1, max_length=255)
    token: str = Field(..., min_length=32, max_length=128)
    description: Optional[str] = Field(None, max_length=500)
    telegram_chat_id: Optional[str] = Field(None, max_length=100)
    telegram_enabled: bool = True


class AssetBatchImport(BaseModel):
    """Request model for batch importing assets"""
    assets: List[AssetImport]


class ValidationSubmission(BaseModel):
    """Request model for submitting validation data"""
    validation_timestamp: str
    validation_timestamp_unix: float
    is_valid: bool
    sources_missing: List[str] = []
    sources_stale: List[str] = []
    coordinate_differences: Dict[str, Any] = {}
    source_coordinates: Dict[str, Any] = {}
    validation_details: Dict[str, Any] = {}


class ValidationBatchSubmission(BaseModel):
    """Request model for submitting multiple validation records"""
    records: List[ValidationSubmission]


class ValidationResponse(BaseModel):
    """Response model for validation data"""
    id: int
    asset_name: str
    validation_timestamp: str
    validation_timestamp_unix: float
    is_valid: bool
    sources_missing: List[str]
    sources_stale: List[str]
    coordinate_differences: Dict[str, Any]
    source_coordinates: Dict[str, Any]
    validation_details: Dict[str, Any]
    received_at: datetime

    class Config:
        from_attributes = True


class AssetStatus(BaseModel):
    """Current status of an asset (latest validation)"""
    asset_name: str
    is_online: bool  # Has reported in last 5 minutes
    last_seen: Optional[datetime] = None
    latest_validation: Optional[ValidationResponse] = None


class LoginRequest(BaseModel):
    """Request model for user login"""
    username: str
    password: str


class LoginResponse(BaseModel):
    """Response model for successful login"""
    message: str
    username: str

@@ -0,0 +1,101 @@
# GNSS Guard Server - Nginx Configuration
# This file is used for initial setup (HTTP only)
# After SSL setup, this file is replaced with the SSL configuration
upstream gnss_server {
    server gnss-server:8000;
}

# =============================================================================
# IP WHITELIST FOR DASHBOARD ACCESS
# =============================================================================
# These IPs can access the web dashboard and admin endpoints.
# The validation API endpoints (/api/v1/validation*) are open to all.
#
# To update: edit this file and run ./deploy_server.sh --restart
# =============================================================================
geo $ip_whitelist {
    default 0;
    # Office IPs - Whitelisted for dashboard access
    213.149.164.73  1;  # Socrates Office 5G
    87.228.228.45   1;  # Thaleias Office
    93.109.218.195  1;  # HQ Cyta
    65.18.217.50    1;  # HQ Cablenet
    93.109.218.196  1;  # HQ Cyta 2
    62.228.7.94     1;  # Socrates Home 3
    195.97.70.162   1;  # Piraeus Office
    # Localhost only (for internal health checks)
    127.0.0.1       1;
    # NOTE: Docker internal networks (10.0.0.0/8, 172.16.0.0/12) are NOT whitelisted
    # to prevent privilege escalation if an attacker gains container access
}

# HTTP server
server {
    listen 80;
    server_name _;

    # Let's Encrypt challenge location - always open
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # =========================================================================
    # PUBLIC ENDPOINTS - Open to all (asset token authentication)
    # =========================================================================
    # Validation API - accessible from anywhere (clients authenticate with tokens)
    location /api/v1/validation {
        proxy_pass http://gnss_server;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
    }

    # Health check endpoint - open
    location /health {
        proxy_pass http://gnss_server;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # =========================================================================
    # RESTRICTED ENDPOINTS - Office IPs only (session authentication)
    # =========================================================================
    # All other endpoints require IP whitelist
    location / {
        # Check IP whitelist
        # TEMPORARILY DISABLED - uncomment to re-enable IP whitelisting
        # if ($ip_whitelist = 0) {
        #     return 403;
        # }
        proxy_pass http://gnss_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
    }

    # Custom error page for 403
    error_page 403 /403.html;
    location = /403.html {
        internal;
        default_type text/html;
        return 403 '<!DOCTYPE html><html><head><title>Access Denied</title><style>body{font-family:sans-serif;display:flex;justify-content:center;align-items:center;height:100vh;margin:0;background:#060b10;color:#e5e9f5;}.container{text-align:center;}.title{font-size:48px;margin-bottom:20px;color:#c62828;}.msg{font-size:18px;color:#9aa3b8;}</style></head><body><div class="container"><div class="title">403</div><div class="msg">Access Denied<br>Your IP is not authorized to access this resource.</div></div></body></html>';
    }
}
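For clarity, the decision the `geo $ip_whitelist` block encodes can be modelled as a pure function: exact-match office IPs plus localhost map to 1, everything else (including Docker-internal ranges) to 0. This Python sketch is only an illustrative model of the matching nginx performs, not part of the deployment; the address list is the one from the config above.

```python
import ipaddress

WHITELIST = {
    "213.149.164.73", "87.228.228.45", "93.109.218.195", "65.18.217.50",
    "93.109.218.196", "62.228.7.94", "195.97.70.162", "127.0.0.1",
}

def ip_whitelisted(client_ip: str) -> int:
    """Return 1 for whitelisted addresses, 0 otherwise (as geo would)."""
    ipaddress.ip_address(client_ip)  # raises ValueError on malformed input
    return 1 if client_ip in WHITELIST else 0
```

Note that `geo` matches on the TCP source address, not on `X-Forwarded-For`, which is why container-internal ranges being absent from the list matters.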


@@ -0,0 +1,148 @@
# GNSS Guard Server - Nginx Configuration with SSL
#
# After obtaining SSL certificate, copy this file:
#   cp gnss-guard-ssl.conf.template gnss-guard-ssl.conf
# Then edit and set your domain, and restart nginx
upstream gnss_server {
    server gnss-server:8000;
}

# =============================================================================
# IP WHITELIST FOR DASHBOARD ACCESS
# =============================================================================
# These IPs can access the web dashboard and admin endpoints.
# The validation API endpoints (/api/v1/validation*) are open to all.
#
# To update: edit this file and run ./deploy_server.sh --restart
# =============================================================================
geo $ip_whitelist {
    default 0;
    # Office IPs - Whitelisted for dashboard access
    213.149.164.73  1;  # Socrates Office 5G
    87.228.228.45   1;  # Thaleias Office
    93.109.218.195  1;  # HQ Cyta
    65.18.217.50    1;  # HQ Cablenet
    93.109.218.196  1;  # HQ Cyta 2
    62.228.7.94     1;  # Socrates Home 3
    195.97.70.162   1;  # Piraeus Office
    # Localhost only (for internal health checks)
    127.0.0.1       1;
    # NOTE: Docker internal networks (10.0.0.0/8, 172.16.0.0/12) are NOT whitelisted
    # to prevent privilege escalation if an attacker gains container access
}

# HTTP -> HTTPS redirect
server {
    listen 80;
    server_name YOUR_DOMAIN_HERE;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS server
server {
    listen 443 ssl;
    http2 on;
    server_name YOUR_DOMAIN_HERE;

    # SSL certificates (Let's Encrypt)
    ssl_certificate /etc/letsencrypt/live/YOUR_DOMAIN_HERE/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/YOUR_DOMAIN_HERE/privkey.pem;

    # SSL configuration
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Modern TLS configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # HSTS - Force HTTPS for 2 years, include subdomains
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;

    # Content Security Policy - restrict resource loading
    # Allows: self, Leaflet from unpkg, map tiles, marker icons
    add_header Content-Security-Policy "default-src 'self'; script-src 'self' https://unpkg.com 'unsafe-inline'; style-src 'self' https://unpkg.com 'unsafe-inline'; img-src 'self' data: https://*.basemaps.cartocdn.com https://raw.githubusercontent.com https://cdnjs.cloudflare.com https://*.openstreetmap.org; font-src 'self'; connect-src 'self'; frame-ancestors 'self'; base-uri 'self'; form-action 'self'" always;

    # =========================================================================
# PUBLIC ENDPOINTS - Open to all (asset token authentication)
# =========================================================================
# Validation API - accessible from anywhere (clients authenticate with tokens)
location /api/v1/validation {
proxy_pass http://gnss_server;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300;
proxy_connect_timeout 300;
}
# Health check endpoint - open
location /health {
proxy_pass http://gnss_server;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# =========================================================================
# RESTRICTED ENDPOINTS - Office IPs only (session authentication)
# =========================================================================
# All other endpoints require IP whitelist
location / {
# Check IP whitelist
# TEMPORARILY DISABLED - uncomment to re-enable IP whitelisting
# if ($ip_whitelist = 0) {
# return 403;
# }
proxy_pass http://gnss_server;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_buffering off;
}
# Static files - also restricted
location /static/ {
# TEMPORARILY DISABLED - uncomment to re-enable IP whitelisting
# if ($ip_whitelist = 0) {
# return 403;
# }
proxy_pass http://gnss_server/static/;
proxy_cache_valid 200 1d;
expires 1d;
add_header Cache-Control "public, immutable";
}
# Custom error page for 403
error_page 403 /403.html;
location = /403.html {
internal;
default_type text/html;
return 403 '<!DOCTYPE html><html><head><title>Access Denied</title><style>body{font-family:sans-serif;display:flex;justify-content:center;align-items:center;height:100vh;margin:0;background:#060b10;color:#e5e9f5;}.container{text-align:center;}.title{font-size:48px;margin-bottom:20px;color:#c62828;}.msg{font-size:18px;color:#9aa3b8;}</style></head><body><div class="container"><div class="title">403</div><div class="msg">Access Denied<br>Your IP is not authorized to access this resource.</div></div></body></html>';
}
}
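The `geo $ip_whitelist` map above resolves each client address to 0 or 1, and the (currently commented-out) `if` guard turns 0 into a 403 for the restricted locations. A minimal Python sketch of that same decision, assuming a flat set of exact addresses (the real map could also carry CIDR ranges):

```python
import ipaddress

# Hypothetical mirror of the nginx `geo $ip_whitelist` block:
# listed addresses map to 1, everything else takes the default of 0.
WHITELIST = {
    "213.149.164.73",  # Socrates Office 5G
    "127.0.0.1",       # localhost (internal health checks)
}

def ip_whitelist(client_ip: str) -> int:
    """Return 1 for whitelisted clients, 0 otherwise (the geo default)."""
    addr = ipaddress.ip_address(client_ip)  # raises ValueError on malformed input
    return 1 if str(addr) in WHITELIST else 0

def gate(client_ip: str) -> int:
    """Emulate `if ($ip_whitelist = 0) { return 403; }` for a restricted location."""
    return 403 if ip_whitelist(client_ip) == 0 else 200
```

Note that nginx evaluates the `geo` map against `$remote_addr`, so behind another proxy the map would see the proxy's address, not the client's.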


@@ -0,0 +1,40 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# Gzip compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml application/json application/javascript application/xml;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
include /etc/nginx/conf.d/*.conf;
}
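The `log_format main` directive above emits one line per request; a quick way to sanity-check log output is a small parser. A sketch, assuming the standard field layout produced by that exact format string:

```python
import re

# Matches the `main` log_format defined above:
# $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent
# "$http_referer" "$http_user_agent" "$http_x_forwarded_for"
LOG_RE = re.compile(
    r'(?P<addr>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)" "(?P<xff>[^"]*)"'
)

def parse_access_line(line: str) -> dict:
    """Split one access-log line into its named fields."""
    m = LOG_RE.match(line)
    if not m:
        raise ValueError("line does not match the main log_format")
    return m.groupdict()

sample = ('203.0.113.7 - - [04/Mar/2026:19:28:53 +0200] "GET /health HTTP/1.1" '
          '200 15 "-" "curl/8.5.0" "-"')
fields = parse_access_line(sample)
```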


@@ -0,0 +1,28 @@
# GNSS Guard Server Dependencies
# Web framework
fastapi>=0.104.0
uvicorn[standard]>=0.24.0
# Database
sqlalchemy>=2.0.0
psycopg2-binary>=2.9.9 # PostgreSQL driver
alembic>=1.12.0 # Database migrations (optional)
# Configuration
pydantic>=2.5.0
pydantic-settings>=2.1.0
python-dotenv>=1.0.0
# Templates and static files
jinja2>=3.1.2
python-multipart>=0.0.6 # For form data
# Security
passlib[bcrypt]>=1.7.4 # Password hashing
slowapi>=0.1.9 # Rate limiting
# HTTP client (for health checks and Telegram API)
httpx>=0.25.0
requests>=2.31.0


@@ -0,0 +1,4 @@
"""
API routes for GNSS Guard Server
"""


@@ -0,0 +1,488 @@
#!/usr/bin/env python3
"""
REST API endpoints for GNSS Guard Server
Handles validation data submission and retrieval
"""
import json
import logging
from datetime import datetime, timedelta, timezone
from typing import List, Optional
from fastapi import APIRouter, Depends, HTTPException, Header, Query
from sqlalchemy.orm import Session
from sqlalchemy import desc
from database import get_db
from models import (
Asset, ValidationHistory, AssetNotificationState,
ValidationSubmission, ValidationBatchSubmission,
ValidationResponse, AssetStatus, AssetResponse, AssetCreate, AssetWithToken,
AssetImport, AssetBatchImport
)
from routes.auth import get_current_user
from services.telegram_service import get_telegram_service
logger = logging.getLogger("gnss_guard.server.api")
router = APIRouter(prefix="/api/v1", tags=["api"])
# =============================================================================
# Asset Token Authentication Dependency
# =============================================================================
async def get_current_asset(
authorization: str = Header(..., description="Bearer token for asset authentication"),
db: Session = Depends(get_db)
) -> Asset:
"""
Dependency to authenticate asset using Bearer token.
Returns the authenticated asset or raises 401.
"""
if not authorization.startswith("Bearer "):
raise HTTPException(status_code=401, detail="Invalid authorization header format")
token = authorization[7:] # Remove "Bearer " prefix
token_hash = Asset.hash_token(token)
asset = db.query(Asset).filter(
Asset.token_hash == token_hash,
Asset.is_active == True
).first()
if not asset:
raise HTTPException(status_code=401, detail="Invalid or inactive token")
return asset
# =============================================================================
# Validation Endpoints (Asset Authentication Required)
# =============================================================================
@router.post("/validation", status_code=201)
async def submit_validation(
data: ValidationSubmission,
asset: Asset = Depends(get_current_asset),
db: Session = Depends(get_db)
) -> dict:
"""
Submit a single validation record from an asset.
Also triggers Telegram notifications if state changed.
"""
try:
validation = ValidationHistory(
asset_id=asset.id,
validation_timestamp=data.validation_timestamp,
validation_timestamp_unix=data.validation_timestamp_unix,
is_valid=data.is_valid,
sources_missing=json.dumps(data.sources_missing),
sources_stale=json.dumps(data.sources_stale),
coordinate_differences=json.dumps(data.coordinate_differences),
source_coordinates=json.dumps(data.source_coordinates),
validation_details=json.dumps(data.validation_details),
)
db.add(validation)
db.commit()
logger.info(f"Validation received from asset '{asset.name}' at {data.validation_timestamp}")
# Process Telegram notification (will only send if state changed)
try:
telegram_service = get_telegram_service()
validation_data = {
"sources_missing": data.sources_missing,
"sources_stale": data.sources_stale,
"validation_details": data.validation_details,
"source_coordinates": data.source_coordinates,
}
telegram_service.process_validation(db, asset, validation_data)
except Exception as e:
logger.warning(f"Telegram notification error for {asset.name}: {e}")
return {
"status": "success",
"message": "Validation record saved",
"id": validation.id
}
except Exception as e:
logger.error(f"Error saving validation from {asset.name}: {e}")
db.rollback()
raise HTTPException(status_code=500, detail=str(e))
@router.post("/validation/batch", status_code=201)
async def submit_validation_batch(
data: ValidationBatchSubmission,
asset: Asset = Depends(get_current_asset),
db: Session = Depends(get_db)
) -> dict:
"""
Submit multiple validation records (for catching up after an offline period).
Only sends Telegram notification for the most recent record to avoid spam.
"""
try:
saved_count = 0
skipped_count = 0
latest_record = None
latest_timestamp = 0
for record in data.records:
# Check if this timestamp already exists for this asset
existing = db.query(ValidationHistory).filter(
ValidationHistory.asset_id == asset.id,
ValidationHistory.validation_timestamp_unix == record.validation_timestamp_unix
).first()
if existing:
skipped_count += 1
continue
validation = ValidationHistory(
asset_id=asset.id,
validation_timestamp=record.validation_timestamp,
validation_timestamp_unix=record.validation_timestamp_unix,
is_valid=record.is_valid,
sources_missing=json.dumps(record.sources_missing),
sources_stale=json.dumps(record.sources_stale),
coordinate_differences=json.dumps(record.coordinate_differences),
source_coordinates=json.dumps(record.source_coordinates),
validation_details=json.dumps(record.validation_details),
)
db.add(validation)
saved_count += 1
# Track the most recent record for notification
if record.validation_timestamp_unix > latest_timestamp:
latest_timestamp = record.validation_timestamp_unix
latest_record = record
db.commit()
logger.info(f"Batch validation from '{asset.name}': {saved_count} saved, {skipped_count} skipped")
# Process Telegram notification for the most recent record only
if latest_record:
try:
telegram_service = get_telegram_service()
validation_data = {
"sources_missing": latest_record.sources_missing,
"sources_stale": latest_record.sources_stale,
"validation_details": latest_record.validation_details,
"source_coordinates": latest_record.source_coordinates,
}
telegram_service.process_validation(db, asset, validation_data)
except Exception as e:
logger.warning(f"Telegram notification error for {asset.name}: {e}")
return {
"status": "success",
"saved": saved_count,
"skipped": skipped_count
}
except Exception as e:
logger.error(f"Error saving batch validation from {asset.name}: {e}")
db.rollback()
raise HTTPException(status_code=500, detail=str(e))
# =============================================================================
# Read Endpoints (Session Authentication Required)
# =============================================================================
@router.get("/assets", response_model=List[AssetResponse])
async def list_assets(
user: str = Depends(get_current_user),
db: Session = Depends(get_db)
) -> List[AssetResponse]:
"""
List all registered assets.
Requires user session authentication.
"""
assets = db.query(Asset).filter(Asset.is_active == True).all()
return assets
@router.get("/assets/{asset_name}/status")
async def get_asset_status(
asset_name: str,
user: str = Depends(get_current_user),
db: Session = Depends(get_db)
) -> AssetStatus:
"""
Get current status of an asset (latest validation).
Requires user session authentication.
"""
asset = db.query(Asset).filter(
Asset.name == asset_name,
Asset.is_active == True
).first()
if not asset:
raise HTTPException(status_code=404, detail=f"Asset '{asset_name}' not found")
# Get latest validation
latest = db.query(ValidationHistory).filter(
ValidationHistory.asset_id == asset.id
).order_by(desc(ValidationHistory.validation_timestamp_unix)).first()
# Get online status from notification state (consistent with Telegram alerts)
notification_state = db.query(AssetNotificationState).filter(
AssetNotificationState.asset_id == asset.id
).first()
is_online = notification_state.is_online if notification_state else False
last_seen = notification_state.last_validation_at if notification_state else None
# Fall back to validation timestamp if no notification state
if not last_seen and latest and latest.received_at:
last_seen = latest.received_at
latest_validation = None
if latest:
latest_validation = ValidationResponse(
id=latest.id,
asset_name=asset.name,
validation_timestamp=latest.validation_timestamp,
validation_timestamp_unix=latest.validation_timestamp_unix,
is_valid=latest.is_valid,
sources_missing=json.loads(latest.sources_missing or "[]"),
sources_stale=json.loads(latest.sources_stale or "[]"),
coordinate_differences=json.loads(latest.coordinate_differences or "{}"),
source_coordinates=json.loads(latest.source_coordinates or "{}"),
validation_details=json.loads(latest.validation_details or "{}"),
received_at=latest.received_at
)
return AssetStatus(
asset_name=asset.name,
is_online=is_online,
last_seen=last_seen,
latest_validation=latest_validation
)
@router.get("/assets/{asset_name}/history")
async def get_asset_history(
asset_name: str,
hours: int = Query(default=72, ge=1, le=168, description="Hours of history (max 168 = 7 days)"),
user: str = Depends(get_current_user),
db: Session = Depends(get_db)
) -> List[ValidationResponse]:
"""
Get validation history for an asset (default: 72 hours).
Requires user session authentication.
"""
asset = db.query(Asset).filter(
Asset.name == asset_name,
Asset.is_active == True
).first()
if not asset:
raise HTTPException(status_code=404, detail=f"Asset '{asset_name}' not found")
# Calculate cutoff timestamp (timezone-aware: a naive utcnow().timestamp() would be
# interpreted as local time and skew the cutoff on non-UTC hosts)
cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
cutoff_unix = cutoff.timestamp()
# Get validation history
validations = db.query(ValidationHistory).filter(
ValidationHistory.asset_id == asset.id,
ValidationHistory.validation_timestamp_unix >= cutoff_unix
).order_by(desc(ValidationHistory.validation_timestamp_unix)).all()
return [
ValidationResponse(
id=v.id,
asset_name=asset.name,
validation_timestamp=v.validation_timestamp,
validation_timestamp_unix=v.validation_timestamp_unix,
is_valid=v.is_valid,
sources_missing=json.loads(v.sources_missing or "[]"),
sources_stale=json.loads(v.sources_stale or "[]"),
coordinate_differences=json.loads(v.coordinate_differences or "{}"),
source_coordinates=json.loads(v.source_coordinates or "{}"),
validation_details=json.loads(v.validation_details or "{}"),
received_at=v.received_at
)
for v in validations
]
# =============================================================================
# Admin Endpoints (Session Authentication Required)
# =============================================================================
@router.post("/admin/assets", response_model=AssetWithToken, status_code=201)
async def create_asset(
data: AssetCreate,
user: str = Depends(get_current_user),
db: Session = Depends(get_db)
) -> AssetWithToken:
"""
Create a new asset and return its token.
Requires user session authentication.
"""
# Check if asset already exists
existing = db.query(Asset).filter(Asset.name == data.name).first()
if existing:
raise HTTPException(status_code=400, detail=f"Asset '{data.name}' already exists")
# Generate token
token = Asset.generate_token()
token_hash = Asset.hash_token(token)
asset = Asset(
name=data.name,
token_hash=token_hash,
description=data.description,
telegram_chat_id=data.telegram_chat_id,
telegram_enabled=data.telegram_enabled
)
db.add(asset)
db.commit()
db.refresh(asset)
logger.info(f"Created new asset: {data.name}")
# Return asset with the unhashed token (only shown once!)
return AssetWithToken(
id=asset.id,
name=asset.name,
is_active=asset.is_active,
created_at=asset.created_at,
description=asset.description,
telegram_chat_id=asset.telegram_chat_id,
telegram_enabled=asset.telegram_enabled,
token=token
)
@router.delete("/admin/assets/{asset_name}")
async def deactivate_asset(
asset_name: str,
user: str = Depends(get_current_user),
db: Session = Depends(get_db)
) -> dict:
"""
Deactivate an asset (soft delete).
Requires user session authentication.
"""
asset = db.query(Asset).filter(Asset.name == asset_name).first()
if not asset:
raise HTTPException(status_code=404, detail=f"Asset '{asset_name}' not found")
asset.is_active = False
db.commit()
logger.info(f"Deactivated asset: {asset_name}")
return {"status": "success", "message": f"Asset '{asset_name}' deactivated"}
@router.post("/admin/assets/import", response_model=AssetResponse, status_code=201)
async def import_asset(
data: AssetImport,
user: str = Depends(get_current_user),
db: Session = Depends(get_db)
) -> AssetResponse:
"""
Import an asset with a specific token.
If asset exists, updates its token. If not, creates it.
Requires user session authentication.
"""
# Hash the provided token
token_hash = Asset.hash_token(data.token)
# Check if asset already exists
existing = db.query(Asset).filter(Asset.name == data.name).first()
if existing:
# Update existing asset's token
existing.token_hash = token_hash
existing.is_active = True
if data.description:
existing.description = data.description
if data.telegram_chat_id is not None:
existing.telegram_chat_id = data.telegram_chat_id
existing.telegram_enabled = data.telegram_enabled
db.commit()
db.refresh(existing)
logger.info(f"Updated token for existing asset: {data.name}")
return existing
else:
# Create new asset with provided token
asset = Asset(
name=data.name,
token_hash=token_hash,
description=data.description,
telegram_chat_id=data.telegram_chat_id,
telegram_enabled=data.telegram_enabled
)
db.add(asset)
db.commit()
db.refresh(asset)
logger.info(f"Imported new asset: {data.name}")
return asset
@router.post("/admin/assets/import/batch")
async def import_assets_batch(
data: AssetBatchImport,
user: str = Depends(get_current_user),
db: Session = Depends(get_db)
) -> dict:
"""
Batch import assets with specific tokens.
Creates new assets or updates existing ones.
Requires user session authentication.
"""
created = 0
updated = 0
errors = []
for asset_data in data.assets:
try:
token_hash = Asset.hash_token(asset_data.token)
existing = db.query(Asset).filter(Asset.name == asset_data.name).first()
if existing:
existing.token_hash = token_hash
existing.is_active = True
if asset_data.description:
existing.description = asset_data.description
if asset_data.telegram_chat_id is not None:
existing.telegram_chat_id = asset_data.telegram_chat_id
existing.telegram_enabled = asset_data.telegram_enabled
updated += 1
logger.info(f"Updated token for asset: {asset_data.name}")
else:
asset = Asset(
name=asset_data.name,
token_hash=token_hash,
description=asset_data.description,
telegram_chat_id=asset_data.telegram_chat_id,
telegram_enabled=asset_data.telegram_enabled
)
db.add(asset)
created += 1
logger.info(f"Created asset: {asset_data.name}")
except Exception as e:
errors.append({"name": asset_data.name, "error": str(e)})
logger.error(f"Failed to import asset {asset_data.name}: {e}")
db.commit()
return {
"status": "success",
"created": created,
"updated": updated,
"errors": errors
}
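`get_current_asset` above strips the `Bearer ` prefix and looks the token up by hash, so the raw token is never stored server-side. A sketch of both halves of that exchange, assuming `Asset.hash_token` is a plain SHA-256 hex digest (the actual implementation lives in `models.py` and may differ):

```python
import hashlib
import secrets

def hash_token(token: str) -> str:
    # Assumed stand-in for Asset.hash_token: persist only the digest.
    return hashlib.sha256(token.encode()).hexdigest()

# Provisioning: generate the token once; only its hash goes in the database.
token = secrets.token_urlsafe(32)
stored_hash = hash_token(token)

# Client side: the asset sends the raw token with every submission.
headers = {"Authorization": f"Bearer {token}"}

# Server side: mirror of get_current_asset's parsing and lookup.
def authenticate(authorization: str) -> bool:
    if not authorization.startswith("Bearer "):
        return False
    return hash_token(authorization[7:]) == stored_hash
```

Because only the hash is stored, a leaked database does not reveal usable tokens; the trade-off is that a lost token cannot be recovered, only reissued via the import endpoints.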


@@ -0,0 +1,150 @@
#!/usr/bin/env python3
"""
Authentication routes for GNSS Guard Server
Handles user session authentication for the web UI
"""
import logging
from datetime import datetime, timedelta
from typing import Optional
from fastapi import APIRouter, Depends, HTTPException, Response, Request
from fastapi.responses import RedirectResponse
from pydantic import BaseModel
from slowapi import Limiter
from slowapi.util import get_remote_address
from config import get_config
logger = logging.getLogger("gnss_guard.server.auth")
router = APIRouter(tags=["auth"])
# Rate limiter instance (uses app.state.limiter set in main.py)
limiter = Limiter(key_func=get_remote_address)
# Simple in-memory session storage (for single-user scenario)
# In production with multiple servers, use Redis or database
_sessions: dict = {}
class LoginRequest(BaseModel):
username: str
password: str
def create_session(username: str) -> str:
"""Create a new session and return session ID"""
import secrets
session_id = secrets.token_urlsafe(32)
config = get_config()
_sessions[session_id] = {
"username": username,
"created_at": datetime.utcnow(),
"expires_at": datetime.utcnow() + timedelta(minutes=config.session_expire_minutes)
}
return session_id
def validate_session(session_id: str) -> Optional[str]:
"""Validate session and return username if valid"""
if not session_id or session_id not in _sessions:
return None
session = _sessions[session_id]
if datetime.utcnow() > session["expires_at"]:
del _sessions[session_id]
return None
return session["username"]
def get_current_user(request: Request) -> str:
"""
Dependency to get current authenticated user.
Raises 401 if not authenticated.
"""
session_id = request.cookies.get("session_id")
username = validate_session(session_id)
if not username:
raise HTTPException(
status_code=401,
detail="Not authenticated",
headers={"WWW-Authenticate": "Bearer"}
)
return username
def get_optional_user(request: Request) -> Optional[str]:
"""
Dependency to get current user if authenticated, None otherwise.
"""
session_id = request.cookies.get("session_id")
return validate_session(session_id)
@router.post("/login")
@limiter.limit("5/minute") # Rate limit: 5 login attempts per minute per IP
async def login(request: Request, data: LoginRequest, response: Response):
"""
Login endpoint - validates credentials and sets session cookie.
Rate limited to prevent brute force attacks.
"""
config = get_config()
# Verify credentials against hardcoded user
if data.username != config.web_username or data.password != config.web_password:
logger.warning(f"Failed login attempt for user: {data.username} from IP: {request.client.host}")
raise HTTPException(status_code=401, detail="Invalid credentials")
# Create session
session_id = create_session(data.username)
# Set session cookie
# secure=True ensures cookie only sent over HTTPS
response.set_cookie(
key="session_id",
value=session_id,
httponly=True,
secure=True, # Only send over HTTPS
samesite="lax",
max_age=config.session_expire_minutes * 60
)
logger.info(f"User logged in: {data.username}")
return {"message": "Login successful", "username": data.username}
@router.post("/logout")
async def logout(request: Request, response: Response):
"""
Logout endpoint - clears session.
"""
session_id = request.cookies.get("session_id")
if session_id and session_id in _sessions:
del _sessions[session_id]
response.delete_cookie("session_id")
return {"message": "Logged out successfully"}
@router.get("/auth/check")
async def check_auth(request: Request):
"""
Check if current session is authenticated.
"""
session_id = request.cookies.get("session_id")
username = validate_session(session_id)
if username:
return {"authenticated": True, "username": username}
else:
return {"authenticated": False}


@@ -0,0 +1,4 @@
"""
Services for GNSS Guard Server
"""


@@ -0,0 +1,225 @@
#!/usr/bin/env python3
"""
Asset management service for GNSS Guard Server
"""
import json
import logging
from datetime import datetime, timedelta
from typing import List, Optional, Dict, Any
from sqlalchemy.orm import Session
from sqlalchemy import desc, func
from models import Asset, ValidationHistory, AssetNotificationState
logger = logging.getLogger("gnss_guard.server.asset_service")
class AssetService:
"""Service for asset-related operations"""
def __init__(self, db: Session):
self.db = db
def get_all_assets(self, include_inactive: bool = False) -> List[Asset]:
"""Get all assets"""
query = self.db.query(Asset)
if not include_inactive:
query = query.filter(Asset.is_active == True)
return query.all()
def get_asset_by_name(self, name: str) -> Optional[Asset]:
"""Get asset by name"""
return self.db.query(Asset).filter(Asset.name == name).first()
def get_asset_by_token(self, token: str) -> Optional[Asset]:
"""Get active asset by token"""
token_hash = Asset.hash_token(token)
return self.db.query(Asset).filter(
Asset.token_hash == token_hash,
Asset.is_active == True
).first()
def get_latest_validation(self, asset_id: int) -> Optional[ValidationHistory]:
"""Get the latest validation record for an asset"""
return self.db.query(ValidationHistory).filter(
ValidationHistory.asset_id == asset_id
).order_by(desc(ValidationHistory.validation_timestamp_unix)).first()
def get_validation_at_timestamp(
self,
asset_id: int,
target_timestamp: float
) -> Optional[ValidationHistory]:
"""
Get the validation record closest to (but not after) the specified timestamp.
This is useful for viewing historical data at a specific point in time.
"""
return self.db.query(ValidationHistory).filter(
ValidationHistory.asset_id == asset_id,
ValidationHistory.validation_timestamp_unix <= target_timestamp
).order_by(desc(ValidationHistory.validation_timestamp_unix)).first()
def get_validation_history(
self,
asset_id: int,
hours: int = 72,
limit: Optional[int] = None
) -> List[ValidationHistory]:
"""Get validation history for an asset"""
from datetime import timezone  # local import: a naive utcnow().timestamp() would assume local time
cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
cutoff_unix = cutoff.timestamp()
query = self.db.query(ValidationHistory).filter(
ValidationHistory.asset_id == asset_id,
ValidationHistory.validation_timestamp_unix >= cutoff_unix
).order_by(desc(ValidationHistory.validation_timestamp_unix))
if limit:
query = query.limit(limit)
return query.all()
def get_all_assets_status(self) -> List[Dict[str, Any]]:
"""Get status summary for all active assets"""
assets = self.get_all_assets()
statuses = []
for asset in assets:
latest = self.get_latest_validation(asset.id)
# Get online status from notification state (consistent with Telegram alerts)
notification_state = self.db.query(AssetNotificationState).filter(
AssetNotificationState.asset_id == asset.id
).first()
is_online = notification_state.is_online if notification_state else False
last_seen = notification_state.last_validation_at if notification_state else None
# Fall back to validation timestamp if no notification state
if not last_seen and latest and latest.received_at:
last_seen = latest.received_at
is_valid = None
has_distance_alert = False # True if distance threshold exceeded
if latest:
is_valid = latest.is_valid
# Check if there's a distance alert (AT RISK vs DEGRADED)
if not is_valid:
validation_details = json.loads(latest.validation_details or "{}")
coordinate_differences = json.loads(latest.coordinate_differences or "{}")
threshold = validation_details.get("threshold_meters", 200)
max_distance = validation_details.get("max_distance_meters", 0)
# Also check coordinate_differences for max distance
if not max_distance and coordinate_differences:
for diff_data in coordinate_differences.values():
if isinstance(diff_data, dict):
dist = diff_data.get("distance_meters", 0)
if dist > max_distance:
max_distance = dist
has_distance_alert = max_distance > threshold
statuses.append({
"name": asset.name,
"is_online": is_online,
"is_valid": is_valid,
"has_distance_alert": has_distance_alert,
"last_seen": last_seen.isoformat() if last_seen else None,
"description": asset.description
})
return statuses
def get_route_data(
self,
asset_id: int,
hours: int = 72,
until_timestamp: Optional[float] = None
) -> List[Dict[str, Any]]:
"""
Get route data for map visualization.
Returns list of points with coordinates and validation status.
Args:
asset_id: The asset ID
hours: Number of hours of history to retrieve
until_timestamp: Optional Unix timestamp to show route up to this time.
If provided, returns `hours` of history ending at this timestamp.
"""
if until_timestamp is not None:
# Get history ending at the specified timestamp
cutoff_unix = until_timestamp - (hours * 3600)
validations = self.db.query(ValidationHistory).filter(
ValidationHistory.asset_id == asset_id,
ValidationHistory.validation_timestamp_unix >= cutoff_unix,
ValidationHistory.validation_timestamp_unix <= until_timestamp
).order_by(desc(ValidationHistory.validation_timestamp_unix)).all()
else:
validations = self.get_validation_history(asset_id, hours)
route_points = []
for v in validations:
source_coordinates = json.loads(v.source_coordinates or "{}")
# Get primary coordinate (prefer nmea_primary, then tm_ais, then starlink_location, then any)
coord = None
for source in ["nmea_primary", "tm_ais", "starlink_location"]:
if source in source_coordinates:
coord = source_coordinates[source]
break
if not coord and source_coordinates:
# Use first available
coord = list(source_coordinates.values())[0]
if coord and coord.get("latitude") is not None and coord.get("longitude") is not None:  # explicit None checks: 0.0 is a valid coordinate
# Determine status color
sources_missing = json.loads(v.sources_missing or "[]")
sources_stale = json.loads(v.sources_stale or "[]")
validation_details = json.loads(v.validation_details or "{}")
threshold = validation_details.get("threshold_meters", 200)
max_distance = validation_details.get("max_distance_meters", 0)
if not v.is_valid and max_distance > threshold:
status = "alert" # Red - distance exceeded
elif sources_missing or sources_stale:
status = "degraded" # Orange - missing/stale
else:
status = "valid" # Green - all OK
route_points.append({
"id": v.id,
"timestamp": v.validation_timestamp,
"timestamp_unix": v.validation_timestamp_unix,
"latitude": coord["latitude"],
"longitude": coord["longitude"],
"status": status,
"is_valid": v.is_valid,
"sources_missing": sources_missing,
"sources_stale": sources_stale,
"max_distance_m": max_distance,
"threshold_m": threshold
})
return route_points
def cleanup_old_validations(self, days: int = 90) -> int:
"""Remove validation records older than specified days"""
from datetime import timezone  # local import: a naive utcnow().timestamp() would assume local time
cutoff = datetime.now(timezone.utc) - timedelta(days=days)
cutoff_unix = cutoff.timestamp()
deleted = self.db.query(ValidationHistory).filter(
ValidationHistory.validation_timestamp_unix < cutoff_unix
).delete()
self.db.commit()
logger.info(f"Cleaned up {deleted} old validation records")
return deleted
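The three-way status used for map colouring in `get_route_data` (`alert` / `degraded` / `valid`) depends only on the validation flags, so it can be factored into a pure function and tested on its own. A sketch of that decision, using the same 200 m default threshold:

```python
def classify_point(is_valid, sources_missing, sources_stale,
                   max_distance_m, threshold_m=200):
    """Mirror of the status logic in get_route_data."""
    if not is_valid and max_distance_m > threshold_m:
        return "alert"      # red: distance threshold exceeded (possible jamming/spoofing)
    if sources_missing or sources_stale:
        return "degraded"   # orange: some sources missing or stale
    return "valid"          # green: all sources agree
```

Note the ordering: a distance breach outranks missing or stale sources, so a point can be red even while sources are also degraded.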


@@ -0,0 +1,366 @@
#!/usr/bin/env python3
"""
Server-side Telegram Notification Service for GNSS Guard

Sends alerts to Telegram for GPS validation state changes:
- Sources becoming missing or recovering
- Sources becoming stale or recovering
- Distance threshold breaches (possible jamming/spoofing)
"""
import json
import logging

import requests

from datetime import datetime
from typing import Dict, Any, List, Optional, Set

from sqlalchemy.orm import Session

from config import get_config
from models import Asset, AssetNotificationState

logger = logging.getLogger("gnss_guard.server.telegram")


class TelegramService:
    """Server-side Telegram notification service"""

    def __init__(self):
        """Initialize the Telegram service from application config"""
        config = get_config()
        self.bot_token = config.telegram_bot_token
        self.default_chat_id = config.telegram_chat_id
        self.enabled = config.telegram_enabled
        if self.enabled:
            self.api_url = f"https://api.telegram.org/bot{self.bot_token}"
            logger.info("Telegram service initialized")
        else:
            self.api_url = None
            logger.info("Telegram service disabled (no bot token or chat ID configured)")

    @staticmethod
    def escape_html(text: str) -> str:
        """Escape HTML special characters for Telegram's HTML parse mode"""
        text = str(text)
        text = text.replace('&', '&amp;')
        text = text.replace('<', '&lt;')
        text = text.replace('>', '&gt;')
        return text

    def _send_message(self, chat_id: str, message: str) -> bool:
        """Send a message to Telegram"""
        if not self.enabled:
            return False
        try:
            url = f"{self.api_url}/sendMessage"
            payload = {
                "chat_id": chat_id,
                "text": message,
                "parse_mode": "HTML",
                "disable_web_page_preview": True
            }
            response = requests.post(url, json=payload, timeout=10)
            if response.status_code == 200:
                return True
            else:
                logger.error(f"Telegram API error: {response.status_code} - {response.text}")
                return False
        except Exception as e:
            logger.error(f"Failed to send Telegram message: {e}")
            return False
    def _get_chat_id_for_asset(self, asset: Asset) -> Optional[str]:
        """Get the chat ID to use for an asset (asset-specific or default)"""
        if not asset.telegram_enabled:
            return None
        return asset.telegram_chat_id or self.default_chat_id

    def process_validation(
        self,
        db: Session,
        asset: Asset,
        validation_data: Dict[str, Any]
    ) -> bool:
        """
        Process a validation submission and send a notification if state changed.
        Also handles online/offline state transitions.

        Args:
            db: Database session
            asset: Asset that submitted the validation
            validation_data: Validation data from the submission

        Returns:
            bool: True if a notification was sent
        """
        chat_id = self._get_chat_id_for_asset(asset)
        # Get or create the notification state for this asset
        state = db.query(AssetNotificationState).filter(
            AssetNotificationState.asset_id == asset.id
        ).first()
        if not state:
            state = AssetNotificationState(asset_id=asset.id)
            db.add(state)
            db.flush()
        notification_sent = False
        now = datetime.utcnow()
        # Check whether the asset was offline and is now back online
        was_offline = state.is_online is False and state.last_validation_at is not None
        if was_offline and self.enabled and chat_id:
            # Calculate how long it was offline
            offline_duration = (now - state.last_validation_at).total_seconds() if state.last_validation_at else None
            notification_sent = self.send_asset_online_alert(
                chat_id=chat_id,
                asset_name=asset.name,
                offline_duration_seconds=offline_duration
            )
        # Update online status and last validation time
        state.is_online = True
        state.last_validation_at = now
        # Skip further processing if Telegram is disabled
        if not self.enabled or not chat_id:
            db.commit()
            return notification_sent
        # Parse the current state from the validation
        sources_missing = set(validation_data.get("sources_missing", []))
        sources_stale = set(validation_data.get("sources_stale", []))
        validation_details = validation_data.get("validation_details", {})
        threshold = validation_details.get("threshold_meters", 0)
        max_distance = validation_details.get("max_distance_meters", 0)
        threshold_breached = max_distance > threshold if max_distance and threshold else False
        # Parse the previous state
        prev_missing = set(json.loads(state.prev_sources_missing or "[]"))
        prev_stale = set(json.loads(state.prev_sources_stale or "[]"))
        prev_threshold_breached = state.prev_threshold_breached or False
        # Detect changes
        missing_added = sources_missing - prev_missing
        missing_removed = prev_missing - sources_missing
        stale_added = sources_stale - prev_stale
        stale_removed = prev_stale - sources_stale
        threshold_changed = threshold_breached != prev_threshold_breached
        has_state_change = bool(
            missing_added or missing_removed or
            stale_added or stale_removed or
            threshold_changed
        )
        if has_state_change:
            logger.info(f"State change detected for {asset.name}")
            # Build and send the notification
            source_coordinates = validation_data.get("source_coordinates", {})
            message = self._build_state_change_message(
                asset_name=asset.name,
                missing_added=missing_added,
                missing_removed=missing_removed,
                stale_added=stale_added,
                stale_removed=stale_removed,
                threshold_breached=threshold_breached,
                prev_threshold_breached=prev_threshold_breached,
                max_distance_meters=max_distance,
                threshold_meters=threshold,
                source_coordinates=source_coordinates
            )
            if self._send_message(chat_id, message):
                state.last_notification_at = now
                logger.info(f"Notification sent for {asset.name}")
                notification_sent = True
        # Persist the current state for the next comparison
        state.prev_sources_missing = json.dumps(list(sources_missing))
        state.prev_sources_stale = json.dumps(list(sources_stale))
        state.prev_threshold_breached = threshold_breached
        db.commit()
        return notification_sent
    def _build_state_change_message(
        self,
        asset_name: str,
        missing_added: Set[str],
        missing_removed: Set[str],
        stale_added: Set[str],
        stale_removed: Set[str],
        threshold_breached: bool,
        prev_threshold_breached: bool,
        max_distance_meters: float,
        threshold_meters: float,
        source_coordinates: Dict[str, Any]
    ) -> str:
        """Build the state-change notification message"""
        timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S UTC")
        # Determine whether this is a degradation or a recovery
        is_degradation = missing_added or stale_added or (threshold_breached and not prev_threshold_breached)
        is_recovery = missing_removed or stale_removed or (not threshold_breached and prev_threshold_breached)
        if is_degradation and not is_recovery:
            emoji = "🚨"
            title = "GNSS STATE DEGRADED"
        elif is_recovery and not is_degradation:
            emoji = "✅"
            title = "GNSS STATE RECOVERED"
        else:
            emoji = "⚠️"
            title = "GNSS STATE CHANGED"
        message = (
            f"{emoji} <b>{title}</b>\n\n"
            f"📍 <b>Asset:</b> {self.escape_html(asset_name)}\n"
            f"⏰ <b>Time:</b> {timestamp}\n\n"
        )
        # Missing-source changes
        if missing_added:
            message += f"❌ <b>Sources now MISSING:</b> {', '.join(sorted(missing_added))}\n"
        if missing_removed:
            message += f"✅ <b>Sources RECOVERED (previously missing):</b> {', '.join(sorted(missing_removed))}\n"
        # Stale-source changes
        if stale_added:
            message += f"⏱️ <b>Sources now STALE:</b> {', '.join(sorted(stale_added))}\n"
        if stale_removed:
            message += f"✅ <b>Sources RECOVERED (previously stale):</b> {', '.join(sorted(stale_removed))}\n"
        # Threshold-breach changes
        if threshold_breached and not prev_threshold_breached:
            message += (
                f"\n🚨 <b>DISTANCE THRESHOLD BREACHED!</b>\n"
                f" Max distance: {max_distance_meters:.1f}m (threshold: {threshold_meters:.1f}m)\n"
                f" ⚠️ Possible GPS jamming or spoofing!\n"
            )
        elif not threshold_breached and prev_threshold_breached:
            message += (
                f"\n✅ <b>Distance threshold OK</b>\n"
                f" Max distance: {max_distance_meters:.1f}m (threshold: {threshold_meters:.1f}m)\n"
            )
        # Current-coordinates summary
        if source_coordinates:
            message += "\n📍 <b>Current Coordinates:</b>\n"
            for source, coords in source_coordinates.items():
                lat = coords.get("latitude", "N/A")
                lon = coords.get("longitude", "N/A")
                message += f"{self.escape_html(source)}: {lat}, {lon}\n"
        return message
    def send_asset_offline_alert(
        self,
        chat_id: str,
        asset_name: str,
        last_seen: datetime,
        offline_threshold_seconds: int = 120
    ) -> bool:
        """Send a notification when an asset goes offline (no updates received)"""
        if not self.enabled:
            return False
        timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S UTC")
        last_seen_str = last_seen.strftime("%Y-%m-%d %H:%M:%S UTC") if last_seen else "Unknown"
        message = (
            f"📴 <b>ASSET OFFLINE</b>\n\n"
            f"📍 <b>Asset:</b> {self.escape_html(asset_name)}\n"
            f"⏰ <b>Detected at:</b> {timestamp}\n"
            f"🕐 <b>Last seen:</b> {last_seen_str}\n\n"
            f"⚠️ No updates received for over {offline_threshold_seconds} seconds.\n"
            f"Check client connectivity and service status."
        )
        result = self._send_message(chat_id, message)
        if result:
            logger.info(f"Offline alert sent for {asset_name}")
        return result

    def send_asset_online_alert(
        self,
        chat_id: str,
        asset_name: str,
        offline_duration_seconds: Optional[float] = None
    ) -> bool:
        """Send a notification when an asset comes back online"""
        if not self.enabled:
            return False
        timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S UTC")
        duration_str = ""
        if offline_duration_seconds:
            if offline_duration_seconds < 60:
                duration_str = f"{int(offline_duration_seconds)} seconds"
            elif offline_duration_seconds < 3600:
                duration_str = f"{int(offline_duration_seconds / 60)} minutes"
            else:
                hours = offline_duration_seconds / 3600
                duration_str = f"{hours:.1f} hours"
        message = (
            f"📶 <b>ASSET BACK ONLINE</b>\n\n"
            f"📍 <b>Asset:</b> {self.escape_html(asset_name)}\n"
            f"⏰ <b>Time:</b> {timestamp}\n"
        )
        if duration_str:
            message += f"⏱️ <b>Was offline for:</b> {duration_str}\n"
        message += "\n✅ Asset is now reporting normally."
        result = self._send_message(chat_id, message)
        if result:
            logger.info(f"Online alert sent for {asset_name}")
        return result

    def test_connection(self) -> bool:
        """Test the Telegram bot connection"""
        if not self.enabled:
            return False
        try:
            url = f"{self.api_url}/getMe"
            response = requests.get(url, timeout=10)
            if response.status_code == 200:
                bot_info = response.json()
                logger.info(f"Telegram bot connected: @{bot_info['result']['username']}")
                return True
            else:
                logger.error(f"Telegram connection failed: {response.status_code}")
                return False
        except Exception as e:
            logger.error(f"Telegram connection error: {e}")
            return False


# Singleton instance
_telegram_service: Optional[TelegramService] = None


def get_telegram_service() -> TelegramService:
    """Get the singleton Telegram service instance"""
    global _telegram_service
    if _telegram_service is None:
        _telegram_service = TelegramService()
    return _telegram_service
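The change-detection core of `process_validation` reduces to set differences between the previous state (stored as JSON in `AssetNotificationState`) and the current submission. A standalone sketch of that step — the function name and call pattern here are illustrative, only the set arithmetic is taken from the code above:

```python
import json

def diff_source_state(prev_missing_json, prev_stale_json, sources_missing, sources_stale):
    """Compute which sources newly failed or recovered, as process_validation does."""
    # Stored state may be None on first submission; treat it as an empty list
    prev_missing = set(json.loads(prev_missing_json or "[]"))
    prev_stale = set(json.loads(prev_stale_json or "[]"))
    cur_missing, cur_stale = set(sources_missing), set(sources_stale)
    return {
        "missing_added": cur_missing - prev_missing,    # newly missing -> degradation
        "missing_removed": prev_missing - cur_missing,  # recovered from missing
        "stale_added": cur_stale - prev_stale,          # newly stale -> degradation
        "stale_removed": prev_stale - cur_stale,        # recovered from stale
    }

changes = diff_source_state('["tm_ais"]', None, ["starlink_gps"], ["tm_ais"])
print(changes["missing_added"])    # {'starlink_gps'}
print(changes["missing_removed"])  # {'tm_ais'}
print(changes["stale_added"])      # {'tm_ais'}
```

Because the comparison is set-based, a source that flips from missing to stale in one submission produces both a recovery entry and a degradation entry, which is why the message builder falls back to the neutral "GNSS STATE CHANGED" title when both are present.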


@@ -0,0 +1,976 @@
/**
* GNSS Guard Server - Dashboard JavaScript
* Multi-asset monitoring with 72h route visualization
*/
// Global state
let map = null;
let currentAsset = null;
let currentData = null;
let assets = [];
let routeMarkers = [];
let sourceMarkers = {};
let showRouteEnabled = true;
let lastFetchSucceeded = false;
let lastValidationTimestamp = null;
let isInitialMapLoad = true; // Only fit bounds on initial load or asset change
// Time mode state
let timeMode = 'now'; // 'now' or 'select'
let selectedTimestamp = null; // Unix timestamp when in 'select' mode
let autoRefreshInterval = null;
// =============================================================================
// AUTO-REFRESH PAGE (every 1 hour to pick up deployments)
// =============================================================================
const PAGE_LOAD_TIME = Date.now();
const AUTO_REFRESH_INTERVAL_MS = 60 * 60 * 1000; // 1 hour
let lastVisibilityCheck = Date.now();
function checkAutoRefresh() {
const elapsed = Date.now() - PAGE_LOAD_TIME;
if (elapsed >= AUTO_REFRESH_INTERVAL_MS) {
console.log('Auto-refreshing page after 1 hour...');
window.location.reload();
}
}
// Check for refresh on visibility change (tab becomes active)
document.addEventListener('visibilitychange', () => {
if (document.visibilityState === 'visible') {
const now = Date.now();
// Only check if at least 10 seconds since last check (prevents rapid refreshes)
if (now - lastVisibilityCheck > 10000) {
lastVisibilityCheck = now;
checkAutoRefresh();
}
}
});
// Periodic check every 5 minutes while tab is active
setInterval(checkAutoRefresh, 5 * 60 * 1000);
// Marker icons for sources
const iconPrimary = makeIcon('violet');
const iconSecondary = makeIcon('grey');
const iconAis = makeIcon('blue');
const iconStarlinkGps = makeIcon('yellow');
const iconStarlinkLocation = makeIcon('green');
const sourceConfig = {
'nmea_primary': { icon: iconPrimary, name: 'Primary GPS' },
'nmea_secondary': { icon: iconSecondary, name: 'Secondary GPS' },
'tm_ais': { icon: iconAis, name: 'TM AIS GPS' },
'starlink_gps': { icon: iconStarlinkGps, name: 'Starlink GPS' },
'starlink_location': { icon: iconStarlinkLocation, name: 'Starlink Location' }
};
// Initialize on DOM ready
document.addEventListener('DOMContentLoaded', () => {
initMap();
initTabs();
initTimePicker();
loadAssets();
// Auto-refresh every 10 seconds (only when in 'now' mode)
startAutoRefresh();
});
// =============================================================================
// MAP INITIALIZATION
// =============================================================================
function initMap() {
map = L.map('map', { zoomControl: true }).setView([34.665151, 33.016326], 11);
// CartoDB Dark tiles
L.tileLayer('https://{s}.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}{r}.png', {
maxZoom: 19,
attribution: '&copy; OpenStreetMap & CARTO'
}).addTo(map);
// Recalculate marker offsets when zoom changes
map.on('zoomend', () => {
if (currentData) {
updateMap(currentData);
}
});
}
function makeIcon(color) {
return new L.Icon({
iconUrl: `https://raw.githubusercontent.com/pointhi/leaflet-color-markers/master/img/marker-icon-${color}.png`,
shadowUrl: 'https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.9.4/images/marker-shadow.png',
iconSize: [25, 41],
iconAnchor: [12, 41],
popupAnchor: [1, -34],
shadowSize: [41, 41]
});
}
// =============================================================================
// TABS (Mobile)
// =============================================================================
function initTabs() {
const tabButtons = document.querySelectorAll('.tab-btn');
tabButtons.forEach(btn => {
btn.addEventListener('click', () => {
const tabName = btn.dataset.tab;
// Update button states
tabButtons.forEach(b => b.classList.remove('active'));
btn.classList.add('active');
// Update tab content
document.querySelectorAll('.tab-content').forEach(tab => {
tab.classList.remove('active');
});
document.getElementById(`tab-${tabName}`).classList.add('active');
// Invalidate map size when showing map tab
if (tabName === 'map' && map) {
setTimeout(() => map.invalidateSize(), 100);
}
});
});
}
// =============================================================================
// TIME SELECTOR
// =============================================================================
function initTimePicker() {
// Set default datetime value to now
const now = new Date();
const localDatetime = formatDatetimeLocal(now);
const desktopPicker = document.getElementById('selectedDatetime');
const mobilePicker = document.getElementById('mobileSelectedDatetime');
if (desktopPicker) desktopPicker.value = localDatetime;
if (mobilePicker) mobilePicker.value = localDatetime;
}
function formatDatetimeLocal(date) {
// Format date as YYYY-MM-DDTHH:mm for datetime-local input
const year = date.getFullYear();
const month = String(date.getMonth() + 1).padStart(2, '0');
const day = String(date.getDate()).padStart(2, '0');
const hours = String(date.getHours()).padStart(2, '0');
const minutes = String(date.getMinutes()).padStart(2, '0');
return `${year}-${month}-${day}T${hours}:${minutes}`;
}
function setTimeMode(mode) {
timeMode = mode;
// Update radio buttons (sync both desktop and mobile)
document.querySelectorAll('input[name="timeMode"], input[name="timeModeM"]').forEach(radio => {
radio.checked = radio.value === mode;
});
// Show/hide datetime picker
const pickers = ['datetimePicker', 'mobileDatetimePicker'];
const displays = ['selectedTimeDisplay', 'mobileSelectedTimeDisplay'];
pickers.forEach(id => {
const el = document.getElementById(id);
if (el) el.classList.toggle('hidden', mode === 'now');
});
if (mode === 'now') {
// Hide the selected time display when switching to 'now'
displays.forEach(id => {
const el = document.getElementById(id);
if (el) el.classList.add('hidden');
});
// Clear selected timestamp
selectedTimestamp = null;
// Reset map to fit bounds when switching to 'now'
isInitialMapLoad = true;
// Restart auto-refresh and fetch current data
startAutoRefresh();
fetchData();
loadRouteData();
} else {
// Stop auto-refresh when viewing historical data
stopAutoRefresh();
}
}
function onDatetimeChange() {
// Sync desktop and mobile pickers
const desktopPicker = document.getElementById('selectedDatetime');
const mobilePicker = document.getElementById('mobileSelectedDatetime');
// Get the value from whichever picker was changed
const value = desktopPicker?.value || mobilePicker?.value;
if (desktopPicker) desktopPicker.value = value;
if (mobilePicker) mobilePicker.value = value;
}
function applySelectedTime() {
const desktopPicker = document.getElementById('selectedDatetime');
const value = desktopPicker?.value;
if (!value) {
alert('Please select a date and time');
return;
}
// Convert to Unix timestamp
const date = new Date(value);
selectedTimestamp = date.getTime() / 1000;
// Update display
const displayText = date.toLocaleString('en-US', {
month: 'short',
day: 'numeric',
year: 'numeric',
hour: '2-digit',
minute: '2-digit',
hour12: false
});
const displays = ['selectedTimeDisplay', 'mobileSelectedTimeDisplay'];
const textEls = ['selectedTimeText', 'mobileSelectedTimeText'];
displays.forEach(id => {
const el = document.getElementById(id);
if (el) el.classList.remove('hidden');
});
textEls.forEach(id => {
const el = document.getElementById(id);
if (el) el.textContent = displayText;
});
// Reset map to fit bounds when applying new time
isInitialMapLoad = true;
// Fetch historical data
fetchData();
loadRouteData();
logEvent('info', `Viewing data at ${displayText}`);
}
function startAutoRefresh() {
if (autoRefreshInterval) return; // Already running
autoRefreshInterval = setInterval(() => {
if (timeMode === 'now') {
fetchData();
}
}, 10000);
}
function stopAutoRefresh() {
if (autoRefreshInterval) {
clearInterval(autoRefreshInterval);
autoRefreshInterval = null;
}
}
function resetTimeMode() {
// Reset to 'now' mode (called when switching assets)
timeMode = 'now';
selectedTimestamp = null;
// Update UI
document.querySelectorAll('input[name="timeMode"], input[name="timeModeM"]').forEach(radio => {
radio.checked = radio.value === 'now';
});
const pickers = ['datetimePicker', 'mobileDatetimePicker'];
const displays = ['selectedTimeDisplay', 'mobileSelectedTimeDisplay'];
pickers.forEach(id => {
const el = document.getElementById(id);
if (el) el.classList.add('hidden');
});
displays.forEach(id => {
const el = document.getElementById(id);
if (el) el.classList.add('hidden');
});
// Reset datetime picker to current time
const now = new Date();
const localDatetime = formatDatetimeLocal(now);
const desktopPicker = document.getElementById('selectedDatetime');
const mobilePicker = document.getElementById('mobileSelectedDatetime');
if (desktopPicker) desktopPicker.value = localDatetime;
if (mobilePicker) mobilePicker.value = localDatetime;
// Restart auto-refresh
startAutoRefresh();
}
// =============================================================================
// ASSET MANAGEMENT
// =============================================================================
async function loadAssets() {
try {
const response = await fetch('/api/dashboard/assets');
if (!response.ok) throw new Error('Failed to load assets');
assets = await response.json();
renderAssetList();
populateMobileDropdown();
// Auto-select last asset if available (most recently added)
if (assets.length > 0) {
selectAsset(assets[assets.length - 1].name);
}
} catch (error) {
console.error('Error loading assets:', error);
document.getElementById('assetList').innerHTML =
'<div class="asset-loading">Failed to load assets</div>';
}
}
function renderAssetList() {
const container = document.getElementById('assetList');
if (assets.length === 0) {
container.innerHTML = '<div class="asset-loading">No assets registered</div>';
return;
}
container.innerHTML = assets.map(asset => {
// Determine status class:
// - online + valid = green (online)
// - online + invalid + distance alert = red (alert)
// - online + invalid + no distance alert = amber (degraded)
// - offline = gray (no class)
let statusClass = '';
if (asset.is_online) {
if (asset.is_valid === true) {
statusClass = 'online'; // green
} else if (asset.is_valid === false) {
statusClass = asset.has_distance_alert ? 'alert' : 'degraded'; // red or amber
} else {
statusClass = 'online'; // null/unknown - assume ok
}
}
const isActive = currentAsset === asset.name;
return `
<div class="asset-item ${isActive ? 'active' : ''} ${!asset.is_online ? 'offline' : ''}"
onclick="selectAsset('${asset.name}')">
<div class="asset-name">${asset.name}</div>
<div class="asset-status">
<span class="status-dot ${statusClass}"></span>
<span>${asset.is_online ? 'Online' : 'Offline'}</span>
</div>
</div>
`;
}).join('');
}
function populateMobileDropdown() {
const select = document.getElementById('mobileAssetSelect');
select.innerHTML = '<option value="">Select Asset...</option>' +
assets.map(asset => `<option value="${asset.name}">${asset.name}</option>`).join('');
if (currentAsset) {
select.value = currentAsset;
}
}
function selectAsset(assetName) {
if (!assetName) return;
currentAsset = assetName;
// Update UI
renderAssetList();
document.getElementById('mobileAssetSelect').value = assetName;
// Reset time mode to 'now' when switching assets
resetTimeMode();
// Clear current data and fetch new
currentData = null;
clearSourceMarkers();
clearRouteMarkers();
isInitialMapLoad = true; // Reset to fit bounds for new asset
// Show loading state immediately while fetching
showLoadingState();
fetchData();
loadRouteData();
}
// =============================================================================
// DATA FETCHING
// =============================================================================
async function fetchData() {
if (!currentAsset) return;
try {
// Build URL with optional timestamp parameter
let url = `/api/dashboard/asset/${currentAsset}/status`;
if (timeMode === 'select' && selectedTimestamp) {
url += `?at=${selectedTimestamp}`;
}
const response = await fetch(url);
if (!response.ok) {
showDegradedState(`Server error: ${response.status}`);
return;
}
const data = await response.json();
if (data.error) {
showDegradedState(data.error);
return;
}
currentData = data;
lastFetchSucceeded = true;
updateUI(data);
updateMap(data);
// Log event if validation timestamp changed (only in 'now' mode)
if (timeMode === 'now' && data.validation_timestamp !== lastValidationTimestamp) {
lastValidationTimestamp = data.validation_timestamp;
if (data.has_alert && !data.is_valid && data.max_distance_km !== null) {
logEvent('crit', `Alert: distance ${data.max_distance_km.toFixed(1)} km`);
} else if (!data.is_valid) {
logEvent('warn', 'Validation issue detected');
} else {
logEvent('info', 'Cloud status OK');
}
}
} catch (error) {
console.error('Fetch error:', error);
showDegradedState('Connection failed: ' + error.message);
}
}
async function loadRouteData() {
if (!currentAsset) return;
try {
// Build URL with optional until parameter
let url = `/api/dashboard/asset/${currentAsset}/route?hours=72`;
if (timeMode === 'select' && selectedTimestamp) {
url += `&until=${selectedTimestamp}`;
}
const response = await fetch(url);
if (!response.ok) return;
const routeData = await response.json();
renderRoute(routeData);
} catch (error) {
console.error('Error loading route:', error);
}
}
// =============================================================================
// UI UPDATES
// =============================================================================
/**
* Update both GNSS status pills (desktop and mobile)
*/
function updateStatusPills(status, text) {
const pills = [
document.getElementById('desktopStatusPill'),
document.getElementById('mobileStatusPill')
];
pills.forEach(pill => {
if (!pill) return;
pill.classList.remove('ok', 'warn', 'crit');
pill.textContent = text;
if (status) {
pill.classList.add(status);
}
});
}
function updateUI(data) {
// Update GNSS status pills
if (data.has_alert && data.max_distance_km !== null) {
updateStatusPills('crit', 'GNSS Integrity: At Risk');
} else if (!data.is_valid) {
updateStatusPills('warn', 'GNSS Integrity: Degraded');
} else {
updateStatusPills('ok', 'GNSS Integrity: Stable');
}
// Update alert banner
const alertBanner = document.getElementById('alertBanner');
const alertDistance = document.getElementById('alert-distance-value');
if (data.has_alert && data.max_distance_km !== null) {
alertBanner.classList.remove('hidden');
alertDistance.textContent = `${data.max_distance_km.toFixed(1)} km`;
} else {
alertBanner.classList.add('hidden');
}
// Update sources - pass distance alert state
const hasDistanceAlert = data.has_alert && data.max_distance_km !== null;
renderSources(data.sources, hasDistanceAlert);
}
function renderSources(sources, hasDistanceAlert = false) {
const container = document.getElementById('sourcesContainer');
const sourceOrder = ['nmea_primary', 'nmea_secondary', 'tm_ais', 'starlink_gps', 'starlink_location'];
container.innerHTML = sourceOrder.map(sourceName => {
const source = sources[sourceName];
if (!source) return '';
let cardClass = 'ok';
let badgeClass = 'badge-healthy';
let badgeText = 'HEALTHY';
let coordsText = 'Loading...';
let updateText = '-';
let updateClass = '';
if (!source.enabled) {
cardClass = 'offline';
badgeClass = 'badge-offline';
badgeText = 'NOT CONFIGURED';
coordsText = 'No data source configured.';
} else if (source.status === 'missing') {
cardClass = 'crit';
badgeClass = 'badge-danger';
badgeText = 'MISSING';
coordsText = 'No coordinates received.';
updateClass = 'stale-text';
} else if (source.status === 'stale' || source.is_stale) {
cardClass = 'stale';
badgeClass = 'badge-stale';
badgeText = 'STALE';
if (source.coordinates) {
coordsText = `${source.coordinates.latitude.toFixed(6)}, ${source.coordinates.longitude.toFixed(6)}`;
}
updateClass = 'stale-text';
} else {
if (source.coordinates) {
coordsText = `${source.coordinates.latitude.toFixed(6)}, ${source.coordinates.longitude.toFixed(6)}`;
}
// If distance alert and source has coordinates, mark as AT RISK
if (hasDistanceAlert && source.coordinates) {
cardClass = 'crit';
badgeClass = 'badge-danger';
badgeText = 'AT RISK';
}
}
if (source.last_update_unix) {
updateText = formatRelativeTime(source.last_update_unix);
}
return `
<div class="card ${cardClass}">
<div class="card-header">
<div class="card-title">${source.display_name}</div>
<div class="badge ${badgeClass}">${badgeText}</div>
</div>
<div class="card-line"><strong>Lat/Lon</strong>: ${coordsText}</div>
<div class="card-line"><strong>Updated</strong>: <span class="${updateClass}">${updateText}</span></div>
</div>
`;
}).join('');
}
/**
* Show loading state while fetching data for a new asset
*/
function showLoadingState() {
// Show neutral loading status
updateStatusPills(null, 'GNSS Integrity: Loading...');
// Hide alert banner
document.getElementById('alertBanner').classList.add('hidden');
// Show placeholder source cards
renderPlaceholderSources('loading');
}
/**
* Show state when asset has never pushed any validation data
*/
function showNoDataState() {
lastFetchSucceeded = false;
// Show neutral "no data" status
updateStatusPills(null, 'GNSS Integrity: No Data');
// Hide alert banner
document.getElementById('alertBanner').classList.add('hidden');
// Show placeholder source cards indicating awaiting data
renderPlaceholderSources('nodata');
logEvent('warn', 'Asset has not pushed any validation data yet');
}
/**
* Render placeholder cards for all sources
* @param {string} mode - 'loading' or 'nodata'
*/
function renderPlaceholderSources(mode) {
const container = document.getElementById('sourcesContainer');
const sourceNames = {
'nmea_primary': 'Primary GPS',
'nmea_secondary': 'Secondary GPS',
'tm_ais': 'TM AIS GPS',
'starlink_gps': 'Starlink GPS',
'starlink_location': 'Starlink Location'
};
const sourceOrder = ['nmea_primary', 'nmea_secondary', 'tm_ais', 'starlink_gps', 'starlink_location'];
const isLoading = mode === 'loading';
const badgeText = isLoading ? 'LOADING' : 'AWAITING';
const coordsText = isLoading ? 'Loading...' : 'Awaiting first update...';
const updateText = isLoading ? '...' : '—';
container.innerHTML = sourceOrder.map(sourceName => {
return `
<div class="card">
<div class="card-header">
<div class="card-title">${sourceNames[sourceName]}</div>
<div class="badge badge-offline">${badgeText}</div>
</div>
<div class="card-line"><strong>Lat/Lon</strong>: ${coordsText}</div>
<div class="card-line"><strong>Updated</strong>: <span>${updateText}</span></div>
</div>
`;
}).join('');
}
function showDegradedState(errorMessage) {
lastFetchSucceeded = false;
// Check if this is a "no data" error
if (errorMessage && errorMessage.includes('No validation data')) {
showNoDataState();
return;
}
// Update status pills to degraded state
updateStatusPills('warn', 'GNSS Integrity: Degraded');
// Mark all update times as stale
document.querySelectorAll('.card-line').forEach(line => {
if (line.textContent.includes('Updated')) {
const span = line.querySelector('span');
if (span) span.classList.add('stale-text');
}
});
logEvent('crit', errorMessage);
}
// =============================================================================
// MAP UPDATES
// =============================================================================
// Calculate offset for markers to spread them in a circle when close together
function calculateMarkerOffsets(sourceCoords, zoomLevel) {
if (Object.keys(sourceCoords).length <= 1) {
// Single marker, no offset needed
const result = {};
for (const [name, coord] of Object.entries(sourceCoords)) {
result[name] = { lat: coord.lat, lon: coord.lon, offsetLat: 0, offsetLon: 0 };
}
return result;
}
// Calculate centroid
let sumLat = 0, sumLon = 0, count = 0;
for (const coord of Object.values(sourceCoords)) {
sumLat += coord.lat;
sumLon += coord.lon;
count++;
}
const centroidLat = sumLat / count;
const centroidLon = sumLon / count;
// Check if markers are close together (within ~50 meters)
const closeThreshold = 0.0005; // ~50m in degrees
let maxDist = 0;
for (const coord of Object.values(sourceCoords)) {
const dist = Math.sqrt(
Math.pow(coord.lat - centroidLat, 2) +
Math.pow(coord.lon - centroidLon, 2)
);
maxDist = Math.max(maxDist, dist);
}
// If markers are spread out enough, don't offset
if (maxDist > closeThreshold) {
const result = {};
for (const [name, coord] of Object.entries(sourceCoords)) {
result[name] = { lat: coord.lat, lon: coord.lon, offsetLat: 0, offsetLon: 0 };
}
return result;
}
// Calculate offset radius based on zoom level (smaller offset when zoomed in)
// At zoom 15, offset ~30m; at zoom 10, offset ~100m
const baseOffset = 0.0003; // ~30m base offset
const zoomFactor = Math.pow(2, 15 - Math.min(zoomLevel, 18));
const offsetRadius = baseOffset * zoomFactor;
// Arrange markers in a circle around centroid
const result = {};
const sourceNames = Object.keys(sourceCoords);
const angleStep = (2 * Math.PI) / sourceNames.length;
sourceNames.forEach((name, index) => {
const angle = angleStep * index - Math.PI / 2; // Start from top
const offsetLat = offsetRadius * Math.cos(angle);
const offsetLon = offsetRadius * Math.sin(angle) * 1.5; // Adjust for latitude distortion
result[name] = {
lat: centroidLat + offsetLat,
lon: centroidLon + offsetLon,
offsetLat: offsetLat,
offsetLon: offsetLon,
originalLat: sourceCoords[name].lat,
originalLon: sourceCoords[name].lon
};
});
return result;
}
function updateMap(data) {
    clearSourceMarkers();
    const sources = data.sources || {};
    const allCoords = [];
    const sourceCoords = {};
    // First pass: collect all valid coordinates (allow 0 as a legitimate value)
    Object.entries(sources).forEach(([sourceName, sourceData]) => {
        const coords = sourceData.coordinates;
        if (coords && coords.latitude != null && coords.longitude != null) {
            if (sourceConfig[sourceName]) {
                sourceCoords[sourceName] = { lat: coords.latitude, lon: coords.longitude };
                allCoords.push([coords.latitude, coords.longitude]);
            }
        }
    });
    // Calculate offsets for overlapping markers
    const zoomLevel = map.getZoom() || 13;
    const offsetPositions = calculateMarkerOffsets(sourceCoords, zoomLevel);
    // Second pass: add markers at the calculated positions
    Object.entries(sources).forEach(([sourceName, sourceData]) => {
        const config = sourceConfig[sourceName];
        const position = offsetPositions[sourceName];
        if (sourceData.coordinates && config && position) {
            // Build the popup with the original (un-offset) coordinates
            const origLat = sourceData.coordinates.latitude;
            const origLon = sourceData.coordinates.longitude;
            const popupContent = `<b>${config.name}</b><br>Lat: ${origLat.toFixed(6)}<br>Lon: ${origLon.toFixed(6)}`;
            const marker = L.marker([position.lat, position.lon], { icon: config.icon })
                .bindPopup(popupContent)
                .addTo(map);
            sourceMarkers[sourceName] = marker;
        }
    });
    // Fit the map to all markers (only on initial load or asset change, not on refresh)
    if (isInitialMapLoad) {
        if (allCoords.length > 0) {
            const bounds = L.latLngBounds(allCoords);
            map.fitBounds(bounds, {
                padding: [50, 50], // Add padding around markers
                maxZoom: 15        // Don't zoom in too far when markers are close together
            });
        } else if (currentData && currentData.map_center && currentData.map_center.latitude != null && currentData.map_center.longitude != null) {
            // Fall back to the configured center if there are no markers
            map.setView([currentData.map_center.latitude, currentData.map_center.longitude], 13);
        }
        isInitialMapLoad = false; // Don't auto-zoom on subsequent refreshes
    }
}
function clearSourceMarkers() {
Object.values(sourceMarkers).forEach(marker => {
map.removeLayer(marker);
});
sourceMarkers = {};
}
// =============================================================================
// ROUTE VISUALIZATION
// =============================================================================
function renderRoute(routeData) {
clearRouteMarkers();
if (!showRouteEnabled || !routeData || routeData.length === 0) return;
// Create small circle markers for route points
routeData.forEach(point => {
let color;
let statusText;
switch (point.status) {
case 'valid':
color = '#1fad3a';
statusText = 'Valid';
break;
case 'degraded':
color = '#ffa726';
statusText = 'Degraded';
break;
case 'alert':
color = '#c62828';
statusText = 'Alert';
break;
default:
color = '#9aa3b8';
statusText = 'Unknown';
}
const marker = L.circleMarker([point.latitude, point.longitude], {
radius: 5,
fillColor: color,
color: color,
weight: 1,
opacity: 0.8,
fillOpacity: 0.6
}).addTo(map);
// Create detailed popup
const popupContent = `
<div class="route-popup">
<div class="popup-header">${formatTimestamp(point.timestamp)}</div>
<div class="popup-row"><strong>Status:</strong> <span class="status-${point.status}">${statusText}</span></div>
<div class="popup-row"><strong>Lat/Lon:</strong> ${point.latitude.toFixed(6)}, ${point.longitude.toFixed(6)}</div>
${point.sources_missing?.length ? `<div class="popup-row"><strong>Missing:</strong> ${point.sources_missing.join(', ')}</div>` : ''}
${point.sources_stale?.length ? `<div class="popup-row"><strong>Stale:</strong> ${point.sources_stale.join(', ')}</div>` : ''}
${point.max_distance_m > point.threshold_m ? `<div class="popup-row"><strong>Distance:</strong> ${(point.max_distance_m/1000).toFixed(2)} km</div>` : ''}
</div>
`;
marker.bindPopup(popupContent);
routeMarkers.push(marker);
});
}
function clearRouteMarkers() {
routeMarkers.forEach(marker => {
map.removeLayer(marker);
});
routeMarkers = [];
}
function toggleRoute() {
showRouteEnabled = document.getElementById('showRoute').checked;
if (showRouteEnabled) {
loadRouteData();
} else {
clearRouteMarkers();
}
}
// =============================================================================
// EVENT LOGGING
// =============================================================================
function logEvent(level, message) {
const log = document.getElementById('eventLog');
const now = new Date();
const time = now.toTimeString().slice(0, 8);
const levelMap = {
'info': 'INFO',
'warn': 'WARN',
'crit': 'CRIT'
};
const event = document.createElement('div');
event.className = `event level-${level}`;
event.innerHTML = `<span class="level">${levelMap[level]}</span> [${time}] ${message}`;
// Insert after title (fall back to appending if the title element is missing)
const title = log.querySelector('.event-log-title');
if (title && title.nextSibling) {
log.insertBefore(event, title.nextSibling);
} else {
log.appendChild(event);
}
// Keep only the 3 most recent events; re-query after each removal because
// querySelectorAll returns a static NodeList whose length never updates
let events = log.querySelectorAll('.event');
while (events.length > 3) {
events[events.length - 1].remove();
events = log.querySelectorAll('.event');
}
}
// =============================================================================
// UTILITIES
// =============================================================================
function formatRelativeTime(unixTimestamp) {
const now = Date.now() / 1000;
const diff = now - unixTimestamp;
if (diff < 60) return `${Math.floor(diff)}s ago`;
if (diff < 3600) return `${Math.floor(diff / 60)}m ago`;
if (diff < 86400) return `${Math.floor(diff / 3600)}h ago`;
return `${Math.floor(diff / 86400)}d ago`;
}
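The same threshold ladder, restated in Python for clarity (the function name is illustrative):

```python
def format_relative(diff_seconds: float) -> str:
    # Same breakpoints as the JS helper: seconds, minutes, hours, then days
    if diff_seconds < 60:
        return f"{int(diff_seconds)}s ago"
    if diff_seconds < 3600:
        return f"{int(diff_seconds // 60)}m ago"
    if diff_seconds < 86400:
        return f"{int(diff_seconds // 3600)}h ago"
    return f"{int(diff_seconds // 86400)}d ago"
```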
function formatTimestamp(isoString) {
const date = new Date(isoString);
return date.toLocaleString('en-US', {
month: 'short',
day: 'numeric',
hour: '2-digit',
minute: '2-digit',
hour12: false
});
}
// =============================================================================
// AUTHENTICATION
// =============================================================================
async function logout() {
try {
await fetch('/logout', { method: 'POST' });
window.location.href = '/login';
} catch (error) {
console.error('Logout error:', error);
window.location.href = '/login';
}
}


@@ -0,0 +1,160 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>TM GNSS Guard Cloud</title>
<!-- Leaflet CSS -->
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.9.4/dist/leaflet.css" />
<link rel="stylesheet" href="/static/style.css?v={{ cache_buster }}">
</head>
<body>
<!-- HEADER -->
<div class="header">
<div class="header-left">
<div class="header-title">TM GNSS Guard</div>
<div class="header-sub">Multi-Asset Monitoring Cloud</div>
</div>
<div class="header-right">
<div class="user-menu">
<span class="user-name">{{ username }}</span>
<button class="logout-btn" onclick="logout()">Logout</button>
</div>
</div>
</div>
<!-- ALERT BANNER (dynamic) -->
<div class="alert-banner alert-critical hidden" id="alertBanner">
<div class="alert-indicator" id="alertIndicator"></div>
<div id="alertText">GPS Jamming or Spoofing Alert! Location Distance: <span id="alert-distance-value">-</span></div>
</div>
<!-- MOBILE ASSET DROPDOWN -->
<div class="mobile-asset-dropdown" id="mobileAssetDropdown">
<select id="mobileAssetSelect" onchange="selectAsset(this.value)">
<option value="">Select Asset...</option>
</select>
</div>
<!-- MOBILE TIME SELECTOR -->
<div class="mobile-time-selector" id="mobileTimeSelector">
<div class="time-radio-group">
<label class="time-radio">
<input type="radio" name="timeModeM" value="now" checked onchange="setTimeMode('now')">
<span>Now</span>
</label>
<label class="time-radio">
<input type="radio" name="timeModeM" value="select" onchange="setTimeMode('select')">
<span>Select Day/Time</span>
</label>
</div>
<div class="datetime-picker hidden" id="mobileDatetimePicker">
<input type="datetime-local" id="mobileSelectedDatetime" onchange="onDatetimeChange()">
<button class="apply-time-btn" onclick="applySelectedTime()">Apply</button>
</div>
<div class="selected-time-display hidden" id="mobileSelectedTimeDisplay">
Viewing: <span id="mobileSelectedTimeText"></span>
</div>
</div>
<!-- MOBILE GNSS STATUS (visible only in mobile view) -->
<div class="mobile-gnss-status" id="mobileGnssStatus">
<div class="status-pill" id="mobileStatusPill">GNSS Integrity: —</div>
</div>
<!-- MOBILE TAB BAR (only visible in portrait mode) -->
<div class="mobile-tabs">
<button class="tab-btn active" data-tab="status">Status</button>
<button class="tab-btn" data-tab="map">Map</button>
</div>
<!-- MAIN LAYOUT -->
<div class="layout">
<!-- ASSET PANEL (desktop only) -->
<div class="asset-panel" id="assetPanel">
<div class="panel-title">Assets</div>
<div class="asset-list" id="assetList">
<!-- Assets populated by JavaScript -->
<div class="asset-loading">Loading assets...</div>
</div>
<!-- TIME SELECTOR -->
<div class="time-selector" id="timeSelector">
<div class="panel-title">Time</div>
<div class="time-radio-group">
<label class="time-radio">
<input type="radio" name="timeMode" value="now" checked onchange="setTimeMode('now')">
<span>Now</span>
</label>
<label class="time-radio">
<input type="radio" name="timeMode" value="select" onchange="setTimeMode('select')">
<span>Select Day/Time</span>
</label>
</div>
<div class="datetime-picker hidden" id="datetimePicker">
<input type="datetime-local" id="selectedDatetime" onchange="onDatetimeChange()">
<button class="apply-time-btn" onclick="applySelectedTime()">Apply</button>
</div>
<div class="selected-time-display hidden" id="selectedTimeDisplay">
Viewing: <span id="selectedTimeText"></span>
</div>
</div>
</div>
<!-- STATUS TAB CONTENT (Sources + Event Log) -->
<div class="tab-content tab-status active" id="tab-status">
<div class="left-panel">
<!-- DESKTOP GNSS STATUS (visible only in desktop view) -->
<div class="desktop-gnss-status" id="desktopGnssStatus">
<div class="status-pill" id="desktopStatusPill">GNSS Integrity: —</div>
</div>
<div class="panel-title">GNSS Sources</div>
<div id="sourcesContainer">
<div class="no-asset-selected">Select an asset to view GNSS sources</div>
</div>
</div>
<!-- EVENT LOG -->
<div class="event-log" id="eventLog">
<div class="event-log-title">Event Stream</div>
</div>
<!-- COPYRIGHT -->
<div class="copyright">Tototheo Global © 2025</div>
</div>
<!-- MAP TAB CONTENT -->
<div class="tab-content tab-map" id="tab-map">
<div class="map-panel">
<div id="map"></div>
<div class="map-overlay-legend">
<div class="legend-section">Sources</div>
<div><span class="legend-dot legend-primary"></span>Primary GPS</div>
<div><span class="legend-dot legend-secondary"></span>Secondary GPS</div>
<div><span class="legend-dot legend-ais"></span>TM AIS GPS</div>
<div><span class="legend-dot legend-starlink-gps"></span>Starlink GPS</div>
<div><span class="legend-dot legend-starlink-location"></span>Starlink Location</div>
<div class="legend-section">72h Route</div>
<div><span class="legend-dot legend-valid"></span>Valid</div>
<div><span class="legend-dot legend-degraded"></span>Degraded</div>
<div><span class="legend-dot legend-alert"></span>Alert</div>
</div>
<div class="map-route-toggle">
<label>
<input type="checkbox" id="showRoute" checked onchange="toggleRoute()">
Show 72h Route
</label>
</div>
</div>
</div>
</div>
<!-- Leaflet JS -->
<script src="https://unpkg.com/leaflet@1.9.4/dist/leaflet.js"></script>
<script src="/static/app.js?v={{ cache_buster }}"></script>
</body>
</html>


@@ -0,0 +1,71 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Login - GNSS Guard Cloud</title>
<link rel="stylesheet" href="/static/style.css?v={{ cache_buster }}">
</head>
<body class="login-page">
<div class="login-container">
<div class="login-box">
<div class="login-header">
<div class="login-title">TM GNSS Guard</div>
<div class="login-subtitle">Cloud Dashboard</div>
</div>
<form id="loginForm" class="login-form">
<div class="form-group">
<label for="username">Username</label>
<input type="text" id="username" name="username" required autocomplete="username">
</div>
<div class="form-group">
<label for="password">Password</label>
<input type="password" id="password" name="password" required autocomplete="current-password">
</div>
<div class="form-error hidden" id="loginError">Invalid credentials</div>
<button type="submit" class="login-btn">Sign In</button>
</form>
<div class="login-footer">
Tototheo Global © 2025
</div>
</div>
</div>
<script>
document.getElementById('loginForm').addEventListener('submit', async (e) => {
e.preventDefault();
const username = document.getElementById('username').value;
const password = document.getElementById('password').value;
const errorEl = document.getElementById('loginError');
errorEl.classList.add('hidden');
try {
const response = await fetch('/login', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ username, password })
});
if (response.ok) {
window.location.href = '/';
} else {
errorEl.classList.remove('hidden');
errorEl.textContent = 'Invalid username or password';
}
} catch (error) {
errorEl.classList.remove('hidden');
errorEl.textContent = 'Connection error. Please try again.';
}
});
</script>
</body>
</html>


@@ -0,0 +1,4 @@
"""
Services for GNSS Guard client
"""


@@ -0,0 +1,258 @@
#!/usr/bin/env python3
"""
Buzzer Service for reTerminal DM4
Controls the hardware buzzer using the Linux LED subsystem
"""
import logging
import os
import subprocess
import threading
import time
from typing import Optional
logger = logging.getLogger("gnss_guard.buzzer")
# Buzzer control path (Linux LED subsystem)
BUZZER_PATH = '/sys/class/leds/usr-buzzer/brightness'
class BuzzerService:
"""
Service to control the hardware buzzer on reTerminal DM4.
The buzzer is controlled via the Linux LED subsystem:
- Write "1" to turn ON
- Write "0" to turn OFF
Supports alarm patterns (on/off cycling) that run in a background thread.
"""
def __init__(self, on_duration: float = 1.0, off_duration: float = 1.0):
"""
Initialize the buzzer service.
Args:
on_duration: Duration in seconds for buzzer ON during alarm pattern
off_duration: Duration in seconds for buzzer OFF during alarm pattern
"""
self.on_duration = on_duration
self.off_duration = off_duration
# Alarm state
self._alarm_active = False
self._alarm_acknowledged = False
self._alarm_thread: Optional[threading.Thread] = None
self._stop_event = threading.Event()
# Check if buzzer is available
self._buzzer_available = os.path.exists(BUZZER_PATH)
if not self._buzzer_available:
logger.warning(f"Buzzer not available at {BUZZER_PATH} - running in simulation mode")
else:
logger.info(f"Buzzer service initialized (path: {BUZZER_PATH})")
# Ensure buzzer is off on startup
self.buzzer_off()
def _write_buzzer(self, value: str) -> bool:
"""
Write value to buzzer control file.
Args:
value: "1" for ON, "0" for OFF
Returns:
True if successful, False otherwise
"""
if not self._buzzer_available:
logger.debug(f"Buzzer simulation: {'ON' if value == '1' else 'OFF'}")
return True
try:
# Use sudo tee to write to the sysfs file (requires sudo permissions)
result = subprocess.run(
['sudo', 'tee', BUZZER_PATH],
input=value,
text=True,
stdout=subprocess.DEVNULL,
stderr=subprocess.PIPE,
timeout=2.0
)
if result.returncode != 0:
logger.error(f"Failed to write to buzzer: {result.stderr}")
return False
return True
except subprocess.TimeoutExpired:
logger.error("Timeout writing to buzzer")
return False
except Exception as e:
logger.error(f"Error writing to buzzer: {e}")
return False
def buzzer_on(self) -> bool:
"""Turn buzzer ON"""
return self._write_buzzer('1')
def buzzer_off(self) -> bool:
"""Turn buzzer OFF"""
return self._write_buzzer('0')
def get_status(self) -> str:
"""
Get current buzzer status.
Returns:
"ON", "OFF", or "UNKNOWN"
"""
if not self._buzzer_available:
return "SIMULATED"
try:
with open(BUZZER_PATH, 'r') as f:
value = f.read().strip()
return "ON" if value in ['1', '255'] else "OFF"
except Exception as e:
logger.error(f"Error reading buzzer status: {e}")
return "UNKNOWN"
def _alarm_loop(self):
"""
Background thread loop for alarm pattern (1 second on, 1 second off).
Runs until alarm is acknowledged or stopped.
"""
logger.info("Alarm pattern started")
while not self._stop_event.is_set() and not self._alarm_acknowledged:
# Buzzer ON
self.buzzer_on()
# Wait for on_duration or until stopped
if self._stop_event.wait(self.on_duration):
break
if self._alarm_acknowledged:
break
# Buzzer OFF
self.buzzer_off()
# Wait for off_duration or until stopped
if self._stop_event.wait(self.off_duration):
break
# Ensure buzzer is off when alarm stops
self.buzzer_off()
self._alarm_active = False
logger.info("Alarm pattern stopped")
def start_alarm(self) -> bool:
"""
Start the alarm pattern (1 second on, 1 second off).
Returns:
True if alarm started, False if already running
"""
if self._alarm_active and self._alarm_thread and self._alarm_thread.is_alive():
logger.debug("Alarm already active")
return False
# Reset state
self._alarm_acknowledged = False
self._alarm_active = True
self._stop_event.clear()
# Start alarm thread
self._alarm_thread = threading.Thread(target=self._alarm_loop, daemon=True)
self._alarm_thread.start()
logger.info("Alarm started")
return True
def stop_alarm(self) -> bool:
"""
Stop the alarm pattern.
Returns:
True if alarm was stopped, False if not running
"""
if not self._alarm_active:
return False
self._stop_event.set()
# Wait for thread to finish
if self._alarm_thread and self._alarm_thread.is_alive():
self._alarm_thread.join(timeout=3.0)
# Ensure buzzer is off
self.buzzer_off()
self._alarm_active = False
logger.info("Alarm stopped")
return True
def acknowledge_alarm(self) -> bool:
"""
Acknowledge the alarm, stopping the buzzer.
Returns:
True if alarm was acknowledged, False if no alarm active
"""
if not self._alarm_active:
logger.debug("No active alarm to acknowledge")
return False
self._alarm_acknowledged = True
self._stop_event.set()
# Wait for thread to finish
if self._alarm_thread and self._alarm_thread.is_alive():
self._alarm_thread.join(timeout=3.0)
# Ensure buzzer is off
self.buzzer_off()
self._alarm_active = False
logger.info("Alarm acknowledged")
return True
def is_alarm_active(self) -> bool:
"""Check if alarm is currently active"""
return self._alarm_active
def is_alarm_acknowledged(self) -> bool:
"""Check if current alarm has been acknowledged"""
return self._alarm_acknowledged
def reset_acknowledged(self):
"""
Reset the acknowledged state.
Called when status returns to healthy, allowing new alarms to trigger.
"""
self._alarm_acknowledged = False
def shutdown(self):
"""Shutdown the buzzer service, ensuring buzzer is off"""
self.stop_alarm()
self.buzzer_off()
logger.info("Buzzer service shutdown")
# Global buzzer service instance (singleton pattern)
_buzzer_instance: Optional[BuzzerService] = None
def get_buzzer_service(on_duration: float = 1.0, off_duration: float = 1.0) -> BuzzerService:
"""
Get or create the global buzzer service instance.
Args:
on_duration: Duration in seconds for buzzer ON during alarm
off_duration: Duration in seconds for buzzer OFF during alarm
Returns:
BuzzerService instance
"""
global _buzzer_instance
if _buzzer_instance is None:
_buzzer_instance = BuzzerService(on_duration, off_duration)
return _buzzer_instance
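The stop-event-driven on/off cycle above can be sketched with a fake buzzer in place of the sysfs writes (`FakeBuzzer` and `alarm_loop` are illustrative names, and the durations are shortened for demonstration; the real service writes to `/sys/class/leds/usr-buzzer/brightness`):

```python
import threading
import time

class FakeBuzzer:
    """Records writes instead of touching the sysfs LED path."""
    def __init__(self):
        self.writes = []
    def on(self):
        self.writes.append('1')
    def off(self):
        self.writes.append('0')

def alarm_loop(buzzer, stop, on_s=0.01, off_s=0.01):
    # Cycle on/off until the stop event fires; Event.wait() doubles as an
    # interruptible sleep, exactly like _stop_event.wait() in the service
    while not stop.is_set():
        buzzer.on()
        if stop.wait(on_s):
            break
        buzzer.off()
        if stop.wait(off_s):
            break
    buzzer.off()  # always leave the buzzer off on exit

stop = threading.Event()
buzzer = FakeBuzzer()
t = threading.Thread(target=alarm_loop, args=(buzzer, stop), daemon=True)
t.start()
time.sleep(0.05)
stop.set()
t.join(timeout=1.0)
```

Because the final `buzzer.off()` sits outside the loop, the hardware can never be left sounding after a stop or acknowledge.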


@@ -0,0 +1,427 @@
#!/usr/bin/env python3
"""
Server Sync Service for GNSS Guard Client
Syncs validation data to the central GNSS Guard Server.
Features:
- Immediate sync on each validation
- Offline queue for failed syncs
- Batch catchup for queued records
"""
import json
import logging
import sqlite3
import time
from datetime import datetime
from pathlib import Path
from typing import Dict, Any, List, Optional
import requests
logger = logging.getLogger("gnss_guard.server_sync")
class ServerSync:
"""
Syncs validation data to the central GNSS Guard Server.
Features:
- Sends validation results to server after each iteration
- Queues failed requests for retry
- Batch sends queued records on successful connection
"""
def __init__(
self,
database_path: Path,
server_url: str,
server_token: str,
asset_name: str,
batch_size: int = 100,
max_queue_size: int = 1000
):
"""
Initialize server sync service.
Args:
database_path: Path to SQLite database (for sync queue)
server_url: Base URL of GNSS Guard Server
server_token: Authentication token for this asset
asset_name: Name of this asset
batch_size: Max records to send in batch catchup
max_queue_size: Max records to keep in queue
"""
self.database_path = database_path
self.server_url = server_url.rstrip('/')
self.server_token = server_token
self.asset_name = asset_name
self.batch_size = batch_size
self.max_queue_size = max_queue_size
# Request timeout (seconds)
self.timeout = 10
# Initialize sync queue table
self._init_sync_queue_table()
logger.info(f"Server sync initialized for asset '{asset_name}' -> {server_url}")
def _init_sync_queue_table(self):
"""Create sync_queue table if it doesn't exist"""
try:
conn = sqlite3.connect(str(self.database_path), timeout=5.0)
cursor = conn.cursor()
cursor.execute("""
CREATE TABLE IF NOT EXISTS sync_queue (
id INTEGER PRIMARY KEY AUTOINCREMENT,
validation_timestamp_unix REAL NOT NULL,
payload TEXT NOT NULL,
created_at TEXT NOT NULL,
attempts INTEGER DEFAULT 0,
last_attempt_at TEXT,
UNIQUE(validation_timestamp_unix)
)
""")
cursor.execute("""
CREATE INDEX IF NOT EXISTS idx_sync_queue_timestamp
ON sync_queue(validation_timestamp_unix)
""")
conn.commit()
conn.close()
logger.debug("Sync queue table initialized")
except Exception as e:
logger.error(f"Failed to initialize sync queue table: {e}")
def _get_headers(self) -> Dict[str, str]:
"""Get request headers with authentication"""
return {
"Authorization": f"Bearer {self.server_token}",
"Content-Type": "application/json"
}
def sync_validation(self, validation_result: Dict[str, Any]) -> bool:
"""
Sync a validation result to the server.
If sync fails, the record is queued for later retry.
If sync succeeds, attempt to send any queued records.
Args:
validation_result: Validation result from CoordinateValidator
Returns:
bool: True if sync succeeded, False if queued
"""
# Prepare payload
payload = {
"validation_timestamp": validation_result.get("validation_timestamp"),
"validation_timestamp_unix": validation_result.get("validation_timestamp_unix"),
"is_valid": validation_result.get("is_valid", False),
"sources_missing": validation_result.get("sources_missing", []),
"sources_stale": validation_result.get("sources_stale", []),
"coordinate_differences": validation_result.get("coordinate_differences", {}),
"source_coordinates": validation_result.get("source_coordinates", {}),
"validation_details": validation_result.get("validation_details", {}),
}
# Try to send
success = self._send_validation(payload)
if success:
# On success, try to send queued records
self._process_queue()
else:
# On failure, queue the record
self._queue_record(payload)
return success
def _send_validation(self, payload: Dict[str, Any]) -> bool:
"""
Send a single validation record to the server.
Args:
payload: Validation data to send
Returns:
bool: True if successful
"""
try:
url = f"{self.server_url}/api/v1/validation"
response = requests.post(
url,
json=payload,
headers=self._get_headers(),
timeout=self.timeout
)
if response.status_code == 201:
logger.debug(f"Validation synced to server")
return True
elif response.status_code == 401:
logger.error(f"Server auth failed - check SERVER_TOKEN")
return False
else:
logger.warning(f"Server returned {response.status_code}: {response.text[:200]}")
return False
except requests.exceptions.Timeout:
logger.warning(f"Server request timed out")
return False
except requests.exceptions.ConnectionError:
logger.warning(f"Cannot connect to server at {self.server_url}")
return False
except Exception as e:
logger.error(f"Server sync error: {e}")
return False
def _send_batch(self, records: List[Dict[str, Any]]) -> bool:
"""
Send a batch of validation records to the server.
Args:
records: List of validation payloads
Returns:
bool: True if successful
"""
try:
url = f"{self.server_url}/api/v1/validation/batch"
response = requests.post(
url,
json={"records": records},
headers=self._get_headers(),
timeout=self.timeout * 3 # Longer timeout for batch
)
if response.status_code == 201:
result = response.json()
logger.info(f"Batch sync: {result.get('saved', 0)} saved, {result.get('skipped', 0)} skipped")
return True
else:
logger.warning(f"Batch sync failed: {response.status_code}")
return False
except Exception as e:
logger.error(f"Batch sync error: {e}")
return False
def _queue_record(self, payload: Dict[str, Any]):
"""
Add a validation record to the sync queue.
Args:
payload: Validation data to queue
"""
try:
conn = sqlite3.connect(str(self.database_path), timeout=5.0)
cursor = conn.cursor()
# Check queue size and remove oldest if full
cursor.execute("SELECT COUNT(*) FROM sync_queue")
count = cursor.fetchone()[0]
if count >= self.max_queue_size:
# Remove oldest records to make room
remove_count = count - self.max_queue_size + 10
cursor.execute("""
DELETE FROM sync_queue
WHERE id IN (
SELECT id FROM sync_queue
ORDER BY validation_timestamp_unix ASC
LIMIT ?
)
""", (remove_count,))
logger.warning(f"Sync queue full, removed {remove_count} oldest records")
# Insert new record
cursor.execute("""
INSERT OR IGNORE INTO sync_queue
(validation_timestamp_unix, payload, created_at)
VALUES (?, ?, ?)
""", (
payload["validation_timestamp_unix"],
json.dumps(payload),
datetime.utcnow().isoformat()
))
conn.commit()
conn.close()
logger.debug(f"Queued validation record for later sync")
except Exception as e:
logger.error(f"Failed to queue record: {e}")
def _process_queue(self):
"""Process queued records after successful connection"""
try:
conn = sqlite3.connect(str(self.database_path), timeout=5.0)
cursor = conn.cursor()
# Get queued records (oldest first)
cursor.execute("""
SELECT id, payload FROM sync_queue
ORDER BY validation_timestamp_unix ASC
LIMIT ?
""", (self.batch_size,))
rows = cursor.fetchall()
conn.close()
if not rows:
return
logger.info(f"Processing {len(rows)} queued records")
# Parse payloads
records = []
record_ids = []
for row_id, payload_json in rows:
try:
records.append(json.loads(payload_json))
record_ids.append(row_id)
except json.JSONDecodeError:
record_ids.append(row_id) # Still mark for deletion if corrupt
if not records:
return
# Send batch
if self._send_batch(records):
# Remove sent records from queue
self._remove_from_queue(record_ids)
else:
# Update attempt count
self._update_attempt_count(record_ids)
except Exception as e:
logger.error(f"Error processing queue: {e}")
def _remove_from_queue(self, record_ids: List[int]):
"""Remove successfully sent records from queue"""
if not record_ids:
return
try:
conn = sqlite3.connect(str(self.database_path), timeout=5.0)
cursor = conn.cursor()
placeholders = ','.join('?' * len(record_ids))
cursor.execute(f"DELETE FROM sync_queue WHERE id IN ({placeholders})", record_ids)
conn.commit()
conn.close()
logger.debug(f"Removed {len(record_ids)} records from sync queue")
except Exception as e:
logger.error(f"Failed to remove records from queue: {e}")
def _update_attempt_count(self, record_ids: List[int]):
"""Update attempt count for failed records"""
if not record_ids:
return
try:
conn = sqlite3.connect(str(self.database_path), timeout=5.0)
cursor = conn.cursor()
now = datetime.utcnow().isoformat()
placeholders = ','.join('?' * len(record_ids))
cursor.execute(f"""
UPDATE sync_queue
SET attempts = attempts + 1, last_attempt_at = ?
WHERE id IN ({placeholders})
""", [now] + record_ids)
conn.commit()
conn.close()
except Exception as e:
logger.error(f"Failed to update attempt count: {e}")
def get_queue_status(self) -> Dict[str, Any]:
"""
Get current sync queue status.
Returns:
Dictionary with queue stats
"""
try:
conn = sqlite3.connect(str(self.database_path), timeout=5.0)
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM sync_queue")
count = cursor.fetchone()[0]
cursor.execute("SELECT MIN(validation_timestamp_unix), MAX(validation_timestamp_unix) FROM sync_queue")
oldest, newest = cursor.fetchone()
conn.close()
return {
"queued_count": count,
"oldest_timestamp": oldest,
"newest_timestamp": newest,
"queue_full": count >= self.max_queue_size
}
except Exception as e:
logger.error(f"Failed to get queue status: {e}")
return {"error": str(e)}
def force_sync(self) -> bool:
"""
Force a sync of all queued records.
Returns:
bool: True if all records synced successfully
"""
logger.info("Starting forced sync of queued records")
try:
conn = sqlite3.connect(str(self.database_path), timeout=5.0)
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM sync_queue")
total = cursor.fetchone()[0]
conn.close()
if total == 0:
logger.info("No records to sync")
return True
synced = 0
while True:
# Check if queue is empty
status = self.get_queue_status()
if status.get("queued_count", 0) == 0:
break
# Process a batch
before_count = status["queued_count"]
self._process_queue()
# Check if we made progress
after_status = self.get_queue_status()
if after_status.get("queued_count", 0) >= before_count:
# No progress, connection likely failed
logger.warning("Sync stalled, connection issue")
break
synced += before_count - after_status.get("queued_count", 0)
logger.info(f"Force sync completed: {synced}/{total} records synced")
return synced == total
except Exception as e:
logger.error(f"Force sync error: {e}")
return False
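The queue's deduplication hinges on the `UNIQUE(validation_timestamp_unix)` constraint combined with `INSERT OR IGNORE`: re-queuing the same validation is a silent no-op. A minimal in-memory demonstration (columns trimmed to the ones that matter):

```python
import json
import sqlite3

# In-memory stand-in for the sync queue table defined above
conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE sync_queue (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        validation_timestamp_unix REAL NOT NULL,
        payload TEXT NOT NULL,
        UNIQUE(validation_timestamp_unix)
    )
""")
for ts in (1.0, 2.0, 2.0, 3.0):  # 2.0 is queued twice
    conn.execute(
        "INSERT OR IGNORE INTO sync_queue (validation_timestamp_unix, payload) VALUES (?, ?)",
        (ts, json.dumps({"validation_timestamp_unix": ts})),
    )
count = conn.execute("SELECT COUNT(*) FROM sync_queue").fetchone()[0]
# count is 3: the duplicate timestamp was ignored
```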


@@ -0,0 +1,4 @@
"""
Data source fetchers for GNSS Guard
"""


@@ -0,0 +1,733 @@
#!/usr/bin/env python3
"""
NMEA GPS data collector
Continuously collects GPS coordinates from NMEA devices via TCP connection
Filters for GGA sentences only and maintains latest position per source
"""
import asyncio
import logging
import os
import time
from datetime import datetime, timezone
from typing import Dict, Any, Optional, List
from queue import Queue
from config import Config
from storage.logger import StructuredLogger
logger = logging.getLogger("gnss_guard.nmea_gps")
def strip_telnet_iac(data: bytes, diagnostic_mode: bool = False) -> bytes:
"""Remove Telnet IAC (Interpret As Command) sequences from data stream.
Telnet IAC sequences are 0xFF followed by command bytes:
- 0xFF 0xFB (WILL)
- 0xFF 0xFC (WONT)
- 0xFF 0xFD (DO)
- 0xFF 0xFE (DONT)
- 0xFF 0xFF (IAC escape - becomes single 0xFF)
These sequences are negotiation bytes and should be stripped before
processing NMEA data.
"""
if not data:
return data
result = bytearray()
i = 0
while i < len(data):
if data[i] == 0xFF: # IAC byte
if i + 1 < len(data):
cmd = data[i + 1]
# IAC IAC (0xFF 0xFF) is escaped IAC - keep single 0xFF
if cmd == 0xFF:
result.append(0xFF)
i += 2
continue
# IAC command sequences (WILL/WONT/DO/DONT)
if cmd in (0xFB, 0xFC, 0xFD, 0xFE):
if diagnostic_mode:
cmd_names = {0xFB: "WILL", 0xFC: "WONT", 0xFD: "DO", 0xFE: "DONT"}
logger.debug(f"[DIAGNOSTIC] Telnet IAC: 0xFF 0x{cmd:02X} ({cmd_names.get(cmd, 'UNKNOWN')})")
i += 2 # Skip IAC + command
# WILL/WONT/DO/DONT carry a one-byte option (guard against a truncated buffer)
if i < len(data):
opt = data[i]
if diagnostic_mode:
logger.debug(f"[DIAGNOSTIC] Option: 0x{opt:02X}")
i += 1
else:
# Unknown IAC command - skip it
if diagnostic_mode:
logger.debug(f"[DIAGNOSTIC] Telnet IAC: 0xFF 0x{cmd:02X} (unknown, skipped)")
i += 2
else:
# Incomplete IAC at end of buffer - skip it
i += 1
else:
result.append(data[i])
i += 1
return bytes(result)
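A condensed restatement of the stripping rules, handy for seeing the three cases side by side (`strip_iac` is an illustrative name; the function above is the authoritative version):

```python
def strip_iac(data: bytes) -> bytes:
    # IAC IAC -> literal 0xFF; IAC + WILL/WONT/DO/DONT + option -> dropped;
    # any other IAC command -> dropped
    out, i = bytearray(), 0
    while i < len(data):
        if data[i] != 0xFF:
            out.append(data[i])
            i += 1
        elif i + 1 < len(data) and data[i + 1] == 0xFF:
            out.append(0xFF)
            i += 2
        elif i + 1 < len(data) and data[i + 1] in (0xFB, 0xFC, 0xFD, 0xFE):
            i += 3  # IAC + command + one option byte
        else:
            i += 2  # unknown or truncated command
    return bytes(out)
```

For example, a server that opens with `IAC WILL ECHO` (`0xFF 0xFB 0x01`) before streaming NMEA leaves the sentence data untouched after stripping.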
class NMEAParser:
"""Parser for NMEA 0183 sentences"""
@staticmethod
def validate_checksum(sentence: str) -> bool:
"""Validate NMEA sentence checksum"""
if "*" not in sentence:
return False
try:
data, checksum = sentence.split("*")
calculated = 0
for char in data[1:]: # Skip the '$'
calculated ^= ord(char)
return format(calculated, "02X") == checksum.upper()
except (ValueError, IndexError):
return False
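The checksum itself is just an XOR over every character between `$` and `*`, rendered as two uppercase hex digits. A small sketch of the sender side (function names are illustrative):

```python
def nmea_checksum(sentence: str) -> str:
    # XOR every character between '$' and '*'
    body = sentence[1:].split('*')[0]
    value = 0
    for ch in body:
        value ^= ord(ch)
    return format(value, '02X')

def append_checksum(sentence_body: str) -> str:
    # Turn '$GPGGA,...' into a transmit-ready '$GPGGA,...*XX'
    return f"{sentence_body}*{nmea_checksum(sentence_body)}"
```

A sentence built with `append_checksum` always passes a validator implementing the same XOR.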
@staticmethod
def parse_sentence(sentence: str) -> Dict[str, Any]:
"""Parse NMEA sentence into structured data"""
sentence = sentence.strip()
if not sentence.startswith("$"):
return {"error": "Invalid sentence format"}
# Validate checksum
checksum_valid = NMEAParser.validate_checksum(sentence)
try:
# Remove checksum if present
if "*" in sentence:
sentence = sentence.split("*")[0]
# Split into fields
fields = sentence[1:].split(",") # Remove $ and split
if not fields[0]:  # split() always yields at least one element, so test its content
return {"error": "Empty sentence"}
# Extract talker ID and sentence type
identifier = fields[0]
if len(identifier) >= 5:
# Handle special cases like SHEROT (should be S + HEROT)
if identifier.startswith("SHEROT"):
talker_id = "S"
sentence_type = "HEROT"
else:
talker_id = identifier[:2]
sentence_type = identifier[2:]
else:
talker_id = "UN"
sentence_type = identifier
parsed_data = {
"sentence_type": sentence_type,
"talker_id": talker_id,
"checksum_valid": checksum_valid,
"fields": fields[1:] if len(fields) > 1 else [],
}
# Parse specific sentence types for enhanced data extraction
if sentence_type == "GGA":
parsed_data.update(NMEAParser.parse_gga(fields))
else:
# For non-GGA sentences, just return basic parsing
pass
return parsed_data
except Exception as e:
return {"error": f"Parse error: {str(e)}"}
@staticmethod
def parse_gga(fields: List[str]) -> Dict[str, Any]:
"""Parse GGA (Global Positioning System Fix Data) sentence"""
result = {}
try:
# Time
if fields[1]:
result["time"] = fields[1]
# Latitude
if fields[2] and fields[3]:
lat_deg = float(fields[2][:2])
lat_min = float(fields[2][2:])
latitude = lat_deg + lat_min / 60
if fields[3] == "S":
latitude = -latitude
result["latitude"] = latitude
# Longitude
if fields[4] and fields[5]:
lon_deg = float(fields[4][:3])
lon_min = float(fields[4][3:])
longitude = lon_deg + lon_min / 60
if fields[5] == "W":
longitude = -longitude
result["longitude"] = longitude
# Quality and satellites
if len(fields) > 6 and fields[6]:
result["quality"] = int(fields[6])
if len(fields) > 7 and fields[7]:
result["satellites"] = int(fields[7])
if len(fields) > 8 and fields[8]:
result["hdop"] = float(fields[8])
if len(fields) > 9 and fields[9]:
result["altitude"] = float(fields[9])
return result
except (ValueError, IndexError):
return {}
class DeviceConnection:
"""Handles connection to a single NMEA device"""
def __init__(
self,
device_config: Dict[str, Any],
data_queue: Queue,
parser: NMEAParser,
vessel_info: Dict[str, Any],
diagnostic_mode: bool = False,
structured_logger: Optional[StructuredLogger] = None,
source_name: Optional[str] = None,
verbose_logging: bool = False,
):
self.device_config = device_config
self.data_queue = data_queue
self.parser = parser
self.vessel_info = vessel_info
self.diagnostic_mode = diagnostic_mode
self.structured_logger = structured_logger
self.source_name = source_name or device_config.get("id", "unknown")
self.verbose_logging = verbose_logging
self.running = False
self.sequence_number = 1
self.sentences_received = 0
self.last_sentence_log_time = time.time()
async def connect_and_collect(self):
"""Connect to device and start collecting data"""
self.running = True
device_ip = self.device_config["ip"]
device_port = self.device_config["port"]
device_id = self.device_config["id"]
logger.info(f"Starting connection to device {device_id} at {device_ip}:{device_port}")
if self.structured_logger:
self.structured_logger.info(
self.source_name,
f"Starting connection to device {device_id}",
{"device_ip": device_ip, "device_port": device_port}
)
if self.diagnostic_mode or self.verbose_logging:
logger.info(f"[DEBUG] Enhanced connection logging enabled for device {device_id}")
logger.info(f"[DEBUG] Target: {device_ip}:{device_port}")
while self.running:
try:
# Connect to device with timeout
connection_timeout = 10 # 10 seconds timeout for connection
if self.verbose_logging:
logger.info(f"[DEBUG] Attempting TCP connection to {device_ip}:{device_port} (timeout: {connection_timeout}s)...")
try:
reader, writer = await asyncio.wait_for(
asyncio.open_connection(device_ip, device_port),
timeout=connection_timeout
)
except asyncio.TimeoutError:
logger.error(f"Connection TIMEOUT for device {device_id} at {device_ip}:{device_port} (no response in {connection_timeout}s)")
if self.verbose_logging:
logger.error(f"[DEBUG] TCP connection attempt timed out after {connection_timeout} seconds")
logger.error(f"[DEBUG] Possible causes: wrong IP, firewall blocking, device offline, network issue")
if self.structured_logger:
self.structured_logger.error(
self.source_name,
f"Connection timeout for device {device_id}",
{"device_ip": device_ip, "device_port": device_port, "timeout": connection_timeout}
)
if self.running:
reconnect_delay = self.device_config.get("reconnect_delay", 5)
logger.info(f"Retrying connection to device {device_id} in {reconnect_delay} seconds...")
await asyncio.sleep(reconnect_delay)
continue
# Log socket details if verbose
if self.verbose_logging:
sock = writer.get_extra_info('socket')
if sock:
local_addr = sock.getsockname()
peer_addr = sock.getpeername()
logger.info(f"[DEBUG] TCP connection established: local={local_addr} -> remote={peer_addr}")
logger.info(f"Connected to device {device_id} at {device_ip}:{device_port}")
if self.structured_logger:
self.structured_logger.info(
self.source_name,
f"Connected to device {device_id}",
{"device_ip": device_ip, "device_port": device_port}
)
# Buffer for accumulating data and extracting complete lines
buffer = b""
# Keep connection alive and read continuously
while self.running:
try:
# Read raw bytes from device with timeout
data = await asyncio.wait_for(reader.read(4096), timeout=30.0)
if not data:
logger.warning(f"No data received from device {device_id}, connection may be closed")
if self.verbose_logging:
logger.warning(f"[DEBUG] TCP read returned empty data - server closed connection or EOF")
if self.structured_logger:
self.structured_logger.warning(
self.source_name,
f"No data received from device {device_id}, connection may be closed"
)
break
# Strip Telnet IAC sequences before processing
cleaned_data = strip_telnet_iac(data, self.diagnostic_mode)
# Log data reception periodically (every 10 seconds) to show activity
current_time = time.time()
if current_time - self.last_sentence_log_time >= 10:
logger.debug(f"Received {len(cleaned_data)} bytes from {device_id} (total sentences: {self.sentences_received})")
self.last_sentence_log_time = current_time
# Add cleaned data to buffer
buffer += cleaned_data
# Process complete lines from buffer
while b"\n" in buffer or b"\r" in buffer:
# Find line ending (CRLF, LF, or CR)
line_end = -1
if b"\r\n" in buffer:
line_end = buffer.find(b"\r\n")
line = buffer[:line_end]
buffer = buffer[line_end + 2 :]
elif b"\n" in buffer:
line_end = buffer.find(b"\n")
line = buffer[:line_end]
buffer = buffer[line_end + 1 :]
elif b"\r" in buffer:
line_end = buffer.find(b"\r")
line = buffer[:line_end]
buffer = buffer[line_end + 1 :]
else:
break
# Decode and process NMEA sentence
try:
line_str = line.decode("ascii", errors="ignore").strip()
if line_str.startswith("$"):
self.sentences_received += 1
# Log first sentence and every 10th sentence to show activity (unless verbose logging is enabled)
# Verbose logging will be handled in the processing task
if not self.verbose_logging:
if self.sentences_received == 1:
logger.info(f"NMEA {device_id}: First sentence received: {line_str[:80]}")
elif self.sentences_received % 10 == 0:
logger.debug(f"NMEA {device_id}: Received sentence #{self.sentences_received}: {line_str[:50]}...")
await self.process_nmea_sentence(line_str, device_ip, device_port, device_id)
except Exception as e:
logger.debug(f"Error decoding line: {e}")
# Small delay to avoid overwhelming the system
read_delay = float(os.getenv("READ_DELAY_SECONDS", "0.1"))
await asyncio.sleep(read_delay)
except asyncio.TimeoutError:
logger.warning(f"Timeout reading from device {device_id} (30s no data)")
if self.verbose_logging:
logger.warning(f"[DEBUG] Read timeout - device may be disconnected or not sending data")
logger.warning(f"[DEBUG] Total sentences received this session: {self.sentences_received}")
if self.structured_logger:
self.structured_logger.warning(
self.source_name,
f"Timeout reading from device {device_id}"
)
continue
except Exception as e:
logger.error(f"Error reading from device {device_id}: {e}")
if self.verbose_logging:
logger.error(f"[DEBUG] Read error type: {type(e).__name__}")
logger.error(f"[DEBUG] Read error details: {e}")
if self.structured_logger:
self.structured_logger.error(
self.source_name,
f"Error reading from device {device_id}",
{"error": str(e)}
)
break
writer.close()
await writer.wait_closed()
logger.info(f"Disconnected from device {device_id}")
if self.structured_logger:
self.structured_logger.info(
self.source_name,
f"Disconnected from device {device_id}"
)
except ConnectionRefusedError as e:
logger.error(f"Connection REFUSED for device {device_id} at {device_ip}:{device_port} - Is the device running?")
if self.verbose_logging:
logger.error(f"[DEBUG] ConnectionRefusedError: {e}")
logger.error(f"[DEBUG] This usually means: port is closed, no service listening, or firewall blocking")
if self.structured_logger:
self.structured_logger.error(
self.source_name,
f"Connection refused for device {device_id}",
{"error": str(e), "device_ip": device_ip, "device_port": device_port}
)
if self.running:
reconnect_delay = self.device_config.get("reconnect_delay", 5)
logger.info(f"Retrying connection to device {device_id} in {reconnect_delay} seconds...")
await asyncio.sleep(reconnect_delay)
except OSError as e:
# Catch network-level errors (no route, network unreachable, etc.)
logger.error(f"Network error for device {device_id} at {device_ip}:{device_port}: {e}")
if self.verbose_logging:
logger.error(f"[DEBUG] OSError: {e}")
logger.error(f"[DEBUG] Error code: {e.errno if hasattr(e, 'errno') else 'N/A'}")
logger.error(f"[DEBUG] This may indicate: wrong IP, network unreachable, or routing issue")
if self.structured_logger:
self.structured_logger.error(
self.source_name,
f"Network error for device {device_id}",
{"error": str(e), "device_ip": device_ip, "device_port": device_port}
)
if self.running:
reconnect_delay = self.device_config.get("reconnect_delay", 5)
logger.info(f"Retrying connection to device {device_id} in {reconnect_delay} seconds...")
await asyncio.sleep(reconnect_delay)
except asyncio.TimeoutError as e:
logger.error(f"Connection TIMEOUT for device {device_id} at {device_ip}:{device_port}")
if self.verbose_logging:
logger.error(f"[DEBUG] Connection attempt timed out")
logger.error(f"[DEBUG] This may indicate: wrong IP, firewall, or device not responding")
if self.structured_logger:
self.structured_logger.error(
self.source_name,
f"Connection timeout for device {device_id}",
{"device_ip": device_ip, "device_port": device_port}
)
if self.running:
reconnect_delay = self.device_config.get("reconnect_delay", 5)
logger.info(f"Retrying connection to device {device_id} in {reconnect_delay} seconds...")
await asyncio.sleep(reconnect_delay)
except Exception as e:
logger.error(f"Connection error for device {device_id}: {e}")
if self.verbose_logging:
logger.error(f"[DEBUG] Exception type: {type(e).__name__}")
logger.error(f"[DEBUG] Exception details: {e}")
if self.structured_logger:
self.structured_logger.error(
self.source_name,
f"Connection error for device {device_id}",
{"error": str(e), "error_type": type(e).__name__, "device_ip": device_ip, "device_port": device_port}
)
if self.running:
reconnect_delay = self.device_config.get("reconnect_delay", 5)
logger.info(f"Retrying connection to device {device_id} in {reconnect_delay} seconds...")
if self.structured_logger:
self.structured_logger.info(
self.source_name,
f"Retrying connection to device {device_id}",
{"reconnect_delay": reconnect_delay}
)
await asyncio.sleep(reconnect_delay)
async def process_nmea_sentence(self, sentence: str, source_ip: str, source_port: int, device_id: str):
"""Process a single NMEA sentence"""
try:
start_time = time.time()
# Parse the sentence
parsed_data = self.parser.parse_sentence(sentence)
# Create record
now = datetime.now(timezone.utc)
record = {
"timestamp": now.isoformat(),
"timestamp_unix": now.timestamp() * 1000, # milliseconds
"vessel": self.vessel_info,
"source_ip": source_ip,
"source_port": source_port,
"device_id": device_id,
"raw_nmea": sentence,
"parsed_data": parsed_data,
"validation": {
"checksum_valid": parsed_data.get("checksum_valid", False),
"parse_successful": "error" not in parsed_data,
"errors": ([parsed_data.get("error")] if "error" in parsed_data else []),
},
"collection_metadata": {
"collector_version": "1.0.0",
"processing_delay_ms": int((time.time() - start_time) * 1000),
"sequence_number": self.sequence_number,
},
}
self.sequence_number += 1
# Add to queue for processing
self.data_queue.put(record)
except Exception as e:
logger.error(f"Error processing NMEA sentence from device {device_id}: {e}")
def stop(self):
"""Stop device connection"""
self.running = False
class NMEAGPSCollector:
"""Collector for NMEA GPS coordinates from vessel GPS devices"""
def __init__(
self,
config: Config,
source_name: str,
device_ip: str,
device_port: int,
structured_logger: Optional[StructuredLogger] = None
):
"""
Initialize NMEA GPS collector
Args:
config: Configuration object
source_name: Source identifier (e.g., "nmea_primary", "nmea_secondary")
device_ip: IP address of NMEA device
device_port: Port of NMEA device
structured_logger: Optional StructuredLogger instance for JSON logging
"""
self.config = config
self.source_name = source_name
self.device_ip = device_ip
self.device_port = device_port
self.structured_logger = structured_logger
self.latest_position: Optional[Dict[str, Any]] = None
self.lock = asyncio.Lock()
self.parser = NMEAParser()
self.data_queue = Queue()
self.device_config = {
"id": source_name,
"ip": device_ip,
"port": device_port,
"reconnect_delay": 5
}
self.vessel_info = {"serial": source_name}
self.connection = None
self.running = False
self.gga_count_period = 0
self.last_activity_log_time = time.time()
async def start(self):
"""Start the NMEA collector as an async task"""
if not self.device_ip or self.device_port == 0:
logger.warning(f"NMEA collector {self.source_name} not configured (missing IP/port)")
if self.structured_logger:
self.structured_logger.warning(
self.source_name,
"NMEA collector not configured",
{"reason": "missing IP/port"}
)
return
self.running = True
# Log verbose mode settings
if self.config.nmea_verbose_logging:
logger.info(f"[DEBUG] ========== NMEA DEBUG MODE ENABLED for {self.source_name} ==========")
logger.info(f"[DEBUG] Device configuration:")
logger.info(f"[DEBUG] IP: {self.device_ip}")
logger.info(f"[DEBUG] Port: {self.device_port}")
logger.info(f"[DEBUG] Source name: {self.source_name}")
logger.info(f"[DEBUG] Will show: connection attempts, TCP details, all NMEA sentences, errors")
self.connection = DeviceConnection(
device_config=self.device_config,
data_queue=self.data_queue,
parser=self.parser,
vessel_info=self.vessel_info,
diagnostic_mode=self.config.nmea_verbose_logging, # Enable diagnostic mode when verbose
structured_logger=self.structured_logger,
source_name=self.source_name,
verbose_logging=self.config.nmea_verbose_logging
)
# Start connection task
asyncio.create_task(self._connection_task())
# Start processing task
asyncio.create_task(self._processing_task())
async def _connection_task(self):
"""Task that manages the device connection"""
await self.connection.connect_and_collect()
async def _processing_task(self):
"""Task that processes NMEA sentences from the queue"""
while self.running:
try:
# Check if queue has items (non-blocking)
try:
record = self.data_queue.get_nowait()
except Exception:  # queue.Empty: nothing to process yet
# Queue is empty, sleep and continue
# Log periodic activity summary (every 30 seconds)
current_time = time.time()
if current_time - self.last_activity_log_time >= 30:
if self.gga_count_period > 0:
# Only log activity summary if verbose logging is enabled
if self.config.nmea_verbose_logging:
logger.info(f"NMEA {self.source_name}: Activity - {self.gga_count_period} GGA sentences processed in last 30s")
else:
# Always log warnings if no GGA sentences received (important for diagnostics)
logger.warning(f"NMEA {self.source_name}: No GGA sentences received in last 30s (checking connection...)")
self.gga_count_period = 0
self.last_activity_log_time = current_time
await asyncio.sleep(0.1)
continue
# Process only GGA sentences
parsed_data = record.get("parsed_data", {})
sentence_type = parsed_data.get("sentence_type", "")
# Log all sentences if verbose logging is enabled
if self.config.nmea_verbose_logging:
raw_nmea = record.get("raw_nmea", "")
logger.info(f"NMEA {self.source_name}: [{sentence_type}] {raw_nmea[:100]}")
if sentence_type == "GGA":
self.gga_count_period += 1
# Only log GGA count if verbose logging is enabled
if self.config.nmea_verbose_logging:
logger.info(f"NMEA {self.source_name}: Received GGA sentence (total this period: {self.gga_count_period})")
await self._process_gga(record)
else:
# Log non-GGA sentences at debug level (unless verbose logging is enabled)
if not self.config.nmea_verbose_logging:
logger.debug(f"Received {sentence_type} sentence from {self.source_name} (not processing)")
except Exception as e:
logger.error(f"Error in NMEA processing task for {self.source_name}: {e}")
if self.structured_logger:
self.structured_logger.error(
self.source_name,
"Error in NMEA processing task",
{"error": str(e)}
)
await asyncio.sleep(1.0)
async def _process_gga(self, record: Dict[str, Any]):
"""Process a GGA sentence and update latest position"""
try:
parsed_data = record.get("parsed_data", {})
# Extract coordinates from parsed GGA data
latitude = parsed_data.get("latitude")
longitude = parsed_data.get("longitude")
altitude = parsed_data.get("altitude")
if latitude is None or longitude is None:
logger.debug(f"GGA sentence from {self.source_name} missing coordinates")
return
# Get timestamp
timestamp_str = record.get("timestamp", "")
try:
timestamp = datetime.fromisoformat(timestamp_str.replace("Z", "+00:00"))
except (ValueError, AttributeError):
timestamp = datetime.now(timezone.utc)
# Update latest position
async with self.lock:
self.latest_position = {
"source": self.source_name,
"latitude": float(latitude),
"longitude": float(longitude),
"altitude": float(altitude) if altitude is not None else None,
"timestamp": timestamp.isoformat(),
"timestamp_unix": timestamp.timestamp(),
"supplementary_data": {
"satellites": parsed_data.get("satellites"),
"quality": parsed_data.get("quality"),
"hdop": parsed_data.get("hdop"),
"time": parsed_data.get("time"),
"raw_nmea": record.get("raw_nmea"),
}
}
# Log successful position update only if verbose logging is enabled
if self.config.nmea_verbose_logging:
alt_str = f"{altitude:.1f}m" if altitude is not None else "N/A"
logger.info(
f"NMEA {self.source_name}: Updated position - "
f"Lat: {latitude:.6f}, Lon: {longitude:.6f}, "
f"Alt: {alt_str}, Satellites: {parsed_data.get('satellites', 'N/A')}, "
f"Quality: {parsed_data.get('quality', 'N/A')}"
)
if self.structured_logger:
self.structured_logger.info(
self.source_name,
"Position updated from GGA sentence",
{
"latitude": latitude,
"longitude": longitude,
"altitude": altitude,
"satellites": parsed_data.get("satellites"),
"quality": parsed_data.get("quality"),
"hdop": parsed_data.get("hdop")
}
)
except Exception as e:
logger.error(f"Error processing GGA sentence from {self.source_name}: {e}")
if self.structured_logger:
self.structured_logger.error(
self.source_name,
"Error processing GGA sentence",
{"error": str(e)}
)
async def get_latest_position(self) -> Optional[Dict[str, Any]]:
"""Get the latest position from this collector"""
async with self.lock:
if self.latest_position:
# Create a copy to avoid race conditions
return dict(self.latest_position)
return None
async def stop(self):
"""Stop the collector"""
self.running = False
if self.connection:
self.connection.stop()
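The checksum and coordinate handling used by `NMEAParser` above can be exercised in isolation. A minimal standalone sketch (the helper names here are illustrative, not part of the module):

```python
# Standalone check of the NMEA XOR checksum and the ddmm.mmmm ->
# decimal-degrees conversion mirrored from NMEAParser / parse_gga.

def nmea_checksum_ok(sentence: str) -> bool:
    # XOR every character between '$' and '*', compare to the hex suffix
    body, _, checksum = sentence.partition("*")
    calc = 0
    for ch in body[1:]:  # skip the leading '$'
        calc ^= ord(ch)
    return format(calc, "02X") == checksum.upper()

def dm_to_decimal(value: str, hemisphere: str, deg_digits: int) -> float:
    # NMEA encodes latitude as ddmm.mmmm (2 degree digits) and
    # longitude as dddmm.mmmm (3 degree digits)
    degrees = float(value[:deg_digits])
    minutes = float(value[deg_digits:])
    decimal = degrees + minutes / 60
    return -decimal if hemisphere in ("S", "W") else decimal

sentence = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(nmea_checksum_ok(sentence))                      # → True
print(round(dm_to_decimal("4807.038", "N", 2), 4))     # → 48.1173
print(round(dm_to_decimal("01131.000", "E", 3), 4))    # → 11.5167
```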


@@ -0,0 +1,134 @@
#!/usr/bin/env python3
"""
Starlink GPS data fetcher
Fetches GPS coordinates from Starlink terminal via gRPC
Reuses logic from _old_project/starlink_location.py
"""
import sys
import logging
from pathlib import Path
from datetime import datetime, timezone
from typing import Dict, Any, Optional, List
from config import Config
logger = logging.getLogger("gnss_guard.starlink_gps")
# Add starlink-grpc-tools to path
starlink_tools_path = Path(__file__).parent.parent / "starlink-grpc-tools"
if str(starlink_tools_path) not in sys.path:
sys.path.insert(0, str(starlink_tools_path))
try:
import starlink_grpc
except ImportError:
logger.error("Failed to import starlink_grpc. Make sure starlink-grpc-tools is available.")
starlink_grpc = None
class StarlinkGPSFetcher:
"""Fetcher for Starlink GPS coordinates"""
def __init__(self, config: Config):
self.config = config
self.target_ip = f"{config.starlink_ip}:{config.starlink_port}"
def fetch(self) -> List[Dict[str, Any]]:
"""
Fetch GPS coordinates from Starlink terminal
Returns:
List of dictionaries with position data (starlink_location and starlink_gps)
Returns empty list if fetch fails
"""
if not self.config.starlink_enabled:
return []
if starlink_grpc is None:
logger.error("starlink_grpc module not available")
return []
max_retries = self.config.starlink_max_retries
results = []
for attempt in range(1, max_retries + 1):
try:
# Create channel context
context = starlink_grpc.ChannelContext(target=self.target_ip)
# Get location data
try:
raw_location = starlink_grpc.get_location(context)
location_info = starlink_grpc.location_data(context)
# Extract Starlink Location coordinates
if location_info.get("latitude") is not None and location_info.get("longitude") is not None:
timestamp = datetime.now(timezone.utc)
position_uncertainty = None
if hasattr(raw_location, 'sigma_m'):
try:
position_uncertainty = float(raw_location.sigma_m)
except (ValueError, TypeError):
pass
results.append({
"source": "starlink_location",
"latitude": float(location_info.get("latitude")),
"longitude": float(location_info.get("longitude")),
"altitude": float(location_info.get("altitude", 0)),
"position_uncertainty_m": position_uncertainty,
"timestamp": timestamp.isoformat(),
"timestamp_unix": timestamp.timestamp(),
"supplementary_data": {
"location_source": str(raw_location.source) if hasattr(raw_location, 'source') else None,
"horizontal_speed_mps": raw_location.horizontal_speed_mps if hasattr(raw_location, 'horizontal_speed_mps') else None,
"vertical_speed_mps": raw_location.vertical_speed_mps if hasattr(raw_location, 'vertical_speed_mps') else None,
}
})
# Extract Starlink GPS (LLA) coordinates
if hasattr(raw_location, 'lla'):
lla = raw_location.lla
lla_data = {}
for attr in dir(lla):
if not attr.startswith('_') and not callable(getattr(lla, attr)):
try:
lla_data[attr] = getattr(lla, attr)
except Exception:
pass
if lla_data.get('lat') is not None and lla_data.get('lon') is not None:
timestamp = datetime.now(timezone.utc)
results.append({
"source": "starlink_gps",
"latitude": float(lla_data.get('lat')),
"longitude": float(lla_data.get('lon')),
"altitude": float(lla_data.get('alt', 0)),
"timestamp": timestamp.isoformat(),
"timestamp_unix": timestamp.timestamp(),
"supplementary_data": {
**{k: v for k, v in lla_data.items() if k not in ['lat', 'lon', 'alt', 'DESCRIPTOR']}
}
})
except starlink_grpc.GrpcError as e:
if attempt < max_retries:
logger.debug(f"Starlink GPS fetch attempt {attempt}/{max_retries} failed: {e}, retrying...")
continue
else:
logger.error(f"Failed to fetch Starlink location data after {max_retries} attempts: {e}")
return []
# Success - return results
return results
except Exception as e:
if attempt < max_retries:
logger.debug(f"Starlink GPS fetch attempt {attempt}/{max_retries} failed: {e}, retrying...")
continue
else:
logger.error(f"Unexpected error fetching Starlink GPS data after {max_retries} attempts: {e}")
return []
return []
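The fetch loop above follows a plain bounded-retry shape: try, log intermediate failures at debug, log the final failure at error and give up. A generic sketch of that pattern under illustrative names (not the fetcher's actual API):

```python
import logging

logger = logging.getLogger("retry_demo")

def fetch_with_retries(fetch_once, max_retries: int = 3):
    """Call fetch_once() up to max_retries times; return its result or None."""
    for attempt in range(1, max_retries + 1):
        try:
            return fetch_once()
        except Exception as e:
            if attempt < max_retries:
                logger.debug("attempt %d/%d failed: %s, retrying...", attempt, max_retries, e)
            else:
                logger.error("giving up after %d attempts: %s", max_retries, e)
    return None

# A source that fails twice, then succeeds on the third call
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return {"latitude": 59.43, "longitude": 24.75}

print(fetch_with_retries(flaky, max_retries=3))  # → {'latitude': 59.43, 'longitude': 24.75}
```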


@@ -0,0 +1,156 @@
#!/usr/bin/env python3
"""
TM AIS GPS data fetcher
Fetches GPS coordinates from TM AIS GPS antenna via HTTP API
"""
import logging
import requests
from datetime import datetime, timezone
from typing import Dict, Any, Optional
from config import Config
# Suppress SSL warnings for self-signed certificates
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
logger = logging.getLogger("gnss_guard.tm_ais_gps")
class TMAISGPSFetcher:
"""Fetcher for TM AIS GPS coordinates"""
def __init__(self, config: Config):
self.config = config
self.url = config.tm_ais_url
self.token = config.tm_ais_token # Already trimmed in Config
self.last_fetch_failed = False
# Warn if token is empty
if not self.token:
logger.warning("TM AIS GPS token is empty - authentication will fail")
def fetch(self) -> Optional[Dict[str, Any]]:
"""
Fetch GPS coordinates from TM AIS GPS antenna
Returns:
Dictionary with position data or None if fetch fails
"""
if not self.config.tm_ais_enabled:
return None
headers = {"Authorization": f"Bearer {self.token}"}
max_retries = self.config.tm_ais_max_retries
last_error = None
# Log request details (mask token for security)
token_preview = f"{self.token[:4]}..." if len(self.token) > 4 else "***"
logger.debug(f"TM AIS GPS request: URL={self.url}, Token={token_preview}")
# Try up to max_retries times
for attempt_number in range(1, max_retries + 1):
logger.info(f"TM AIS GPS fetch attempt {attempt_number}/{max_retries}")
try:
# Disable SSL verification for self-signed certificates (equivalent to curl -k)
response = requests.get(
self.url,
headers=headers,
verify=False, # Equivalent to curl -k flag
timeout=5.0
)
# Log response status for debugging
logger.debug(f"TM AIS GPS response status: {response.status_code}")
response.raise_for_status()
data = response.json()
# Extract coordinates
latitude = data.get("latitude")
longitude = data.get("longitude")
gps_timestamp = data.get("gps_timestamp")
response_timestamp = data.get("response_timestamp")
if latitude is None or longitude is None:
logger.warning("TM AIS GPS response missing latitude or longitude")
self.last_fetch_failed = True
return None
# Parse timestamps and convert to UTC
gps_ts = None
if gps_timestamp:
try:
# Parse timestamp (handles both Z and timezone offsets)
parsed_ts = datetime.fromisoformat(gps_timestamp.replace("Z", "+00:00"))
# Convert to UTC if timezone-aware, otherwise assume UTC
if parsed_ts.tzinfo is not None:
gps_ts = parsed_ts.astimezone(timezone.utc)
else:
gps_ts = parsed_ts.replace(tzinfo=timezone.utc)
except Exception as e:
logger.debug(f"Failed to parse GPS timestamp: {e}")
response_ts = datetime.now(timezone.utc)
if response_timestamp:
try:
# Parse timestamp (handles both Z and timezone offsets)
parsed_ts = datetime.fromisoformat(response_timestamp.replace("Z", "+00:00"))
# Convert to UTC if timezone-aware, otherwise assume UTC
if parsed_ts.tzinfo is not None:
response_ts = parsed_ts.astimezone(timezone.utc)
else:
response_ts = parsed_ts.replace(tzinfo=timezone.utc)
except Exception as e:
logger.debug(f"Failed to parse response timestamp: {e}")
# Success - reset failure flag
if self.last_fetch_failed:
logger.info("TM AIS GPS connection restored")
self.last_fetch_failed = False
return {
"source": "tm_ais",
"latitude": float(latitude),
"longitude": float(longitude),
"altitude": None,
"timestamp": gps_ts.isoformat() if gps_ts else response_ts.isoformat(),
"timestamp_unix": (gps_ts or response_ts).timestamp(),
"supplementary_data": {
"gps_timestamp": gps_timestamp,
"response_timestamp": response_timestamp,
}
}
except requests.exceptions.HTTPError as e:
# Log response body for 401 errors to help debug authentication issues
if hasattr(e.response, 'status_code') and e.response.status_code == 401:
try:
error_body = e.response.text[:200] # Limit to first 200 chars
logger.debug(f"TM AIS GPS 401 response body: {error_body}")
except Exception:
pass
last_error = str(e)
logger.info(f"TM AIS GPS attempt {attempt_number}/{max_retries} failed: {e}")
# Continue to next attempt
except requests.exceptions.RequestException as e:
last_error = str(e)
logger.info(f"TM AIS GPS attempt {attempt_number}/{max_retries} failed: {e}")
# Continue to next attempt
except Exception as e:
last_error = str(e)
logger.info(f"TM AIS GPS attempt {attempt_number}/{max_retries} unexpected error: {e}")
# Continue to next attempt
# All attempts failed
if not self.last_fetch_failed:
logger.error(f"Failed to fetch TM AIS GPS data after {max_retries} attempts. Last error: {last_error}")
else:
logger.debug(f"TM AIS GPS still unavailable after {max_retries} attempts")
self.last_fetch_failed = True
return None
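The timestamp handling above, which accepts both a trailing `Z` and explicit offsets and normalises everything to UTC, can be isolated into one small helper. A sketch, assuming Python 3.7+ `datetime.fromisoformat`:

```python
from datetime import datetime, timezone

def to_utc(ts: str) -> datetime:
    """Parse an ISO-8601 timestamp ('Z' or offset form) and return it in UTC.

    Naive timestamps (no zone info) are assumed to already be UTC,
    matching the fetcher's behaviour.
    """
    parsed = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    if parsed.tzinfo is not None:
        return parsed.astimezone(timezone.utc)
    return parsed.replace(tzinfo=timezone.utc)

print(to_utc("2024-05-01T12:00:00Z").isoformat())        # → 2024-05-01T12:00:00+00:00
print(to_utc("2024-05-01T14:00:00+02:00").isoformat())   # → 2024-05-01T12:00:00+00:00
print(to_utc("2024-05-01T12:00:00").isoformat())         # → 2024-05-01T12:00:00+00:00
```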


@@ -0,0 +1,53 @@
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
name: Create and publish a Docker image to GitHub Packages Repository
on: workflow_dispatch
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
platforms: 'arm64'
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to the Container registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}


@@ -0,0 +1,32 @@
FROM python:3.9
LABEL maintainer="neurocis <neurocis@neurocis.me>"
RUN true && \
\
ARCH=`uname -m`; \
if [ "$ARCH" = "armv7l" ]; then \
NOBIN_OPT="--no-binary=grpcio"; \
else \
NOBIN_OPT=""; \
fi; \
# Install python prerequisites
pip3 install --no-cache-dir $NOBIN_OPT \
croniter==2.0.5 pytz==2024.1 six==1.16.0 \
grpcio==1.62.2 \
influxdb==5.3.2 certifi==2024.2.2 charset-normalizer==3.3.2 idna==3.7 \
msgpack==1.0.8 requests==2.31.0 urllib3==2.2.1 \
influxdb-client==1.42.0 reactivex==4.0.4 \
paho-mqtt==2.0.0 \
pypng==0.20220715.0 \
python-dateutil==2.9.0 \
typing_extensions==4.11.0 \
yagrc==1.1.2 grpcio-reflection==1.62.2 protobuf==4.25.3
COPY dish_*.py loop_util.py starlink_*.py entrypoint.sh /app/
WORKDIR /app
ENTRYPOINT ["/bin/sh", "/app/entrypoint.sh"]
CMD ["dish_grpc_influx.py status alert_detail"]
# docker run -d --name='starlink-grpc-tools' -e INFLUXDB_HOST=192.168.1.34 -e INFLUXDB_PORT=8086 -e INFLUXDB_DB=starlink
# --net='br0' --ip='192.168.1.39' ghcr.io/sparky8512/starlink-grpc-tools dish_grpc_influx.py status alert_detail


@@ -0,0 +1,24 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <https://unlicense.org>


@@ -0,0 +1,818 @@
{
"__inputs": [
{
"name": "VAR_DS_INFLUXDB",
"type": "constant",
"label": "InfluxDB DataSource",
"value": "InfluxDB-starlinkstats",
"description": ""
},
{
"name": "VAR_TBL_STATS",
"type": "constant",
"label": "Table name for Statistics",
"value": "spacex.starlink.user_terminal.status",
"description": ""
}
],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "7.3.6"
},
{
"type": "panel",
"id": "graph",
"name": "Graph",
"version": ""
},
{
"type": "datasource",
"id": "influxdb",
"name": "InfluxDB",
"version": "1.0.0"
},
{
"type": "panel",
"id": "table",
"name": "Table",
"version": ""
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"id": null,
"iteration": 1610413551748,
"links": [],
"panels": [
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$DS_INFLUXDB",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 11,
"w": 12,
"x": 0,
"y": 0
},
"hiddenSeries": false,
"id": 4,
"legend": {
"alignAsTable": true,
"avg": true,
"current": true,
"hideZero": false,
"max": true,
"min": false,
"rightSide": false,
"show": true,
"total": false,
"values": true
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.3.6",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"groupBy": [],
"measurement": "/^$TBL_STATS$/",
"orderByTime": "ASC",
"policy": "default",
"queryType": "randomWalk",
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"downlink_throughput_bps"
],
"type": "field"
},
{
"params": [
"bps Down"
],
"type": "alias"
}
],
[
{
"params": [
"uplink_throughput_bps"
],
"type": "field"
},
{
"params": [
"bps Up"
],
"type": "alias"
}
]
],
"tags": []
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Actual Throughput",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:1099",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"$$hashKey": "object:1100",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$DS_INFLUXDB",
"description": "",
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 11,
"w": 12,
"x": 12,
"y": 0
},
"hiddenSeries": false,
"id": 2,
"legend": {
"alignAsTable": true,
"avg": true,
"current": true,
"max": true,
"min": true,
"show": true,
"total": false,
"values": true
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.3.6",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"groupBy": [],
"measurement": "/^$TBL_STATS$/",
"orderByTime": "ASC",
"policy": "default",
"queryType": "randomWalk",
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"pop_ping_latency_ms"
],
"type": "field"
},
{
"params": [
"Ping Latency"
],
"type": "alias"
}
],
[
{
"params": [
"pop_ping_drop_rate"
],
"type": "field"
},
{
"params": [
"Drop Rate"
],
"type": "alias"
}
],
[
{
"params": [
"fraction_obstructed"
],
"type": "field"
},
{
"params": [
"*100"
],
"type": "math"
},
{
"params": [
"Percent Obstructed"
],
"type": "alias"
}
],
[
{
"params": [
"snr"
],
"type": "field"
},
{
"params": [
"*10"
],
"type": "math"
},
{
"params": [
"SNR"
],
"type": "alias"
}
]
],
"tags": []
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Ping Latency, Drop Rate, Percent Obstructed & SNR",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"cacheTimeout": null,
"datasource": "$DS_INFLUXDB",
"description": "",
"fieldConfig": {
"defaults": {
"custom": {
"align": null,
"filterable": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "Obstructed"
},
"properties": [
{
"id": "custom.width",
"value": 105
}
]
},
{
"matcher": {
"id": "byName",
"options": "Wrong Location"
},
"properties": [
{
"id": "custom.width",
"value": 114
}
]
},
{
"matcher": {
"id": "byName",
"options": "Thermal Throttle"
},
"properties": [
{
"id": "custom.width",
"value": 121
}
]
},
{
"matcher": {
"id": "byName",
"options": "Thermal Shutdown"
},
"properties": [
{
"id": "custom.width",
"value": 136
}
]
},
{
"matcher": {
"id": "byName",
"options": "Motors Stuck"
},
"properties": [
{
"id": "custom.width",
"value": 116
}
]
},
{
"matcher": {
"id": "byName",
"options": "Time"
},
"properties": [
{
"id": "custom.width",
"value": 143
}
]
},
{
"matcher": {
"id": "byName",
"options": "State"
},
"properties": [
{
"id": "custom.width",
"value": 118
}
]
},
{
"matcher": {
"id": "byName",
"options": "Bad Location"
},
"properties": [
{
"id": "custom.width",
"value": 122
}
]
},
{
"matcher": {
"id": "byName",
"options": "Temp Throttle"
},
"properties": [
{
"id": "custom.width",
"value": 118
}
]
},
{
"matcher": {
"id": "byName",
"options": "Temp Shutdown"
},
"properties": [
{
"id": "custom.width",
"value": 134
}
]
},
{
"matcher": {
"id": "byName",
"options": "Software Version"
},
"properties": [
{
"id": "custom.width",
"value": 369
}
]
}
]
},
"gridPos": {
"h": 7,
"w": 24,
"x": 0,
"y": 11
},
"id": 6,
"interval": null,
"links": [],
"options": {
"showHeader": true,
"sortBy": [
{
"desc": true,
"displayName": "Time (last)"
}
]
},
"pluginVersion": "7.3.6",
"targets": [
{
"groupBy": [],
"hide": false,
"measurement": "/^$TBL_STATS$/",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT \"currently_obstructed\" AS \"Obstructed\", \"alert_unexpected_location\" AS \"Wrong Location\", \"alert_thermal_throttle\" AS \"Thermal Throttle\", \"alert_thermal_shutdown\" AS \"Thermal Shutdown\", \"alert_motors_stuck\" AS \"Motors Stuck\", \"state\" AS \"State\" FROM \"spacex.starlink.user_terminal.status\" WHERE $timeFilter",
"queryType": "randomWalk",
"rawQuery": false,
"refId": "A",
"resultFormat": "table",
"select": [
[
{
"params": [
"state"
],
"type": "field"
},
{
"params": [
"State"
],
"type": "alias"
}
],
[
{
"params": [
"currently_obstructed"
],
"type": "field"
},
{
"params": [
"Obstructed"
],
"type": "alias"
}
],
[
{
"params": [
"alert_unexpected_location"
],
"type": "field"
},
{
"params": [
"Bad Location"
],
"type": "alias"
}
],
[
{
"params": [
"alert_thermal_throttle"
],
"type": "field"
},
{
"params": [
"Temp Throttled"
],
"type": "alias"
}
],
[
{
"params": [
"alert_thermal_shutdown"
],
"type": "field"
},
{
"params": [
"Temp Shutdown"
],
"type": "alias"
}
],
[
{
"params": [
"alert_motors_stuck"
],
"type": "field"
},
{
"params": [
"Motors Stuck"
],
"type": "alias"
}
],
[
{
"params": [
"software_version"
],
"type": "field"
},
{
"params": [
"Software Version"
],
"type": "alias"
}
],
[
{
"params": [
"hardware_version"
],
"type": "field"
},
{
"params": [
"Hardware Version"
],
"type": "alias"
}
]
],
"tags": []
}
],
"timeFrom": null,
"timeShift": null,
"title": "Alerts & Versions",
"transformations": [
{
"id": "groupBy",
"options": {
"fields": {
"Bad Location": {
"aggregations": [],
"operation": "groupby"
},
"Hardware Version": {
"aggregations": [],
"operation": "groupby"
},
"Motors Stuck": {
"aggregations": [],
"operation": "groupby"
},
"Obstructed": {
"aggregations": [],
"operation": "groupby"
},
"Software Version": {
"aggregations": [],
"operation": "groupby"
},
"State": {
"aggregations": [],
"operation": "groupby"
},
"Temp Shutdown": {
"aggregations": [],
"operation": "groupby"
},
"Temp Throttle": {
"aggregations": [],
"operation": "groupby"
},
"Temp Throttled": {
"aggregations": [],
"operation": "groupby"
},
"Thermal Shutdown": {
"aggregations": [],
"operation": "groupby"
},
"Thermal Throttle": {
"aggregations": [],
"operation": "groupby"
},
"Time": {
"aggregations": [
"last"
],
"operation": "aggregate"
},
"Wrong Location": {
"aggregations": [],
"operation": "groupby"
}
}
}
}
],
"type": "table"
}
],
"refresh": false,
"schemaVersion": 26,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"current": {
"value": "${VAR_DS_INFLUXDB}",
"text": "${VAR_DS_INFLUXDB}",
"selected": false
},
"error": null,
"hide": 2,
"label": "InfluxDB DataSource",
"name": "DS_INFLUXDB",
"options": [
{
"value": "${VAR_DS_INFLUXDB}",
"text": "${VAR_DS_INFLUXDB}",
"selected": false
}
],
"query": "${VAR_DS_INFLUXDB}",
"skipUrlSync": false,
"type": "constant"
},
{
"current": {
"value": "${VAR_TBL_STATS}",
"text": "${VAR_TBL_STATS}",
"selected": false
},
"error": null,
"hide": 2,
"label": "Table name for Statistics",
"name": "TBL_STATS",
"options": [
{
"value": "${VAR_TBL_STATS}",
"text": "${VAR_TBL_STATS}",
"selected": false
}
],
"query": "${VAR_TBL_STATS}",
"skipUrlSync": false,
"type": "constant"
}
]
},
"time": {
"from": "now-24h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
]
},
"timezone": "",
"title": "Starlink Statistics",
"uid": "ymkHwLaMz",
"version": 36
}
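
The `__inputs` entries above declare import-time constants (`VAR_DS_INFLUXDB`, `VAR_TBL_STATS`) whose `${VAR_...}` placeholders Grafana's import UI substitutes into the dashboard body. A minimal sketch of doing the same substitution programmatically before pushing the JSON through Grafana's HTTP API (the helper name is illustrative, not part of this repo):

```python
import json

def resolve_inputs(dashboard_json: str, values: dict) -> dict:
    """Replace ${VAR_*} placeholders from an exported Grafana dashboard
    with concrete values, mirroring what the import UI does."""
    for name, value in values.items():
        dashboard_json = dashboard_json.replace("${%s}" % name, value)
    dashboard = json.loads(dashboard_json)
    # __inputs/__requires are export metadata, not part of the dashboard model.
    dashboard.pop("__inputs", None)
    dashboard.pop("__requires", None)
    return dashboard

# Tiny stand-in for the full export above:
exported = '{"__inputs": [], "templating": {"list": [{"query": "${VAR_DS_INFLUXDB}"}]}}'
resolved = resolve_inputs(exported, {"VAR_DS_INFLUXDB": "InfluxDB-starlinkstats"})
```

This keeps the exported file generic while letting each deployment bind its own data-source name at import time.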


@@ -0,0 +1,675 @@
{
"__inputs": [
{
"name": "DS_INFLUXDB",
"label": "InfluxDB",
"description": "",
"type": "datasource",
"pluginId": "influxdb",
"pluginName": "InfluxDB"
},
{
"name": "VAR_TBL_STATS",
"label": "influx",
"description": "",
"type": "datasource",
"pluginId": "influxdb",
"pluginName": "InfluxDB"
},
{
"name": "VAR_DS_INFLUXDB",
"type": "constant",
"label": "InfluxDB DataSource",
"value": "InfluxDB-starlinkstats",
"description": ""
},
{
"name": "VAR_TBL_STATS",
"type": "constant",
"label": "Table name for Statistics",
"value": "spacex.starlink.user_terminal.status",
"description": ""
}
],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "8.2.5"
},
{
"type": "datasource",
"id": "influxdb",
"name": "InfluxDB",
"version": "1.0.0"
},
{
"type": "panel",
"id": "table",
"name": "Table",
"version": ""
},
{
"type": "panel",
"id": "timeseries",
"name": "Time series",
"version": ""
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"gnetId": null,
"graphTooltip": 0,
"id": null,
"iteration": 1637920561166,
"links": [],
"liveNow": false,
"panels": [
{
"datasource": "${DS_INFLUXDB}",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": true,
"stacking": {
"group": "A",
"mode": "normal"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "binbps"
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": "/(uplink)/m"
},
"properties": [
{
"id": "displayName",
"value": "Uplink"
}
]
},
{
"matcher": {
"id": "byName",
"options": "downlink_throughput_bps"
},
"properties": [
{
"id": "displayName",
"value": "Downlink"
}
]
},
{
"matcher": {
"id": "byName",
"options": "uplink_throughput_bps"
},
"properties": [
{
"id": "displayName",
"value": "Uplink"
}
]
}
]
},
"gridPos": {
"h": 11,
"w": 12,
"x": 0,
"y": 0
},
"id": 4,
"options": {
"legend": {
"calcs": [
"mean",
"max",
"lastNotNull"
],
"displayMode": "table",
"placement": "bottom"
},
"tooltip": {
"mode": "multi"
}
},
"pluginVersion": "8.2.5",
"targets": [
{
"hide": false,
"query": "from(bucket: \"starlink\")\n |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n |> filter(fn: (r) => r[\"_field\"] == \"downlink_throughput_bps\" or r[\"_field\"] == \"uplink_throughput_bps\")\n |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)\n |> yield(name: \"last\")",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Actual Throughput",
"type": "timeseries"
},
{
"datasource": "${DS_INFLUXDB}",
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": true,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "short"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "fraction_obstructed"
},
"properties": [
{
"id": "displayName",
"value": "Fraction Obstruction"
},
{
"id": "unit",
"value": "%"
}
]
},
{
"matcher": {
"id": "byName",
"options": "pop_ping_drop_rate"
},
"properties": [
{
"id": "displayName",
"value": "Pop Ping Drop Rate"
},
{
"id": "unit",
"value": "%"
}
]
},
{
"matcher": {
"id": "byName",
"options": "pop_ping_latency_ms"
},
"properties": [
{
"id": "displayName",
"value": "Pop Ping Latency Rate"
},
{
"id": "unit",
"value": "ms"
}
]
}
]
},
"gridPos": {
"h": 11,
"w": 12,
"x": 12,
"y": 0
},
"id": 2,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull",
"max",
"min"
],
"displayMode": "table",
"placement": "bottom"
},
"tooltip": {
"mode": "multi"
}
},
"pluginVersion": "8.2.5",
"targets": [
{
"query": "from(bucket: \"starlink\")\n |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n |> filter(fn: (r) => r[\"_field\"] == \"pop_ping_latency_ms\" or r[\"_field\"] == \"pop_ping_drop_rate\" or r[\"_field\"] == \"fraction_obstructed\" or r[\"_field\"] == \"snr\")\n |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)\n |> yield(name: \"last\")",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Ping Latency, Drop Rate, Percent Obstructed & SNR",
"type": "timeseries"
},
{
"cacheTimeout": null,
"datasource": "${DS_INFLUXDB}",
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": null,
"displayMode": "auto",
"filterable": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "alerts"
},
"properties": [
{
"id": "displayName",
"value": "Alerts"
},
{
"id": "custom.width",
"value": 100
},
{
"id": "custom.align",
"value": "left"
}
]
},
{
"matcher": {
"id": "byName",
"options": "currently_obstructed"
},
"properties": [
{
"id": "displayName",
"value": "Currently Obstructed"
},
{
"id": "custom.width",
"value": 200
}
]
},
{
"matcher": {
"id": "byName",
"options": "hardware_version"
},
"properties": [
{
"id": "displayName",
"value": "Hardware Revision"
},
{
"id": "custom.width",
"value": 200
}
]
},
{
"matcher": {
"id": "byName",
"options": "software_version"
},
"properties": [
{
"id": "displayName",
"value": "Software Revision"
},
{
"id": "custom.width",
"value": 400
}
]
},
{
"matcher": {
"id": "byName",
"options": "state"
},
"properties": [
{
"id": "displayName",
"value": "State"
},
{
"id": "custom.width",
"value": 100
}
]
},
{
"matcher": {
"id": "byName",
"options": "alert_motors_stuck"
},
"properties": [
{
"id": "displayName",
"value": "Motor Stuck"
},
{
"id": "custom.width",
"value": 100
}
]
},
{
"matcher": {
"id": "byName",
"options": "alert_unexpected_location"
},
"properties": [
{
"id": "displayName",
"value": "Unexpected Location"
},
{
"id": "custom.width",
"value": 150
}
]
},
{
"matcher": {
"id": "byName",
"options": "alert_thermal_shutdown"
},
"properties": [
{
"id": "displayName",
"value": "Thermal Shutdown"
},
{
"id": "custom.width",
"value": 140
}
]
},
{
"matcher": {
"id": "byName",
"options": "alert_thermal_throttle"
},
"properties": [
{
"id": "displayName",
"value": "Thermal Throttle"
},
{
"id": "custom.width",
"value": 130
}
]
},
{
"matcher": {
"id": "byName",
"options": "uptime"
},
"properties": [
{
"id": "displayName",
"value": "Uptime"
},
{
"id": "custom.align",
"value": "left"
},
{
"id": "unit",
"value": "s"
}
]
},
{
"matcher": {
"id": "byName",
"options": "Time"
},
"properties": [
{
"id": "custom.width",
"value": 150
}
]
}
]
},
"gridPos": {
"h": 7,
"w": 24,
"x": 0,
"y": 11
},
"id": 6,
"interval": null,
"links": [],
"options": {
"frameIndex": 0,
"showHeader": true,
"sortBy": [
{
"desc": true,
"displayName": "Time (last)"
}
]
},
"pluginVersion": "8.2.5",
"targets": [
{
"query": "from(bucket: \"starlink\")\n |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n |> filter(fn: (r) => r[\"_field\"] == \"hardware_version\" or r[\"_field\"] == \"state\" or r[\"_field\"] == \"software_version\" or r[\"_field\"] == \"alerts\" or r[\"_field\"] == \"currently_obstructed\" or r[\"_field\"] == \"alert_unexpected_location\" or r[\"_field\"] == \"alert_thermal_throttle\" or r[\"_field\"] == \"alert_thermal_shutdown\" or r[\"_field\"] == \"alert_motors_stuck\" or r[\"_field\"] == \"uptime\" )\n |> yield(name: \"last\")",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Alerts & Versions",
"transformations": [
{
"id": "seriesToColumns",
"options": {
"byField": "Time"
}
}
],
"type": "table"
}
],
"refresh": false,
"schemaVersion": 32,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"description": null,
"error": null,
"hide": 2,
"label": "InfluxDB DataSource",
"name": "DS_INFLUXDB",
"query": "${VAR_DS_INFLUXDB}",
"skipUrlSync": false,
"type": "constant",
"current": {
"value": "${VAR_DS_INFLUXDB}",
"text": "${VAR_DS_INFLUXDB}",
"selected": false
},
"options": [
{
"value": "${VAR_DS_INFLUXDB}",
"text": "${VAR_DS_INFLUXDB}",
"selected": false
}
]
},
{
"description": null,
"error": null,
"hide": 2,
"label": "Table name for Statistics",
"name": "TBL_STATS",
"query": "${VAR_TBL_STATS}",
"skipUrlSync": false,
"type": "constant",
"current": {
"value": "${VAR_TBL_STATS}",
"text": "${VAR_TBL_STATS}",
"selected": false
},
"options": [
{
"value": "${VAR_TBL_STATS}",
"text": "${VAR_TBL_STATS}",
"selected": false
}
]
}
]
},
"time": {
"from": "now-30m",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
]
},
"timezone": "",
"title": "Starlink Statistics",
"uid": "ymkHwLaMz",
"version": 12
}


@@ -0,0 +1,308 @@
{
"layout": {},
"schedule": {
"enabled": false,
"cronSchedule": "0 0 * * *",
"tz": "UTC",
"keepLastN": 2
},
"name": "Starlink Statistics",
"description": "This Dashboard is meant to be a clone of the starlink App's Statitics Page",
"elements": [
{
"config": {
"markdown": "# Starlink Statistics\n--- \nThis Dashboard is meant to be a clone of the starlink App's Statitics Page. Increase time python script calls API for more accurate results. (Default API Call: 60 seconds)\n",
"axis": {}
},
"id": "1p7z19fum",
"layout": {
"x": 0,
"y": 0,
"w": 12,
"h": 2
},
"variant": "markdown",
"type": "markdown.default"
},
{
"config": {
"markdown": "### What is Latency?\n- Starlink and the Starlink router both send test pings to the internet many times per minute. Latency measures how long, in milliseconds, a request takes to go to the internet and back.\n\n- High latency may impact your experience with online gaming, video calls, and web browsing. It may be caused by extreme weather or periods of high network usage.\n\n",
"axis": {}
},
"id": "84gt5a832",
"layout": {
"x": 0,
"y": 2,
"w": 6,
"h": 2
},
"variant": "markdown",
"type": "markdown.default"
},
{
"config": {
"markdown": "### What is power Draw?\n- Power Draw Measures the average amount of power that Starlink Uses. Starlink will use more power while heating to melt snow.\n\n",
"axis": {}
},
"id": "pyoifapcf",
"layout": {
"x": 6,
"y": 2,
"w": 6,
"h": 2
},
"variant": "markdown",
"type": "markdown.default"
},
{
"config": {
"onClickAction": {
"type": "None"
},
"style": true,
"applyThreshold": false,
"colorThresholds": {
"thresholds": [
{
"color": "#45850B",
"threshold": 30
},
{
"color": "#EFDB23",
"threshold": 70
},
{
"color": "#B20000",
"threshold": 100
}
]
},
"axis": {
"xAxis": "avg_mean_full_ping_latency",
"yAxis": [
"avg_mean_full_ping_latency"
]
},
"decimals": 2,
"suffix": " ms"
},
"search": {
"type": "inline",
"query": "dataset=\"starlink\" sourcetype in (\"starlink:ping_latency\") | extract parser=json_parser | summarize avg_mean_full_ping_latency=avg(mean_full_ping_latency) ",
"earliest": "-15m",
"latest": "now"
},
"id": "kfntldnby",
"layout": {
"x": 0,
"y": 4,
"w": 6,
"h": 3
},
"type": "counter.single",
"title": "Average Mean Full Ping Latency - Last 15 Min"
},
{
"config": {
"onClickAction": {
"type": "None"
},
"style": true,
"applyThreshold": false,
"colorThresholds": {
"thresholds": [
{
"color": "#45850B",
"threshold": 30
},
{
"color": "#EFDB23",
"threshold": 70
},
{
"color": "#B20000",
"threshold": 100
}
]
},
"axis": {
"xAxis": "avg_mean_power",
"yAxis": [
"avg_mean_power"
]
},
"decimals": 2,
"suffix": " Watts"
},
"search": {
"type": "inline",
"query": "dataset=\"starlink\" sourcetype=\"starlink:power\" | extract parser=json_parser | summarize avg_mean_power=avg(mean_power)",
"earliest": "-15m",
"latest": "now"
},
"id": "7o73dimso",
"layout": {
"x": 6,
"y": 4,
"w": 6,
"h": 3
},
"type": "counter.single",
"title": "Power Draw Average - Last 15 Min"
},
{
"config": {
"colorPalette": 0,
"colorPaletteReversed": false,
"customData": {
"trellis": false,
"connectNulls": "Leave gaps",
"stack": false,
"seriesCount": 1
},
"xAxis": {
"labelOrientation": 0,
"position": "Bottom"
},
"yAxis": {
"position": "Left",
"scale": "Linear",
"splitLine": true,
"interval": 2,
"min": 20,
"max": 35
},
"axis": {
"yAxis": [
"values_ping_latency"
],
"yAxisExcluded": [
"_time"
]
},
"legend": {
"position": "Right",
"truncate": true
},
"onClickAction": {
"type": "None"
},
"seriesInfo": {
"values_ping_latency": {
"type": "column"
},
"_time": {}
}
},
"search": {
"type": "inline",
"query": "dataset=\"starlink\" sourcetype in (\"starlink:ping_latency\") | extract parser=json_parser | timestats values(mean_full_ping_latency) ",
"earliest": "-15m",
"latest": "now"
},
"id": "n5lu6hhw0",
"layout": {
"x": 0,
"y": 7,
"w": 6,
"h": 5
},
"type": "chart.column",
"hidePanel": false,
"title": "Ping Latency - Last 15 Min"
},
{
"config": {
"colorPalette": 1,
"colorPaletteReversed": false,
"customData": {
"trellis": false,
"connectNulls": "Leave gaps",
"stack": false,
"seriesCount": 1
},
"xAxis": {
"labelOrientation": 0,
"position": "Bottom"
},
"yAxis": {
"position": "Left",
"scale": "Linear",
"splitLine": true,
"min": 25,
"max": 70,
"interval": 5
},
"axis": {
"yAxis": [
"values_latest_power"
],
"yAxisExcluded": [
"_time"
]
},
"legend": {
"position": "Top",
"truncate": true
},
"onClickAction": {
"type": "None"
},
"seriesInfo": {
"_time": {
"color": "#29bd00"
},
"values_latest_power": {
"color": "#369900",
"type": "area"
}
}
},
"search": {
"type": "inline",
"query": "dataset=\"starlink\" sourcetype=\"starlink:power\" | extract parser=json_parser | timestats values(latest_power)",
"earliest": "-15m",
"latest": "now"
},
"id": "20ekij4vo",
"layout": {
"x": 6,
"y": 7,
"w": 6,
"h": 5
},
"type": "chart.column",
"title": "Power Draw - Last 15 Min"
},
{
"config": {
"markdown": "## What is ping success?\n- Starlink and the Starlink router both send test pings to the internet many times per minute. It is normal for some pings to be dropped, and your connection to the internet to remain unaffected.",
"axis": {}
},
"id": "2o01xt5al",
"layout": {
"x": 0,
"y": 12,
"w": 6,
"h": 2
},
"variant": "markdown",
"type": "markdown.default"
},
{
"config": {
"markdown": "## What is throughput?\n- 'Download' and 'Upload' measure the amount of data that your Starlink is downloading from or uploading to the internet. Download a large file or run a speed test to watch it jump!",
"axis": {}
},
"id": "hwr5nirfk",
"layout": {
"x": 6,
"y": 12,
"w": 5,
"h": 2
},
"variant": "markdown",
"type": "markdown.default"
}
]
}


@@ -0,0 +1,142 @@
#!/usr/bin/env python3
"""Check whether there is a software update pending on a Starlink user terminal.
Optionally, reboot the dish to initiate install if there is an update pending.
"""
import argparse
from datetime import datetime
import logging
import sys
import time
import grpc
import loop_util
import starlink_grpc
# This is the enum value spacex_api.device.dish_pb2.SoftwareUpdateState.REBOOT_REQUIRED
REBOOT_REQUIRED = 6
# This is the enum value spacex_api.device.dish_pb2.SoftwareUpdateState.DISABLED
UPDATE_DISABLED = 7
def loop_body(opts, context):
now = time.time()
try:
status = starlink_grpc.get_status(context)
except (AttributeError, ValueError, grpc.RpcError) as e:
logging.error("Failed getting dish status: %s", str(starlink_grpc.GrpcError(e)))
return 1
# There are at least 3 and maybe 4 redundant flags that indicate whether or
# not a software update is pending. In order to be robust against future
# changes in the protocol and/or implementation of it, this scripts checks
# them all, while allowing for the possibility that some of them have been
# obsoleted and thus no longer present in the reflected protocol classes.
try:
alert_flag = status.alerts.install_pending
except (AttributeError, ValueError):
alert_flag = None
try:
state_flag = status.software_update_state == REBOOT_REQUIRED
state_dflag = status.software_update_state == UPDATE_DISABLED
except (AttributeError, ValueError):
state_flag = None
state_dflag = None
try:
stats_flag = status.software_update_stats.software_update_state == REBOOT_REQUIRED
stats_dflag = status.software_update_stats.software_update_state == UPDATE_DISABLED
except (AttributeError, ValueError):
stats_flag = None
stats_dflag = None
try:
ready_flag = status.swupdate_reboot_ready
except (AttributeError, ValueError):
ready_flag = None
try:
sw_version = status.device_info.software_version
except (AttributeError, ValueError):
sw_version = "UNKNOWN"
if opts.verbose >= 2:
print("Pending flags:", alert_flag, state_flag, stats_flag, ready_flag)
print("Disable flags:", state_dflag, stats_dflag)
if state_dflag or stats_dflag:
logging.warning("Software updates appear to be disabled")
# The swupdate_reboot_ready field does not appear to be in use, so may
# mean something other than what it sounds like. Only use it if none of
# the others are available.
if alert_flag is None and state_flag is None and stats_flag is None:
install_pending = bool(ready_flag)
else:
install_pending = alert_flag or state_flag or stats_flag
if opts.verbose:
dtnow = datetime.fromtimestamp(now, tz=getattr(opts, "timezone", None))
print(dtnow.replace(microsecond=0, tzinfo=None).isoformat(), "- ", end="")
if install_pending:
print("Install pending, current version:", sw_version)
if opts.install:
print("Rebooting dish to initiate install")
try:
starlink_grpc.reboot(context)
except starlink_grpc.GrpcError as e:
logging.error("Failed reboot request: %s", str(e))
return 1
elif opts.verbose:
print("No install pending, current version:", sw_version)
return 0
def parse_args():
parser = argparse.ArgumentParser(description="Check for Starlink user terminal software update")
parser.add_argument(
"-i",
"--install",
action="store_true",
help="Initiate dish reboot to perform install if there is an update pending")
parser.add_argument("-g",
"--target",
help="host:port of dish to query, default is the standard IP address "
"and port (192.168.100.1:9200)")
parser.add_argument("-v",
"--verbose",
action="count",
default=0,
help="Increase verbosity, may be used multiple times")
loop_util.add_args(parser)
opts = parser.parse_args()
loop_util.check_args(opts, parser)
return opts
def main():
opts = parse_args()
logging.basicConfig(format="%(levelname)s: %(message)s")
context = starlink_grpc.ChannelContext(target=opts.target)
try:
rc = loop_util.run_loop(opts, loop_body, opts, context)
finally:
context.close()
sys.exit(rc)
if __name__ == "__main__":
main()
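
The redundant-flag check in `loop_body` above — prefer the three primary pending indicators, and fall back to `swupdate_reboot_ready` only when none of them could be read from the reflected protocol — can be distilled into a standalone helper (a sketch with a hypothetical name, not a function in this repo):

```python
def install_pending_from_flags(alert_flag, state_flag, stats_flag, ready_flag):
    """Combine redundant update-pending flags, tolerating absent fields.

    Each flag is True/False if readable, or None if the field was missing
    from the protocol. ready_flag is consulted only as a last resort,
    since its meaning is less certain.
    """
    if alert_flag is None and state_flag is None and stats_flag is None:
        return bool(ready_flag)
    return bool(alert_flag or state_flag or stats_flag)
```

Checking several fields this way keeps the script working even if future firmware drops one of them from the status message.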


@@ -0,0 +1,445 @@
"""Shared code among the dish_grpc_* commands
Note:
This module is not intended to be generically useful or to export a stable
interface. Rather, it should be considered an implementation detail of the
other scripts, and will change as needed.
For a module that exports an interface intended for general use, see
starlink_grpc.
"""
import argparse
from datetime import datetime
from datetime import timezone
import logging
import re
import time
from typing import List
import grpc
import starlink_grpc
BRACKETS_RE = re.compile(r"([^[]*)(\[((\d+),|)(\d*)\]|)$")
LOOP_TIME_DEFAULT = 0
STATUS_MODES: List[str] = ["status", "obstruction_detail", "alert_detail", "location"]
HISTORY_STATS_MODES: List[str] = [
"ping_drop", "ping_run_length", "ping_latency", "ping_loaded_latency", "usage", "power"
]
UNGROUPED_MODES: List[str] = []
def create_arg_parser(output_description, bulk_history=True):
"""Create an argparse parser and add the common command line options."""
parser = argparse.ArgumentParser(
description="Collect status and/or history data from a Starlink user terminal and " +
output_description,
epilog="Additional arguments can be read from a file by including @FILENAME as an "
"option, where FILENAME is a path to a file that contains arguments, one per line.",
fromfile_prefix_chars="@",
add_help=False)
# need to remember this for later
parser.bulk_history = bulk_history
group = parser.add_argument_group(title="General options")
group.add_argument("-g",
"--target",
help="host:port of dish to query, default is the standard IP address "
"and port (192.168.100.1:9200)")
group.add_argument("-h", "--help", action="help", help="Be helpful")
group.add_argument("-N",
"--numeric",
action="store_true",
help="Record boolean values as 1 and 0 instead of True and False")
group.add_argument("-t",
"--loop-interval",
type=float,
default=float(LOOP_TIME_DEFAULT),
help="Loop interval in seconds or 0 for no loop, default: " +
str(LOOP_TIME_DEFAULT))
group.add_argument("-v", "--verbose", action="store_true", help="Be verbose")
group = parser.add_argument_group(title="History mode options")
group.add_argument("-a",
"--all-samples",
action="store_const",
const=-1,
dest="samples",
help="Parse all valid samples")
group.add_argument("-o",
"--poll-loops",
type=int,
help="Poll history for N loops and aggregate data before computing history "
"stats; this allows for a smaller loop interval with less loss of data "
"when the dish reboots",
metavar="N")
if bulk_history:
sample_help = ("Number of data samples to parse; normally applies to first loop "
"iteration only, default: all in bulk mode, loop interval if loop "
"interval set, else all available samples")
no_counter_help = ("Don't track sample counter across loop iterations in non-bulk "
"modes; keep using samples option value instead")
else:
sample_help = ("Number of data samples to parse; normally applies to first loop "
"iteration only, default: loop interval, if set, else all available " +
"samples")
no_counter_help = ("Don't track sample counter across loop iterations; keep using "
"samples option value instead")
group.add_argument("-s", "--samples", type=int, help=sample_help)
group.add_argument("-j", "--no-counter", action="store_true", help=no_counter_help)
return parser
def run_arg_parser(parser, need_id=False, no_stdout_errors=False, modes=None):
"""Run parse_args on a parser previously created with create_arg_parser
Args:
need_id (bool): A flag to set in options to indicate whether or not to
set dish_id on the global state object; see get_data for more
detail.
no_stdout_errors (bool): A flag set in options to protect stdout from
error messages, in case that's where the data output is going, so
may be being redirected to a file.
modes (list[str]): Optionally provide the subset of data group modes
to allow.
Returns:
An argparse Namespace object with the parsed options set as attributes.
"""
if modes is None:
modes = STATUS_MODES + HISTORY_STATS_MODES + UNGROUPED_MODES
if parser.bulk_history:
modes.append("bulk_history")
parser.add_argument("mode",
nargs="+",
choices=modes,
help="The data group to record, one or more of: " + ", ".join(modes),
metavar="mode")
opts = parser.parse_args()
if opts.loop_interval <= 0.0 or opts.poll_loops is None:
opts.poll_loops = 1
elif opts.poll_loops < 2:
parser.error("Poll loops arg must be 2 or greater to be meaningful")
# for convenience, set flags for whether any mode in a group is selected
status_set = set(STATUS_MODES)
opts.status_mode = bool(status_set.intersection(opts.mode))
status_set.remove("location")
# special group for any status mode other than location
opts.pure_status_mode = bool(status_set.intersection(opts.mode))
opts.history_stats_mode = bool(set(HISTORY_STATS_MODES).intersection(opts.mode))
opts.bulk_mode = "bulk_history" in opts.mode
if opts.samples is None:
opts.samples = int(opts.loop_interval) if opts.loop_interval >= 1.0 else -1
opts.bulk_samples = -1
else:
# for scripts that query starting history counter, skip it if samples
# was explicitly set
opts.skip_query = True
opts.bulk_samples = opts.samples
opts.no_stdout_errors = no_stdout_errors
opts.need_id = need_id
return opts
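The derived mode flags set at the end of run_arg_parser can be illustrated in isolation. A minimal sketch, assuming the upstream STATUS_MODES list (the other mode groups work the same way):

```python
# Sketch of run_arg_parser's derived flags, assuming upstream's STATUS_MODES.
STATUS_MODES = ["status", "obstruction_detail", "alert_detail", "location"]

def mode_flags(mode):
    status_set = set(STATUS_MODES)
    # any status-group mode selected?
    status_mode = bool(status_set.intersection(mode))
    # "pure" status excludes location, which requires a separate grpc request
    status_set.remove("location")
    pure_status_mode = bool(status_set.intersection(mode))
    return status_mode, pure_status_mode
```

So a mode list of just `["location"]` sets `status_mode` but not `pure_status_mode`, which is what lets get_status_data skip the main status request when only location data was asked for.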
def conn_error(opts, msg, *args):
"""Indicate an error in an appropriate way."""
# Connection errors that happen in an interval loop are not critical
# failures, but are interesting enough to print in non-verbose mode.
if opts.loop_interval > 0.0 and not opts.no_stdout_errors:
print(msg % args)
else:
logging.error(msg, *args)
class GlobalState:
"""A class for keeping state across loop iterations."""
def __init__(self, target=None):
# counter, timestamp for bulk_history:
self.counter = None
self.timestamp = None
# counter, timestamp for history stats:
self.counter_stats = None
self.timestamp_stats = None
self.dish_id = None
self.context = starlink_grpc.ChannelContext(target=target)
self.poll_count = 0
self.accum_history = None
self.first_poll = True
self.warn_once_location = True
def shutdown(self):
self.context.close()
def get_data(opts, gstate, add_item, add_sequence, add_bulk=None, flush_history=False):
"""Fetch data from the dish, pull it apart and call back with the pieces.
This function uses call backs to return the useful data. If need_id is set
in opts, then it is guaranteed that dish_id will have been set in gstate
prior to any of the call backs being invoked.
Args:
opts (object): The options object returned from run_arg_parser.
gstate (GlobalState): An object for keeping track of state across
multiple calls.
add_item (function): Call back for non-sequence data, with prototype:
add_item(name, value, category)
add_sequence (function): Call back for sequence data, with prototype:
add_sequence(name, value, category, start_index_label)
add_bulk (function): Optional. Call back for bulk history data, with
prototype:
add_bulk(bulk_data, count, start_timestamp, start_counter)
flush_history (bool): Optional. If true, run in a special mode that
emits (only) history stats for already polled data, if any,
regardless of --poll-loops state. Intended for script shutdown
operation, in order to flush stats for polled history data which
would otherwise be lost on script restart.
Returns:
Tuple with 3 values. The first value is 1 if there were any failures
getting data from the dish, otherwise 0. The second value is an int
timestamp for status data (data with category "status"), or None if
no status data was reported. The third value is an int timestamp for
history stats data (non-bulk data with category other than "status"),
or None if no history stats data was reported.
"""
if flush_history and opts.poll_loops < 2:
return 0, None, None
rc = 0
status_ts = None
hist_ts = None
if not flush_history:
rc, status_ts = get_status_data(opts, gstate, add_item, add_sequence)
if opts.history_stats_mode and (not rc or opts.poll_loops > 1):
hist_rc, hist_ts = get_history_stats(opts, gstate, add_item, add_sequence, flush_history)
if not rc:
rc = hist_rc
if not flush_history and opts.bulk_mode and add_bulk and not rc:
rc = get_bulk_data(opts, gstate, add_bulk)
return rc, status_ts, hist_ts
def add_data_normal(data, category, add_item, add_sequence):
for key, val in data.items():
name, start, seq = BRACKETS_RE.match(key).group(1, 4, 5)
if seq is None:
add_item(name, val, category)
else:
add_sequence(name, val, category, int(start) if start else 0)
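The key convention handled here comes from BRACKETS_RE, which is defined earlier in the module (outside this hunk). A hedged sketch of how it splits field names, with the regex reproduced as an assumption:

```python
import re

# Assumed to match the BRACKETS_RE defined earlier in dish_common.py: it splits
# keys like "snr[5,]" into a base name, an optional start index, and a
# sequence marker.
BRACKETS_RE = re.compile(r"([^[]*)(\[((\d+),|)(\d*)\])?$")

def split_key(key):
    # group 1 = name, group 4 = start index (or None), group 5 = sequence part
    return BRACKETS_RE.match(key).group(1, 4, 5)
```

`split_key("counter")` yields `("counter", None, None)` and is routed to add_item; `split_key("snr[5,]")` yields `("snr", "5", "")` and is routed to add_sequence with start index 5.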
def add_data_numeric(data, category, add_item, add_sequence):
for key, val in data.items():
name, start, seq = BRACKETS_RE.match(key).group(1, 4, 5)
if seq is None:
add_item(name, int(val) if isinstance(val, int) else val, category)
else:
add_sequence(name,
[int(subval) if isinstance(subval, int) else subval for subval in val],
category,
int(start) if start else 0)
def get_status_data(opts, gstate, add_item, add_sequence):
if opts.status_mode:
timestamp = int(time.time())
add_data = add_data_numeric if opts.numeric else add_data_normal
        if opts.pure_status_mode or (opts.need_id and gstate.dish_id is None):
try:
groups = starlink_grpc.status_data(context=gstate.context)
status_data, obstruct_detail, alert_detail = groups[0:3]
except starlink_grpc.GrpcError as e:
if "status" in opts.mode:
if opts.need_id and gstate.dish_id is None:
conn_error(opts, "Dish unreachable and ID unknown, so not recording state")
return 1, None
if opts.verbose:
print("Dish unreachable")
add_item("state", "DISH_UNREACHABLE", "status")
return 0, timestamp
conn_error(opts, "Failure getting status: %s", str(e))
return 1, None
if opts.need_id:
gstate.dish_id = status_data["id"]
del status_data["id"]
if "status" in opts.mode:
add_data(status_data, "status", add_item, add_sequence)
if "obstruction_detail" in opts.mode:
add_data(obstruct_detail, "status", add_item, add_sequence)
if "alert_detail" in opts.mode:
add_data(alert_detail, "status", add_item, add_sequence)
if "location" in opts.mode:
try:
location = starlink_grpc.location_data(context=gstate.context)
except starlink_grpc.GrpcError as e:
conn_error(opts, "Failure getting location: %s", str(e))
return 1, None
if location["latitude"] is None and gstate.warn_once_location:
logging.warning("Location data not enabled. See README for more details.")
gstate.warn_once_location = False
add_data(location, "status", add_item, add_sequence)
return 0, timestamp
elif opts.need_id and gstate.dish_id is None:
try:
gstate.dish_id = starlink_grpc.get_id(context=gstate.context)
except starlink_grpc.GrpcError as e:
conn_error(opts, "Failure getting dish ID: %s", str(e))
return 1, None
if opts.verbose:
print("Using dish ID: " + gstate.dish_id)
return 0, None
def get_history_stats(opts, gstate, add_item, add_sequence, flush_history):
"""Fetch history stats. See `get_data` for details."""
if flush_history or (opts.need_id and gstate.dish_id is None):
history = None
else:
try:
timestamp = int(time.time())
history = starlink_grpc.get_history(context=gstate.context)
gstate.timestamp_stats = timestamp
except (AttributeError, ValueError, grpc.RpcError) as e:
conn_error(opts, "Failure getting history: %s", str(starlink_grpc.GrpcError(e)))
history = None
parse_samples = opts.samples if gstate.counter_stats is None else -1
start = gstate.counter_stats if gstate.counter_stats else None
# Accumulate polled history data into gstate.accum_history, even if there
# was a dish reboot.
if gstate.accum_history:
if history is not None:
gstate.accum_history = starlink_grpc.concatenate_history(gstate.accum_history,
history,
samples1=parse_samples,
start1=start,
verbose=opts.verbose)
# Counter tracking gets too complicated to handle across reboots
# once the data has been accumulated, so just have concatenate
# handle it on the first polled loop and use a value of 0 to
# remember it was done (as opposed to None, which is used for a
# different purpose).
if not opts.no_counter:
gstate.counter_stats = 0
else:
gstate.accum_history = history
# When resuming from prior count with --poll-loops set, advance the loop
# count by however many loops worth of data was caught up on. This helps
# avoid abnormally large sample counts in the first set of output data.
if gstate.first_poll and gstate.accum_history:
if opts.poll_loops > 1 and gstate.counter_stats:
new_samples = gstate.accum_history.current - gstate.counter_stats
if new_samples < 0:
new_samples = gstate.accum_history.current
if new_samples > len(gstate.accum_history.pop_ping_drop_rate):
new_samples = len(gstate.accum_history.pop_ping_drop_rate)
gstate.poll_count = max(gstate.poll_count, int((new_samples-1) / opts.loop_interval))
gstate.first_poll = False
if gstate.poll_count < opts.poll_loops - 1 and not flush_history:
gstate.poll_count += 1
return 0, None
gstate.poll_count = 0
if gstate.accum_history is None:
return (0, None) if flush_history else (1, None)
groups = starlink_grpc.history_stats(parse_samples,
start=start,
verbose=opts.verbose,
history=gstate.accum_history)
general, ping, runlen, latency, loaded, usage, power = groups[0:7]
add_data = add_data_numeric if opts.numeric else add_data_normal
add_data(general, "ping_stats", add_item, add_sequence)
if "ping_drop" in opts.mode:
add_data(ping, "ping_stats", add_item, add_sequence)
if "ping_run_length" in opts.mode:
add_data(runlen, "ping_stats", add_item, add_sequence)
if "ping_latency" in opts.mode:
add_data(latency, "ping_stats", add_item, add_sequence)
if "ping_loaded_latency" in opts.mode:
add_data(loaded, "ping_stats", add_item, add_sequence)
if "usage" in opts.mode:
add_data(usage, "usage", add_item, add_sequence)
if "power" in opts.mode:
add_data(power, "power", add_item, add_sequence)
if not opts.no_counter:
gstate.counter_stats = general["end_counter"]
timestamp = gstate.timestamp_stats
gstate.timestamp_stats = None
gstate.accum_history = None
return 0, timestamp
def get_bulk_data(opts, gstate, add_bulk):
"""Fetch bulk data. See `get_data` for details."""
before = time.time()
start = gstate.counter
parse_samples = opts.bulk_samples if start is None else -1
try:
general, bulk = starlink_grpc.history_bulk_data(parse_samples,
start=start,
verbose=opts.verbose,
context=gstate.context)
except starlink_grpc.GrpcError as e:
conn_error(opts, "Failure getting history: %s", str(e))
return 1
after = time.time()
parsed_samples = general["samples"]
new_counter = general["end_counter"]
timestamp = gstate.timestamp
# check this first, so it doesn't report as lost time sync
if gstate.counter is not None and new_counter != gstate.counter + parsed_samples:
timestamp = None
# Allow up to 2 seconds of time drift before forcibly re-syncing, since
# +/- 1 second can happen just due to scheduler timing.
if timestamp is not None and not before - 2.0 <= timestamp + parsed_samples <= after + 2.0:
if opts.verbose:
print("Lost sample time sync at: " +
str(datetime.fromtimestamp(timestamp + parsed_samples, tz=timezone.utc)))
timestamp = None
if timestamp is None:
timestamp = int(before)
if opts.verbose:
print("Establishing new time base: {0} -> {1}".format(
new_counter, datetime.fromtimestamp(timestamp, tz=timezone.utc)))
timestamp -= parsed_samples
if opts.numeric:
add_bulk(
{
k: [int(subv) if isinstance(subv, int) else subv for subv in v]
for k, v in bulk.items()
}, parsed_samples, timestamp, new_counter - parsed_samples)
else:
add_bulk(bulk, parsed_samples, timestamp, new_counter - parsed_samples)
gstate.counter = new_counter
gstate.timestamp = timestamp + parsed_samples
return 0
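The two sanity checks above (counter continuity first, then the 2 second drift window) can be isolated into a small predicate. A sketch with a hypothetical helper name:

```python
# Hypothetical helper mirroring get_bulk_data's checks: decide whether the
# running sample time base must be re-established from the current wall clock.
def resync_needed(before, after, timestamp, parsed_samples, prev_counter, new_counter):
    # a counter discontinuity (e.g. dish reboot) always invalidates the base
    if prev_counter is not None and new_counter != prev_counter + parsed_samples:
        return True
    if timestamp is None:
        return True
    # allow up to 2 s of drift; +/- 1 s can come from scheduler timing alone
    return not before - 2.0 <= timestamp + parsed_samples <= after + 2.0
```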

@@ -0,0 +1,135 @@
#!/usr/bin/env python3
"""Manipulate operating state of a Starlink user terminal."""
import argparse
import logging
import sys
import grpc
from yagrc import reflector as yagrc_reflector
import loop_util
def parse_args():
parser = argparse.ArgumentParser(description="Starlink user terminal state control")
parser.add_argument("-e",
"--target",
default="192.168.100.1:9200",
help="host:port of dish to query, default is the standard IP address "
"and port (192.168.100.1:9200)")
subs = parser.add_subparsers(dest="command", required=True)
subs.add_parser("reboot", help="Reboot the user terminal")
subs.add_parser("stow", help="Set user terminal to stow position")
subs.add_parser("unstow", help="Restore user terminal from stow position")
sleep_parser = subs.add_parser(
"set_sleep",
help="Show, set, or disable power save configuration",
description="Run without arguments to show current configuration")
sleep_parser.add_argument("start",
nargs="?",
type=int,
help="Start time in minutes past midnight UTC")
sleep_parser.add_argument("duration",
nargs="?",
type=int,
help="Duration in minutes, or 0 to disable")
gps_parser = subs.add_parser(
"set_gps",
help="Enable, disable, or show usage of GPS for position data",
description="Run without arguments to show current configuration")
gps_parser.add_argument("--enable",
action=argparse.BooleanOptionalAction,
help="Enable/disable use of GPS for position data")
loop_util.add_args(parser)
opts = parser.parse_args()
if opts.command == "set_sleep" and opts.start is not None:
if opts.duration is None:
sleep_parser.error("Must specify duration if start time is specified")
if opts.start < 0 or opts.start >= 1440:
sleep_parser.error("Invalid start time, must be >= 0 and < 1440")
if opts.duration < 0 or opts.duration > 1440:
sleep_parser.error("Invalid duration, must be >= 0 and <= 1440")
loop_util.check_args(opts, parser)
return opts
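set_sleep takes its start time as minutes past midnight UTC (0-1439). A hypothetical convenience converter, not part of the script, for turning an HH:MM string into that value:

```python
# Hypothetical helper: convert "HH:MM" UTC to the minutes-past-midnight value
# expected by the set_sleep command, enforcing the same 0-1439 range.
def to_start_minutes(hhmm):
    hours, minutes = map(int, hhmm.split(":"))
    start = hours * 60 + minutes
    if not 0 <= start < 1440:
        raise ValueError("start must be >= 0 and < 1440")
    return start
```

For example, a sleep window starting at 02:30 UTC would be passed as `set_sleep 150 <duration>`.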
def loop_body(opts):
reflector = yagrc_reflector.GrpcReflectionClient()
try:
with grpc.insecure_channel(opts.target) as channel:
reflector.load_protocols(channel, symbols=["SpaceX.API.Device.Device"])
stub = reflector.service_stub_class("SpaceX.API.Device.Device")(channel)
request_class = reflector.message_class("SpaceX.API.Device.Request")
if opts.command == "reboot":
request = request_class(reboot={})
elif opts.command == "stow":
request = request_class(dish_stow={})
elif opts.command == "unstow":
request = request_class(dish_stow={"unstow": True})
elif opts.command == "set_sleep":
if opts.start is None and opts.duration is None:
request = request_class(dish_get_config={})
else:
if opts.duration:
request = request_class(
dish_power_save={
"power_save_start_minutes": opts.start,
"power_save_duration_minutes": opts.duration,
"enable_power_save": True
})
else:
# duration of 0 not allowed, even when disabled
request = request_class(dish_power_save={
"power_save_duration_minutes": 1,
"enable_power_save": False
})
elif opts.command == "set_gps":
if opts.enable is None:
request = request_class(get_status={})
else:
request = request_class(dish_inhibit_gps={"inhibit_gps": not opts.enable})
response = stub.Handle(request, timeout=10)
if opts.command == "set_sleep" and opts.start is None and opts.duration is None:
config = response.dish_get_config.dish_config
if config.power_save_mode:
print("Sleep start:", config.power_save_start_minutes,
"minutes past midnight UTC")
print("Sleep duration:", config.power_save_duration_minutes, "minutes")
else:
print("Sleep disabled")
elif opts.command == "set_gps" and opts.enable is None:
status = response.dish_get_status
if status.gps_stats.inhibit_gps:
print("GPS disabled")
else:
print("GPS enabled")
except (AttributeError, ValueError, grpc.RpcError) as e:
if isinstance(e, grpc.Call):
msg = e.details()
elif isinstance(e, (AttributeError, ValueError)):
msg = "Protocol error"
else:
msg = "Unknown communication or service error"
logging.error(msg)
return 1
return 0
def main():
opts = parse_args()
logging.basicConfig(format="%(levelname)s: %(message)s")
rc = loop_util.run_loop(opts, loop_body, opts)
sys.exit(rc)
if __name__ == "__main__":
main()

@@ -0,0 +1,339 @@
#!/usr/bin/env python3
"""Write Starlink user terminal data to an InfluxDB 1.x database.
This script pulls the current status info and/or metrics computed from the
history data and writes them to the specified InfluxDB database either once
or in a periodic loop.
Data will be written into the requested database with the following
measurement / series names:
: spacex.starlink.user_terminal.status : Current status data
: spacex.starlink.user_terminal.history : Bulk history data
: spacex.starlink.user_terminal.ping_stats : Ping history statistics
: spacex.starlink.user_terminal.usage : Usage history statistics
: spacex.starlink.user_terminal.power : Power history statistics
NOTE: The Starlink user terminal does not include time values with its
history or status data, so this script uses current system time to compute
the timestamps it sends to InfluxDB. It is recommended to run this script on
a host that has its system clock synced via NTP. Otherwise, the timestamps
may get out of sync with real time.
"""
from datetime import datetime
from datetime import timezone
import logging
import os
import signal
import sys
import time
import warnings
from influxdb import InfluxDBClient
import dish_common
HOST_DEFAULT = "localhost"
DATABASE_DEFAULT = "starlinkstats"
BULK_MEASUREMENT = "spacex.starlink.user_terminal.history"
FLUSH_LIMIT = 6
MAX_BATCH = 5000
MAX_QUEUE_LENGTH = 864000
class Terminated(Exception):
pass
def handle_sigterm(signum, frame):
# Turn SIGTERM into an exception so main loop can clean up
raise Terminated
def parse_args():
parser = dish_common.create_arg_parser(
output_description="write it to an InfluxDB 1.x database")
group = parser.add_argument_group(title="InfluxDB 1.x database options")
group.add_argument("-n",
"--hostname",
default=HOST_DEFAULT,
dest="host",
help="Hostname of InfluxDB server, default: " + HOST_DEFAULT)
group.add_argument("-p", "--port", type=int, help="Port number to use on InfluxDB server")
group.add_argument("-P", "--password", help="Set password for username/password authentication")
group.add_argument("-U", "--username", help="Set username for authentication")
group.add_argument("-D",
"--database",
default=DATABASE_DEFAULT,
help="Database name to use, default: " + DATABASE_DEFAULT)
group.add_argument("-R", "--retention-policy", help="Retention policy name to use")
group.add_argument("-k",
"--skip-query",
action="store_true",
help="Skip querying for prior sample write point in bulk mode")
group.add_argument("-C",
"--ca-cert",
dest="verify_ssl",
help="Enable SSL/TLS using specified CA cert to verify server",
metavar="FILENAME")
group.add_argument("-I",
"--insecure",
action="store_false",
dest="verify_ssl",
help="Enable SSL/TLS but disable certificate verification (INSECURE!)")
group.add_argument("-S",
"--secure",
action="store_true",
dest="verify_ssl",
help="Enable SSL/TLS using default CA cert")
env_map = (
("INFLUXDB_HOST", "host"),
("INFLUXDB_PORT", "port"),
("INFLUXDB_USER", "username"),
("INFLUXDB_PWD", "password"),
("INFLUXDB_DB", "database"),
("INFLUXDB_RP", "retention-policy"),
("INFLUXDB_SSL", "verify_ssl"),
)
env_defaults = {}
for var, opt in env_map:
# check both set and not empty string
val = os.environ.get(var)
if val:
if var == "INFLUXDB_SSL" and val == "secure":
env_defaults[opt] = True
elif var == "INFLUXDB_SSL" and val == "insecure":
env_defaults[opt] = False
else:
env_defaults[opt] = val
parser.set_defaults(**env_defaults)
opts = dish_common.run_arg_parser(parser, need_id=True)
if opts.username is None and opts.password is not None:
parser.error("Password authentication requires username to be set")
opts.icargs = {"timeout": 5}
for key in ["port", "host", "password", "username", "database", "verify_ssl"]:
val = getattr(opts, key)
if val is not None:
opts.icargs[key] = val
if opts.verify_ssl is not None:
opts.icargs["ssl"] = True
return opts
def flush_points(opts, gstate):
try:
while len(gstate.points) > MAX_BATCH:
gstate.influx_client.write_points(gstate.points[:MAX_BATCH],
time_precision="s",
retention_policy=opts.retention_policy)
if opts.verbose:
print("Data points written: " + str(MAX_BATCH))
del gstate.points[:MAX_BATCH]
if gstate.points:
gstate.influx_client.write_points(gstate.points,
time_precision="s",
retention_policy=opts.retention_policy)
if opts.verbose:
print("Data points written: " + str(len(gstate.points)))
gstate.points.clear()
except Exception as e:
dish_common.conn_error(opts, "Failed writing to InfluxDB database: %s", str(e))
# If failures persist, don't just use infinite memory. Max queue
# is currently 10 days of bulk data, so something is very wrong
# if it's ever exceeded.
if len(gstate.points) > MAX_QUEUE_LENGTH:
logging.error("Max write queue exceeded, discarding data.")
del gstate.points[:-MAX_QUEUE_LENGTH]
return 1
return 0
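The flush strategy above, writing in MAX_BATCH chunks and capping the retained queue on persistent failure, works the same against any writer. A standalone sketch with a stand-in for `InfluxDBClient.write_points`:

```python
# Sketch of flush_points' batching: write in MAX_BATCH chunks, and on failure
# keep at most MAX_QUEUE_LENGTH points so memory use cannot grow unbounded.
# write_batch is a stand-in for the real InfluxDB write call.
MAX_BATCH = 5000
MAX_QUEUE_LENGTH = 864000  # roughly 10 days of 1-second bulk samples

def flush(points, write_batch):
    try:
        while len(points) > MAX_BATCH:
            write_batch(points[:MAX_BATCH])
            del points[:MAX_BATCH]
        if points:
            write_batch(points)
            points.clear()
    except Exception:
        # keep the newest MAX_QUEUE_LENGTH points, discard the oldest
        if len(points) > MAX_QUEUE_LENGTH:
            del points[:-MAX_QUEUE_LENGTH]
        return 1
    return 0
```

On success the queue ends empty; on failure the unsent points stay queued for the next loop iteration, which is why loop_body only calls this every FLUSH_LIMIT points.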
def query_counter(gstate, start, end):
try:
# fetch the latest point where counter field was recorded
result = gstate.influx_client.query("SELECT counter FROM \"{0}\" "
"WHERE time>={1}s AND time<{2}s AND id=$id "
"ORDER by time DESC LIMIT 1;".format(
BULK_MEASUREMENT, start, end),
bind_params={"id": gstate.dish_id},
epoch="s")
points = list(result.get_points())
if points:
counter = points[0].get("counter", None)
timestamp = points[0].get("time", 0)
if counter and timestamp:
return int(counter), int(timestamp)
except TypeError as e:
# bind_params was added in influxdb-python v5.2.3. That would be easy
# enough to work around, but older versions had other problems with
# query(), so just skip this functionality.
logging.error(
"Failed running query, probably due to influxdb-python version too old. "
"Skipping resumption from prior counter value. Reported error was: %s", str(e))
return None, 0
def sync_timebase(opts, gstate):
try:
db_counter, db_timestamp = query_counter(gstate, gstate.start_timestamp, gstate.timestamp)
except Exception as e:
# could be temporary outage, so try again next time
dish_common.conn_error(opts, "Failed querying InfluxDB for prior count: %s", str(e))
return
gstate.timebase_synced = True
if db_counter and gstate.start_counter <= db_counter:
del gstate.deferred_points[:db_counter - gstate.start_counter]
if gstate.deferred_points:
delta_timestamp = db_timestamp - (gstate.deferred_points[0]["time"] - 1)
# to prevent +/- 1 second timestamp drift when the script restarts,
# if time base is within 2 seconds of that of the last sample in
# the database, correct back to that time base
if delta_timestamp == 0:
if opts.verbose:
print("Exactly synced with database time base")
elif -2 <= delta_timestamp <= 2:
if opts.verbose:
print("Replacing with existing time base: {0} -> {1}".format(
db_counter, datetime.fromtimestamp(db_timestamp, tz=timezone.utc)))
for point in gstate.deferred_points:
db_timestamp += 1
if point["time"] + delta_timestamp == db_timestamp:
point["time"] = db_timestamp
else:
# lost time sync when recording data, leave the rest
break
else:
gstate.timestamp = db_timestamp
else:
if opts.verbose:
print("Database time base out of sync by {0} seconds".format(delta_timestamp))
gstate.points.extend(gstate.deferred_points)
gstate.deferred_points.clear()
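The +/-2 second correction performed by sync_timebase can be shown with a worked example (numbers hypothetical): the last database sample ends at db_timestamp, the deferred points restarted one second late, and they get pulled back onto the database's time base.

```python
# Worked example of sync_timebase's time base correction, extracted as a
# standalone function over a list of point dicts with "time" keys.
def retime(deferred, db_timestamp):
    delta = db_timestamp - (deferred[0]["time"] - 1)
    if -2 <= delta <= 2:
        ts = db_timestamp
        for point in deferred:
            ts += 1
            if point["time"] + delta == ts:
                point["time"] = ts
            else:
                break  # lost time sync mid-run; leave the rest as recorded
    return deferred
```

A delta outside the window leaves the points untouched, matching the "out of sync" branch above.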
def loop_body(opts, gstate, shutdown=False):
fields = {"status": {}, "ping_stats": {}, "usage": {}, "power": {}}
def cb_add_item(key, val, category):
fields[category][key] = val
def cb_add_sequence(key, val, category, start):
for i, subval in enumerate(val, start=start):
fields[category]["{0}_{1}".format(key, i)] = subval
def cb_add_bulk(bulk, count, timestamp, counter):
if gstate.start_timestamp is None:
gstate.start_timestamp = timestamp
gstate.start_counter = counter
points = gstate.points if gstate.timebase_synced else gstate.deferred_points
for i in range(count):
timestamp += 1
points.append({
"measurement": BULK_MEASUREMENT,
"tags": {
"id": gstate.dish_id
},
"time": timestamp,
"fields": {key: val[i] for key, val in bulk.items() if val[i] is not None},
})
if points:
# save off counter value for script restart
points[-1]["fields"]["counter"] = counter + count
rc, status_ts, hist_ts = dish_common.get_data(opts,
gstate,
cb_add_item,
cb_add_sequence,
add_bulk=cb_add_bulk,
flush_history=shutdown)
if rc:
return rc
for category, cat_fields in fields.items():
if cat_fields:
timestamp = status_ts if category == "status" else hist_ts
gstate.points.append({
"measurement": "spacex.starlink.user_terminal." + category,
"tags": {
"id": gstate.dish_id
},
"time": timestamp,
"fields": cat_fields,
})
# This is here and not before the points being processed because if the
# query previously failed, there will be points that were processed in
# a prior loop. This avoids having to handle that as a special case.
if opts.bulk_mode and not gstate.timebase_synced:
sync_timebase(opts, gstate)
if opts.verbose:
print("Data points queued: " + str(len(gstate.points)))
if len(gstate.points) >= FLUSH_LIMIT:
return flush_points(opts, gstate)
return 0
def main():
opts = parse_args()
logging.basicConfig(format="%(levelname)s: %(message)s")
gstate = dish_common.GlobalState(target=opts.target)
gstate.points = []
gstate.deferred_points = []
gstate.timebase_synced = opts.skip_query
gstate.start_timestamp = None
gstate.start_counter = None
if "verify_ssl" in opts.icargs and not opts.icargs["verify_ssl"]:
# user has explicitly said be insecure, so don't warn about it
warnings.filterwarnings("ignore", message="Unverified HTTPS request")
signal.signal(signal.SIGTERM, handle_sigterm)
try:
# attempt to hack around breakage between influxdb-python client and 2.0 server:
gstate.influx_client = InfluxDBClient(**opts.icargs, headers={"Accept": "application/json"})
except TypeError:
# ...unless influxdb-python package version is too old
gstate.influx_client = InfluxDBClient(**opts.icargs)
rc = 0
try:
next_loop = time.monotonic()
while True:
rc = loop_body(opts, gstate)
if opts.loop_interval > 0.0:
now = time.monotonic()
next_loop = max(next_loop + opts.loop_interval, now)
time.sleep(next_loop - now)
else:
break
except (KeyboardInterrupt, Terminated):
pass
finally:
loop_body(opts, gstate, shutdown=True)
if gstate.points:
rc = flush_points(opts, gstate)
gstate.influx_client.close()
gstate.shutdown()
sys.exit(rc)
if __name__ == "__main__":
main()

@@ -0,0 +1,331 @@
#!/usr/bin/env python3
"""Write Starlink user terminal data to an InfluxDB 2.x database.
This script pulls the current status info and/or metrics computed from the
history data and writes them to the specified InfluxDB 2.x database either once
or in a periodic loop.
Data will be written into the requested database with the following
measurement / series names:
: spacex.starlink.user_terminal.status : Current status data
: spacex.starlink.user_terminal.history : Bulk history data
: spacex.starlink.user_terminal.ping_stats : Ping history statistics
: spacex.starlink.user_terminal.usage : Usage history statistics
: spacex.starlink.user_terminal.power : Power history statistics
NOTE: The Starlink user terminal does not include time values with its
history or status data, so this script uses current system time to compute
the timestamps it sends to InfluxDB. It is recommended to run this script on
a host that has its system clock synced via NTP. Otherwise, the timestamps
may get out of sync with real time.
"""
from datetime import datetime
from datetime import timezone
import logging
import os
import signal
import sys
import time
import warnings
from influxdb_client import InfluxDBClient, WriteOptions, WritePrecision
import dish_common
URL_DEFAULT = "http://localhost:8086"
BUCKET_DEFAULT = "starlinkstats"
BULK_MEASUREMENT = "spacex.starlink.user_terminal.history"
FLUSH_LIMIT = 6
MAX_BATCH = 5000
MAX_QUEUE_LENGTH = 864000
class Terminated(Exception):
pass
def handle_sigterm(signum, frame):
# Turn SIGTERM into an exception so main loop can clean up
raise Terminated
def parse_args():
parser = dish_common.create_arg_parser(
output_description="write it to an InfluxDB 2.x database")
group = parser.add_argument_group(title="InfluxDB 2.x database options")
group.add_argument("-u",
"--url",
default=URL_DEFAULT,
dest="url",
help="URL of the InfluxDB 2.x server, default: " + URL_DEFAULT)
group.add_argument("-T", "--token", help="Token to access the bucket")
group.add_argument("-B",
"--bucket",
default=BUCKET_DEFAULT,
help="Bucket name to use, default: " + BUCKET_DEFAULT)
group.add_argument("-O", "--org", help="Organisation name")
group.add_argument("-k",
"--skip-query",
action="store_true",
help="Skip querying for prior sample write point in bulk mode")
group.add_argument("-C",
"--ca-cert",
dest="ssl_ca_cert",
help="Use specified CA cert to verify HTTPS server",
metavar="FILENAME")
group.add_argument("-I",
"--insecure",
action="store_false",
dest="verify_ssl",
help="Disable certificate verification of HTTPS server (INSECURE!)")
env_map = (
("INFLUXDB_URL", "url"),
("INFLUXDB_TOKEN", "token"),
("INFLUXDB_Bucket", "bucket"),
("INFLUXDB_ORG", "org"),
("INFLUXDB_SSL", "verify_ssl"),
)
env_defaults = {}
for var, opt in env_map:
# check both set and not empty string
val = os.environ.get(var)
if val:
if var == "INFLUXDB_SSL":
if val == "insecure":
env_defaults[opt] = False
elif val == "secure":
env_defaults[opt] = True
else:
env_defaults["ssl_ca_cert"] = val
else:
env_defaults[opt] = val
parser.set_defaults(**env_defaults)
opts = dish_common.run_arg_parser(parser, need_id=True)
opts.icargs = {}
for key in ["url", "token", "bucket", "org", "verify_ssl", "ssl_ca_cert"]:
val = getattr(opts, key)
if val is not None:
opts.icargs[key] = val
    if (opts.verify_ssl is not None
            or opts.ssl_ca_cert is not None) and not opts.url.lower().startswith("https:"):
parser.error("SSL options only apply to HTTPS URLs")
return opts
def flush_points(opts, gstate):
try:
write_api = gstate.influx_client.write_api(
write_options=WriteOptions(batch_size=len(gstate.points),
flush_interval=10_000,
jitter_interval=2_000,
retry_interval=5_000,
max_retries=5,
max_retry_delay=30_000,
exponential_base=2))
while len(gstate.points) > MAX_BATCH:
write_api.write(record=gstate.points[:MAX_BATCH],
write_precision=WritePrecision.S,
bucket=opts.bucket)
if opts.verbose:
print("Data points written: " + str(MAX_BATCH))
del gstate.points[:MAX_BATCH]
if gstate.points:
write_api.write(record=gstate.points,
write_precision=WritePrecision.S,
bucket=opts.bucket)
if opts.verbose:
print("Data points written: " + str(len(gstate.points)))
gstate.points.clear()
write_api.flush()
write_api.close()
except Exception as e:
dish_common.conn_error(opts, "Failed writing to InfluxDB database: %s", str(e))
# If failures persist, don't just use infinite memory. Max queue
# is currently 10 days of bulk data, so something is very wrong
# if it's ever exceeded.
if len(gstate.points) > MAX_QUEUE_LENGTH:
logging.error("Max write queue exceeded, discarding data.")
del gstate.points[:-MAX_QUEUE_LENGTH]
return 1
return 0
def query_counter(opts, gstate, start, end):
query_api = gstate.influx_client.query_api()
result = query_api.query('''
from(bucket: "{0}")
|> range(start: {1}, stop: {2})
|> filter(fn: (r) => r["_measurement"] == "{3}")
|> filter(fn: (r) => r["_field"] == "counter")
|> last()
|> yield(name: "last")
'''.format(opts.bucket, str(start), str(end), BULK_MEASUREMENT))
if result:
counter = result[0].records[0]["_value"]
timestamp = result[0].records[0]["_time"].timestamp()
if counter and timestamp:
return int(counter), int(timestamp)
return None, 0
def sync_timebase(opts, gstate):
try:
db_counter, db_timestamp = query_counter(opts, gstate, gstate.start_timestamp,
gstate.timestamp)
except Exception as e:
# could be temporary outage, so try again next time
dish_common.conn_error(opts, "Failed querying InfluxDB for prior count: %s", str(e))
return
gstate.timebase_synced = True
if db_counter and gstate.start_counter <= db_counter:
del gstate.deferred_points[:db_counter - gstate.start_counter]
if gstate.deferred_points:
delta_timestamp = db_timestamp - (gstate.deferred_points[0]["time"] - 1)
# to prevent +/- 1 second timestamp drift when the script restarts,
# if time base is within 2 seconds of that of the last sample in
# the database, correct back to that time base
if delta_timestamp == 0:
if opts.verbose:
print("Exactly synced with database time base")
elif -2 <= delta_timestamp <= 2:
if opts.verbose:
print("Replacing with existing time base: {0} -> {1}".format(
db_counter, datetime.fromtimestamp(db_timestamp, tz=timezone.utc)))
for point in gstate.deferred_points:
db_timestamp += 1
if point["time"] + delta_timestamp == db_timestamp:
point["time"] = db_timestamp
else:
# lost time sync when recording data, leave the rest
break
else:
gstate.timestamp = db_timestamp
else:
if opts.verbose:
print("Database time base out of sync by {0} seconds".format(delta_timestamp))
gstate.points.extend(gstate.deferred_points)
gstate.deferred_points.clear()
def loop_body(opts, gstate, shutdown=False):
fields = {"status": {}, "ping_stats": {}, "usage": {}, "power": {}}
def cb_add_item(key, val, category):
fields[category][key] = val
def cb_add_sequence(key, val, category, start):
for i, subval in enumerate(val, start=start):
fields[category]["{0}_{1}".format(key, i)] = subval
def cb_add_bulk(bulk, count, timestamp, counter):
if gstate.start_timestamp is None:
gstate.start_timestamp = timestamp
gstate.start_counter = counter
points = gstate.points if gstate.timebase_synced else gstate.deferred_points
for i in range(count):
timestamp += 1
points.append({
"measurement": BULK_MEASUREMENT,
"tags": {
"id": gstate.dish_id
},
"time": timestamp,
"fields": {key: val[i] for key, val in bulk.items() if val[i] is not None},
})
if points:
# save off counter value for script restart
points[-1]["fields"]["counter"] = counter + count
rc, status_ts, hist_ts = dish_common.get_data(opts,
gstate,
cb_add_item,
cb_add_sequence,
add_bulk=cb_add_bulk,
flush_history=shutdown)
if rc:
return rc
for category, cat_fields in fields.items():
if cat_fields:
timestamp = status_ts if category == "status" else hist_ts
gstate.points.append({
"measurement": "spacex.starlink.user_terminal." + category,
"tags": {
"id": gstate.dish_id
},
"time": timestamp,
"fields": cat_fields,
})
# This is done here rather than before the points are processed because,
# if the query previously failed, there may be points left over from a
# prior loop. Handling it here avoids treating that as a special case.
if opts.bulk_mode and not gstate.timebase_synced:
sync_timebase(opts, gstate)
if opts.verbose:
print("Data points queued: " + str(len(gstate.points)))
if len(gstate.points) >= FLUSH_LIMIT:
return flush_points(opts, gstate)
return 0
def main():
opts = parse_args()
logging.basicConfig(format="%(levelname)s: %(message)s")
gstate = dish_common.GlobalState(target=opts.target)
gstate.points = []
gstate.deferred_points = []
gstate.timebase_synced = opts.skip_query
gstate.start_timestamp = None
gstate.start_counter = None
if "verify_ssl" in opts.icargs and not opts.icargs["verify_ssl"]:
# user has explicitly said be insecure, so don't warn about it
warnings.filterwarnings("ignore", message="Unverified HTTPS request")
signal.signal(signal.SIGTERM, handle_sigterm)
gstate.influx_client = InfluxDBClient(**opts.icargs)
rc = 0
try:
next_loop = time.monotonic()
while True:
rc = loop_body(opts, gstate)
if opts.loop_interval > 0.0:
now = time.monotonic()
next_loop = max(next_loop + opts.loop_interval, now)
time.sleep(next_loop - now)
else:
break
except (KeyboardInterrupt, Terminated):
pass
finally:
loop_body(opts, gstate, shutdown=True)
if gstate.points:
rc = flush_points(opts, gstate)
gstate.influx_client.close()
gstate.shutdown()
sys.exit(rc)
if __name__ == "__main__":
main()
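The ±2 second correction in `sync_timebase` above can be illustrated in isolation. This is a minimal sketch: `rebase_points` and the sample data are hypothetical simplifications of the script's logic, not part of it.

```python
def rebase_points(points, db_timestamp):
    """Rebase queued point timestamps onto the database time base.

    Mirrors sync_timebase: if the local time base is within 2 seconds of
    the database's (but not exactly equal), rewrite timestamps so that
    consecutive samples continue exactly 1 second apart from db_timestamp.
    """
    if not points:
        return points
    delta = db_timestamp - (points[0]["time"] - 1)
    if -2 <= delta <= 2 and delta != 0:
        ts = db_timestamp
        for point in points:
            ts += 1
            if point["time"] + delta == ts:
                point["time"] = ts
            else:
                break  # lost time sync mid-run; leave the rest as-is
    return points

# Local clock is 1 second ahead of the database time base.
queued = [{"time": 1001}, {"time": 1002}, {"time": 1003}]
rebase_points(queued, 999)  # delta = 999 - 1000 = -1
print([p["time"] for p in queued])  # → [1000, 1001, 1002]
```

Keeping the counter-to-timestamp mapping stable across restarts is what prevents the InfluxDB bulk series from drifting by a second each time the script is relaunched.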


@@ -0,0 +1,212 @@
#!/usr/bin/env python3
"""Publish Starlink user terminal data to a MQTT broker.
This script pulls the current status info and/or metrics computed from the
history data and publishes them to the specified MQTT broker either once or
in a periodic loop.
Data will be published to the following topic names:
: starlink/dish_status/*id_value*/*field_name* : Current status data
: starlink/dish_ping_stats/*id_value*/*field_name* : Ping history statistics
: starlink/dish_usage/*id_value*/*field_name* : Usage history statistics
: starlink/dish_power/*id_value*/*field_name* : Power history statistics
Where *id_value* is the *id* value from the dish status information.
If the --json command line option is used, JSON-formatted data will
instead be published to the topic name:
: starlink/*id_value*
"""
import json
import logging
import math
import os
import signal
import sys
import time
try:
import ssl
ssl_ok = True
except ImportError:
ssl_ok = False
import paho.mqtt.publish
import dish_common
HOST_DEFAULT = "localhost"
class Terminated(Exception):
pass
def handle_sigterm(signum, frame):
# Turn SIGTERM into an exception so main loop can clean up
raise Terminated
def parse_args():
parser = dish_common.create_arg_parser(output_description="publish it to an MQTT broker",
bulk_history=False)
group = parser.add_argument_group(title="MQTT broker options")
group.add_argument("-n",
"--hostname",
default=HOST_DEFAULT,
help="Hostname of MQTT broker, default: " + HOST_DEFAULT)
group.add_argument("-p", "--port", type=int, help="Port number to use on MQTT broker")
group.add_argument("-P", "--password", help="Set password for username/password authentication")
group.add_argument("-U", "--username", help="Set username for authentication")
group.add_argument("-J", "--json", action="store_true", help="Publish data as JSON")
if ssl_ok:
def wrap_ca_arg(arg):
return {"ca_certs": arg}
group.add_argument("-C",
"--ca-cert",
type=wrap_ca_arg,
dest="tls",
help="Enable SSL/TLS using specified CA cert to verify broker",
metavar="FILENAME")
group.add_argument("-I",
"--insecure",
action="store_const",
const={"cert_reqs": ssl.CERT_NONE},
dest="tls",
help="Enable SSL/TLS but disable certificate verification (INSECURE!)")
group.add_argument("-S",
"--secure",
action="store_const",
const={},
dest="tls",
help="Enable SSL/TLS using default CA cert")
else:
parser.epilog += "\nSSL support options not available due to missing ssl module"
env_map = (
("MQTT_HOST", "hostname"),
("MQTT_PORT", "port"),
("MQTT_USERNAME", "username"),
("MQTT_PASSWORD", "password"),
("MQTT_SSL", "tls"),
)
env_defaults = {}
for var, opt in env_map:
# check both set and not empty string
val = os.environ.get(var)
if val:
if var == "MQTT_SSL":
if ssl_ok and val != "false":
if val == "insecure":
env_defaults[opt] = {"cert_reqs": ssl.CERT_NONE}
elif val == "secure":
env_defaults[opt] = {}
else:
env_defaults[opt] = {"ca_certs": val}
else:
env_defaults[opt] = val
parser.set_defaults(**env_defaults)
opts = dish_common.run_arg_parser(parser, need_id=True)
if opts.username is None and opts.password is not None:
parser.error("Password authentication requires username to be set")
opts.mqargs = {}
for key in ["hostname", "port", "tls"]:
val = getattr(opts, key)
if val is not None:
opts.mqargs[key] = val
if opts.username is not None:
opts.mqargs["auth"] = {"username": opts.username}
if opts.password is not None:
opts.mqargs["auth"]["password"] = opts.password
return opts
def loop_body(opts, gstate):
msgs = []
if opts.json:
data = {}
def cb_add_item(key, val, category):
if not "dish_{0}".format(category) in data:
data["dish_{0}".format(category)] = {}
# Skip NaN values that occur on startup because they can upset Javascript JSON parsers
if not (isinstance(val, float) and math.isnan(val)):
data["dish_{0}".format(category)].update({key: val})
def cb_add_sequence(key, val, category, _):
if not "dish_{0}".format(category) in data:
data["dish_{0}".format(category)] = {}
data["dish_{0}".format(category)].update({key: list(val)})
else:
def cb_add_item(key, val, category):
msgs.append(("starlink/dish_{0}/{1}/{2}".format(category, gstate.dish_id,
key), val, 0, False))
def cb_add_sequence(key, val, category, _):
msgs.append(("starlink/dish_{0}/{1}/{2}".format(category, gstate.dish_id, key),
",".join("" if x is None else str(x) for x in val), 0, False))
rc = dish_common.get_data(opts, gstate, cb_add_item, cb_add_sequence)[0]
if opts.json:
msgs.append(("starlink/{0}".format(gstate.dish_id), json.dumps(data), 0, False))
if msgs:
try:
paho.mqtt.publish.multiple(msgs, client_id=gstate.dish_id, **opts.mqargs)
if opts.verbose:
print("Successfully published to MQTT broker")
except Exception as e:
dish_common.conn_error(opts, "Failed publishing to MQTT broker: %s", str(e))
rc = 1
return rc
def main():
opts = parse_args()
logging.basicConfig(format="%(levelname)s: %(message)s")
gstate = dish_common.GlobalState(target=opts.target)
signal.signal(signal.SIGTERM, handle_sigterm)
rc = 0
try:
next_loop = time.monotonic()
while True:
rc = loop_body(opts, gstate)
if opts.loop_interval > 0.0:
now = time.monotonic()
next_loop = max(next_loop + opts.loop_interval, now)
time.sleep(next_loop - now)
else:
break
except (KeyboardInterrupt, Terminated):
pass
finally:
gstate.shutdown()
sys.exit(rc)
if __name__ == "__main__":
main()
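The per-field topic layout that the non-JSON callbacks above produce can be sketched without a broker. The dish id and field values here are made up; `paho.mqtt.publish.multiple` accepts exactly these `(topic, payload, qos, retain)` tuples.

```python
def make_msgs(dish_id, fields_by_category):
    """Build (topic, payload, qos, retain) tuples in the script's layout:
    starlink/dish_<category>/<dish_id>/<field_name>."""
    msgs = []
    for category, fields in fields_by_category.items():
        for key, val in fields.items():
            msgs.append(("starlink/dish_{0}/{1}/{2}".format(category, dish_id, key),
                         val, 0, False))
    return msgs

msgs = make_msgs("ut01234567-89ab-cdef", {"status": {"state": "CONNECTED"}})
print(msgs[0][0])  # → starlink/dish_status/ut01234567-89ab-cdef/state
```

QoS 0 and retain=False match the tuples built in `loop_body`; every field becomes its own topic, which makes individual values easy to subscribe to at the cost of one message per field.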


@@ -0,0 +1,298 @@
#!/usr/bin/env python3
"""Prometheus exporter for Starlink user terminal data info.
This script pulls the current status info and/or metrics computed from the
history data and makes it available via HTTP in the format Prometheus expects.
"""
from http import HTTPStatus
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import logging
import signal
import sys
import threading
import dish_common
class Terminated(Exception):
pass
def handle_sigterm(signum, frame):
# Turn SIGTERM into an exception so main loop can clean up
raise Terminated
class MetricInfo:
unit = ""
kind = "gauge"
help = ""
def __init__(self, unit=None, kind=None, help=None) -> None:
if unit:
self.unit = f"_{unit}"
if kind:
self.kind = kind
if help:
self.help = help
METRICS_INFO = {
"status_uptime": MetricInfo(unit="seconds", kind="counter"),
"status_longitude": MetricInfo(),
"status_latitude": MetricInfo(),
"status_altitude": MetricInfo(),
"status_gps_enabled": MetricInfo(),
"status_gps_ready": MetricInfo(),
"status_gps_sats": MetricInfo(),
"status_seconds_to_first_nonempty_slot": MetricInfo(),
"status_pop_ping_drop_rate": MetricInfo(),
"status_downlink_throughput_bps": MetricInfo(),
"status_uplink_throughput_bps": MetricInfo(),
"status_pop_ping_latency_ms": MetricInfo(),
"status_alerts": MetricInfo(),
"status_fraction_obstructed": MetricInfo(),
"status_currently_obstructed": MetricInfo(),
"status_seconds_obstructed": MetricInfo(),
"status_obstruction_duration": MetricInfo(),
"status_obstruction_interval": MetricInfo(),
"status_direction_azimuth": MetricInfo(),
"status_direction_elevation": MetricInfo(),
"status_is_snr_above_noise_floor": MetricInfo(),
"status_alert_motors_stuck": MetricInfo(),
"status_alert_thermal_throttle": MetricInfo(),
"status_alert_thermal_shutdown": MetricInfo(),
"status_alert_mast_not_near_vertical": MetricInfo(),
"status_alert_unexpected_location": MetricInfo(),
"status_alert_slow_ethernet_speeds": MetricInfo(),
"status_alert_roaming": MetricInfo(),
"status_alert_install_pending": MetricInfo(),
"status_alert_is_heating": MetricInfo(),
"status_alert_power_supply_thermal_throttle": MetricInfo(),
"status_alert_slow_ethernet_speeds_100": MetricInfo(),
"status_alert_is_power_save_idle": MetricInfo(),
"status_alert_moving_while_not_mobile": MetricInfo(),
"status_alert_moving_too_fast_for_policy": MetricInfo(),
"status_alert_dbf_telem_stale": MetricInfo(),
"status_alert_low_motor_current": MetricInfo(),
"status_alert_obstruction_map_reset": MetricInfo(),
"status_alert_lower_signal_than_predicted": MetricInfo(),
"ping_stats_samples": MetricInfo(kind="counter"),
"ping_stats_end_counter": MetricInfo(kind="counter"),
"usage_download_usage": MetricInfo(unit="bytes", kind="counter"),
"usage_upload_usage": MetricInfo(unit="bytes", kind="counter"),
"power_latest_power": MetricInfo(),
"power_mean_power": MetricInfo(),
"power_min_power": MetricInfo(),
"power_max_power": MetricInfo(),
"power_total_energy": MetricInfo(),
}
STATE_VALUES = [
"UNKNOWN",
"CONNECTED",
"BOOTING",
"SEARCHING",
"STOWED",
"THERMAL_SHUTDOWN",
"NO_SATS",
"OBSTRUCTED",
"NO_DOWNLINK",
"NO_PINGS",
"DISH_UNREACHABLE",
]
class Metric:
name = ""
timestamp = ""
kind = None
help = None
values = None
def __init__(self, name, timestamp, kind="gauge", help="", values=None):
self.name = name
self.timestamp = timestamp
self.kind = kind
self.help = help
if values:
self.values = values
else:
self.values = []
def __str__(self):
if not self.values:
return ""
lines = []
lines.append(f"# HELP {self.name} {self.help}")
lines.append(f"# TYPE {self.name} {self.kind}")
for value in self.values:
lines.append(f"{self.name}{value} {self.timestamp*1000}")
lines.append("")
return str.join("\n", lines)
class MetricValue:
value = 0
labels = None
def __init__(self, value, labels=None) -> None:
self.value = value
self.labels = labels
def __str__(self):
label_str = ""
if self.labels:
label_str = ("{" + str.join(",", [f'{v[0]}="{v[1]}"'
for v in self.labels.items()]) + "}")
return f"{label_str} {self.value}"
def parse_args():
parser = dish_common.create_arg_parser(output_description="Prometheus exporter",
bulk_history=False)
group = parser.add_argument_group(title="HTTP server options")
group.add_argument("--address", default="0.0.0.0", help="IP address to listen on")
group.add_argument("--port", default=8080, type=int, help="Port to listen on")
return dish_common.run_arg_parser(parser, modes=["status", "alert_detail", "usage", "location", "power"])
def prometheus_export(opts, gstate):
raw_data = {}
def data_add_item(name, value, category):
raw_data[category + "_" + name] = value
def data_add_sequence(name, value, category, start):
raise NotImplementedError("Did not expect sequence data")
with gstate.lock:
rc, status_ts, hist_ts = dish_common.get_data(opts, gstate, data_add_item,
data_add_sequence)
metrics = []
# snr is no longer supported by Starlink but is still returned by the gRPC
# service for backwards compatibility
if "status_snr" in raw_data:
del raw_data["status_snr"]
metrics.append(
Metric(
name="starlink_status_state",
timestamp=status_ts,
values=[
MetricValue(
value=int(raw_data["status_state"] == state_value),
labels={"state": state_value},
) for state_value in STATE_VALUES
],
))
del raw_data["status_state"]
info_metrics = ["status_id", "status_hardware_version", "status_software_version"]
metrics_not_found = []
metrics_not_found.extend([x for x in info_metrics if x not in raw_data])
if len(metrics_not_found) < len(info_metrics):
metrics.append(
Metric(
name="starlink_info",
timestamp=status_ts,
values=[
MetricValue(
value=1,
labels={
x.replace("status_", ""): raw_data.pop(x) for x in info_metrics
if x in raw_data
},
)
],
))
for name, metric_info in METRICS_INFO.items():
if name in raw_data:
metrics.append(
Metric(
name=f"starlink_{name}{metric_info.unit}",
timestamp=status_ts,
kind=metric_info.kind,
values=[MetricValue(value=float(raw_data.pop(name) or 0))],
))
else:
metrics_not_found.append(name)
metrics.append(
Metric(
name="starlink_exporter_unprocessed_metrics",
timestamp=status_ts,
values=[MetricValue(value=1, labels={"metric": name}) for name in raw_data],
))
metrics.append(
Metric(
name="starlink_exporter_missing_metrics",
timestamp=status_ts,
values=[MetricValue(
value=1,
labels={"metric": name},
) for name in metrics_not_found],
))
return str.join("\n", [str(metric) for metric in metrics])
class MetricsRequestHandler(BaseHTTPRequestHandler):
def do_GET(self):
path = self.path.partition("?")[0]
if path.lower() == "/favicon.ico":
self.send_error(HTTPStatus.NOT_FOUND)
return
opts = self.server.opts
gstate = self.server.gstate
content = prometheus_export(opts, gstate)
self.send_response(HTTPStatus.OK)
self.send_header("Content-type", "text/plain")
self.send_header("Content-Length", len(content))
self.end_headers()
self.wfile.write(content.encode())
def main():
opts = parse_args()
logging.basicConfig(format="%(levelname)s: %(message)s", stream=sys.stderr)
gstate = dish_common.GlobalState(target=opts.target)
gstate.lock = threading.Lock()
httpd = ThreadingHTTPServer((opts.address, opts.port), MetricsRequestHandler)
httpd.daemon_threads = False
httpd.opts = opts
httpd.gstate = gstate
signal.signal(signal.SIGTERM, handle_sigterm)
print("HTTP listening on port", opts.port)
try:
httpd.serve_forever()
except (KeyboardInterrupt, Terminated):
pass
finally:
httpd.server_close()
httpd.gstate.shutdown()
sys.exit()
if __name__ == "__main__":
main()
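The text exposition format that the `Metric` and `MetricValue` classes emit can be sketched with a simplified standalone renderer. The function and sample values below are illustrative, not part of the exporter; note the timestamp is converted to milliseconds, as in `Metric.__str__` above.

```python
def render_metric(name, kind, help_text, values, timestamp):
    """Render one metric in Prometheus text exposition format:
    # HELP / # TYPE lines, then one sample line per (labels, value) pair,
    each with a millisecond timestamp."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} {kind}"]
    for labels, value in values:
        label_str = ""
        if labels:
            label_str = "{" + ",".join(f'{k}="{v}"' for k, v in labels.items()) + "}"
        lines.append(f"{name}{label_str} {value} {timestamp * 1000}")
    return "\n".join(lines)

text = render_metric("starlink_status_state", "gauge", "",
                     [({"state": "CONNECTED"}, 1)], 1700000000)
print(text.splitlines()[2])
# → starlink_status_state{state="CONNECTED"} 1 1700000000000
```

This is also how the exporter encodes the dish state as a one-hot gauge: one labeled sample per entry in STATE_VALUES, with value 1 only for the current state.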


@@ -0,0 +1,326 @@
#!/usr/bin/env python3
"""Write Starlink user terminal data to a sqlite database.
This script pulls the current status info and/or metrics computed from the
history data and writes them to the specified sqlite database either once or
in a periodic loop.
Requested data will be written into the following tables:
: status : Current status data
: history : Bulk history data
: ping_stats : Ping history statistics
: usage : Bandwidth usage history statistics
: power : Power consumption history statistics
Array data is currently written to the database as text strings of comma-
separated values, which may not be the best method for some use cases. If you
find yourself wishing they were handled better, please open a feature request
at https://github.com/sparky8512/starlink-grpc-tools/issues explaining the use
case and how you would rather see it. This only affects a few fields, since
most of the useful data is not in arrays.
Note that using this script to record the alert_detail group mode will tend to
trip schema-related errors when new alert types are added to the dish
software. The error message will include something like "table status has no
column named alert_foo", where "foo" is the newly added alert type. To work
around this rare occurrence, you can pass the -f option to force a schema
update. Alternatively, instead of using the alert_detail mode, you can use the
alerts bitmask in the status group.
NOTE: The Starlink user terminal does not include time values with its
history or status data, so this script uses current system time to compute
the timestamps it writes into the database. It is recommended to run this
script on a host that has its system clock synced via NTP. Otherwise, the
timestamps may get out of sync with real time.
"""
from datetime import datetime
from datetime import timezone
from itertools import repeat
import logging
import signal
import sqlite3
import sys
import time
import dish_common
import starlink_grpc
SCHEMA_VERSION = 5
class Terminated(Exception):
pass
def handle_sigterm(signum, frame):
# Turn SIGTERM into an exception so main loop can clean up
raise Terminated
def parse_args():
parser = dish_common.create_arg_parser(output_description="write it to a sqlite database")
parser.add_argument("database", help="Database file to use")
group = parser.add_argument_group(title="sqlite database options")
group.add_argument("-f",
"--force",
action="store_true",
help="Force schema conversion, even if it results in downgrade; may "
"result in discarded data")
group.add_argument("-k",
"--skip-query",
action="store_true",
help="Skip querying for prior sample write point in history modes")
opts = dish_common.run_arg_parser(parser, need_id=True)
opts.skip_query |= opts.no_counter
return opts
def query_counter(opts, gstate, column, table):
now = time.time()
cur = gstate.sql_conn.cursor()
cur.execute(
'SELECT "time", "{0}" FROM "{1}" WHERE "time"<? AND "id"=? '
'ORDER BY "time" DESC LIMIT 1'.format(column, table), (now, gstate.dish_id))
row = cur.fetchone()
cur.close()
if row and row[0] and row[1]:
if opts.verbose:
print("Existing time base: {0} -> {1}".format(
row[1], datetime.fromtimestamp(row[0], tz=timezone.utc)))
return row
else:
return 0, None
def loop_body(opts, gstate, shutdown=False):
tables = {"status": {}, "ping_stats": {}, "usage": {}, "power": {}}
hist_cols = ["time", "id"]
hist_rows = []
def cb_add_item(key, val, category):
tables[category][key] = val
def cb_add_sequence(key, val, category, start):
tables[category][key] = ",".join(str(subv) if subv is not None else "" for subv in val)
def cb_add_bulk(bulk, count, timestamp, counter):
if len(hist_cols) == 2:
hist_cols.extend(bulk.keys())
hist_cols.append("counter")
for i in range(count):
timestamp += 1
counter += 1
row = [timestamp, gstate.dish_id]
row.extend(val[i] for val in bulk.values())
row.append(counter)
hist_rows.append(row)
rc = 0
status_ts = None
hist_ts = None
if not shutdown:
rc, status_ts = dish_common.get_status_data(opts, gstate, cb_add_item, cb_add_sequence)
if opts.history_stats_mode and (not rc or opts.poll_loops > 1):
if gstate.counter_stats is None and not opts.skip_query and opts.samples < 0:
_, gstate.counter_stats = query_counter(opts, gstate, "end_counter", "ping_stats")
hist_rc, hist_ts = dish_common.get_history_stats(opts, gstate, cb_add_item, cb_add_sequence,
shutdown)
if not rc:
rc = hist_rc
if not shutdown and opts.bulk_mode and not rc:
if gstate.counter is None and not opts.skip_query and opts.bulk_samples < 0:
gstate.timestamp, gstate.counter = query_counter(opts, gstate, "counter", "history")
rc = dish_common.get_bulk_data(opts, gstate, cb_add_bulk)
rows_written = 0
try:
cur = gstate.sql_conn.cursor()
for category, fields in tables.items():
if fields:
timestamp = status_ts if category == "status" else hist_ts
sql = 'INSERT OR REPLACE INTO "{0}" ("time","id",{1}) VALUES ({2})'.format(
category, ",".join('"' + x + '"' for x in fields),
",".join(repeat("?",
len(fields) + 2)))
values = [timestamp, gstate.dish_id]
values.extend(fields.values())
cur.execute(sql, values)
rows_written += 1
if hist_rows:
sql = 'INSERT OR REPLACE INTO "history" ({0}) VALUES({1})'.format(
",".join('"' + x + '"' for x in hist_cols), ",".join(repeat("?", len(hist_cols))))
cur.executemany(sql, hist_rows)
rows_written += len(hist_rows)
cur.close()
gstate.sql_conn.commit()
except sqlite3.OperationalError as e:
# these are not necessarily fatal, but there is also not much we can do about them
logging.error("Unexpected error from database, discarding data: %s", e)
rc = 1
else:
if opts.verbose:
print("Rows written to db:", rows_written)
return rc
def ensure_schema(opts, conn, context):
cur = conn.cursor()
cur.execute("PRAGMA user_version")
version = cur.fetchone()
if version and version[0] == SCHEMA_VERSION and not opts.force:
cur.close()
return 0
try:
if not version or not version[0]:
if opts.verbose:
print("Initializing new database")
create_tables(conn, context, "")
elif version[0] > SCHEMA_VERSION and not opts.force:
logging.error("Cowardly refusing to downgrade from schema version %s", version[0])
return 1
else:
print("Converting from schema version:", version[0])
convert_tables(conn, context)
cur.execute("PRAGMA user_version={0}".format(SCHEMA_VERSION))
conn.commit()
return 0
except starlink_grpc.GrpcError as e:
dish_common.conn_error(opts, "Failure reflecting status fields: %s", str(e))
return 1
finally:
cur.close()
def create_tables(conn, context, suffix):
tables = {}
name_groups = (starlink_grpc.status_field_names(context=context) +
(starlink_grpc.location_field_names(),))
type_groups = (starlink_grpc.status_field_types(context=context) +
(starlink_grpc.location_field_types(),))
tables["status"] = zip(name_groups, type_groups)
name_groups = starlink_grpc.history_stats_field_names()
type_groups = starlink_grpc.history_stats_field_types()
tables["ping_stats"] = zip(name_groups[0:5], type_groups[0:5])
tables["usage"] = ((name_groups[5], type_groups[5]),)
tables["power"] = ((name_groups[6], type_groups[6]),)
name_groups = starlink_grpc.history_bulk_field_names()
type_groups = starlink_grpc.history_bulk_field_types()
tables["history"] = ((name_groups[1], type_groups[1]), (["counter"], [int]))
def sql_type(type_class):
if issubclass(type_class, float):
return "REAL"
if issubclass(type_class, bool):
# advisory only, stores as int:
return "BOOLEAN"
if issubclass(type_class, int):
return "INTEGER"
if issubclass(type_class, str):
return "TEXT"
raise TypeError
column_info = {}
cur = conn.cursor()
for table, group_pairs in tables.items():
column_names = ["time", "id"]
columns = ['"time" INTEGER NOT NULL', '"id" TEXT NOT NULL']
for name_group, type_group in group_pairs:
for name_item, type_item in zip(name_group, type_group):
name_item = dish_common.BRACKETS_RE.match(name_item).group(1)
if name_item != "id":
columns.append('"{0}" {1}'.format(name_item, sql_type(type_item)))
column_names.append(name_item)
cur.execute('DROP TABLE IF EXISTS "{0}{1}"'.format(table, suffix))
sql = 'CREATE TABLE "{0}{1}" ({2}, PRIMARY KEY("time","id"))'.format(
table, suffix, ", ".join(columns))
cur.execute(sql)
column_info[table] = column_names
cur.close()
return column_info
def convert_tables(conn, context):
new_column_info = create_tables(conn, context, "_new")
conn.row_factory = sqlite3.Row
old_cur = conn.cursor()
new_cur = conn.cursor()
for table, new_columns in new_column_info.items():
try:
old_cur.execute('SELECT * FROM "{0}"'.format(table))
table_ok = True
except sqlite3.OperationalError:
table_ok = False
if table_ok:
old_columns = set(x[0] for x in old_cur.description)
new_columns = tuple(x for x in new_columns if x in old_columns)
sql = 'INSERT OR REPLACE INTO "{0}_new" ({1}) VALUES ({2})'.format(
table, ",".join('"' + x + '"' for x in new_columns),
",".join(repeat("?", len(new_columns))))
new_cur.executemany(sql, (tuple(row[col] for col in new_columns) for row in old_cur))
new_cur.execute('DROP TABLE "{0}"'.format(table))
new_cur.execute('ALTER TABLE "{0}_new" RENAME TO "{0}"'.format(table))
old_cur.close()
new_cur.close()
conn.row_factory = None
def main():
opts = parse_args()
logging.basicConfig(format="%(levelname)s: %(message)s")
gstate = dish_common.GlobalState(target=opts.target)
gstate.points = []
gstate.deferred_points = []
signal.signal(signal.SIGTERM, handle_sigterm)
gstate.sql_conn = sqlite3.connect(opts.database)
rc = 0
try:
rc = ensure_schema(opts, gstate.sql_conn, gstate.context)
if rc:
sys.exit(rc)
next_loop = time.monotonic()
while True:
rc = loop_body(opts, gstate)
if opts.loop_interval > 0.0:
now = time.monotonic()
next_loop = max(next_loop + opts.loop_interval, now)
time.sleep(next_loop - now)
else:
break
except sqlite3.Error as e:
logging.error("Database error: %s", e)
rc = 1
except (KeyboardInterrupt, Terminated):
pass
finally:
loop_body(opts, gstate, shutdown=True)
gstate.sql_conn.close()
gstate.shutdown()
sys.exit(rc)
if __name__ == "__main__":
main()
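The `INSERT OR REPLACE` statements built in `loop_body` rely on the composite `PRIMARY KEY("time","id")` to make repeated writes idempotent. A minimal standalone sketch against an in-memory database (the table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute('CREATE TABLE "status" ("time" INTEGER NOT NULL, "id" TEXT NOT NULL, '
            '"state" TEXT, PRIMARY KEY("time","id"))')
# Two writes with the same (time, id) key: the second replaces the first.
sql = 'INSERT OR REPLACE INTO "status" ("time","id","state") VALUES (?,?,?)'
cur.execute(sql, (1700000000, "ut0123", "BOOTING"))
cur.execute(sql, (1700000000, "ut0123", "CONNECTED"))
conn.commit()
rows = cur.execute('SELECT "time","id","state" FROM "status"').fetchall()
print(rows)  # → [(1700000000, 'ut0123', 'CONNECTED')]
```

This is why re-running the script over an overlapping sample range does not duplicate rows: history rows keyed by the same timestamp and dish id simply overwrite themselves.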


@@ -0,0 +1,304 @@
#!/usr/bin/env python3
"""Output Starlink user terminal data info in text format.
This script pulls the current status info and/or metrics computed from the
history data and prints them to a file or stdout either once or in a periodic
loop. By default, it will print the results in CSV format.
Note that using this script to record the alert_detail group mode as CSV
data is not recommended, because the number of alerts and their relative
order in the output can change with the dish software. Instead of using
the alert_detail mode, you can use the alerts bitmask in the status group.
"""
import datetime
import logging
import os
import signal
import sys
import time
import dish_common
import starlink_grpc
COUNTER_FIELD = "end_counter"
VERBOSE_FIELD_MAP = {
# status fields (the remainder are either self-explanatory or I don't
# know with confidence what they mean)
"alerts": "Alerts bit field",
# ping_drop fields
"samples": "Parsed samples",
"end_counter": "Sample counter",
"total_ping_drop": "Total ping drop",
"count_full_ping_drop": "Count of drop == 1",
"count_obstructed": "Obstructed",
"total_obstructed_ping_drop": "Obstructed ping drop",
"count_full_obstructed_ping_drop": "Obstructed drop == 1",
"count_unscheduled": "Unscheduled",
"total_unscheduled_ping_drop": "Unscheduled ping drop",
"count_full_unscheduled_ping_drop": "Unscheduled drop == 1",
# ping_run_length fields
"init_run_fragment": "Initial drop run fragment",
"final_run_fragment": "Final drop run fragment",
"run_seconds": "Per-second drop runs",
"run_minutes": "Per-minute drop runs",
# ping_latency fields
"mean_all_ping_latency": "Mean RTT, drop < 1",
"deciles_all_ping_latency": "RTT deciles, drop < 1",
"mean_full_ping_latency": "Mean RTT, drop == 0",
"deciles_full_ping_latency": "RTT deciles, drop == 0",
"stdev_full_ping_latency": "RTT standard deviation, drop == 0",
# ping_loaded_latency is still experimental, so leave those unexplained
# usage fields
"download_usage": "Bytes downloaded",
"upload_usage": "Bytes uploaded",
}
class Terminated(Exception):
pass
def handle_sigterm(signum, frame):
# Turn SIGTERM into an exception so main loop can clean up
raise Terminated
def parse_args():
parser = dish_common.create_arg_parser(
output_description="print it in text format; by default, will print in CSV format")
group = parser.add_argument_group(title="CSV output options")
group.add_argument("-H",
"--print-header",
action="store_true",
help="Print CSV header instead of parsing data")
group.add_argument("-O",
"--out-file",
default="-",
help="Output file path; if set, can also be used to resume from prior "
"history sample counter, default: write to standard output")
group.add_argument("-k",
"--skip-query",
action="store_true",
help="Skip querying for prior sample write point in history modes")
opts = dish_common.run_arg_parser(parser)
if (opts.history_stats_mode or opts.status_mode) and opts.bulk_mode and not opts.verbose:
parser.error("bulk_history cannot be combined with other modes for CSV output")
# Technically possible, but a pain to implement, so just disallow it. User
# probably doesn't realize how weird it would be, anyway, given that stats
# data reports at a different rate from status data in this case.
if opts.history_stats_mode and opts.status_mode and not opts.verbose and opts.poll_loops > 1:
parser.error("usage of --poll-loops with history stats modes cannot be mixed with status "
"modes for CSV output")
opts.skip_query |= opts.no_counter | opts.verbose
if opts.out_file == "-":
opts.no_stdout_errors = True
return opts
def open_out_file(opts, mode):
if opts.out_file == "-":
# open new file, so it can be closed later without affecting sys.stdout
return os.fdopen(sys.stdout.fileno(), "w", buffering=1, closefd=False)
return open(opts.out_file, mode, buffering=1)
def print_header(opts, print_file):
header = ["datetimestamp_utc"]
def header_add(names):
for name in names:
name, start, end = dish_common.BRACKETS_RE.match(name).group(1, 4, 5)
if start:
header.extend(name + "_" + str(x) for x in range(int(start), int(end)))
elif end:
header.extend(name + "_" + str(x) for x in range(int(end)))
else:
header.append(name)
if opts.status_mode:
if opts.pure_status_mode:
context = starlink_grpc.ChannelContext(target=opts.target)
try:
name_groups = starlink_grpc.status_field_names(context=context)
except starlink_grpc.GrpcError as e:
dish_common.conn_error(opts, "Failure reflecting status field names: %s", str(e))
return 1
if "status" in opts.mode:
header_add(name_groups[0])
if "obstruction_detail" in opts.mode:
header_add(name_groups[1])
if "alert_detail" in opts.mode:
header_add(name_groups[2])
if "location" in opts.mode:
header_add(starlink_grpc.location_field_names())
if opts.bulk_mode:
general, bulk = starlink_grpc.history_bulk_field_names()
header_add(bulk)
if opts.history_stats_mode:
groups = starlink_grpc.history_stats_field_names()
general, ping, runlen, latency, loaded, usage, power = groups[0:7]
header_add(general)
if "ping_drop" in opts.mode:
header_add(ping)
if "ping_run_length" in opts.mode:
header_add(runlen)
if "ping_latency" in opts.mode:
header_add(latency)
if "ping_loaded_latency" in opts.mode:
header_add(loaded)
if "usage" in opts.mode:
header_add(usage)
if "power" in opts.mode:
header_add(power)
print(",".join(header), file=print_file)
return 0
def get_prior_counter(opts, gstate):
# This implementation is terrible in that it makes a bunch of assumptions.
# Those assumptions should be true for files generated by this script, but
# it would be better not to make them. However, it also only works if the
# CSV file has a header that correctly matches the last line of the file,
# and there's really no way to verify that, so it's garbage in, garbage
# out, anyway. It also reads the entire file line-by-line, which is not
# great.
try:
with open_out_file(opts, "r") as csv_file:
header = csv_file.readline().split(",")
column = header.index(COUNTER_FIELD)
last_line = None
for last_line in csv_file:
pass
if last_line is not None:
gstate.counter_stats = int(last_line.split(",")[column])
except (IndexError, OSError, ValueError):
pass
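The resume scan in get_prior_counter above can be exercised against an in-memory CSV. This is a simplified standalone version (the sample rows are made up, and the header line is stripped of its newline for safety):

```python
import io

COUNTER_FIELD = "end_counter"

def last_counter(csv_file):
    """Return the last row's value in the end_counter column, mirroring
    get_prior_counter's line-by-line scan of the output file."""
    header = csv_file.readline().rstrip("\n").split(",")
    column = header.index(COUNTER_FIELD)
    last_line = None
    for last_line in csv_file:
        pass  # read to the end; only the final line matters
    if last_line is None:
        return None
    return int(last_line.split(",")[column])

sample = io.StringIO(
    "datetimestamp_utc,end_counter,samples\n"
    "2024-01-01T00:00:00,123456,900\n"
    "2024-01-01T00:15:00,124356,900\n")
print(last_counter(sample))  # → 124356
```

As the comment in the script notes, this is garbage-in/garbage-out: it only works when the header actually matches the data rows, which is why the real code swallows IndexError/ValueError and falls back to starting fresh.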
def loop_body(opts, gstate, print_file, shutdown=False):
csv_data = []
def xform(val):
return "" if val is None else str(val)
def cb_data_add_item(name, val, category):
if opts.verbose:
csv_data.append("{0:22} {1}".format(
VERBOSE_FIELD_MAP.get(name, name) + ":", xform(val)))
else:
# special case for get_status failure: this will be the lone item added
if name == "state" and val == "DISH_UNREACHABLE":
csv_data.extend(["", "", "", val])
else:
csv_data.append(xform(val))
def cb_data_add_sequence(name, val, category, start):
if opts.verbose:
csv_data.append("{0:22} {1}".format(
VERBOSE_FIELD_MAP.get(name, name) + ":",
", ".join(xform(subval) for subval in val)))
else:
csv_data.extend(xform(subval) for subval in val)
def cb_add_bulk(bulk, count, timestamp, counter):
if opts.verbose:
print("Time range (UTC): {0} -> {1}".format(
datetime.datetime.fromtimestamp(timestamp, datetime.timezone.utc).replace(tzinfo=None).isoformat(),
datetime.datetime.fromtimestamp(timestamp + count, datetime.timezone.utc).replace(tzinfo=None).isoformat()),
file=print_file)
for key, val in bulk.items():
print("{0:22} {1}".format(key + ":", ", ".join(xform(subval) for subval in val)),
file=print_file)
if opts.loop_interval > 0.0:
print(file=print_file)
else:
for i in range(count):
timestamp += 1
fields = [datetime.datetime.fromtimestamp(timestamp, datetime.timezone.utc).replace(tzinfo=None).isoformat()]
fields.extend([xform(val[i]) for val in bulk.values()])
print(",".join(fields), file=print_file)
rc, status_ts, hist_ts = dish_common.get_data(opts,
gstate,
cb_data_add_item,
cb_data_add_sequence,
add_bulk=cb_add_bulk,
flush_history=shutdown)
if opts.verbose:
if csv_data:
print("\n".join(csv_data), file=print_file)
if opts.loop_interval > 0.0:
print(file=print_file)
else:
if csv_data:
timestamp = status_ts if status_ts is not None else hist_ts
csv_data.insert(0, datetime.datetime.fromtimestamp(timestamp, datetime.timezone.utc).replace(tzinfo=None).isoformat())
print(",".join(csv_data), file=print_file)
return rc
def main():
opts = parse_args()
logging.basicConfig(format="%(levelname)s: %(message)s")
if opts.print_header:
try:
with open_out_file(opts, "a") as print_file:
rc = print_header(opts, print_file)
except OSError as e:
logging.error("Failed opening output file: %s", str(e))
rc = 1
sys.exit(rc)
gstate = dish_common.GlobalState(target=opts.target)
if opts.out_file != "-" and not opts.skip_query and opts.history_stats_mode:
get_prior_counter(opts, gstate)
try:
print_file = open_out_file(opts, "a")
except OSError as e:
logging.error("Failed opening output file: %s", str(e))
sys.exit(1)
signal.signal(signal.SIGTERM, handle_sigterm)
rc = 0
try:
next_loop = time.monotonic()
while True:
rc = loop_body(opts, gstate, print_file)
if opts.loop_interval > 0.0:
now = time.monotonic()
next_loop = max(next_loop + opts.loop_interval, now)
time.sleep(next_loop - now)
else:
break
except (KeyboardInterrupt, Terminated):
pass
finally:
loop_body(opts, gstate, print_file, shutdown=True)
print_file.close()
gstate.shutdown()
sys.exit(rc)
if __name__ == "__main__":
main()

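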

@@ -0,0 +1,284 @@
#!/usr/bin/env python3
r"""Output Starlink user terminal data info in text format.
Expects input as from the following command:
grpcurl -plaintext -d {\"get_history\":{}} 192.168.100.1:9200 SpaceX.API.Device.Device/Handle
This script examines the most recent samples from the history data and
prints several different metrics computed from them to stdout. By default,
it will print the results in CSV format.
"""
import argparse
import datetime
import logging
import re
import sys
import time
import starlink_json
BRACKETS_RE = re.compile(r"([^[]*)(\[((\d+),|)(\d*)\]|)$")
SAMPLES_DEFAULT = 3600
HISTORY_STATS_MODES = [
"ping_drop", "ping_run_length", "ping_latency", "ping_loaded_latency", "usage"
]
VERBOSE_FIELD_MAP = {
# ping_drop fields
"samples": "Parsed samples",
"end_counter": "Sample counter",
"total_ping_drop": "Total ping drop",
"count_full_ping_drop": "Count of drop == 1",
"count_obstructed": "Obstructed",
"total_obstructed_ping_drop": "Obstructed ping drop",
"count_full_obstructed_ping_drop": "Obstructed drop == 1",
"count_unscheduled": "Unscheduled",
"total_unscheduled_ping_drop": "Unscheduled ping drop",
"count_full_unscheduled_ping_drop": "Unscheduled drop == 1",
# ping_run_length fields
"init_run_fragment": "Initial drop run fragment",
"final_run_fragment": "Final drop run fragment",
"run_seconds": "Per-second drop runs",
"run_minutes": "Per-minute drop runs",
# ping_latency fields
"mean_all_ping_latency": "Mean RTT, drop < 1",
"deciles_all_ping_latency": "RTT deciles, drop < 1",
"mean_full_ping_latency": "Mean RTT, drop == 0",
"deciles_full_ping_latency": "RTT deciles, drop == 0",
"stdev_full_ping_latency": "RTT standard deviation, drop == 0",
# ping_loaded_latency is still experimental, so leave those unexplained
# usage fields
"download_usage": "Bytes downloaded",
"upload_usage": "Bytes uploaded",
}
def parse_args():
parser = argparse.ArgumentParser(
description="Collect status and/or history data from a Starlink user terminal and "
"print it to standard output in text format; by default, will print in CSV format",
add_help=False)
group = parser.add_argument_group(title="General options")
group.add_argument("-f", "--filename", default="-", help="The file to parse, default: stdin")
group.add_argument("-h", "--help", action="help", help="Be helpful")
group.add_argument("-t",
"--timestamp",
help="UTC time history data was pulled, as YYYY-MM-DD_HH:MM:SS or as "
"seconds since Unix epoch, default: current time")
group.add_argument("-v", "--verbose", action="store_true", help="Be verbose")
group = parser.add_argument_group(title="History mode options")
group.add_argument("-a",
"--all-samples",
action="store_const",
const=-1,
dest="samples",
help="Parse all valid samples")
group.add_argument("-s",
"--samples",
type=int,
help="Number of data samples to parse, default: all in bulk mode, "
"else " + str(SAMPLES_DEFAULT))
group = parser.add_argument_group(title="CSV output options")
group.add_argument("-H",
"--print-header",
action="store_true",
help="Print CSV header instead of parsing data")
all_modes = HISTORY_STATS_MODES + ["bulk_history"]
parser.add_argument("mode",
nargs="+",
choices=all_modes,
help="The data group to record, one or more of: " + ", ".join(all_modes),
metavar="mode")
opts = parser.parse_args()
# for convenience, set flags for whether any mode in a group is selected
opts.history_stats_mode = bool(set(HISTORY_STATS_MODES).intersection(opts.mode))
opts.bulk_mode = "bulk_history" in opts.mode
if opts.history_stats_mode and opts.bulk_mode:
parser.error("bulk_history cannot be combined with other modes for CSV output")
if opts.samples is None:
opts.samples = -1 if opts.bulk_mode else SAMPLES_DEFAULT
if opts.timestamp is None:
opts.history_time = None
else:
try:
opts.history_time = int(opts.timestamp)
except ValueError:
try:
opts.history_time = int(
datetime.datetime.strptime(opts.timestamp, "%Y-%m-%d_%H:%M:%S").timestamp())
except ValueError:
parser.error("Could not parse timestamp")
if opts.verbose:
print("Using timestamp", datetime.datetime.fromtimestamp(opts.history_time, tz=datetime.timezone.utc))
return opts
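The two forms accepted for `-t`/`--timestamp` can be sketched as follows (a hypothetical helper; unlike the script's bare `strptime` call, it pins the parsed string to UTC explicitly, which is what the option help describes):

```python
import datetime

def parse_history_time(value):
    """Parse a -t value: seconds since Unix epoch, or YYYY-MM-DD_HH:MM:SS taken as UTC."""
    try:
        return int(value)
    except ValueError:
        dt = datetime.datetime.strptime(value, "%Y-%m-%d_%H:%M:%S")
        return int(dt.replace(tzinfo=datetime.timezone.utc).timestamp())
```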
def print_header(opts):
header = ["datetimestamp_utc"]
def header_add(names):
for name in names:
name, start, end = BRACKETS_RE.match(name).group(1, 4, 5)
if start:
header.extend(name + "_" + str(x) for x in range(int(start), int(end)))
elif end:
header.extend(name + "_" + str(x) for x in range(int(end)))
else:
header.append(name)
if opts.bulk_mode:
general, bulk = starlink_json.history_bulk_field_names()
header_add(general)
header_add(bulk)
if opts.history_stats_mode:
groups = starlink_json.history_stats_field_names()
general, ping, runlen, latency, loaded, usage = groups[0:6]
header_add(general)
if "ping_drop" in opts.mode:
header_add(ping)
if "ping_run_length" in opts.mode:
header_add(runlen)
if "ping_loaded_latency" in opts.mode:
header_add(loaded)
if "ping_latency" in opts.mode:
header_add(latency)
if "usage" in opts.mode:
header_add(usage)
print(",".join(header))
return 0
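The bracket suffix convention that `header_add` expands — `name[n]` for n elements numbered from 0, `name[s,e]` for elements s through e-1 — can be exercised in isolation with the same regular expression:

```python
import re

# Same pattern the script uses to split a field name from its bracket suffix.
BRACKETS_RE = re.compile(r"([^[]*)(\[((\d+),|)(\d*)\]|)$")

def expand(field):
    """Expand one bracketed field name into its per-element column names."""
    name, start, end = BRACKETS_RE.match(field).group(1, 4, 5)
    if start:
        return [name + "_" + str(x) for x in range(int(start), int(end))]
    if end:
        return [name + "_" + str(x) for x in range(int(end))]
    return [name]
```

So `"run_seconds[1,61]"` yields 60 columns numbered 1 through 60, matching the field lists in the parser module.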
def get_data(opts, add_item, add_sequence, add_bulk):
def add_data(data):
for key, val in data.items():
name, seq = BRACKETS_RE.match(key).group(1, 5)
if seq is None:
add_item(name, val)
else:
add_sequence(name, val)
if opts.history_stats_mode:
try:
groups = starlink_json.history_stats(opts.filename, opts.samples, verbose=opts.verbose)
except starlink_json.JsonError as e:
logging.error("Failure getting history stats: %s", str(e))
return 1
general, ping, runlen, latency, loaded, usage = groups[0:6]
add_data(general)
if "ping_drop" in opts.mode:
add_data(ping)
if "ping_run_length" in opts.mode:
add_data(runlen)
if "ping_latency" in opts.mode:
add_data(latency)
if "ping_loaded_latency" in opts.mode:
add_data(loaded)
if "usage" in opts.mode:
add_data(usage)
if opts.bulk_mode and add_bulk:
timestamp = int(time.time()) if opts.history_time is None else opts.history_time
try:
general, bulk = starlink_json.history_bulk_data(opts.filename,
opts.samples,
verbose=opts.verbose)
except starlink_json.JsonError as e:
logging.error("Failure getting bulk history: %s", str(e))
return 1
parsed_samples = general["samples"]
new_counter = general["end_counter"]
if opts.verbose:
print("Establishing time base: {0} -> {1}".format(
new_counter, datetime.datetime.fromtimestamp(timestamp, tz=datetime.timezone.utc)))
timestamp -= parsed_samples
add_bulk(bulk, parsed_samples, timestamp, new_counter - parsed_samples)
return 0
def loop_body(opts):
if opts.verbose:
csv_data = []
else:
history_time = int(time.time()) if opts.history_time is None else opts.history_time
csv_data = [datetime.datetime.fromtimestamp(history_time, datetime.timezone.utc).replace(tzinfo=None).isoformat()]
def cb_data_add_item(name, val):
if opts.verbose:
csv_data.append("{0:22} {1}".format(VERBOSE_FIELD_MAP.get(name, name) + ":", val))
else:
# special case for get_status failure: this will be the lone item added
if name == "state" and val == "DISH_UNREACHABLE":
csv_data.extend(["", "", "", val])
else:
csv_data.append(str(val))
def cb_data_add_sequence(name, val):
if opts.verbose:
csv_data.append("{0:22} {1}".format(
VERBOSE_FIELD_MAP.get(name, name) + ":", ", ".join(str(subval) for subval in val)))
else:
csv_data.extend(str(subval) for subval in val)
def cb_add_bulk(bulk, count, timestamp, counter):
if opts.verbose:
print("Time range (UTC): {0} -> {1}".format(
datetime.datetime.fromtimestamp(timestamp, datetime.timezone.utc).replace(tzinfo=None).isoformat(),
datetime.datetime.fromtimestamp(timestamp + count, datetime.timezone.utc).replace(tzinfo=None).isoformat()))
for key, val in bulk.items():
print("{0:22} {1}".format(key + ":", ", ".join(str(subval) for subval in val)))
else:
for i in range(count):
timestamp += 1
fields = [datetime.datetime.fromtimestamp(timestamp, datetime.timezone.utc).replace(tzinfo=None).isoformat()]
fields.extend(["" if val[i] is None else str(val[i]) for val in bulk.values()])
print(",".join(fields))
rc = get_data(opts, cb_data_add_item, cb_data_add_sequence, cb_add_bulk)
if opts.verbose:
if csv_data:
print("\n".join(csv_data))
else:
# skip if only timestamp
if len(csv_data) > 1:
print(",".join(csv_data))
return rc
def main():
opts = parse_args()
logging.basicConfig(format="%(levelname)s: %(message)s")
if opts.print_header:
rc = print_header(opts)
sys.exit(rc)
# for consistency with dish_grpc_text, pretend there was a loop
rc = loop_body(opts)
sys.exit(rc)
if __name__ == "__main__":
main()


@@ -0,0 +1,227 @@
#!/usr/bin/env python3
"""Write a PNG image representing Starlink obstruction map data.
This script queries obstruction map data from the Starlink user terminal
(dish) reachable on the local network and writes a PNG image based on that
data.
Each pixel in the image represents the signal quality in a particular
direction, as observed by the dish. If the dish has not communicated with
satellites located in that direction, the pixel will be the "no data" color;
otherwise, it will be a color in the range from the "obstructed" color (no
signal at all) to the "unobstructed" color (sufficient signal quality for full
signal).
The coordinates of the pixels are the altitude and azimuth angles from the
horizontal coordinate system representation of the sky, converted to Cartesian
(rectangular) coordinates. The conversion is done in a way that maps all valid
directions into a circle that touches the edges of the image. Pixels outside
that circle will show up as "no data".
Azimuth is represented as angle from a line drawn from the center of the image
to the center of the top edge of the image, where center-top is 0 degrees
(North), the center of the right edge is 90 degrees (East), etc.
Altitude (elevation) is represented as distance from the center of the image,
where the center of the image represents vertical up from the point of view of
an observer located at the dish (zenith, which is usually not the physical
direction the dish is pointing) and the further away from the center a pixel
is, the closer to the horizon it is, down to a minimum altitude angle at the
edge of the circle.
"""
import argparse
from datetime import datetime
import logging
import os
import png
import sys
import time
import starlink_grpc
DEFAULT_OBSTRUCTED_COLOR = "FFFF0000"
DEFAULT_UNOBSTRUCTED_COLOR = "FFFFFFFF"
DEFAULT_NO_DATA_COLOR = "00000000"
DEFAULT_OBSTRUCTED_GREYSCALE = "FF00"
DEFAULT_UNOBSTRUCTED_GREYSCALE = "FFFF"
DEFAULT_NO_DATA_GREYSCALE = "0000"
LOOP_TIME_DEFAULT = 0
def loop_body(opts, context):
try:
snr_data = starlink_grpc.obstruction_map(context)
except starlink_grpc.GrpcError as e:
logging.error("Failed getting obstruction map data: %s", str(e))
return 1
def pixel_bytes(row):
for point in row:
if point > 1.0:
# shouldn't happen, but just in case...
point = 1.0
if point >= 0.0:
if opts.greyscale:
yield round(point * opts.unobstructed_color_g +
(1.0-point) * opts.obstructed_color_g)
else:
yield round(point * opts.unobstructed_color_r +
(1.0-point) * opts.obstructed_color_r)
yield round(point * opts.unobstructed_color_g +
(1.0-point) * opts.obstructed_color_g)
yield round(point * opts.unobstructed_color_b +
(1.0-point) * opts.obstructed_color_b)
if not opts.no_alpha:
yield round(point * opts.unobstructed_color_a +
(1.0-point) * opts.obstructed_color_a)
else:
if opts.greyscale:
yield opts.no_data_color_g
else:
yield opts.no_data_color_r
yield opts.no_data_color_g
yield opts.no_data_color_b
if not opts.no_alpha:
yield opts.no_data_color_a
if opts.filename == "-":
# Open new stdout file to get binary mode
out_file = os.fdopen(sys.stdout.fileno(), "wb", closefd=False)
else:
now = int(time.time())
filename = opts.filename.replace("%u", str(now))
filename = filename.replace("%d",
datetime.utcfromtimestamp(now).strftime("%Y_%m_%d_%H_%M_%S"))
filename = filename.replace("%s", str(opts.sequence))
out_file = open(filename, "wb")
if not snr_data or not snr_data[0]:
logging.error("Invalid SNR map data: Zero-length")
return 1
writer = png.Writer(len(snr_data[0]),
len(snr_data),
alpha=(not opts.no_alpha),
greyscale=opts.greyscale)
writer.write(out_file, (bytes(pixel_bytes(row)) for row in snr_data))
out_file.close()
opts.sequence += 1
return 0
def parse_args():
parser = argparse.ArgumentParser(
description="Collect directional obstruction map data from a Starlink user terminal and "
"emit it as a PNG image")
parser.add_argument(
"filename",
nargs="?",
help="The image file to write, or - to write to stdout; may be a template with the "
"following to be filled in per loop iteration: %%s for sequence number, %%d for UTC date "
"and time, %%u for seconds since Unix epoch.")
parser.add_argument(
"-o",
"--obstructed-color",
help="Color of obstructed areas, in RGB, ARGB, L, or AL hex notation, default: " +
DEFAULT_OBSTRUCTED_COLOR + " or " + DEFAULT_OBSTRUCTED_GREYSCALE)
parser.add_argument(
"-u",
"--unobstructed-color",
help="Color of unobstructed areas, in RGB, ARGB, L, or AL hex notation, default: " +
DEFAULT_UNOBSTRUCTED_COLOR + " or " + DEFAULT_UNOBSTRUCTED_GREYSCALE)
parser.add_argument(
"-n",
"--no-data-color",
help="Color of areas with no data, in RGB, ARGB, L, or AL hex notation, default: " +
DEFAULT_NO_DATA_COLOR + " or " + DEFAULT_NO_DATA_GREYSCALE)
parser.add_argument(
"-g",
"--greyscale",
action="store_true",
help="Emit a greyscale image instead of the default full color image; greyscale images "
"use L or AL hex notation for the color options")
parser.add_argument(
"-z",
"--no-alpha",
action="store_true",
help="Emit an image without alpha (transparency) channel instead of the default that "
"includes alpha channel")
parser.add_argument("-e",
"--target",
help="host:port of dish to query, default is the standard IP address "
"and port (192.168.100.1:9200)")
parser.add_argument("-t",
"--loop-interval",
type=float,
default=float(LOOP_TIME_DEFAULT),
help="Loop interval in seconds or 0 for no loop, default: " +
str(LOOP_TIME_DEFAULT))
parser.add_argument("-s",
"--sequence",
type=int,
default=1,
help="Starting sequence number for templatized filenames, default: 1")
parser.add_argument("-r",
"--reset",
action="store_true",
help="Reset obstruction map data before starting")
opts = parser.parse_args()
if opts.filename is None and not opts.reset:
parser.error("Must specify a filename unless resetting")
if opts.obstructed_color is None:
opts.obstructed_color = DEFAULT_OBSTRUCTED_GREYSCALE if opts.greyscale else DEFAULT_OBSTRUCTED_COLOR
if opts.unobstructed_color is None:
opts.unobstructed_color = DEFAULT_UNOBSTRUCTED_GREYSCALE if opts.greyscale else DEFAULT_UNOBSTRUCTED_COLOR
if opts.no_data_color is None:
opts.no_data_color = DEFAULT_NO_DATA_GREYSCALE if opts.greyscale else DEFAULT_NO_DATA_COLOR
for option in ("obstructed_color", "unobstructed_color", "no_data_color"):
try:
color = int(getattr(opts, option), 16)
if opts.greyscale:
setattr(opts, option + "_a", (color >> 8) & 255)
setattr(opts, option + "_g", color & 255)
else:
setattr(opts, option + "_a", (color >> 24) & 255)
setattr(opts, option + "_r", (color >> 16) & 255)
setattr(opts, option + "_g", (color >> 8) & 255)
setattr(opts, option + "_b", color & 255)
except ValueError:
logging.error("Invalid hex number for %s", option)
sys.exit(1)
return opts
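The channel unpacking and the per-pixel blend used by `pixel_bytes` can be shown in isolation (hypothetical helpers mirroring the ARGB bit-shifts above):

```python
def parse_argb(hex_str):
    """Split an ARGB hex string into (a, r, g, b) channel values."""
    color = int(hex_str, 16)
    return ((color >> 24) & 255, (color >> 16) & 255,
            (color >> 8) & 255, color & 255)

def lerp(point, unobstructed, obstructed):
    """Linearly blend two colors; point is signal quality in [0.0, 1.0]."""
    return tuple(round(point * u + (1.0 - point) * o)
                 for u, o in zip(unobstructed, obstructed))
```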
def main():
opts = parse_args()
logging.basicConfig(format="%(levelname)s: %(message)s")
context = starlink_grpc.ChannelContext(target=opts.target)
rc = 0  # ensure rc is defined even if no image is written (e.g. --reset only)
try:
if opts.reset:
starlink_grpc.reset_obstruction_map(context)
if opts.filename is not None:
next_loop = time.monotonic()
while True:
rc = loop_body(opts, context)
if opts.loop_interval > 0.0:
now = time.monotonic()
next_loop = max(next_loop + opts.loop_interval, now)
time.sleep(next_loop - now)
else:
break
finally:
context.close()
sys.exit(rc)
if __name__ == "__main__":
main()


@@ -0,0 +1,29 @@
#!/usr/bin/env python3
"""Simple example of get_status request using grpc call directly."""
import sys
import grpc
try:
from spacex_api.device import device_pb2
from spacex_api.device import device_pb2_grpc
except ModuleNotFoundError:
print("This script requires the generated gRPC protocol modules. See README file for details.",
file=sys.stderr)
sys.exit(1)
# Note that if you remove the 'with' clause here, you need to separately
# call channel.close() when you're done with the gRPC connection.
with grpc.insecure_channel("192.168.100.1:9200") as channel:
stub = device_pb2_grpc.DeviceStub(channel)
response = stub.Handle(device_pb2.Request(get_status={}), timeout=10)
# Dump everything
print(response)
# Just the software version
print("Software version:", response.dish_get_status.device_info.software_version)
# Check if connected
print("Not connected" if response.dish_get_status.HasField("outage") else "Connected")


@@ -0,0 +1,5 @@
#!/bin/sh
printenv >> /etc/environment
ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone
exec /usr/local/bin/python3 "$@"


@@ -0,0 +1,142 @@
#!/usr/bin/env python3
"""Poll and record service information from a gRPC reflection server
This script will query a gRPC reflection server for descriptor information of
all services supported by the server, excluding the reflection service itself,
and write a serialized FileDescriptorSet protobuf containing all returned
descriptors to a file, either once or in a periodic loop. This file can then
be used by any tool that accepts such data, including protoc, the protocol
buffer compiler.
Output files are named with the CRC32 value and byte length of the serialized
FileDescriptorSet data. If those match the name of a file written previously,
the data is assumed not to have changed and no new file is written. For this
reason, it is recommended to use an output directory specific to the server,
to avoid mixing them with files written from other servers' data.
Although the default target option is the local IP and port number used by the
gRPC service on a Starlink user terminal, this script is otherwise not
specific to Starlink and should work for any gRPC server that does not require
SSL and that has the reflection service enabled.
"""
import argparse
import binascii
import logging
import os
import sys
import time
import grpc
from yagrc import dump
from yagrc import reflector
TARGET_DEFAULT = "192.168.100.1:9200"
LOOP_TIME_DEFAULT = 0
RETRY_DELAY_DEFAULT = 0
def parse_args():
parser = argparse.ArgumentParser(
description="Poll a gRPC reflection server and record a serialized "
"FileDescriptorSet (protoset) of the reflected information")
parser.add_argument("outdir",
nargs="?",
metavar="OUTDIR",
help="Directory in which to write protoset files")
parser.add_argument("-g",
"--target",
default=TARGET_DEFAULT,
help="host:port of device to query, default: " + TARGET_DEFAULT)
parser.add_argument("-n",
"--print-only",
action="store_true",
help="Print the protoset filename instead of writing the data")
parser.add_argument("-r",
"--retry-delay",
type=float,
default=float(RETRY_DELAY_DEFAULT),
help="Time in seconds to wait before retrying after network "
"error or 0 for no retry, default: " + str(RETRY_DELAY_DEFAULT))
parser.add_argument("-t",
"--loop-interval",
type=float,
default=float(LOOP_TIME_DEFAULT),
help="Loop interval in seconds or 0 for no loop, default: " +
str(LOOP_TIME_DEFAULT))
parser.add_argument("-v", "--verbose", action="store_true", help="Be verbose")
opts = parser.parse_args()
if opts.outdir is None and not opts.print_only:
parser.error("Output dir is required unless --print-only option set")
return opts
def loop_body(opts):
while True:
try:
with grpc.insecure_channel(opts.target) as channel:
protoset = dump.dump_protocols(channel)
break
except reflector.ServiceError as e:
logging.error("Problem with reflection service: %s", str(e))
# Only retry on network-related errors, not service errors
return
except grpc.RpcError as e:
# grpc.RpcError error message is not very useful, but grpc.Call has
# something slightly better
if isinstance(e, grpc.Call):
msg = e.details()
else:
msg = "Unknown communication or service error"
print("Problem communicating with reflection service:", msg)
if opts.retry_delay > 0.0:
time.sleep(opts.retry_delay)
else:
return
filename = "{0:08x}_{1}.protoset".format(binascii.crc32(protoset), len(protoset))
if opts.print_only:
print("Protoset:", filename)
else:
try:
with open(filename, mode="xb") as outfile:
outfile.write(protoset)
print("New protoset found:", filename)
except FileExistsError:
if opts.verbose:
print("Existing protoset:", filename)
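The content-addressed naming scheme described in the module docstring is simple enough to demonstrate on its own:

```python
import binascii

def protoset_filename(protoset):
    # CRC32 (8 hex digits) plus byte length identify the serialized content,
    # so unchanged reflection data maps onto an already-existing filename.
    return "{0:08x}_{1}.protoset".format(binascii.crc32(protoset), len(protoset))
```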
def goto_dir(outdir):
try:
outdir_abs = os.path.abspath(outdir)
os.makedirs(outdir_abs, exist_ok=True)
os.chdir(outdir_abs)
except OSError as e:
logging.error("Output directory error: %s", str(e))
sys.exit(1)
def main():
opts = parse_args()
logging.basicConfig(format="%(levelname)s: %(message)s")
if not opts.print_only:
goto_dir(opts.outdir)
next_loop = time.monotonic()
while True:
loop_body(opts)
if opts.loop_interval > 0.0:
now = time.monotonic()
next_loop = max(next_loop + opts.loop_interval, now)
time.sleep(next_loop - now)
else:
break
if __name__ == "__main__":
main()


@@ -0,0 +1,101 @@
"""Shared logic for main loop control.
This module provides support for running a function from a loop at fixed
intervals using monotonic time or on cron-like schedule using wall clock time.
The cron scheduler uses the same schedule format string that cron uses for
crontab entries, and will do its best to remain on schedule despite clock
adjustments.
"""
try:
from croniter import croniter
import dateutil.tz
croniter_ok = True
except ImportError:
croniter_ok = False
from datetime import datetime
import signal
import time
# Max time to sleep when using non-monotonic time. This helps protect against
# oversleeping as the result of large clock adjustments.
MAX_SLEEP = 3600.0
class Terminated(Exception):
pass
def handle_sigterm(signum, frame):
# Turn SIGTERM into an exception so main loop can clean up
raise Terminated
def add_args(parser):
group = parser.add_argument_group(title="Loop options")
group.add_argument("-t", "--loop-interval", type=float, help="Run loop at interval, in seconds")
group.add_argument("-c",
"--loop-cron",
help="Run loop on schedule defined by cron format expression")
group.add_argument("-m",
"--cron-timezone",
help='Timezone name (IANA name or "UTC") to use for --loop-cron '
'schedule; default is system local time')
def check_args(opts, parser):
if opts.loop_interval is not None and opts.loop_cron is not None:
parser.error("At most one of --loop-interval and --loop-cron may be used")
if opts.cron_timezone and not opts.loop_cron:
parser.error("cron timezone specified, but not using cron scheduling")
if opts.loop_cron is not None:
if not croniter_ok:
parser.error("croniter is not installed, --loop-cron requires it")
if not croniter.is_valid(opts.loop_cron):
parser.error("Invalid cron format")
opts.timezone = dateutil.tz.gettz(opts.cron_timezone)
if opts.timezone is None:
if opts.cron_timezone is None:
parser.error("Failed to get local timezone, may need to use --cron-timezone")
else:
parser.error("Invalid timezone name")
if opts.loop_interval is None:
opts.loop_interval = 0.0
def run_loop(opts, loop_body, *loop_args):
signal.signal(signal.SIGTERM, handle_sigterm)
rc = 0
try:
if opts.loop_interval <= 0.0 and not opts.loop_cron:
rc = loop_body(*loop_args)
elif opts.loop_cron:
criter = croniter(opts.loop_cron, datetime.now(tz=opts.timezone))
now = time.time()
next_loop = criter.get_next(start_time=now)
while True:
while now < next_loop:
# This is to protect against clock getting set backwards
# by a large amount. Normally, it should do nothing:
next_loop = criter.get_next(start_time=now)
time.sleep(min(next_loop - now, MAX_SLEEP))
now = time.time()
next_loop = criter.get_next(start_time=now)
rc = loop_body(*loop_args)
now = time.time()
else:
next_loop = time.monotonic()
while True:
rc = loop_body(*loop_args)
now = time.monotonic()
next_loop = max(next_loop + opts.loop_interval, now)
time.sleep(next_loop - now)
except (KeyboardInterrupt, Terminated):
pass
return rc
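The fixed-interval branch keeps iterations anchored to the original schedule rather than drifting by per-iteration overhead. The arithmetic can be simulated without sleeping (here `wakeups` stands in for monotonic-clock readings taken after each iteration's work):

```python
def schedule(start, interval, wakeups):
    """Return the sleep durations a drift-free interval loop would use.

    next_loop advances by a fixed interval from the original start time;
    max() skips ahead (sleep 0) when an iteration overruns its slot.
    """
    next_loop = start
    sleeps = []
    for now in wakeups:
        next_loop = max(next_loop + interval, now)
        sleeps.append(next_loop - now)
    return sleeps
```

For example, an iteration that overruns by a lot (the 30.0 wakeup below) gets no sleep, and the schedule re-anchors from there.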


@@ -0,0 +1,10 @@
[build-system]
requires = [
"setuptools>=42",
"setuptools_scm[toml]>=3.4",
"wheel"
]
build-backend = "setuptools.build_meta"
[tool.setuptools_scm]
root = ".."


@@ -0,0 +1,27 @@
[metadata]
name = starlink-grpc-core
url = https://github.com/sparky8512/starlink-grpc-tools
author_email = sparky8512-py@yahoo.com
license_files = ../LICENSE
classifiers =
Development Status :: 4 - Beta
Intended Audience :: Developers
License :: OSI Approved :: The Unlicense (Unlicense)
Operating System :: OS Independent
Programming Language :: Python :: 3
Topic :: Software Development :: Libraries :: Python Modules
description = Core functions for Starlink gRPC communication
long_description = file: README.md
long_description_content_type = text/markdown
[options]
install_requires =
grpcio>=1.12.0
protobuf>=3.6.0
yagrc>=1.1.1
typing-extensions>=4.3.0
package_dir =
=..
py_modules =
starlink_grpc
python_requires = >=3.7


@@ -0,0 +1,3 @@
import setuptools
setuptools.setup()


@@ -0,0 +1,88 @@
#!/usr/bin/env python3
"""A simple(?) example for using the starlink_grpc module.
This script shows an example of how to use the starlink_grpc module to
implement polling of status and/or history data.
By itself, it's not very useful unless you're trying to understand how the
status data correlates with certain aspects of the history data, because all it
does is dump both status and history data when it detects certain
conditions in the history data.
"""
from datetime import datetime
from datetime import timezone
import time
import starlink_grpc
INITIAL_SAMPLES = 20
LOOP_SLEEP_TIME = 4
def run_loop(context):
samples = INITIAL_SAMPLES
counter = None
prev_triggered = False
while True:
try:
# `starlink_grpc.status_data` returns a tuple of 3 dicts, but in case
# the API changes to add more in the future, it's best to reference
# them by index instead of direct assignment from the function call.
groups = starlink_grpc.status_data(context=context)
status = groups[0]
# On the other hand, `starlink_grpc.history_bulk_data` will always
# return 2 dicts, because that's all the data there is.
general, bulk = starlink_grpc.history_bulk_data(samples, start=counter, context=context)
except starlink_grpc.GrpcError:
# Dish rebooting maybe, or LAN connectivity error. Just ignore it
# and hope it goes away.
pass
else:
# The following is what actually does stuff with the data. It should
# be replaced with something more useful.
# This computes a trigger detecting any packet loss (ping drop):
#triggered = any(x > 0 for x in bulk["pop_ping_drop_rate"])
# This computes a trigger detecting samples marked as obstructed:
#triggered = any(bulk["obstructed"])
# This computes a trigger detecting samples not marked as scheduled:
triggered = not all(bulk["scheduled"])
if triggered or prev_triggered:
print("Triggered" if triggered else "Continued", "at:",
datetime.now(tz=timezone.utc))
print("status:", status)
print("history:", bulk)
if not triggered:
print()
prev_triggered = triggered
# The following makes the next loop only pull the history samples that
# are newer than the ones already examined.
samples = -1
counter = general["end_counter"]
# And this is a not-very-robust way of implementing an interval loop.
# Note that a 4 second loop will poll the history buffer pretty
# frequently. Even though we only ask for new samples (which should
# only be 4 of them), the grpc layer needs to pull the entire 12 hour
# history buffer each time, only to discard most of it.
time.sleep(LOOP_SLEEP_TIME)
def main():
# This part is optional. The `starlink_grpc` functions can work without a
# `starlink_grpc.ChannelContext` object passed in, but they will open a
# new channel for each RPC call (so twice for each loop iteration) without
# it.
context = starlink_grpc.ChannelContext()
try:
run_loop(context)
finally:
context.close()
if __name__ == "__main__":
main()


@@ -0,0 +1,11 @@
grpcio>=1.12.0
grpcio-tools>=1.20.0
protobuf>=3.6.0
yagrc>=1.1.1
paho-mqtt>=1.5.1
influxdb>=5.3.1
influxdb_client>=1.23.0
pypng>=0.0.20
typing-extensions>=4.3.0
croniter>=1.0.1
python-dateutil>=2.7.0

File diff suppressed because it is too large


@@ -0,0 +1,396 @@
"""Parser for JSON format gRPC output from a Starlink user terminal.
Expects input as from grpcurl get_history request.
Handling output for other request responses may be added in the future, but
the others don't really need as much interpretation as the get_history
response does.
See the starlink_grpc module docstring for descriptions of the stat elements.
"""
import json
import math
import statistics
import sys
from itertools import chain
class JsonError(Exception):
"""Provides error info when something went wrong with JSON parsing."""
def history_bulk_field_names():
"""Return the field names of the bulk history data.
Note:
See `starlink_grpc` module docs regarding brackets in field names.
Returns:
A tuple with 2 lists, the first with general data names, the second
with bulk history data names.
"""
return [
"samples",
"end_counter",
], [
"pop_ping_drop_rate[]",
"pop_ping_latency_ms[]",
"downlink_throughput_bps[]",
"uplink_throughput_bps[]",
"snr[]",
"scheduled[]",
"obstructed[]",
]
def history_ping_field_names():
"""Deprecated. Use history_stats_field_names instead."""
return history_stats_field_names()[0:3]
def history_stats_field_names():
"""Return the field names of the packet loss stats.
Note:
See `starlink_grpc` module docs regarding brackets in field names.
Returns:
A tuple with 6 lists, with general data names, ping drop stat names,
ping drop run length stat names, ping latency stat names, loaded ping
latency stat names, and bandwidth usage stat names, in that order.
Note:
Additional lists may be added to this tuple in the future with
additional data groups, so it is not recommended for the caller to
assume exactly 6 elements.
"""
return [
"samples",
"end_counter",
], [
"total_ping_drop",
"count_full_ping_drop",
"count_obstructed",
"total_obstructed_ping_drop",
"count_full_obstructed_ping_drop",
"count_unscheduled",
"total_unscheduled_ping_drop",
"count_full_unscheduled_ping_drop",
], [
"init_run_fragment",
"final_run_fragment",
"run_seconds[1,61]",
"run_minutes[1,61]",
], [
"mean_all_ping_latency",
"deciles_all_ping_latency[11]",
"mean_full_ping_latency",
"deciles_full_ping_latency[11]",
"stdev_full_ping_latency",
], [
"load_bucket_samples[15]",
"load_bucket_min_latency[15]",
"load_bucket_median_latency[15]",
"load_bucket_max_latency[15]",
], [
"download_usage",
"upload_usage",
]
def get_history(filename):
"""Read JSON data and return the raw history in dict format.
Args:
filename (str): Filename from which to read JSON data, or "-" to read
from standard input.
Raises:
Various exceptions depending on Python version: Failure to open or
read input or invalid JSON read on input.
"""
if filename == "-":
json_data = json.load(sys.stdin)
else:
with open(filename) as json_file:
json_data = json.load(json_file)
return json_data["dishGetHistory"]
def _compute_sample_range(history, parse_samples, verbose=False):
current = int(history["current"])
samples = len(history["popPingDropRate"])
if verbose:
print("current counter: " + str(current))
print("All samples: " + str(samples))
samples = min(samples, current)
if verbose:
print("Valid samples: " + str(samples))
if parse_samples < 0 or samples < parse_samples:
parse_samples = samples
start = current - parse_samples
if start == current:
return range(0), 0, current
# This is ring buffer offset, so both index to oldest data sample and
# index to next data sample after the newest one.
end_offset = current % samples
start_offset = start % samples
# Set the range for the requested set of samples. This will iterate
# sample index in order from oldest to newest.
if start_offset < end_offset:
sample_range = range(start_offset, end_offset)
else:
sample_range = chain(range(start_offset, samples), range(0, end_offset))
return sample_range, current - start, current
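Under the assumptions above (a ring buffer of length `samples`, a monotonically increasing `current` counter, and a non-negative sample request), the wrap-around indexing can be exercised in isolation; a standalone sketch, not the module's own function:

```python
from itertools import chain

def ring_range(current, samples, parse_samples):
    """Sketch of the ring-buffer indexing in _compute_sample_range:
    current % samples is both the oldest slot and the slot after the
    newest one. Assumes parse_samples >= 0."""
    parse_samples = min(parse_samples, samples, current)
    start = current - parse_samples
    if start == current:
        return []
    end_offset = current % samples
    start_offset = start % samples
    if start_offset < end_offset:
        return list(range(start_offset, end_offset))
    # Wrapped: iterate the tail of the buffer, then the head.
    return list(chain(range(start_offset, samples), range(0, end_offset)))

# A 10-slot buffer whose counter has wrapped: ask for the last 5 samples.
print(ring_range(13, 10, 5))  # [8, 9, 0, 1, 2]
```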
def history_bulk_data(filename, parse_samples, verbose=False):
"""Fetch history data for a range of samples.
Args:
filename (str): Filename from which to read JSON data, or "-" to read
from standard input.
parse_samples (int): Number of samples to process, or -1 to parse all
available samples.
verbose (bool): Optionally produce verbose output.
Returns:
A tuple with 2 dicts, the first mapping general data names to their
values and the second mapping bulk history data names to their values.
Note: The field names in the returned data do _not_ include brackets
to indicate sequences, since those would just need to be parsed
out. The general data is all single items and the bulk history
data is all sequences.
Raises:
JsonError: Failure to open, read, or parse JSON on input.
"""
try:
history = get_history(filename)
except ValueError as e:
raise JsonError("Failed to parse JSON: " + str(e))
except Exception as e:
raise JsonError(e)
sample_range, parsed_samples, current = _compute_sample_range(history,
parse_samples,
verbose=verbose)
pop_ping_drop_rate = []
pop_ping_latency_ms = []
downlink_throughput_bps = []
uplink_throughput_bps = []
for i in sample_range:
pop_ping_drop_rate.append(history["popPingDropRate"][i])
pop_ping_latency_ms.append(
history["popPingLatencyMs"][i] if history["popPingDropRate"][i] < 1 else None)
downlink_throughput_bps.append(history["downlinkThroughputBps"][i])
uplink_throughput_bps.append(history["uplinkThroughputBps"][i])
return {
"samples": parsed_samples,
"end_counter": current,
}, {
"pop_ping_drop_rate": pop_ping_drop_rate,
"pop_ping_latency_ms": pop_ping_latency_ms,
"downlink_throughput_bps": downlink_throughput_bps,
"uplink_throughput_bps": uplink_throughput_bps,
"snr": [None] * parsed_samples, # obsoleted in grpc service
"scheduled": [None] * parsed_samples, # obsoleted in grpc service
"obstructed": [None] * parsed_samples, # obsoleted in grpc service
}
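As the docstring notes, the keys in the returned bulk-data dict drop the `[]` suffix that `history_bulk_field_names` advertises; a small helper for mapping between the two forms (illustrative, not part of this module):

```python
def strip_brackets(field_name):
    """Map an advertised field name such as 'snr[]' or 'run_seconds[1,61]'
    to the bare key used in the bulk-data dict."""
    return field_name.split("[", 1)[0]

print(strip_brackets("pop_ping_drop_rate[]"))  # pop_ping_drop_rate
print(strip_brackets("run_seconds[1,61]"))     # run_seconds
print(strip_brackets("samples"))               # samples
```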
def history_ping_stats(filename, parse_samples, verbose=False):
"""Deprecated. Use history_stats instead."""
return history_stats(filename, parse_samples, verbose=verbose)[0:3]
def history_stats(filename, parse_samples, verbose=False):
"""Fetch, parse, and compute ping and usage stats.
Args:
filename (str): Filename from which to read JSON data, or "-" to read
from standard input.
parse_samples (int): Number of samples to process, or -1 to parse all
available samples.
verbose (bool): Optionally produce verbose output.
Returns:
A tuple with 6 dicts, mapping general data names, ping drop stat
names, ping drop run length stat names, ping latency stat names,
loaded ping latency stat names, and bandwidth usage stat names to
their respective values, in that order.
Note:
Additional dicts may be added to this tuple in the future with
additional data groups, so it is not recommended for the caller to
assume exactly 6 elements.
Raises:
JsonError: Failure to open, read, or parse JSON on input.
"""
try:
history = get_history(filename)
except ValueError as e:
raise JsonError("Failed to parse JSON: " + str(e))
except Exception as e:
raise JsonError(e)
sample_range, parsed_samples, current = _compute_sample_range(history,
parse_samples,
verbose=verbose)
tot = 0.0
count_full_drop = 0
count_unsched = 0
total_unsched_drop = 0.0
count_full_unsched = 0
count_obstruct = 0
total_obstruct_drop = 0.0
count_full_obstruct = 0
second_runs = [0] * 60
minute_runs = [0] * 60
run_length = 0
init_run_length = None
usage_down = 0.0
usage_up = 0.0
rtt_full = []
rtt_all = []
rtt_buckets = [[] for _ in range(15)]
for i in sample_range:
d = history["popPingDropRate"][i]
if d >= 1:
# just in case...
d = 1
count_full_drop += 1
run_length += 1
elif run_length > 0:
if init_run_length is None:
init_run_length = run_length
else:
if run_length <= 60:
second_runs[run_length - 1] += run_length
else:
minute_runs[min((run_length-1) // 60 - 1, 59)] += run_length
run_length = 0
elif init_run_length is None:
init_run_length = 0
tot += d
down = history["downlinkThroughputBps"][i]
usage_down += down
up = history["uplinkThroughputBps"][i]
usage_up += up
rtt = history["popPingLatencyMs"][i]
# note that "full" here means the opposite of ping drop full
if d == 0.0:
rtt_full.append(rtt)
if down + up > 500000:
rtt_buckets[min(14, int(math.log2((down+up) / 500000)))].append(rtt)
else:
rtt_buckets[0].append(rtt)
if d < 1.0:
rtt_all.append((rtt, 1.0 - d))
# If the entire sample set is one big drop run, it will be both initial
# fragment (continued from prior sample range) and final one (continued
# to next sample range), but to avoid double-reporting, just call it
# the initial run.
if init_run_length is None:
init_run_length = run_length
run_length = 0
def weighted_mean_and_quantiles(data, n):
if not data:
return None, [None] * (n+1)
total_weight = sum(x[1] for x in data)
result = []
items = iter(data)
value, accum_weight = next(items)
accum_value = value * accum_weight
for boundary in (total_weight * x / n for x in range(n)):
while accum_weight < boundary:
try:
value, weight = next(items)
accum_value += value * weight
accum_weight += weight
except StopIteration:
# shouldn't happen, but in case of float precision weirdness...
break
result.append(value)
result.append(data[-1][0])
accum_value += sum(x[0] for x in items)
return accum_value / total_weight, result
bucket_samples = []
bucket_min = []
bucket_median = []
bucket_max = []
for bucket in rtt_buckets:
if bucket:
bucket_samples.append(len(bucket))
bucket_min.append(min(bucket))
bucket_median.append(statistics.median(bucket))
bucket_max.append(max(bucket))
else:
bucket_samples.append(0)
bucket_min.append(None)
bucket_median.append(None)
bucket_max.append(None)
rtt_all.sort(key=lambda x: x[0])
wmean_all, wdeciles_all = weighted_mean_and_quantiles(rtt_all, 10)
rtt_full.sort()
mean_full, deciles_full = weighted_mean_and_quantiles(tuple((x, 1.0) for x in rtt_full), 10)
return {
"samples": parsed_samples,
"end_counter": current,
}, {
"total_ping_drop": tot,
"count_full_ping_drop": count_full_drop,
"count_obstructed": count_obstruct,
"total_obstructed_ping_drop": total_obstruct_drop,
"count_full_obstructed_ping_drop": count_full_obstruct,
"count_unscheduled": count_unsched,
"total_unscheduled_ping_drop": total_unsched_drop,
"count_full_unscheduled_ping_drop": count_full_unsched,
}, {
"init_run_fragment": init_run_length,
"final_run_fragment": run_length,
"run_seconds[1,]": second_runs,
"run_minutes[1,]": minute_runs,
}, {
"mean_all_ping_latency": wmean_all,
"deciles_all_ping_latency[]": wdeciles_all,
"mean_full_ping_latency": mean_full,
"deciles_full_ping_latency[]": deciles_full,
"stdev_full_ping_latency": statistics.pstdev(rtt_full) if rtt_full else None,
}, {
"load_bucket_samples[]": bucket_samples,
"load_bucket_min_latency[]": bucket_min,
"load_bucket_median_latency[]": bucket_median,
"load_bucket_max_latency[]": bucket_max,
}, {
"download_usage": int(round(usage_down / 8)),
"upload_usage": int(round(usage_up / 8)),
}
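The weighted statistics above reduce to their unweighted counterparts when every weight is 1.0, which makes for a quick sanity check; this sketch reimplements only the weighted mean, not the decile logic:

```python
import statistics

def weighted_mean(pairs):
    """Mean of (value, weight) pairs, as used for rtt_all above."""
    total_weight = sum(w for _, w in pairs)
    return sum(v * w for v, w in pairs) / total_weight

samples = [20.0, 25.0, 30.0, 45.0]
uniform = [(v, 1.0) for v in samples]
# With all weights 1.0 this matches the ordinary mean.
assert abs(weighted_mean(uniform) - statistics.mean(samples)) < 1e-9

# Down-weighting a partially dropped sample pulls the mean toward the
# fully received ones, as with the (rtt, 1.0 - d) pairs in history_stats.
mixed = [(20.0, 1.0), (100.0, 0.1)]
print(weighted_mean(mixed))  # (20 + 10) / 1.1, about 27.27
```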


@@ -0,0 +1,12 @@
[Unit]
Description=Starlink GRPC to InfluxDB 2.x exporter
After=network.target
[Service]
Type=simple
WorkingDirectory=/opt/starlink-grpc-tools/
Environment=INFLUXDB_URL=http://localhost:8086 INFLUXDB_TOKEN=<changeme> INFLUXDB_BUCKET=<changeme> INFLUXDB_ORG=<changeme>
ExecStart=/opt/starlink-grpc-tools/venv/bin/python3 dish_grpc_influx2.py -t 10 status alert_detail
[Install]
WantedBy=multi-user.target
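`Environment=` names are case-sensitive and must match exactly what the script reads (conventionally upper-case, e.g. `INFLUXDB_BUCKET`). A hedged sketch of how such settings are typically picked up on the Python side; the defaults and variable handling here are illustrative, not the exporter's actual parsing:

```python
import os

def influx2_config():
    """Read InfluxDB 2.x settings from the environment; the defaults
    and the 'starlink' bucket name are illustrative assumptions."""
    return {
        "url": os.environ.get("INFLUXDB_URL", "http://localhost:8086"),
        "token": os.environ["INFLUXDB_TOKEN"],  # no sensible default: fail loudly
        "bucket": os.environ.get("INFLUXDB_BUCKET", "starlink"),
        "org": os.environ.get("INFLUXDB_ORG", ""),
    }

os.environ.setdefault("INFLUXDB_TOKEN", "example-token")
print(influx2_config()["url"])
```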


@@ -0,0 +1,12 @@
[Unit]
Description=Starlink GRPC to MQTT exporter
After=network.target
[Service]
Type=simple
WorkingDirectory=/opt/starlink-grpc-tools/
Environment=MQTT_HOST=localhost MQTT_PORT=1883 MQTT_USERNAME=<changeme> MQTT_PASSWORD=<changeme> MQTT_SSL=false
ExecStart=/opt/starlink-grpc-tools/venv/bin/python3 dish_grpc_mqtt.py -t 10 status alert_detail
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,12 @@
[Unit]
Description=Starlink GRPC to Prometheus exporter
After=network.target
[Service]
Type=simple
WorkingDirectory=/opt/starlink-grpc-tools/
ExecStart=/opt/starlink-grpc-tools/venv/bin/python3 dish_grpc_prometheus.py status alert_detail usage location power
KillSignal=SIGINT
[Install]
WantedBy=multi-user.target
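`KillSignal=SIGINT` matters because Python's default SIGINT handling raises `KeyboardInterrupt`, giving the process a chance to flush and exit cleanly on `systemctl stop`. A minimal sketch of that shutdown path (the loop here is illustrative, not what `dish_grpc_prometheus.py` necessarily does):

```python
def run_loop():
    """Poll until interrupted; SIGINT from systemd surfaces as
    KeyboardInterrupt, so the except clause is the clean-exit path."""
    try:
        while True:
            raise KeyboardInterrupt  # stands in for `systemctl stop`
    except KeyboardInterrupt:
        return "clean shutdown"

print(run_loop())  # clean shutdown
```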


@@ -0,0 +1,4 @@
"""
Storage modules for GNSS Guard
"""


@@ -0,0 +1,286 @@
#!/usr/bin/env python3
"""
Cleanup manager for GNSS Guard
Handles cleanup of database tables and log files
"""
import logging
import sqlite3
from datetime import datetime, timedelta
from pathlib import Path
from typing import Optional
logger = logging.getLogger("gnss_guard.cleanup")
class CleanupManager:
"""Manages cleanup of old data from database and logs"""
def __init__(
self,
database_path: Path,
logs_base_path: Path,
positions_raw_retention_days: int = 14,
positions_validation_retention_days: int = 31,
logs_retention_days: int = 14,
demo_mode: bool = False
):
"""
Initialize cleanup manager
Args:
database_path: Path to SQLite database file
logs_base_path: Base path for logs directory
positions_raw_retention_days: Days to retain positions_raw records (default: 14)
positions_validation_retention_days: Days to retain positions_validation records (default: 31)
logs_retention_days: Days to retain log files (default: 14)
demo_mode: If True, skip database cleanup (data isn't growing in demo mode)
"""
self.database_path = Path(database_path)
self.logs_base_path = Path(logs_base_path)
self.positions_raw_retention_days = positions_raw_retention_days
self.positions_validation_retention_days = positions_validation_retention_days
self.logs_retention_days = logs_retention_days
self.demo_mode = demo_mode
self._last_cleanup_date: Optional[str] = None
def run_cleanup_if_needed(self):
"""Run cleanup once per day (checks if already ran today)
In demo mode, only log cleanup runs (database cleanup is skipped
since data isn't growing - records are created and deleted in demo mode).
"""
today = datetime.now().strftime("%Y-%m-%d")
if self._last_cleanup_date == today:
return # Already ran today
# In demo mode, skip database cleanup entirely but still clean logs
if self.demo_mode:
logger.info("Demo mode: skipping database cleanup (data not growing)")
try:
files_deleted, dirs_deleted = self._cleanup_logs()
self._last_cleanup_date = today
if files_deleted > 0 or dirs_deleted > 0:
logger.info(
f"Demo mode cleanup completed: "
f"{files_deleted} log files, "
f"{dirs_deleted} empty directories"
)
except Exception as e:
logger.error(f"Demo mode log cleanup failed: {e}")
return
logger.info("Starting daily cleanup...")
try:
raw_deleted = self._cleanup_positions_raw()
validation_deleted = self._cleanup_positions_validation()
files_deleted, dirs_deleted = self._cleanup_logs()
# Optimize database after cleanup (VACUUM reclaims space, ANALYZE updates statistics)
space_saved = self._optimize_database()
self._last_cleanup_date = today
logger.info(
f"Daily cleanup completed: "
f"{raw_deleted} raw positions, "
f"{validation_deleted} validations, "
f"{files_deleted} log files, "
f"{dirs_deleted} empty directories"
f"{f', {space_saved}' if space_saved else ''}"
)
except Exception as e:
logger.error(f"Cleanup failed: {e}")
def _cleanup_positions_raw(self) -> int:
"""
Delete positions_raw records older than retention period
Returns:
Number of records deleted
"""
cutoff_timestamp = (
datetime.now() - timedelta(days=self.positions_raw_retention_days)
).timestamp()
deleted_count = 0
try:
conn = sqlite3.connect(str(self.database_path), timeout=30.0)
cursor = conn.cursor()
# Count before delete
cursor.execute(
"SELECT COUNT(*) FROM positions_raw WHERE timestamp_unix < ?",
(cutoff_timestamp,)
)
deleted_count = cursor.fetchone()[0]
if deleted_count > 0:
cursor.execute(
"DELETE FROM positions_raw WHERE timestamp_unix < ?",
(cutoff_timestamp,)
)
conn.commit()
logger.info(
f"Cleaned up {deleted_count} positions_raw records "
f"(> {self.positions_raw_retention_days} days)"
)
conn.close()
except Exception as e:
logger.error(f"Failed to cleanup positions_raw: {e}")
return deleted_count
def _cleanup_positions_validation(self) -> int:
"""
Delete positions_validation records older than retention period
Returns:
Number of records deleted
"""
cutoff_timestamp = (
datetime.now() - timedelta(days=self.positions_validation_retention_days)
).timestamp()
deleted_count = 0
try:
conn = sqlite3.connect(str(self.database_path), timeout=30.0)
cursor = conn.cursor()
# Count before delete
cursor.execute(
"SELECT COUNT(*) FROM positions_validation WHERE validation_timestamp_unix < ?",
(cutoff_timestamp,)
)
deleted_count = cursor.fetchone()[0]
if deleted_count > 0:
cursor.execute(
"DELETE FROM positions_validation WHERE validation_timestamp_unix < ?",
(cutoff_timestamp,)
)
conn.commit()
logger.info(
f"Cleaned up {deleted_count} positions_validation records "
f"(> {self.positions_validation_retention_days} days)"
)
conn.close()
except Exception as e:
logger.error(f"Failed to cleanup positions_validation: {e}")
return deleted_count
def _cleanup_logs(self) -> tuple:
"""
Delete log files and empty directories older than retention period
Returns:
Tuple of (files_deleted, directories_deleted)
"""
cutoff_timestamp = (
datetime.now() - timedelta(days=self.logs_retention_days)
).timestamp()
deleted_files = 0
deleted_dirs = 0
try:
if not self.logs_base_path.exists():
return (0, 0)
# Delete old log files
for log_file in self.logs_base_path.rglob("app_*.json"):
try:
if log_file.stat().st_mtime < cutoff_timestamp:
log_file.unlink()
deleted_files += 1
except Exception as e:
logger.debug(f"Failed to delete log file {log_file}: {e}")
# Clean up empty directories (must iterate multiple times for nested dirs)
# Sort by path length descending to delete deepest first
all_dirs = sorted(
[d for d in self.logs_base_path.rglob("*") if d.is_dir()],
key=lambda p: len(str(p)),
reverse=True
)
for dir_path in all_dirs:
try:
# Only delete if empty
if not any(dir_path.iterdir()):
dir_path.rmdir()
deleted_dirs += 1
except Exception:
pass # Directory not empty or other error
if deleted_files > 0 or deleted_dirs > 0:
logger.info(
f"Cleaned up {deleted_files} log files and "
f"{deleted_dirs} empty directories "
f"(> {self.logs_retention_days} days)"
)
except Exception as e:
logger.error(f"Failed to cleanup logs: {e}")
return (deleted_files, deleted_dirs)
def _optimize_database(self) -> str:
"""
Optimize database after cleanup operations.
Runs VACUUM to reclaim disk space from deleted records and
ANALYZE to update query planner statistics.
Returns:
String describing space saved, or empty string if no optimization needed
"""
try:
# Get database size before optimization
size_before = self.database_path.stat().st_size if self.database_path.exists() else 0
conn = sqlite3.connect(str(self.database_path), timeout=60.0)
cursor = conn.cursor()
# ANALYZE updates statistics used by the query planner
cursor.execute("ANALYZE")
# VACUUM rebuilds the database file, reclaiming unused space
# Note: VACUUM requires exclusive access and can't run inside a transaction
cursor.execute("VACUUM")
conn.close()
# Get database size after optimization
size_after = self.database_path.stat().st_size if self.database_path.exists() else 0
# Calculate space saved
space_saved = size_before - size_after
if space_saved > 0:
# Format size for logging
if space_saved >= 1024 * 1024:
saved_str = f"{space_saved / (1024 * 1024):.1f} MB"
elif space_saved >= 1024:
saved_str = f"{space_saved / 1024:.1f} KB"
else:
saved_str = f"{space_saved} bytes"
logger.info(f"Database optimized: reclaimed {saved_str}")
return f"reclaimed {saved_str}"
else:
logger.debug("Database optimized (no space reclaimed)")
return ""
except Exception as e:
logger.error(f"Failed to optimize database: {e}")
return ""
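The deepest-first ordering used by `_cleanup_logs` (directories sorted by path length, descending) guarantees a child is removed before its parent, so a directory that held only empty subdirectories is itself removed in the same pass. The same idea in isolation:

```python
import tempfile
from pathlib import Path

def remove_empty_dirs(base: Path) -> int:
    """Delete empty directories under base, deepest paths first."""
    removed = 0
    for d in sorted((p for p in base.rglob("*") if p.is_dir()),
                    key=lambda p: len(str(p)), reverse=True):
        if not any(d.iterdir()):  # only delete if empty
            d.rmdir()
            removed += 1
    return removed

with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "a" / "b" / "c").mkdir(parents=True)
    n = remove_empty_dirs(Path(tmp))
    print(n)  # 3: c is removed first, which empties b, which empties a
```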


@@ -0,0 +1,316 @@
#!/usr/bin/env python3
"""
SQLite database storage for GNSS Guard
Manages positions_raw and positions_validation tables
"""
import json
import logging
import sqlite3
import time
from datetime import datetime, timezone
from pathlib import Path
from typing import Dict, Any, Optional, List
logger = logging.getLogger("gnss_guard.database")
class Database:
"""SQLite database manager for GNSS Guard"""
def __init__(self, database_path: Path):
"""
Initialize database
Args:
database_path: Path to SQLite database file
"""
self.database_path = Path(database_path)
self.database_path.parent.mkdir(parents=True, exist_ok=True)
self._init_database()
def _init_database(self):
"""Initialize database schema and configure SQLite for optimal performance"""
try:
conn = sqlite3.connect(str(self.database_path), check_same_thread=False)
cursor = conn.cursor()
# Configure SQLite for better performance and concurrency
# WAL mode allows concurrent reads during writes
cursor.execute("PRAGMA journal_mode=WAL")
# Set busy timeout to 30 seconds (in milliseconds)
cursor.execute("PRAGMA busy_timeout=30000")
# NORMAL synchronous is faster than FULL while still being safe with WAL
cursor.execute("PRAGMA synchronous=NORMAL")
# Enable foreign key constraints (good practice)
cursor.execute("PRAGMA foreign_keys=ON")
# Create positions_raw table
cursor.execute(
"""
CREATE TABLE IF NOT EXISTS positions_raw (
id INTEGER PRIMARY KEY AUTOINCREMENT,
source TEXT NOT NULL,
timestamp TEXT NOT NULL,
timestamp_unix REAL NOT NULL,
latitude REAL,
longitude REAL,
altitude REAL,
position_uncertainty_m REAL,
supplementary_data TEXT,
created_at REAL NOT NULL,
UNIQUE(source, timestamp_unix)
)
"""
)
# Add position_uncertainty_m column if it doesn't exist (migration for existing databases)
try:
cursor.execute("ALTER TABLE positions_raw ADD COLUMN position_uncertainty_m REAL")
except sqlite3.OperationalError:
# Column already exists, ignore
pass
# Create positions_validation table
cursor.execute(
"""
CREATE TABLE IF NOT EXISTS positions_validation (
id INTEGER PRIMARY KEY AUTOINCREMENT,
validation_timestamp TEXT NOT NULL,
validation_timestamp_unix REAL NOT NULL,
is_valid INTEGER NOT NULL,
sources_missing TEXT,
sources_stale TEXT,
coordinate_differences TEXT,
source_coordinates TEXT,
validation_details TEXT,
created_at REAL NOT NULL
)
"""
)
# Create indexes
cursor.execute(
"""
CREATE INDEX IF NOT EXISTS idx_positions_raw_source_timestamp
ON positions_raw(source, timestamp_unix DESC)
"""
)
cursor.execute(
"""
CREATE INDEX IF NOT EXISTS idx_positions_validation_timestamp
ON positions_validation(validation_timestamp_unix DESC)
"""
)
conn.commit()
conn.close()
logger.info(f"Database initialized at {self.database_path}")
except Exception as e:
logger.error(f"Failed to initialize database: {e}")
raise
def store_position(self, position: Dict[str, Any]) -> bool:
"""
Store or update a position in positions_raw table
Args:
position: Dictionary with position data (source, latitude, longitude, etc.)
Returns:
True if successful, False otherwise
"""
try:
conn = sqlite3.connect(str(self.database_path), check_same_thread=False, timeout=5.0)
cursor = conn.cursor()
# Use INSERT OR REPLACE to update latest position per source
cursor.execute(
"""
INSERT OR REPLACE INTO positions_raw
(source, timestamp, timestamp_unix, latitude, longitude, altitude, position_uncertainty_m, supplementary_data, created_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
(
position.get("source"),
position.get("timestamp"),
position.get("timestamp_unix"),
position.get("latitude"),
position.get("longitude"),
position.get("altitude"),
position.get("position_uncertainty_m"),
json.dumps(position.get("supplementary_data", {})),
time.time(),
),
)
conn.commit()
conn.close()
return True
except sqlite3.OperationalError as e:
if "database is locked" in str(e):
# Retry once after short delay
time.sleep(0.01)
try:
conn = sqlite3.connect(str(self.database_path), check_same_thread=False, timeout=5.0)
cursor = conn.cursor()
cursor.execute(
"""
INSERT OR REPLACE INTO positions_raw
(source, timestamp, timestamp_unix, latitude, longitude, altitude, position_uncertainty_m, supplementary_data, created_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
(
position.get("source"),
position.get("timestamp"),
position.get("timestamp_unix"),
position.get("latitude"),
position.get("longitude"),
position.get("altitude"),
position.get("position_uncertainty_m"),
json.dumps(position.get("supplementary_data", {})),
time.time(),
),
)
conn.commit()
conn.close()
return True
except Exception:
pass
logger.error(f"Failed to store position: {e}")
return False
except Exception as e:
logger.error(f"Failed to store position: {e}")
return False
def store_validation(self, validation_result: Dict[str, Any]) -> bool:
"""
Store validation result in positions_validation table
Args:
validation_result: Dictionary with validation data
Returns:
True if successful, False otherwise
"""
try:
conn = sqlite3.connect(str(self.database_path), check_same_thread=False, timeout=5.0)
cursor = conn.cursor()
cursor.execute(
"""
INSERT INTO positions_validation
(validation_timestamp, validation_timestamp_unix, is_valid, sources_missing,
sources_stale, coordinate_differences, source_coordinates, validation_details, created_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
(
validation_result.get("validation_timestamp"),
validation_result.get("validation_timestamp_unix"),
1 if validation_result.get("is_valid") else 0,
json.dumps(validation_result.get("sources_missing", [])),
json.dumps(validation_result.get("sources_stale", [])),
json.dumps(validation_result.get("coordinate_differences", {})),
json.dumps(validation_result.get("source_coordinates", {})),
json.dumps(validation_result.get("validation_details", {})),
time.time(),
),
)
conn.commit()
conn.close()
return True
except Exception as e:
logger.error(f"Failed to store validation result: {e}")
return False
def get_latest_positions(self) -> Dict[str, Dict[str, Any]]:
"""
Get latest position for each source
Returns:
Dictionary mapping source names to their latest positions
"""
try:
conn = sqlite3.connect(str(self.database_path), check_same_thread=False, timeout=5.0)
cursor = conn.cursor()
cursor.execute(
"""
SELECT source, timestamp, timestamp_unix, latitude, longitude, altitude, position_uncertainty_m, supplementary_data
FROM positions_raw
WHERE (source, timestamp_unix) IN (
SELECT source, MAX(timestamp_unix)
FROM positions_raw
GROUP BY source
)
"""
)
positions = {}
for row in cursor.fetchall():
source, timestamp, timestamp_unix, lat, lon, alt, pos_uncertainty, supp_data = row
positions[source] = {
"source": source,
"timestamp": timestamp,
"timestamp_unix": timestamp_unix,
"latitude": lat,
"longitude": lon,
"altitude": alt,
"position_uncertainty_m": pos_uncertainty,
"supplementary_data": json.loads(supp_data) if supp_data else {},
}
conn.close()
return positions
except Exception as e:
logger.error(f"Failed to get latest positions: {e}")
return {}
def get_latest_validation(self) -> Optional[Dict[str, Any]]:
"""
Get the most recent validation result from the database.
Used to restore state after app restart.
Returns:
Dictionary with validation data or None if not found
"""
try:
conn = sqlite3.connect(str(self.database_path), check_same_thread=False, timeout=5.0)
cursor = conn.cursor()
cursor.execute(
"""
SELECT validation_timestamp, validation_timestamp_unix, is_valid,
sources_missing, sources_stale, coordinate_differences,
source_coordinates, validation_details
FROM positions_validation
ORDER BY validation_timestamp_unix DESC
LIMIT 1
"""
)
row = cursor.fetchone()
conn.close()
if row:
return {
"validation_timestamp": row[0],
"validation_timestamp_unix": row[1],
"is_valid": row[2] == 1,
"sources_missing": json.loads(row[3]) if row[3] else [],
"sources_stale": json.loads(row[4]) if row[4] else [],
"coordinate_differences": json.loads(row[5]) if row[5] else {},
"source_coordinates": json.loads(row[6]) if row[6] else {},
"validation_details": json.loads(row[7]) if row[7] else {},
}
return None
except Exception as e:
logger.error(f"Failed to get latest validation: {e}")
return None
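The `(source, MAX(timestamp_unix))` subquery in `get_latest_positions` relies on SQLite's row-value `IN` support (available since SQLite 3.15); a minimal in-memory check of the pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE positions_raw (source TEXT, timestamp_unix REAL, latitude REAL)"
)
conn.executemany(
    "INSERT INTO positions_raw VALUES (?, ?, ?)",
    [("gpsd", 100.0, 1.0), ("gpsd", 200.0, 2.0), ("starlink", 150.0, 3.0)],
)
# Keep only the newest row per source, exactly as get_latest_positions does.
rows = conn.execute(
    """
    SELECT source, timestamp_unix, latitude FROM positions_raw
    WHERE (source, timestamp_unix) IN (
        SELECT source, MAX(timestamp_unix) FROM positions_raw GROUP BY source
    )
    """
).fetchall()
conn.close()
print(sorted(rows))  # [('gpsd', 200.0, 2.0), ('starlink', 150.0, 3.0)]
```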


@@ -0,0 +1,156 @@
#!/usr/bin/env python3
"""
Structured JSON logging for GNSS Guard
Logs to date-based folders with daily rotation and cleanup
"""
import json
import logging
import time
from datetime import datetime, timedelta
from pathlib import Path
from typing import Dict, Any, Optional
logger = logging.getLogger("gnss_guard.logger")
class StructuredLogger:
"""Structured JSON logger with date-based folders"""
def __init__(self, logs_base_path: Path, retention_days: int = 14):
"""
Initialize structured logger
Args:
logs_base_path: Base path for logs directory
retention_days: Number of days to retain logs
"""
self.logs_base_path = Path(logs_base_path)
self.retention_days = retention_days
self.current_log_file: Optional[Path] = None
self.current_date: Optional[str] = None
self.log_file_handle = None
self._closed = False
def _get_log_path(self, date: datetime) -> Path:
"""Get log file path for a given date"""
year = date.strftime("%Y")
month = date.strftime("%m")
day = date.strftime("%d")
date_str = date.strftime("%Y-%m-%d")
log_dir = self.logs_base_path / year / month / day
log_dir.mkdir(parents=True, exist_ok=True)
return log_dir / f"app_{date_str}.json"
def _ensure_log_file(self):
"""Ensure log file is open for current date"""
today = datetime.now()
today_str = today.strftime("%Y-%m-%d")
if self.current_date != today_str or self.current_log_file is None:
# Close previous file if open
if self.log_file_handle:
self.log_file_handle.close()
self.log_file_handle = None
# Cleanup old logs
self._cleanup_old_logs()
# Open new log file
self.current_log_file = self._get_log_path(today)
self.current_date = today_str
# Open file in append mode
self.log_file_handle = open(self.current_log_file, "a")
logger.info(f"Opened log file: {self.current_log_file}")
def _cleanup_old_logs(self):
"""Delete log files older than retention_days"""
try:
cutoff_date = datetime.now() - timedelta(days=self.retention_days)
cutoff_timestamp = cutoff_date.timestamp()
deleted_count = 0
# Walk through all log directories
if self.logs_base_path.exists():
for log_file in self.logs_base_path.rglob("app_*.json"):
try:
if log_file.stat().st_mtime < cutoff_timestamp:
log_file.unlink()
deleted_count += 1
except Exception as e:
logger.debug(f"Failed to delete old log file {log_file}: {e}")
if deleted_count > 0:
logger.info(f"Cleaned up {deleted_count} old log file(s) (> {self.retention_days} days)")
except Exception as e:
logger.error(f"Error during log cleanup: {e}")
def log(self, level: str, source: str, message: str, data: Optional[Dict[str, Any]] = None):
"""
Write structured log entry
Args:
level: Log level (INFO, WARNING, ERROR, DEBUG)
source: Source identifier
message: Log message
data: Optional additional data dictionary
"""
try:
# Don't write if logger is explicitly closed
if self._closed:
return
# Ensure log file is open
self._ensure_log_file()
# Check if file handle is still None (shouldn't happen, but be safe)
if self.log_file_handle is None:
logger.warning("Cannot write log entry: logger file handle is None")
return
log_entry = {
"timestamp": datetime.now().isoformat(),
"level": level,
"source": source,
"message": message,
}
if data:
log_entry["data"] = data
# Write as JSON line
json_line = json.dumps(log_entry, separators=(",", ":"))
self.log_file_handle.write(json_line + "\n")
self.log_file_handle.flush()
except Exception as e:
logger.error(f"Failed to write log entry: {e}")
def info(self, source: str, message: str, data: Optional[Dict[str, Any]] = None):
"""Log info message"""
self.log("INFO", source, message, data)
def warning(self, source: str, message: str, data: Optional[Dict[str, Any]] = None):
"""Log warning message"""
self.log("WARNING", source, message, data)
def error(self, source: str, message: str, data: Optional[Dict[str, Any]] = None):
"""Log error message"""
self.log("ERROR", source, message, data)
def debug(self, source: str, message: str, data: Optional[Dict[str, Any]] = None):
"""Log debug message"""
self.log("DEBUG", source, message, data)
def close(self):
"""Close log file handle"""
self._closed = True
if self.log_file_handle:
self.log_file_handle.close()
self.log_file_handle = None
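Each entry is written as one compact JSON object per line, so reading a log file back is just line-by-line parsing; a round trip of that format:

```python
import io
import json

# Write two entries the way StructuredLogger.log does (compact separators,
# one object per line), then parse them back.
buf = io.StringIO()
for entry in ({"level": "INFO", "message": "started"},
              {"level": "ERROR", "message": "fix failed", "data": {"sats": 3}}):
    buf.write(json.dumps(entry, separators=(",", ":")) + "\n")

parsed = [json.loads(line) for line in buf.getvalue().splitlines()]
print(parsed[1]["data"]["sats"])  # 3
```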


@@ -0,0 +1,4 @@
"""
Utility functions for GNSS Guard
"""


@@ -0,0 +1,59 @@
#!/usr/bin/env python3
"""
Distance calculation utilities using Haversine formula
"""
import math
from typing import Optional
def haversine_distance(
lat1: float, lon1: float, lat2: float, lon2: float
) -> Optional[float]:
"""
Calculate the great circle distance between two points on Earth using Haversine formula.
Args:
lat1: Latitude of first point in degrees
lon1: Longitude of first point in degrees
lat2: Latitude of second point in degrees
lon2: Longitude of second point in degrees
Returns:
Distance in meters, or None if calculation fails
"""
try:
# Validate inputs
if not all(isinstance(x, (int, float)) for x in [lat1, lon1, lat2, lon2]):
return None
# Check if coordinates are valid
if not (-90 <= lat1 <= 90) or not (-90 <= lat2 <= 90):
return None
if not (-180 <= lon1 <= 180) or not (-180 <= lon2 <= 180):
return None
# Earth's radius in meters
R = 6371000
# Convert degrees to radians
phi1 = math.radians(lat1)
phi2 = math.radians(lat2)
delta_phi = math.radians(lat2 - lat1)
delta_lambda = math.radians(lon2 - lon1)
# Haversine formula
a = (
math.sin(delta_phi / 2) ** 2
+ math.cos(phi1) * math.cos(phi2) * math.sin(delta_lambda / 2) ** 2
)
c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
# Distance in meters
distance = R * c
return distance
except (ValueError, TypeError, OverflowError):
return None
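As a quick sanity check, the formula above can be exercised standalone; one degree of longitude on the equator should come out near 111.2 km (the inlined copy below mirrors the function for a self-contained run):

```python
import math

def haversine_distance(lat1, lon1, lat2, lon2):
    # Same formula as above: great-circle distance on a sphere of R = 6371 km
    R = 6371000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

# One degree of longitude at the equator: roughly 111.2 km
d = haversine_distance(0.0, 0.0, 0.0, 1.0)
```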


@@ -0,0 +1,386 @@
#!/usr/bin/env python3
"""
Telegram Alert Module for GNSS Guard
Sends alerts to Telegram for GPS validation failures:
- Distance differences exceeding threshold
- Missing GPS sources
- Stale GPS data
"""
import requests
import logging
from datetime import datetime
from typing import Dict, Any, Optional
logger = logging.getLogger("gnss_guard.telegram")
class TelegramAlert:
"""Handle Telegram notifications for GNSS Guard validation alerts."""
def __init__(self, bot_token: str, chat_id: str):
"""
Initialize Telegram bot.
Args:
bot_token: Telegram bot token (from BotFather)
chat_id: Telegram chat/group ID (can be negative for groups)
"""
self.bot_token = bot_token
self.chat_id = chat_id
self.api_url = f"https://api.telegram.org/bot{bot_token}"
@staticmethod
def escape_html(text: str) -> str:
"""
Escape HTML special characters for Telegram HTML parsing.
Args:
text: Text to escape
Returns:
str: Escaped text safe for Telegram HTML
"""
text = str(text)
text = text.replace('&', '&amp;')
text = text.replace('<', '&lt;')
text = text.replace('>', '&gt;')
return text
def send_message(self, message: str, parse_mode: str = "HTML") -> bool:
"""
Send a message to Telegram.
Args:
message: Message text to send
parse_mode: Message formatting (HTML or Markdown)
Returns:
bool: True if successful, False otherwise
"""
try:
url = f"{self.api_url}/sendMessage"
payload = {
"chat_id": self.chat_id,
"text": message,
"parse_mode": parse_mode,
"disable_web_page_preview": True
}
response = requests.post(url, json=payload, timeout=10)
if response.status_code == 200:
return True
else:
logger.error(f"Telegram API error: {response.status_code} - {response.text}")
return False
except Exception as e:
logger.error(f"Failed to send Telegram message: {e}")
return False
def send_validation_alert(
self,
validation_result: Dict[str, Any],
asset_name: str,
positions: Dict[str, Dict[str, Any]]
) -> bool:
"""
Send alert when GPS validation fails.
Args:
validation_result: Validation result dictionary from CoordinateValidator
asset_name: Asset identifier from config
positions: Dictionary of positions from all sources
Returns:
bool: True if successful
"""
timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S UTC")
validation_details = validation_result.get("validation_details", {})
threshold_meters = validation_details.get("threshold_meters", 0)
max_distance_meters = validation_details.get("max_distance_meters", 0)
# Build alert message
message = (
f"🚨 <b>GNSS VALIDATION FAILED</b>\n\n"
f"📍 <b>Asset:</b> {self.escape_html(asset_name)}\n"
f"⏰ <b>Time:</b> {timestamp}\n\n"
)
# Missing sources
missing_sources = validation_result.get("sources_missing", [])
if missing_sources:
missing_list = ", ".join(missing_sources)
message += (
f"❌ <b>Missing Sources:</b> {self.escape_html(missing_list)}\n\n"
)
# Stale sources
stale_sources = validation_result.get("sources_stale", [])
if stale_sources:
stale_list = ", ".join(stale_sources)
message += (
f"⏱️ <b>Stale Sources:</b> {self.escape_html(stale_list)}\n"
f" (Data older than {validation_details.get('stale_threshold_seconds', 60)}s)\n\n"
)
# Distance differences
coordinate_differences = validation_result.get("coordinate_differences", {})
if coordinate_differences:
message += f"📏 <b>Distance Differences:</b>\n"
for pair_key, diff_data in coordinate_differences.items():
source1 = diff_data.get("source1", "unknown")
source2 = diff_data.get("source2", "unknown")
distance_m = diff_data.get("distance_meters", 0)
distance_km = distance_m / 1000.0
threshold_exceeded = "🚨" if distance_m > threshold_meters else ""
message += (
f" {threshold_exceeded} {self.escape_html(source1)} ↔ {self.escape_html(source2)}: "
f"{distance_m:.1f}m ({distance_km:.3f}km)\n"
)
if max_distance_meters > threshold_meters:
message += (
f"\n⚠️ <b>Threshold Exceeded:</b> {max_distance_meters:.1f}m > {threshold_meters:.1f}m\n"
)
message += "\n"
# Source coordinates summary
source_coordinates = validation_result.get("source_coordinates", {})
if source_coordinates:
message += f"📍 <b>Source Coordinates:</b>\n"
for source, coords in source_coordinates.items():
lat = coords.get("latitude", "N/A")
lon = coords.get("longitude", "N/A")
alt = coords.get("altitude")
timestamp_str = coords.get("timestamp", "N/A")
# Truncate timestamp for display
if isinstance(timestamp_str, str) and len(timestamp_str) > 19:
timestamp_str = timestamp_str[:19] + "Z"
alt_str = f"{alt:.1f}m" if alt is not None else "N/A"
message += (
f" • <b>{self.escape_html(source)}</b>\n"
f" Lat: {lat}, Lon: {lon}, Alt: {alt_str}\n"
f" Time: {self.escape_html(str(timestamp_str))}\n\n"
)
# Expected vs found sources
expected_sources = validation_details.get("expected_sources", [])
sources_found = validation_details.get("sources_found", [])
if expected_sources:
message += (
f"📊 <b>Sources:</b> {len(sources_found)}/{len(expected_sources)} found\n"
f" Expected: {', '.join(expected_sources)}\n"
f" Found: {', '.join(sources_found) if sources_found else 'None'}\n"
)
return self.send_message(message)
def send_validation_success(
self,
validation_result: Dict[str, Any],
asset_name: str,
positions: Dict[str, Dict[str, Any]]
) -> bool:
"""
Send notification when GPS validation passes (only if TELEGRAM_SEND_ALL=true).
Args:
validation_result: Validation result dictionary from CoordinateValidator
asset_name: Asset identifier from config
positions: Dictionary of positions from all sources
Returns:
bool: True if successful
"""
timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S UTC")
validation_details = validation_result.get("validation_details", {})
max_distance_meters = validation_details.get("max_distance_meters", 0)
sources_found = validation_details.get("sources_found", [])
# Build success message
message = (
f"✅ <b>GNSS VALIDATION PASSED</b>\n\n"
f"📍 <b>Asset:</b> {self.escape_html(asset_name)}\n"
f"⏰ <b>Time:</b> {timestamp}\n\n"
)
# Source coordinates summary
source_coordinates = validation_result.get("source_coordinates", {})
if source_coordinates:
message += f"📍 <b>Source Coordinates:</b>\n"
for source, coords in source_coordinates.items():
lat = coords.get("latitude", "N/A")
lon = coords.get("longitude", "N/A")
alt = coords.get("altitude")
alt_str = f"{alt:.1f}m" if alt is not None else "N/A"
message += (
f" • <b>{self.escape_html(source)}</b>: "
f"{lat}, {lon}, Alt: {alt_str}\n"
)
message += "\n"
# Distance summary
coordinate_differences = validation_result.get("coordinate_differences", {})
if coordinate_differences:
message += f"📏 <b>Max Distance Difference:</b> {max_distance_meters:.1f}m\n\n"
message += (
f"📊 <b>Sources:</b> {len(sources_found)} active\n"
f" {', '.join(sources_found) if sources_found else 'None'}\n"
)
return self.send_message(message)
def send_error_alert(
self,
error_message: str,
asset_name: str,
error_details: Optional[str] = None
) -> bool:
"""
Send alert when there's an error during validation or data collection.
Args:
error_message: Main error message
asset_name: Asset identifier from config
error_details: Detailed error information
Returns:
bool: True if successful
"""
timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S UTC")
# Escape error message to prevent HTML parsing issues
escaped_error = self.escape_html(error_message)
message = (
f"🔴 <b>GNSS GUARD ERROR</b>\n\n"
f"📍 <b>Asset:</b> {self.escape_html(asset_name)}\n"
f"⏰ <b>Time:</b> {timestamp}\n"
f"❌ <b>Error:</b> {escaped_error}\n"
)
if error_details:
escaped_details = self.escape_html(error_details[:1000])
message += f"\n📋 <b>Details:</b>\n<pre>{escaped_details}</pre>"
return self.send_message(message)
def send_state_change_alert(
self,
asset_name: str,
missing_added: set,
missing_removed: set,
stale_added: set,
stale_removed: set,
threshold_breached: bool,
threshold_was_breached: bool,
max_distance_meters: float,
threshold_meters: float,
source_coordinates: Dict[str, Any]
) -> bool:
"""
Send alert when validation state changes.
Args:
asset_name: Asset identifier from config
missing_added: Sources that became missing
missing_removed: Sources that recovered from missing
stale_added: Sources that became stale
stale_removed: Sources that recovered from stale
threshold_breached: Current threshold breach state
threshold_was_breached: Previous threshold breach state
max_distance_meters: Current max distance between sources
threshold_meters: Configured threshold
source_coordinates: Current source coordinates
Returns:
bool: True if successful
"""
timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S UTC")
# Determine if this is a degradation or recovery
is_degradation = missing_added or stale_added or (threshold_breached and not threshold_was_breached)
is_recovery = missing_removed or stale_removed or (not threshold_breached and threshold_was_breached)
if is_degradation and not is_recovery:
emoji = "🚨"
title = "GNSS STATE DEGRADED"
elif is_recovery and not is_degradation:
emoji = "✅"
title = "GNSS STATE RECOVERED"
else:
emoji = "⚠️"
title = "GNSS STATE CHANGED"
message = (
f"{emoji} <b>{title}</b>\n\n"
f"📍 <b>Asset:</b> {self.escape_html(asset_name)}\n"
f"⏰ <b>Time:</b> {timestamp}\n\n"
)
# Missing sources changes
if missing_added:
message += f"❌ <b>Sources now MISSING:</b> {', '.join(sorted(missing_added))}\n"
if missing_removed:
message += f"✅ <b>Sources RECOVERED (was missing):</b> {', '.join(sorted(missing_removed))}\n"
# Stale sources changes
if stale_added:
message += f"⏱️ <b>Sources now STALE:</b> {', '.join(sorted(stale_added))}\n"
if stale_removed:
message += f"✅ <b>Sources RECOVERED (was stale):</b> {', '.join(sorted(stale_removed))}\n"
# Threshold breach changes
if threshold_breached and not threshold_was_breached:
message += (
f"\n🚨 <b>DISTANCE THRESHOLD BREACHED!</b>\n"
f" Max distance: {max_distance_meters:.1f}m (threshold: {threshold_meters:.1f}m)\n"
f" ⚠️ Possible GPS jamming or spoofing!\n"
)
elif not threshold_breached and threshold_was_breached:
message += (
f"\n✅ <b>Distance threshold OK</b>\n"
f" Max distance: {max_distance_meters:.1f}m (threshold: {threshold_meters:.1f}m)\n"
)
# Current coordinates summary
if source_coordinates:
message += f"\n📍 <b>Current Coordinates:</b>\n"
for source, coords in source_coordinates.items():
lat = coords.get("latitude", "N/A")
lon = coords.get("longitude", "N/A")
message += f" • {self.escape_html(source)}: {lat}, {lon}\n"
return self.send_message(message)
def test_connection(self) -> bool:
"""
Test Telegram bot connection.
Returns:
bool: True if connection successful
"""
try:
url = f"{self.api_url}/getMe"
response = requests.get(url, timeout=10)
if response.status_code == 200:
bot_info = response.json()
logger.info(f"Telegram bot connected: @{bot_info['result']['username']}")
return True
else:
logger.error(f"Telegram connection failed: {response.status_code}")
return False
except Exception as e:
logger.error(f"Telegram connection error: {e}")
return False
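The HTML escaping in `escape_html` is order-sensitive: `&` must be replaced first, otherwise the entities produced by the later replacements would be double-escaped. A self-contained mirror of that helper:

```python
def escape_html(text):
    # Mirror of TelegramAlert.escape_html: '&' first, then '<' and '>'
    text = str(text)
    for raw, ent in (('&', '&amp;'), ('<', '&lt;'), ('>', '&gt;')):
        text = text.replace(raw, ent)
    return text

safe = escape_html('<b>lat & lon</b>')
```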


@@ -0,0 +1,4 @@
"""
Validation modules for GNSS Guard
"""


@@ -0,0 +1,155 @@
#!/usr/bin/env python3
"""
Coordinate validation logic for GNSS Guard
Validates coordinates across multiple sources
"""
import logging
from datetime import datetime, timezone
from typing import Dict, Any, List, Optional
from utils.distance import haversine_distance
logger = logging.getLogger("gnss_guard.validation")
class CoordinateValidator:
"""Validator for GPS coordinates across multiple sources"""
def __init__(self, threshold_meters: float, stale_threshold_seconds: int, expected_sources: List[str]):
"""
Initialize coordinate validator
Args:
threshold_meters: Maximum allowed distance difference in meters
stale_threshold_seconds: Threshold in seconds after which data is considered stale
expected_sources: List of expected source names
"""
self.threshold_meters = threshold_meters
self.stale_threshold_seconds = stale_threshold_seconds
self.expected_sources = expected_sources
def validate_positions(self, positions: Dict[str, Dict[str, Any]]) -> Dict[str, Any]:
"""
Validate positions from multiple sources
Args:
positions: Dictionary mapping source names to position dictionaries
Returns:
Validation result dictionary
"""
validation_timestamp = datetime.now(timezone.utc)
# Check for missing sources
missing_sources = [src for src in self.expected_sources if src not in positions]
# Check for stale timestamps
stale_sources = []
current_time = validation_timestamp.timestamp()
for source, position in positions.items():
timestamp_unix = position.get("timestamp_unix")
if timestamp_unix:
age_seconds = current_time - timestamp_unix
if age_seconds > self.stale_threshold_seconds:
stale_sources.append(source)
# Calculate coordinate differences
coordinate_differences = {}
source_coordinates = {}
null_island_sources = [] # Sources reporting exactly (0, 0) - "Null Island"
# Extract coordinates for all sources
for source, position in positions.items():
lat = position.get("latitude")
lon = position.get("longitude")
if lat is not None and lon is not None:
# Filter out "Null Island" coordinates (0, 0) - indicates no valid GPS fix
if lat == 0.0 and lon == 0.0:
logger.warning(f"Source {source} reported (0, 0) coordinates - treating as missing/invalid")
null_island_sources.append(source)
continue
source_coordinates[source] = {
"latitude": lat,
"longitude": lon,
"altitude": position.get("altitude"),
"position_uncertainty_m": position.get("position_uncertainty_m"),
"timestamp": position.get("timestamp"),
"timestamp_unix": position.get("timestamp_unix"),
}
# Add null island sources to missing sources list (they're effectively missing)
missing_sources.extend(null_island_sources)
# Calculate pairwise distances
sources_with_coords = list(source_coordinates.keys())
for i, source1 in enumerate(sources_with_coords):
for source2 in sources_with_coords[i + 1:]:
coord1 = source_coordinates[source1]
coord2 = source_coordinates[source2]
distance = haversine_distance(
coord1["latitude"],
coord1["longitude"],
coord2["latitude"],
coord2["longitude"]
)
if distance is not None:
pair_key = f"{source1}_{source2}"
coordinate_differences[pair_key] = {
"distance_meters": distance,
"source1": source1,
"source2": source2,
}
# Determine validity
is_valid = True
# Invalid if any source is missing
if missing_sources:
is_valid = False
# Invalid if any source is stale
if stale_sources:
is_valid = False
# Invalid if any coordinate pair differs by more than threshold
for pair_key, diff_data in coordinate_differences.items():
if diff_data["distance_meters"] > self.threshold_meters:
is_valid = False
# Build position uncertainty summary
position_uncertainties = {}
for source, position in positions.items():
uncertainty = position.get("position_uncertainty_m")
if uncertainty is not None:
position_uncertainties[source] = uncertainty
# Build validation details
validation_details = {
"threshold_meters": self.threshold_meters,
"stale_threshold_seconds": self.stale_threshold_seconds,
"expected_sources": self.expected_sources,
"sources_found": list(positions.keys()),
"sources_with_coordinates": sources_with_coords,
"sources_null_island": null_island_sources, # Sources reporting (0,0)
"max_distance_meters": max(
[diff["distance_meters"] for diff in coordinate_differences.values()],
default=0.0
),
"position_uncertainties": position_uncertainties,
}
return {
"validation_timestamp": validation_timestamp.isoformat(),
"validation_timestamp_unix": validation_timestamp.timestamp(),
"is_valid": is_valid,
"sources_missing": missing_sources,
"sources_stale": stale_sources,
"coordinate_differences": coordinate_differences,
"source_coordinates": source_coordinates,
"validation_details": validation_details,
}
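The nested pairwise loop above is equivalent to iterating over `itertools.combinations` of the sources that have coordinates. A minimal sketch of the threshold check (source names and coordinates are illustrative; the third source is placed ~1.1 km north to force a breach):

```python
import itertools
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters, as in utils.distance
    R = 6371000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

coords = {
    "nmea_primary":      (34.665151, 33.016326),
    "tm_ais":            (34.665160, 33.016330),
    "starlink_location": (34.675151, 33.016326),  # ~1.1 km away
}
threshold_m = 200.0
differences = {
    f"{s1}_{s2}": haversine_m(*coords[s1], *coords[s2])
    for s1, s2 in itertools.combinations(coords, 2)
}
# Valid only if every pair is within the threshold
is_valid = all(d <= threshold_m for d in differences.values())
```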


@@ -0,0 +1,2 @@
"""Web server package for GNSS Guard dashboard"""


@@ -0,0 +1,423 @@
#!/usr/bin/env python3
"""
Flask web server for GNSS Guard dashboard
Provides real-time monitoring of GPS sources with auto-refresh
"""
import json
import logging
import sqlite3
import time
from datetime import datetime, timezone
from pathlib import Path
from typing import Dict, Any, Optional
from flask import Flask, render_template, jsonify, request
logger = logging.getLogger("gnss_guard.web")
class WebServer:
"""Web server for GNSS Guard dashboard"""
def __init__(self, config, database, buzzer_service=None):
"""
Initialize web server
Args:
config: Config instance
database: Database instance
buzzer_service: Optional BuzzerService instance for alarm control
"""
self.config = config
self.database = database
self.buzzer_service = buzzer_service
# Create Flask app
self.app = Flask(
__name__,
template_folder=str(Path(__file__).parent / "templates"),
static_folder=str(Path(__file__).parent / "static")
)
# Setup routes
self.app.add_url_rule('/', 'index', self.index)
self.app.add_url_rule('/api/status', 'api_status', self.api_status)
self.app.add_url_rule('/api/route', 'api_route', self.api_route)
self.app.add_url_rule('/api/alarm/acknowledge', 'api_alarm_acknowledge',
self.api_alarm_acknowledge, methods=['POST'])
self.app.add_url_rule('/api/alarm/status', 'api_alarm_status', self.api_alarm_status)
def index(self):
"""Render main dashboard page"""
return render_template('dashboard.html', show_route=self.config.web_show_route)
def api_status(self):
"""
API endpoint returning latest validation data
Returns:
JSON response with current status of all GPS sources
"""
try:
# Get latest validation record
conn = sqlite3.connect(str(self.database.database_path), check_same_thread=False, timeout=5.0)
cursor = conn.cursor()
cursor.execute(
"""
SELECT validation_timestamp, validation_timestamp_unix, is_valid,
sources_missing, sources_stale, coordinate_differences,
source_coordinates, validation_details
FROM positions_validation
ORDER BY validation_timestamp_unix DESC
LIMIT 1
"""
)
row = cursor.fetchone()
conn.close()
if not row:
return jsonify({
"error": "No validation data available",
"timestamp": datetime.now(timezone.utc).isoformat()
}), 404
# Parse row data
validation_timestamp = row[0]
validation_timestamp_unix = row[1]
is_valid = bool(row[2])
sources_missing = json.loads(row[3]) if row[3] else []
sources_stale = json.loads(row[4]) if row[4] else []
coordinate_differences = json.loads(row[5]) if row[5] else {}
source_coordinates = json.loads(row[6]) if row[6] else {}
validation_details = json.loads(row[7]) if row[7] else {}
# Get enabled sources from config
enabled_sources = self.config.get_enabled_sources()
# Source name mapping for display
source_display_names = {
"nmea_primary": "Primary GPS",
"nmea_secondary": "Secondary GPS",
"tm_ais": "TM AIS GPS",
"starlink_gps": "Starlink GPS",
"starlink_location": "Starlink Location"
}
# Build sources status
sources = {}
all_source_names = ["nmea_primary", "nmea_secondary", "tm_ais", "starlink_gps", "starlink_location"]
for source_name in all_source_names:
display_name = source_display_names.get(source_name, source_name)
# Check if source is enabled
if source_name not in enabled_sources:
sources[source_name] = {
"display_name": display_name,
"enabled": False,
"status": "not_configured",
"is_stale": False,
"coordinates": None,
"last_update": None,
"last_update_unix": None
}
continue
# Source is enabled
source_data = source_coordinates.get(source_name)
# Check if this source is stale
is_stale = source_name in sources_stale
if not source_data:
# Enabled but missing
sources[source_name] = {
"display_name": display_name,
"enabled": True,
"status": "missing",
"is_stale": is_stale,
"coordinates": None,
"last_update": None,
"last_update_unix": None
}
else:
# Has data - check if stale
status = "stale" if is_stale else "ok"
sources[source_name] = {
"display_name": display_name,
"enabled": True,
"status": status,
"is_stale": is_stale,
"coordinates": {
"latitude": source_data.get("latitude"),
"longitude": source_data.get("longitude")
},
"last_update": source_data.get("timestamp"),
"last_update_unix": source_data.get("timestamp_unix")
}
# Get threshold from validation_details
threshold_meters = validation_details.get("threshold_meters", 100.0)
# Calculate maximum distance for alert banner
max_distance_km = None
max_distance_m = 0.0
if not is_valid and coordinate_differences:
# Find maximum distance in coordinate_differences
for source_pair, diff_data in coordinate_differences.items():
if isinstance(diff_data, dict):
# Try both field names (distance_meters is the correct one)
distance = diff_data.get("distance_meters", diff_data.get("distance_m", 0))
if distance > max_distance_m:
max_distance_m = distance
# Only show distance alert if max distance exceeds threshold
if max_distance_m > threshold_meters:
max_distance_km = max_distance_m / 1000.0
# Determine alert state
# Show alert if validation fails with a threshold-exceeding distance (GPS jamming/spoofing) or if any source is missing
has_alert = (not is_valid and max_distance_km is not None) or len(sources_missing) > 0
# Find center coordinate for map (priority: nmea_primary > tm_ais > starlink_location)
map_center = None
for priority_source in ["nmea_primary", "tm_ais", "starlink_location"]:
if sources.get(priority_source, {}).get("coordinates"):
coords = sources[priority_source]["coordinates"]
if coords.get("latitude") and coords.get("longitude"):
map_center = coords
break
# If no priority source, use any available
if not map_center:
for source_name, source_data in sources.items():
if source_data.get("coordinates"):
coords = source_data["coordinates"]
if coords.get("latitude") and coords.get("longitude"):
map_center = coords
break
# Build response
response = {
"timestamp": datetime.now(timezone.utc).isoformat(),
"validation_timestamp": validation_timestamp,
"validation_timestamp_unix": validation_timestamp_unix,
"is_valid": is_valid,
"has_alert": has_alert,
"max_distance_km": max_distance_km,
"threshold_meters": threshold_meters,
"sources": sources,
"sources_stale": sources_stale,
"map_center": map_center,
"asset_name": self.config.asset_name
}
return jsonify(response)
except Exception as e:
logger.error(f"Error in api_status: {e}")
return jsonify({
"error": str(e),
"timestamp": datetime.now(timezone.utc).isoformat()
}), 500
def api_route(self):
"""
API endpoint returning 24h route data for map visualization
Returns:
JSON array of route points with coordinates and validation status
"""
if not self.config.web_show_route:
return jsonify({"error": "Route feature is disabled"}), 403
try:
hours = 24
conn = sqlite3.connect(str(self.database.database_path), check_same_thread=False, timeout=5.0)
cursor = conn.cursor()
# Determine reference time for the 24h window
if self.config.demo_unit:
# Demo mode: use the PREVIOUS-TO-LAST historical record's timestamp as reference
# The last record is the "live" one that gets deleted, so we skip it
# Exclude recent "live" records (last 5 minutes) added during demo session
five_minutes_ago = time.time() - 300
cursor.execute(
"""
SELECT validation_timestamp_unix FROM positions_validation
WHERE validation_timestamp_unix < ?
ORDER BY validation_timestamp_unix DESC
LIMIT 1 OFFSET 1
""",
(five_minutes_ago,)
)
result = cursor.fetchone()
reference_time = result[0] if result and result[0] else time.time()
else:
# Normal mode: use current time as reference
reference_time = time.time()
# Get validations from the 24h window
cutoff_unix = reference_time - (hours * 3600)
cursor.execute(
"""
SELECT validation_timestamp, validation_timestamp_unix, is_valid,
sources_missing, sources_stale, source_coordinates, validation_details
FROM positions_validation
WHERE validation_timestamp_unix >= ? AND validation_timestamp_unix <= ?
ORDER BY validation_timestamp_unix DESC
""",
(cutoff_unix, reference_time)
)
rows = cursor.fetchall()
conn.close()
route_points = []
for row in rows:
validation_timestamp = row[0]
validation_timestamp_unix = row[1]
is_valid = bool(row[2])
sources_missing = json.loads(row[3]) if row[3] else []
sources_stale = json.loads(row[4]) if row[4] else []
source_coordinates = json.loads(row[5]) if row[5] else {}
validation_details = json.loads(row[6]) if row[6] else {}
# Find best coordinate (priority: nmea_primary > tm_ais > starlink_location)
coord = None
for priority_source in ["nmea_primary", "tm_ais", "starlink_location", "starlink_gps"]:
if priority_source in source_coordinates:
src_data = source_coordinates[priority_source]
if src_data.get("latitude") and src_data.get("longitude"):
coord = {
"latitude": src_data["latitude"],
"longitude": src_data["longitude"]
}
break
if not coord:
continue
# Determine status
threshold = validation_details.get("threshold_meters", 200)
max_distance = validation_details.get("max_distance_meters", 0)  # key written by CoordinateValidator
if not is_valid and max_distance > threshold:
status = "alert"
elif sources_missing or sources_stale:
status = "degraded"
else:
status = "valid"
route_points.append({
"lat": coord["latitude"],
"lng": coord["longitude"],
"timestamp": validation_timestamp,
"timestamp_unix": validation_timestamp_unix,
"status": status,
"is_valid": is_valid,
"sources_missing": sources_missing,
"sources_stale": sources_stale,
"max_distance_m": max_distance,
"threshold_m": threshold
})
return jsonify(route_points)
except Exception as e:
logger.error(f"Error in api_route: {e}")
return jsonify({
"error": str(e),
"timestamp": datetime.now(timezone.utc).isoformat()
}), 500
def api_alarm_acknowledge(self):
"""
API endpoint to acknowledge the buzzer alarm.
POST /api/alarm/acknowledge
Returns:
JSON response with acknowledgment status
"""
try:
if not self.buzzer_service:
return jsonify({
"success": False,
"error": "Buzzer service not available",
"alarm_active": False,
"alarm_acknowledged": False
}), 503
# Acknowledge the alarm
was_active = self.buzzer_service.is_alarm_active()
acknowledged = self.buzzer_service.acknowledge_alarm()
logger.info(f"Alarm acknowledge request: was_active={was_active}, acknowledged={acknowledged}")
return jsonify({
"success": True,
"acknowledged": acknowledged,
"alarm_active": self.buzzer_service.is_alarm_active(),
"alarm_acknowledged": self.buzzer_service.is_alarm_acknowledged(),
"timestamp": datetime.now(timezone.utc).isoformat()
})
except Exception as e:
logger.error(f"Error in api_alarm_acknowledge: {e}")
return jsonify({
"success": False,
"error": str(e),
"timestamp": datetime.now(timezone.utc).isoformat()
}), 500
def api_alarm_status(self):
"""
API endpoint to get current alarm status.
GET /api/alarm/status
Returns:
JSON response with alarm status
"""
try:
if not self.buzzer_service:
return jsonify({
"available": False,
"alarm_active": False,
"alarm_acknowledged": False,
"buzzer_status": "unavailable"
})
return jsonify({
"available": True,
"alarm_active": self.buzzer_service.is_alarm_active(),
"alarm_acknowledged": self.buzzer_service.is_alarm_acknowledged(),
"buzzer_status": self.buzzer_service.get_status(),
"timestamp": datetime.now(timezone.utc).isoformat()
})
except Exception as e:
logger.error(f"Error in api_alarm_status: {e}")
return jsonify({
"available": False,
"error": str(e),
"timestamp": datetime.now(timezone.utc).isoformat()
}), 500
def run(self, host='0.0.0.0', port=8080, debug=False):
"""
Run the Flask web server
Args:
host: Host to bind to
port: Port to bind to
debug: Enable debug mode
"""
logger.info(f"Starting web server on {host}:{port}")
self.app.run(host=host, port=port, debug=debug, threaded=True)
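The max-distance logic in `api_status` reads the `coordinate_differences` JSON back from SQLite, takes the largest pairwise distance, and only surfaces a kilometer value when the threshold is exceeded. A self-contained sketch of that step (the payload below is illustrative):

```python
import json

# Illustrative row payload in the shape api_status reads back from SQLite
coordinate_differences_json = json.dumps({
    "nmea_primary_tm_ais": {"distance_meters": 35.2},
    "nmea_primary_starlink_location": {"distance_meters": 512.7},
})
threshold_meters = 200.0

diffs = json.loads(coordinate_differences_json)
max_distance_m = max(
    (d.get("distance_meters", d.get("distance_m", 0))
     for d in diffs.values() if isinstance(d, dict)),
    default=0.0,
)
# Only surface a distance alert when the threshold is actually exceeded
max_distance_km = max_distance_m / 1000.0 if max_distance_m > threshold_meters else None
```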


@@ -0,0 +1,913 @@
// TM GNSS Guard - Dashboard JavaScript
// Auto-refreshes data every 10 seconds and updates the UI
let map = null;
let markers = {};
let currentData = null;
let lastFetchSucceeded = false;
let lastValidationTimestamp = null;
// Route visualization
let routeMarkers = [];
let routeCoords = []; // Store route coordinates for bounds calculation
let showRouteEnabled = getRoutePreference(); // Load from localStorage
let isInitialMapLoad = true; // Only fit bounds on initial load
// Get route preference from localStorage (defaults to true if not set)
function getRoutePreference() {
if (!window.SHOW_ROUTE) return false;
const stored = localStorage.getItem('showRoute');
return stored === null ? true : stored === 'true';
}
// Save route preference to localStorage
function saveRoutePreference(enabled) {
localStorage.setItem('showRoute', enabled ? 'true' : 'false');
}
// =============================================================================
// AUTO-REFRESH PAGE (every 1 hour to pick up deployments)
// =============================================================================
const PAGE_LOAD_TIME = Date.now();
const AUTO_REFRESH_INTERVAL_MS = 60 * 60 * 1000; // 1 hour
let lastVisibilityCheck = Date.now();
function checkAutoRefresh() {
const elapsed = Date.now() - PAGE_LOAD_TIME;
if (elapsed >= AUTO_REFRESH_INTERVAL_MS) {
console.log('Auto-refreshing page after 1 hour...');
window.location.reload();
}
}
// Check for refresh on visibility change (tab becomes active)
document.addEventListener('visibilitychange', () => {
if (document.visibilityState === 'visible') {
const now = Date.now();
// Only check if at least 10 seconds since last check (prevents rapid refreshes)
if (now - lastVisibilityCheck > 10000) {
lastVisibilityCheck = now;
checkAutoRefresh();
}
}
});
// Periodic check every 5 minutes while tab is active
setInterval(checkAutoRefresh, 5 * 60 * 1000);
// Initialize map
function initMap() {
// Create map centered on default location
map = L.map('map', {
zoomControl: true
}).setView([34.665151, 33.016326], 11);
// Dark basemap
L.tileLayer('https://{s}.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}{r}.png', {
maxZoom: 19,
attribution: '&copy; OpenStreetMap & CARTO'
}).addTo(map);
// Recalculate marker offsets when zoom changes
map.on('zoomend', () => {
if (currentData) {
updateMap(currentData);
}
});
}
// Create marker icons (using local files)
function makeIcon(color) {
return new L.Icon({
iconUrl: `/static/markers/marker-icon-${color}.png`,
shadowUrl: '/static/markers/marker-shadow.png',
iconSize: [25, 41],
iconAnchor: [12, 41],
popupAnchor: [1, -34]
});
}
const iconPrimary = makeIcon('violet');
const iconPrimaryAlert = makeIcon('red');
const iconAIS = makeIcon('blue');
const iconStarlinkGps = makeIcon('yellow');
const iconStarlinkLocation = makeIcon('green');
const iconSecondary = makeIcon('grey');
// Format timestamp as relative time
function formatRelativeTime(timestampUnix) {
if (!timestampUnix) return '-';
const now = Date.now() / 1000;
const diff = now - timestampUnix;
if (diff < 0) return 'just now';
if (diff < 60) return `${Math.floor(diff)} sec ago`;
if (diff < 3600) return `${Math.floor(diff / 60)} min ago`;
if (diff < 86400) return `${Math.floor(diff / 3600)} hr ago`;
return `${Math.floor(diff / 86400)} days ago`;
}
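The bucketing in `formatRelativeTime` (seconds, minutes, hours, days) can be mirrored server-side; a Python sketch of the same buckets (function name and the fixed `now_unix` argument are illustrative):

```python
import math

def format_relative_time(timestamp_unix, now_unix):
    # Same buckets as the dashboard's formatRelativeTime
    if not timestamp_unix:
        return '-'
    diff = now_unix - timestamp_unix
    if diff < 0:
        return 'just now'
    if diff < 60:
        return f"{math.floor(diff)} sec ago"
    if diff < 3600:
        return f"{math.floor(diff / 60)} min ago"
    if diff < 86400:
        return f"{math.floor(diff / 3600)} hr ago"
    return f"{math.floor(diff / 86400)} days ago"
```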
// Format UTC timestamp for display
function formatUTCTimestamp(isoString) {
if (!isoString) return '-';
const date = new Date(isoString);
const day = String(date.getUTCDate()).padStart(2, '0');
const month = String(date.getUTCMonth() + 1).padStart(2, '0');
const year = date.getUTCFullYear();
const hours = String(date.getUTCHours()).padStart(2, '0');
const minutes = String(date.getUTCMinutes()).padStart(2, '0');
const seconds = String(date.getUTCSeconds()).padStart(2, '0');
return `${day}/${month}/${year} ${hours}:${minutes}:${seconds}`;
}
// Log event to event log
function logEvent(level, text) {
const eventLog = document.getElementById('eventLog');
if (!eventLog) return;
const div = document.createElement('div');
div.className = 'event level-' + level;
const spanLevel = document.createElement('span');
spanLevel.className = 'level';
spanLevel.textContent = level.toUpperCase();
div.appendChild(spanLevel);
const spanText = document.createElement('span');
const timestamp = new Date().toTimeString().slice(0, 8);
spanText.textContent = ' [' + timestamp + '] ' + text;
div.appendChild(spanText);
eventLog.appendChild(div);
// Limit to last 3 events (re-query each iteration since querySelectorAll is static)
while (eventLog.querySelectorAll('.event').length > 3) {
const firstEvent = eventLog.querySelector('.event');
if (firstEvent) {
eventLog.removeChild(firstEvent);
} else {
break;
}
}
}
// Update source card
function updateSource(sourceName, sourceData) {
const card = document.getElementById(`card-${sourceName}`);
const badge = document.getElementById(`badge-${sourceName}`);
const coordsEl = document.getElementById(`coords-${sourceName}`);
const updateEl = document.getElementById(`update-${sourceName}`);
if (!card || !badge) return;
// Reset card classes
card.classList.remove('ok', 'warn', 'crit', 'stale', 'offline');
badge.className = 'badge';
if (!sourceData || !sourceData.enabled) {
// Not configured
card.classList.add('offline');
badge.classList.add('badge-offline');
badge.textContent = 'NOT CONFIGURED';
if (coordsEl) {
coordsEl.textContent = 'No data source configured.';
coordsEl.className = '';
}
if (updateEl) {
updateEl.textContent = '-';
updateEl.className = '';
}
} else if (sourceData.status === 'missing') {
// Missing data
card.classList.add('crit');
badge.classList.add('badge-danger');
badge.textContent = 'MISSING';
if (coordsEl) {
coordsEl.textContent = 'MISSING';
coordsEl.className = 'alert';
}
if (updateEl) {
updateEl.textContent = formatRelativeTime(sourceData.last_update_unix);
updateEl.className = 'alert';
}
} else if (sourceData.status === 'stale' || sourceData.is_stale) {
// Stale data - has coordinates but timestamp is old
card.classList.add('stale');
badge.classList.add('badge-stale');
badge.textContent = 'STALE';
if (coordsEl && sourceData.coordinates) {
const lat = sourceData.coordinates.latitude;
const lon = sourceData.coordinates.longitude;
coordsEl.textContent = `${lat.toFixed(6)}, ${lon.toFixed(6)}`;
coordsEl.className = '';
}
if (updateEl) {
updateEl.textContent = formatRelativeTime(sourceData.last_update_unix);
updateEl.className = 'stale-text';
}
} else {
// Has valid data
card.classList.add('ok');
badge.classList.add('badge-healthy');
badge.textContent = 'HEALTHY';
if (coordsEl && sourceData.coordinates) {
const lat = sourceData.coordinates.latitude;
const lon = sourceData.coordinates.longitude;
coordsEl.textContent = `${lat.toFixed(6)}, ${lon.toFixed(6)}`;
coordsEl.className = '';
}
if (updateEl) {
updateEl.textContent = formatRelativeTime(sourceData.last_update_unix);
updateEl.className = '';
}
}
}
// Calculate offset for markers to spread them in a circle when close together
function calculateMarkerOffsets(sourceCoords, zoomLevel) {
if (Object.keys(sourceCoords).length <= 1) {
// Single marker, no offset needed
const result = {};
for (const [name, coord] of Object.entries(sourceCoords)) {
result[name] = { lat: coord.lat, lon: coord.lon, offsetLat: 0, offsetLon: 0 };
}
return result;
}
// Calculate centroid
let sumLat = 0, sumLon = 0, count = 0;
for (const coord of Object.values(sourceCoords)) {
sumLat += coord.lat;
sumLon += coord.lon;
count++;
}
const centroidLat = sumLat / count;
const centroidLon = sumLon / count;
// Check if markers are close together (within ~50 meters)
const closeThreshold = 0.0005; // ~50m in degrees
let maxDist = 0;
for (const coord of Object.values(sourceCoords)) {
const dist = Math.sqrt(
Math.pow(coord.lat - centroidLat, 2) +
Math.pow(coord.lon - centroidLon, 2)
);
maxDist = Math.max(maxDist, dist);
}
// If markers are spread out enough, don't offset
if (maxDist > closeThreshold) {
const result = {};
for (const [name, coord] of Object.entries(sourceCoords)) {
result[name] = { lat: coord.lat, lon: coord.lon, offsetLat: 0, offsetLon: 0 };
}
return result;
}
// Calculate offset radius based on zoom level (smaller offset when zoomed in)
// The offset doubles for each zoom level below 15: ~30m at zoom 15, ~60m at 14, ~960m at 10
const baseOffset = 0.0003; // ~30m base offset
const zoomFactor = Math.pow(2, 15 - Math.min(zoomLevel, 18));
const offsetRadius = baseOffset * zoomFactor;
// Arrange markers in a circle around centroid
const result = {};
const sourceNames = Object.keys(sourceCoords);
const angleStep = (2 * Math.PI) / sourceNames.length;
sourceNames.forEach((name, index) => {
const angle = angleStep * index; // lat uses cos, so angle 0 puts the first marker at the top (north)
const offsetLat = offsetRadius * Math.cos(angle);
const offsetLon = offsetRadius * Math.sin(angle) * 1.5; // crude widening to offset longitude compression at mid-latitudes
result[name] = {
lat: centroidLat + offsetLat,
lon: centroidLon + offsetLon,
offsetLat: offsetLat,
offsetLon: offsetLon,
originalLat: sourceCoords[name].lat,
originalLon: sourceCoords[name].lon
};
});
return result;
}
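The spread logic above boils down to placing n points evenly on a small circle around the centroid. A minimal standalone sketch of just that step (the function name is illustrative; the fixed 1.5 longitude stretch is taken from the code above):

```javascript
// Even circular spread: n offsets around (0, 0); lat from cos, lon from sin.
// The 1.5 factor mirrors the crude longitude stretch used above.
function circleOffsets(n, radius) {
  const step = (2 * Math.PI) / n;
  return Array.from({ length: n }, (_, i) => ({
    dLat: radius * Math.cos(step * i),
    dLon: radius * Math.sin(step * i) * 1.5
  }));
}

// Four co-located markers get pushed to the four compass points.
const offs = circleOffsets(4, 0.0003);
```

By symmetry the offsets sum to zero, so the visual centroid of the spread markers stays on the true shared position.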
// Update map markers
function updateMap(data) {
if (!data.map_center) return;
const sourceMapping = {
'nmea_primary': { icon: iconPrimary, name: 'Primary GPS' },
'nmea_secondary': { icon: iconSecondary, name: 'Secondary GPS' },
'tm_ais': { icon: iconAIS, name: 'TM AIS GPS' },
'starlink_gps': { icon: iconStarlinkGps, name: 'Starlink GPS' },
'starlink_location': { icon: iconStarlinkLocation, name: 'Starlink Location' }
};
// Collect all valid source coordinates
const sourceCoords = {};
const allCoords = [];
Object.keys(sourceMapping).forEach(sourceName => {
const sourceData = data.sources[sourceName];
if (sourceData && sourceData.enabled && sourceData.status !== 'missing' && sourceData.coordinates) {
const lat = sourceData.coordinates.latitude;
const lon = sourceData.coordinates.longitude;
sourceCoords[sourceName] = { lat, lon };
allCoords.push([lat, lon]);
}
});
// Calculate offsets for overlapping markers
const zoomLevel = map.getZoom() || 13;
const offsetPositions = calculateMarkerOffsets(sourceCoords, zoomLevel);
// Update or create markers for each source
Object.keys(sourceMapping).forEach(sourceName => {
const sourceData = data.sources[sourceName];
if (!sourceData || !sourceData.enabled || sourceData.status === 'missing' || !sourceData.coordinates) {
// Remove marker if it exists
if (markers[sourceName]) {
map.removeLayer(markers[sourceName]);
delete markers[sourceName];
}
return;
}
const mapping = sourceMapping[sourceName];
const position = offsetPositions[sourceName];
// Use alert icon for primary if there's an alert
let icon = mapping.icon;
if (sourceName === 'nmea_primary' && data.has_alert && !data.is_valid) {
icon = iconPrimaryAlert;
}
// Build popup with original coordinates
const origLat = sourceData.coordinates.latitude;
const origLon = sourceData.coordinates.longitude;
const popupContent = `<b>${mapping.name}</b><br>` +
`Lat: ${origLat.toFixed(6)}<br>` +
`Lon: ${origLon.toFixed(6)}`;
if (markers[sourceName]) {
markers[sourceName].setLatLng([position.lat, position.lon]).setIcon(icon);
markers[sourceName].setPopupContent(popupContent);
} else {
markers[sourceName] = L.marker([position.lat, position.lon], { icon: icon })
.addTo(map)
.bindPopup(popupContent);
}
});
// Fit map to show all markers (only on initial load, not on refresh)
// If route is enabled, wait for route data to load before fitting bounds
if (isInitialMapLoad && !window.SHOW_ROUTE) {
if (allCoords.length > 0) {
const bounds = L.latLngBounds(allCoords);
map.fitBounds(bounds, {
padding: [50, 50],
maxZoom: 15
});
} else if (data.map_center && data.map_center.latitude != null && data.map_center.longitude != null) {
map.setView([data.map_center.latitude, data.map_center.longitude], 13);
}
isInitialMapLoad = false;
}
}
// Track alarm state
let alarmActive = false;
let alarmAcknowledged = false;
// Update global status pill and border frame
function updateGlobalStatus(data) {
const statusPill = document.getElementById('globalStatusPill');
const statusText = statusPill ? statusPill.querySelector('.status-text') : null;
const borderFrame = document.getElementById('healthBorderFrame');
if (statusPill) {
statusPill.classList.remove('warn', 'crit', 'alarm-active', 'alarm-acknowledged');
}
if (borderFrame) {
borderFrame.classList.remove('warn', 'crit');
}
if (data.is_valid) {
if (statusText) {
statusText.textContent = 'GNSS Integrity: Stable';
} else if (statusPill) {
statusPill.textContent = 'GNSS Integrity: Stable';
}
// Border frame hidden (no class) when healthy
// Reset alarm state when status returns to healthy
alarmActive = false;
alarmAcknowledged = false;
} else if (data.has_alert && data.max_distance_km !== null) {
if (statusText) {
statusText.textContent = 'GNSS Integrity: At Risk';
} else if (statusPill) {
statusPill.textContent = 'GNSS Integrity: At Risk';
}
if (statusPill) {
statusPill.classList.add('crit');
}
if (borderFrame) {
borderFrame.classList.add('crit');
}
// Alarm should be active in this state
if (!alarmAcknowledged) {
alarmActive = true;
}
} else {
if (statusText) {
statusText.textContent = 'GNSS Integrity: Degraded';
} else if (statusPill) {
statusPill.textContent = 'GNSS Integrity: Degraded';
}
if (statusPill) {
statusPill.classList.add('warn');
}
if (borderFrame) {
borderFrame.classList.add('warn');
}
// Alarm should be active in this state
if (!alarmAcknowledged) {
alarmActive = true;
}
}
// Update alarm visual state
updateAlarmVisualState();
}
// Update alarm visual state on the button
function updateAlarmVisualState() {
const statusPill = document.getElementById('globalStatusPill');
if (!statusPill) return;
statusPill.classList.remove('alarm-active', 'alarm-acknowledged');
if (alarmActive && !alarmAcknowledged) {
statusPill.classList.add('alarm-active');
} else if (alarmAcknowledged) {
statusPill.classList.add('alarm-acknowledged');
}
}
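The pill's alarm styling above reduces to a two-flag decision table. A standalone sketch of that mapping (the helper name is illustrative; the precedence mirrors updateAlarmVisualState):

```javascript
// (active, acknowledged) -> CSS class added to the status pill.
// Acknowledgement wins over an active alarm, matching the function above.
function alarmClass(active, acknowledged) {
  if (active && !acknowledged) return 'alarm-active';
  if (acknowledged) return 'alarm-acknowledged';
  return '';
}
```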
// Acknowledge alarm - called when user clicks the status button
async function acknowledgeAlarm() {
const statusPill = document.getElementById('globalStatusPill');
if (!statusPill) return;
// Only allow acknowledgment if the alarm is active or the pill shows a warn/crit state
if (!alarmActive && !statusPill.classList.contains('warn') && !statusPill.classList.contains('crit')) {
console.log('No active alarm to acknowledge');
return;
}
try {
const response = await fetch('/api/alarm/acknowledge', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
}
});
if (response.ok) {
const data = await response.json();
console.log('Alarm acknowledged:', data);
alarmAcknowledged = true;
alarmActive = false;
updateAlarmVisualState();
logEvent('info', 'Alarm acknowledged - buzzer muted');
} else {
console.error('Failed to acknowledge alarm:', response.statusText);
logEvent('warn', 'Failed to acknowledge alarm');
}
} catch (error) {
console.error('Error acknowledging alarm:', error);
logEvent('warn', 'Error acknowledging alarm: ' + error.message);
}
}
// Fetch alarm status from server
async function fetchAlarmStatus() {
try {
const response = await fetch('/api/alarm/status');
if (response.ok) {
const data = await response.json();
alarmActive = data.alarm_active;
alarmAcknowledged = data.alarm_acknowledged;
updateAlarmVisualState();
}
} catch (error) {
console.error('Error fetching alarm status:', error);
}
}
// Update alert banner
function updateAlertBanner(data) {
const banner = document.getElementById('alert-banner');
const alertText = document.getElementById('alertText');
const alertDistance = document.getElementById('alert-distance-value');
const alertIndicator = document.getElementById('alertIndicator');
if (!banner) return;
banner.classList.add('hidden');
banner.classList.remove('alert-critical', 'alert-warning');
if (data.has_alert && !data.is_valid && data.max_distance_km !== null) {
// Show distance alert
banner.classList.remove('hidden');
banner.classList.add('alert-critical');
if (alertDistance) {
alertDistance.textContent = `${data.max_distance_km.toFixed(1)} km`;
}
if (alertText) {
alertText.textContent = `GPS Jamming or Spoofing Alert! Location Distance: ${data.max_distance_km.toFixed(1)} km`;
}
if (alertIndicator) {
alertIndicator.style.background = 'var(--accent-red)';
}
} else if (data.has_alert) {
// Has alert but not distance-based (missing sources)
banner.classList.remove('hidden');
banner.classList.add('alert-warning');
if (alertText) {
alertText.textContent = 'Some GPS sources are missing or stale.';
}
if (alertIndicator) {
alertIndicator.style.background = 'var(--accent-amber)';
}
}
}
// Helper function to show degraded state
function showDegradedState(errorMessage) {
lastFetchSucceeded = false;
// Update status pill and border frame to degraded FIRST (before logging which could fail)
try {
const statusPill = document.getElementById('globalStatusPill');
const statusText = statusPill ? statusPill.querySelector('.status-text') : null;
if (statusPill) {
if (statusText) {
statusText.textContent = 'GNSS Integrity: Degraded';
} else {
statusPill.textContent = 'GNSS Integrity: Degraded';
}
statusPill.classList.remove('crit');
statusPill.classList.add('warn');
// Activate alarm state
if (!alarmAcknowledged) {
alarmActive = true;
updateAlarmVisualState();
}
}
const borderFrame = document.getElementById('healthBorderFrame');
if (borderFrame) {
borderFrame.classList.remove('crit');
borderFrame.classList.add('warn');
}
} catch (e) {
console.error('Error updating status pill:', e);
}
// Mark all "Updated" timestamps as stale/error (red)
try {
const sources = ['nmea_primary', 'nmea_secondary', 'tm_ais', 'starlink_gps', 'starlink_location'];
sources.forEach(sourceName => {
const updateEl = document.getElementById(`update-${sourceName}`);
if (updateEl) {
updateEl.classList.add('stale-text');
}
});
} catch (e) {
console.error('Error updating source timestamps:', e);
}
// Log the error message last
logEvent('crit', errorMessage);
}
// Fetch and update data
async function fetchData() {
try {
const response = await fetch('/api/status');
if (!response.ok) {
console.error('Failed to fetch status:', response.statusText);
showDegradedState(`Server error: ${response.status} ${response.statusText}`);
return;
}
const data = await response.json();
currentData = data;
lastFetchSucceeded = true;
// Update all sources
const sources = ['nmea_primary', 'nmea_secondary', 'tm_ais', 'starlink_gps', 'starlink_location'];
sources.forEach(sourceName => {
updateSource(sourceName, data.sources[sourceName]);
});
// If distance-based alert (GPS jamming/spoofing), mark ALL enabled sources as "AT RISK"
if (data.has_alert && !data.is_valid && data.max_distance_km !== null) {
sources.forEach(sourceName => {
const sourceData = data.sources[sourceName];
// Only mark sources that have coordinates (participated in distance validation)
if (sourceData && sourceData.enabled && sourceData.status !== 'missing' && sourceData.coordinates) {
const card = document.getElementById(`card-${sourceName}`);
const badge = document.getElementById(`badge-${sourceName}`);
if (card) {
card.classList.remove('ok', 'stale');
card.classList.add('crit');
}
if (badge) {
badge.className = 'badge badge-danger';
badge.textContent = 'AT RISK';
}
}
});
}
// Capture initial load flag before updateMap modifies it
const wasInitialLoad = isInitialMapLoad;
// Update map
updateMap(data);
// Update global status
updateGlobalStatus(data);
// Update alert banner
updateAlertBanner(data);
// Load route data if enabled (fit bounds on initial load)
if (window.SHOW_ROUTE && showRouteEnabled) {
loadRouteData(wasInitialLoad);
}
} catch (error) {
console.error('Error fetching data:', error);
showDegradedState('Failed to fetch status data: ' + error.message);
}
}
// Update relative times
function updateRelativeTimes() {
if (!currentData) return;
const sources = ['nmea_primary', 'nmea_secondary', 'tm_ais', 'starlink_gps', 'starlink_location'];
sources.forEach(sourceName => {
const sourceData = currentData.sources[sourceName];
const updateEl = document.getElementById(`update-${sourceName}`);
if (updateEl && sourceData && sourceData.enabled && sourceData.last_update_unix) {
updateEl.textContent = formatRelativeTime(sourceData.last_update_unix);
}
});
}
// Tab switching for mobile
function initTabs() {
const tabBtns = document.querySelectorAll('.tab-btn');
const tabContents = document.querySelectorAll('.tab-content');
tabBtns.forEach(btn => {
btn.addEventListener('click', () => {
const tabName = btn.dataset.tab;
// Update button states
tabBtns.forEach(b => b.classList.remove('active'));
btn.classList.add('active');
// Update content visibility
tabContents.forEach(content => {
content.classList.remove('active');
if (content.id === `tab-${tabName}`) {
content.classList.add('active');
}
});
// Invalidate map size when switching to map tab
if (tabName === 'map' && map) {
setTimeout(() => {
map.invalidateSize();
}, 100);
}
});
});
}
// Initialize on page load
document.addEventListener('DOMContentLoaded', () => {
// Initialize tabs
initTabs();
// Initialize map
initMap();
// Initialize route checkbox from stored preference
initRouteCheckbox();
// Initial log
logEvent('info', 'TM GNSS Guard dashboard initialized.');
// Initial data fetch
fetchData();
// Initial alarm status fetch
fetchAlarmStatus();
// Update relative times every second
setInterval(updateRelativeTimes, 1000);
// Fetch data every 10 seconds
setInterval(fetchData, 10000);
// Fetch alarm status every 5 seconds (more frequent to stay in sync with buzzer)
setInterval(fetchAlarmStatus, 5000);
// Log data fetch events (only when data actually changed)
setInterval(() => {
if (currentData && lastFetchSucceeded) {
// Only log if validation timestamp changed (new data from backend)
if (currentData.validation_timestamp !== lastValidationTimestamp) {
lastValidationTimestamp = currentData.validation_timestamp;
if (currentData.has_alert && !currentData.is_valid && currentData.max_distance_km !== null) {
logEvent('crit', `Server reports alert: distance ${currentData.max_distance_km.toFixed(1)} km`);
} else if (!currentData.is_valid) {
logEvent('warn', 'Server reports validation issue.');
} else {
logEvent('info', 'Server status OK.');
}
}
}
}, 10000);
});
// =============================================================================
// ROUTE VISUALIZATION (24h history)
// =============================================================================
async function loadRouteData(fitBoundsAfter = false) {
if (!window.SHOW_ROUTE) return;
try {
const response = await fetch('/api/route');
if (!response.ok) {
// If route fetch fails on initial load, still fit bounds to source markers
if (fitBoundsAfter) {
fitBoundsToMarkers();
isInitialMapLoad = false;
}
return;
}
const routeData = await response.json();
renderRoute(routeData);
// Fit bounds to include route on initial load
if (fitBoundsAfter) {
// Collect current source marker coordinates
const sourceCoords = [];
Object.values(markers).forEach(marker => {
const latlng = marker.getLatLng();
sourceCoords.push([latlng.lat, latlng.lng]);
});
// Use all route points (full 24h) for bounds calculation
const routePointCoords = [];
routeData.forEach(point => {
if (point.lat != null && point.lng != null) {
routePointCoords.push([point.lat, point.lng]);
}
});
const allCoords = [...sourceCoords, ...routePointCoords];
if (allCoords.length > 0) {
const bounds = L.latLngBounds(allCoords);
map.fitBounds(bounds, {
padding: [50, 50],
maxZoom: 15
});
} else {
fitBoundsToMarkers();
}
isInitialMapLoad = false;
}
} catch (error) {
console.error('Error loading route:', error);
// If route fetch fails on initial load, still fit bounds to source markers
if (fitBoundsAfter) {
fitBoundsToMarkers();
isInitialMapLoad = false;
}
}
}
function fitBoundsToMarkers() {
const sourceCoords = [];
Object.values(markers).forEach(marker => {
const latlng = marker.getLatLng();
sourceCoords.push([latlng.lat, latlng.lng]);
});
if (sourceCoords.length > 0) {
const bounds = L.latLngBounds(sourceCoords);
map.fitBounds(bounds, {
padding: [50, 50],
maxZoom: 15
});
} else if (currentData && currentData.map_center) {
map.setView([currentData.map_center.latitude, currentData.map_center.longitude], 13);
}
}
function renderRoute(routeData) {
clearRouteMarkers();
routeCoords = []; // Reset route coordinates
if (!showRouteEnabled || !routeData || routeData.length === 0) return;
// Create small circle markers for route points
routeData.forEach(point => {
if (point.lat == null || point.lng == null) return;
// Store coordinates for bounds calculation
routeCoords.push([point.lat, point.lng]);
// Determine color based on status
let color;
switch (point.status) {
case 'alert':
color = '#c62828'; // Red
break;
case 'degraded':
color = '#ffa726'; // Amber
break;
default:
color = '#1fad3a'; // Green
}
const marker = L.circleMarker([point.lat, point.lng], {
radius: 4,
fillColor: color,
color: color,
weight: 1,
opacity: 0.8,
fillOpacity: 0.6
}).addTo(map);
// Add popup with details
const popupContent = `
<div class="route-popup">
<div class="popup-header status-${point.status}">${point.status.toUpperCase()}</div>
<div class="popup-row"><strong>Time:</strong> ${formatUTCTimestamp(point.timestamp)}</div>
${point.sources_missing?.length ? `<div class="popup-row"><strong>Missing:</strong> ${point.sources_missing.join(', ')}</div>` : ''}
${point.sources_stale?.length ? `<div class="popup-row"><strong>Stale:</strong> ${point.sources_stale.join(', ')}</div>` : ''}
${point.max_distance_m > point.threshold_m ? `<div class="popup-row"><strong>Distance:</strong> ${(point.max_distance_m/1000).toFixed(2)} km</div>` : ''}
</div>
`;
marker.bindPopup(popupContent);
routeMarkers.push(marker);
});
}
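The per-point colouring inside renderRoute is a plain status lookup; extracted as a standalone sketch (the constant and helper names are illustrative, the hex values come from the switch above):

```javascript
// Route-point colour by validation status; anything unrecognised is treated as healthy.
const ROUTE_COLOURS = { alert: '#c62828', degraded: '#ffa726' };   // red, amber
const routeColour = status => ROUTE_COLOURS[status] || '#1fad3a';  // green default
```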
function clearRouteMarkers() {
routeMarkers.forEach(marker => {
map.removeLayer(marker);
});
routeMarkers = [];
routeCoords = [];
}
function toggleRoute() {
const checkbox = document.getElementById('showRoute');
showRouteEnabled = checkbox ? checkbox.checked : false;
// Persist the preference
saveRoutePreference(showRouteEnabled);
if (showRouteEnabled) {
loadRouteData();
} else {
clearRouteMarkers();
}
}
// Initialize route checkbox state from localStorage
function initRouteCheckbox() {
const checkbox = document.getElementById('showRoute');
if (checkbox) {
checkbox.checked = showRouteEnabled;
}
}
