Compare commits: d12c7af664...main (66 commits)

10c200f994, 031e1c3415, b5134098c0, c5e418eabc, fe72619931, 16bfc1e0e1, 59f8ebe61d, 808fbf5c7c, df180120aa, c91cf6dd05, 25bf710c67, 2d6e5aa009, f42700848a, ca27727137, 55b8661a2e, 5f05663706, b1368b6e62, ec973cc2b3, e13ad3d8f9, 0bbd62213c, 8b4930d4b9, fd4e54f125, 196b13c2fa, fd56ed4049, 16c796b8af, 79a7f76a12, 3909fd7cf1, 595ae0dd35, a915e5a405, b39e73324f, 5238d457e8, ff6258c2af, ea6f846021, a6e27219f4, 4d5909904c, 2a9731754c, 2777811b32, 90296498f5, 7e1bf8a4c2, 66ad3b0a39, 58d9144752, 9656771d5a, fdadef0791, ccd99d8b06, 8d50629b92, b99cc2520a, 499c14580e, 80614cb400, 5372fcbb81, 359645296e, 00d53b8158, a1c60cb7e4, b33afb41dc, ed5e1a1101, 97d55a1f90, 42bacad329, 1fdd1dd376, 9098e820e6, 9c533e95f9, d3c4e4b7f1, ad00491487, 39aa042dc9, 987e71c36e, 2d3687fb7c, 01a9f61ca5, 41b7e95c96
.gitignore (vendored, new file, 12 lines)
@@ -0,0 +1,12 @@
# Large binary / image files (do not commit)
*.img.xz
*.img.xz.bak
*.img
!emmc-provisioning/network-boot-initramfs/*.img

# Backup/data from devices (large DBs and logs)
backup-from-device/**/data/*.db
backup-from-device/**/logs/
**/*.db
*.sqlite
*.sqlite3
README.md (new file, 27 lines)
@@ -0,0 +1,27 @@
<!-- Revision: 2 -->
# reTerminal DM4

Project for **reTerminal DM4** (Seeed) with CM4: Chromium kiosk, eMMC provisioning (USB + network boot), and first-boot configuration via cloud-init.

## Revisions

A single **revision number** is kept in `REVISION` and in a comment line in tracked files (`# Revision: N` or `<!-- Revision: N -->`) so you can see what changed across hosts and deploys.

- **Bump revision (update all files):** from repo root run
  `./emmc-provisioning/scripts/bump-revision.sh`
- **Auto-bump on every commit:** install the pre-commit hook
  `cp emmc-provisioning/scripts/pre-commit-revision.sh .git/hooks/pre-commit && chmod +x .git/hooks/pre-commit`
  Then every commit will bump the revision and update the revision line in all tracked files.

## Repository structure

| Path | Purpose |
|------|---------|
| **emmc-provisioning/** | **Main workflow:** eMMC deploy/backup, cloud-init first-boot, Chromium kiosk assets, file server, dashboard, network boot. See [emmc-provisioning/README.md](emmc-provisioning/README.md). |
| **archive/** | Legacy or unused files (guides, old scripts). Not used for deployment. See [archive/README.md](archive/README.md). |

## Quick start

1. Read **emmc-provisioning/docs/EMMC-PROVISIONING-GUIDE.md** for full setup.
2. Use **emmc-provisioning/scripts/sync-portal-files-to-lxc.sh** to sync first-boot assets (including kiosk) to the file server.
3. Provision devices via USB boot or network boot; first-boot configures kiosk, labwc, rotation, wallpaper, dark theme, and optional CM4 boot order.
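Editor's note: the auto-bump hook itself (`emmc-provisioning/scripts/pre-commit-revision.sh`) is not part of this diff. For illustration only, a minimal Python sketch of what such a hook has to do, assuming `REVISION` holds a bare integer; this is not the repo's actual script:

```python
#!/usr/bin/env python3
# Hypothetical pre-commit hook sketch: bump REVISION, rewrite "Revision: N" lines.
import re
import subprocess
from pathlib import Path

rev = int(Path("REVISION").read_text().strip()) + 1
Path("REVISION").write_text(f"{rev}\n")

# Rewrite "# Revision: N" / "<!-- Revision: N -->" lines in tracked files
tracked = subprocess.run(["git", "ls-files"], capture_output=True, text=True).stdout.split()
for name in tracked:
    path = Path(name)
    try:
        text = path.read_text()
    except (UnicodeDecodeError, OSError):
        continue  # skip binary or unreadable files
    new = re.sub(r"(Revision:) \d+", rf"\1 {rev}", text)
    if new != text:
        path.write_text(new)
        subprocess.run(["git", "add", name])  # restage so the bump lands in this commit

subprocess.run(["git", "add", "REVISION"])
```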
TODO.MD (new file, 12 lines)
@@ -0,0 +1,12 @@

- [x] change icon on taskbar (PiXtrix icon theme + icon cache rebuild).
- [x] fix dark theme (Adwaita-dark, gtk-3.0/settings.ini, gsettings at login).
- [x] check for duplicate commands in all scripts and cloud-init during deployment.
- [x] fix rotation race (kanshi config pre-created in step 11, restart in login/oneshot scripts).
- [x] fix five-tap overlay for Wayland (layer-shell, gir1.2-gtklayershell-0.1).
- [x] add VNC (wayvnc) to provisioning (step 06).
- [x] add touch-friendly Chromium flags (start-chromium.sh).
- [x] add no-select extension to prevent text selection in kiosk (chromium-kiosk-no-select/).
- [x] fix curl timeout in report_status (first-boot.sh).
- [ ] test text selection fix on different websites.
- [ ] verify five-tap overlay works on device after full provision.
archive/README.md (new file, 10 lines)
@@ -0,0 +1,10 @@
# Archive

This folder holds files that are no longer part of the active reTerminal DM4 / eMMC provisioning workflow. Kept for reference only.

| Subfolder | Contents |
|-----------|----------|
| **chromium-setup-legacy/** | Old Chromium-setup guides and scripts: KDE installation, LED/buzzer control, audio config, touchscreen options, Flask apps, test scripts, revert-to-lxde. Kiosk assets (start-chromium.sh, chromium-kiosk.desktop) live in `emmc-provisioning/cloud-init/` and `emmc-provisioning/cloud-init/config-files/`. |
| **cloud-init-duplicates/** | Duplicate or superseded cloud-init files (e.g. plymouth-custom.script, a duplicate of `files-from-guard/plymouth-custom/custom.script`). |

Do not rely on archived files for deployment; use the main tree under **emmc-provisioning/**.
archive/cloud-init-duplicates/plymouth-custom.script (new file, 40 lines)
@@ -0,0 +1,40 @@
screen_width = Window.GetWidth();
screen_height = Window.GetHeight();

theme_image = Image("splash.png");
image_width = theme_image.GetWidth();
image_height = theme_image.GetHeight();

scale_x = image_width / screen_width;
scale_y = image_height / screen_height;

if (scale_x > 1 || scale_y > 1)
{
    if (scale_x > scale_y)
    {
        resized_image = theme_image.Scale(screen_width, image_height / scale_x);
        image_x = 0;
        image_y = (screen_height - ((image_height * screen_width) / image_width)) / 2;
    }
    else
    {
        resized_image = theme_image.Scale(image_width / scale_y, screen_height);
        image_x = (screen_width - ((image_width * screen_height) / image_height)) / 2;
        image_y = 0;
    }
}
else
{
    resized_image = theme_image.Scale(image_width, image_height);
    image_x = (screen_width - image_width) / 2;
    image_y = (screen_height - image_height) / 2;
}

if (Plymouth.GetMode() != "shutdown")
{
    sprite = Sprite(resized_image);
    sprite.SetPosition(image_x, image_y, -100);
}

fun message_callback(text) {
}
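Editor's note: the branches above implement aspect-preserving letterboxing. The same math is easier to check in plain Python; a small sketch of the identical computation (hypothetical helper, not part of the repo):

```python
def letterbox(screen_w, screen_h, img_w, img_h):
    """Scale an image down to fit the screen, preserving aspect ratio, and
    return (new_w, new_h, x, y) with the image centered on the slack axis.
    Mirrors the plymouth-custom.script logic above."""
    scale_x = img_w / screen_w
    scale_y = img_h / screen_h
    if scale_x > 1 or scale_y > 1:       # image larger than screen
        if scale_x > scale_y:            # width is the binding constraint
            new_w, new_h = screen_w, img_h / scale_x
            x, y = 0, (screen_h - new_h) / 2
        else:                            # height is the binding constraint
            new_w, new_h = img_w / scale_y, screen_h
            x, y = (screen_w - new_w) / 2, 0
    else:                                # image already fits: just center it
        new_w, new_h = img_w, img_h
        x, y = (screen_w - new_w) / 2, (screen_h - new_h) / 2
    return new_w, new_h, x, y

# e.g. a 1920x1080 splash on a 1280x800 screen -> 1280x720, centered at y=40
print(letterbox(1280, 800, 1920, 1080))
```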
backup-from-device/gnss-guard/gnss-guard.service (new file, 19 lines)
@@ -0,0 +1,19 @@
[Unit]
Description=TM GNSS Guard - GPS Spoofing and Jamming Monitor
After=network.target

[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/tm-gnss-guard
ExecStart=/home/pi/tm-gnss-guard/.venv/bin/python /home/pi/tm-gnss-guard/main.py
Restart=always
RestartSec=10
StandardOutput=append:/home/pi/tm-gnss-guard/gnss_guard.log
StandardError=append:/home/pi/tm-gnss-guard/gnss_guard.log

# Environment
Environment=PYTHONUNBUFFERED=1

[Install]
WantedBy=multi-user.target
@@ -39,8 +39,6 @@ for i in {1..10}; do
    sleep 0.5
done

# Keep script running
wait
backup-from-device/gnss-guard/test_buzzer.py (new executable file, 116 lines)
@@ -0,0 +1,116 @@
#!/usr/bin/env python3
"""
Buzzer Test Script for reTerminal DM4
Tests various buzzer patterns and functions
"""

import subprocess
import time
import sys

BUZZER_PATH = '/sys/class/leds/usr-buzzer/brightness'

def buzzer_on():
    """Turn buzzer ON"""
    subprocess.run(['sudo', 'tee', BUZZER_PATH],
                   input='1', text=True,
                   stdout=subprocess.DEVNULL,
                   stderr=subprocess.DEVNULL)

def buzzer_off():
    """Turn buzzer OFF"""
    subprocess.run(['sudo', 'tee', BUZZER_PATH],
                   input='0', text=True,
                   stdout=subprocess.DEVNULL,
                   stderr=subprocess.DEVNULL)

def beep(duration=0.2):
    """Play a single beep"""
    buzzer_on()
    time.sleep(duration)
    buzzer_off()

def blink(count=3, on_time=0.1, off_time=0.1):
    """Blink buzzer multiple times"""
    for _ in range(count):
        buzzer_on()
        time.sleep(on_time)
        buzzer_off()
        time.sleep(off_time)

def get_status():
    """Get current buzzer status"""
    try:
        result = subprocess.run(['cat', BUZZER_PATH],
                                capture_output=True, text=True, check=True)
        return 'ON' if result.stdout.strip() in ['1', '255'] else 'OFF'
    except Exception:
        return 'UNKNOWN'

def main():
    print("=" * 50)
    print(" reTerminal DM4 Buzzer Test Script (Python)")
    print("=" * 50)
    print()

    # Test 1: Single beep
    print("Test 1: Single beep (0.2s)")
    beep(0.2)
    time.sleep(0.5)

    # Test 2: Double beep
    print("Test 2: Double beep")
    blink(2, 0.1, 0.1)
    time.sleep(0.5)

    # Test 3: Triple beep
    print("Test 3: Triple beep")
    blink(3, 0.1, 0.1)
    time.sleep(0.5)

    # Test 4: Long beep
    print("Test 4: Long beep (0.5s)")
    beep(0.5)
    time.sleep(0.5)

    # Test 5: Rapid beeps
    print("Test 5: Rapid beeps (5x)")
    blink(5, 0.05, 0.05)
    time.sleep(0.5)

    # Test 6: Slow beeps
    print("Test 6: Slow beeps (3x)")
    blink(3, 0.3, 0.3)
    time.sleep(0.5)

    # Test 7: Success pattern
    print("Test 7: Success pattern (2 short)")
    blink(2, 0.1, 0.1)
    time.sleep(0.5)

    # Test 8: Error pattern
    print("Test 8: Error pattern (3 fast)")
    blink(3, 0.05, 0.05)
    time.sleep(0.5)

    # Ensure buzzer is off
    buzzer_off()

    print()
    print("=" * 50)
    print(" Buzzer test complete!")
    print("=" * 50)
    print()
    print(f"Current buzzer status: {get_status()}")

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print("\n\nTest interrupted by user")
        buzzer_off()
        sys.exit(0)
    except Exception as e:
        print(f"\n\nError: {e}")
        buzzer_off()
        sys.exit(1)
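Editor's note: test_buzzer.py spawns a `sudo tee` subprocess for every toggle. When the caller already has write access to the sysfs node (e.g. running as root under systemd, or with a udev rule granting group access), a direct write avoids the per-toggle fork; a minimal sketch under that assumption:

```python
from pathlib import Path

BUZZER_PATH = Path('/sys/class/leds/usr-buzzer/brightness')

def buzzer_set(on: bool) -> None:
    # Direct sysfs write; requires write permission on the node.
    BUZZER_PATH.write_text('1' if on else '0')
```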
backup-from-device/gnss-guard/test_buzzer.sh (new executable file, 82 lines)
@@ -0,0 +1,82 @@
#!/bin/bash
# Buzzer Test Script for reTerminal DM4
# Tests various buzzer patterns and functions

BUZZER_PATH='/sys/class/leds/usr-buzzer/brightness'

echo "=========================================="
echo " reTerminal DM4 Buzzer Test Script"
echo "=========================================="
echo ""

# Function to play a beep
beep() {
    local duration=${1:-0.2}
    echo 1 | sudo tee $BUZZER_PATH > /dev/null 2>&1
    sleep $duration
    echo 0 | sudo tee $BUZZER_PATH > /dev/null 2>&1
}

# Function to blink buzzer
blink() {
    local count=${1:-3}
    local on_time=${2:-0.1}
    local off_time=${3:-0.1}

    for i in $(seq 1 $count); do
        echo 1 | sudo tee $BUZZER_PATH > /dev/null 2>&1
        sleep $on_time
        echo 0 | sudo tee $BUZZER_PATH > /dev/null 2>&1
        sleep $off_time
    done
}

# Test 1: Single beep
echo "Test 1: Single beep (0.2s)"
beep 0.2
sleep 0.5

# Test 2: Double beep
echo "Test 2: Double beep"
blink 2 0.1 0.1
sleep 0.5

# Test 3: Triple beep
echo "Test 3: Triple beep"
blink 3 0.1 0.1
sleep 0.5

# Test 4: Long beep
echo "Test 4: Long beep (0.5s)"
beep 0.5
sleep 0.5

# Test 5: Rapid beeps
echo "Test 5: Rapid beeps (5x)"
blink 5 0.05 0.05
sleep 0.5

# Test 6: Slow beeps
echo "Test 6: Slow beeps (3x)"
blink 3 0.3 0.3
sleep 0.5

# Test 7: Success pattern (2 short)
echo "Test 7: Success pattern"
blink 2 0.1 0.1
sleep 0.5

# Test 8: Error pattern (3 fast)
echo "Test 8: Error pattern"
blink 3 0.05 0.05
sleep 0.5

# Ensure buzzer is off
echo 0 | sudo tee $BUZZER_PATH > /dev/null 2>&1

echo ""
echo "=========================================="
echo " Buzzer test complete!"
echo "=========================================="
echo ""
echo "Current buzzer status: $(cat $BUZZER_PATH) (0=OFF, 1=ON)"
@@ -0,0 +1,10 @@
---
alwaysApply: true
---

## Jira & Confluence
- When creating a Jira ticket, don't go deep into technical implementation or reference code files; leave developers some agility. Even if the change was already implemented, write the task as requirements that need to be done, not as work already completed (past tense).
- Basic requests for Atlassian MCP resources sometimes fail with code 401; in that case, retry a few times before giving up, to allow tokens to refresh.

## Documentation Files
Do not create a documentation file unless the user explicitly requested it. Only update existing documentation files where necessary, when a major change was introduced or the documentation is insufficient without an amendment.
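Editor's note: the 401 advice above amounts to a small retry loop. A minimal sketch, assuming a caller-supplied `fetch_resource` callable and using `PermissionError` as a stand-in for whatever error type the MCP client actually raises on HTTP 401:

```python
import time

def fetch_with_retry(fetch_resource, retries=3, delay_s=2.0):
    """Retry a callable a few times on HTTP 401, giving tokens time to refresh."""
    for attempt in range(1, retries + 1):
        try:
            return fetch_resource()
        except PermissionError:  # stand-in for the client's HTTP 401 error type
            if attempt == retries:
                raise
            time.sleep(delay_s)  # wait for token refresh before retrying
```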
backup-from-device/gnss-guard/tm-gnss-guard/.env.prod (new file, 121 lines)
@@ -0,0 +1,121 @@
# ============================================================================
# GNSS Guard Configuration
# ============================================================================

# ============================================================================
# ASSET NAME
# ============================================================================
ASSET_NAME=OFFICE_LAB

# ============================================================================
# DEPLOYMENT TARGET (used by deploy_client.sh)
# ============================================================================
DEPLOY_USER=pi
# DEPLOY_HOST=10.130.60.253
DEPLOY_HOST=10.15.80.161
DEPLOY_PORT=22
DEPLOY_PASSWORD=sh1pb0x1
DEPLOY_INJECTED_POSITIONS=.configs/injected_positions_office_lab.json

# ============================================================================
# Timing configuration
# ============================================================================
ITERATION_PERIOD_SECONDS=30
STALE_THRESHOLD_SECONDS=60
VALIDATION_THRESHOLD_METERS=200
STARTUP_WARMUP_SECONDS=5

# ============================================================================
# TM AIS GPS Configuration
# ============================================================================
TM_AIS_ENABLED=true
TM_AIS_URL=https://localhost:8443/location
TM_AIS_TOKEN=xuNg8eewohcieru1Noto
TM_AIS_MAX_RETRIES=1

# ============================================================================
# Starlink Terminal Configuration
# ============================================================================
STARLINK_ENABLED=true
STARLINK_IP=10.130.60.70
STARLINK_PORT=9200
STARLINK_MAX_RETRIES=1

# ============================================================================
# NMEA Primary Vessel GPS Configuration
# ============================================================================
NMEA_PRIMARY_ENABLED=true
NMEA_PRIMARY_IP=10.130.60.61
NMEA_PRIMARY_PORT=4001

# ============================================================================
# NMEA Secondary Vessel GPS Configuration
# ============================================================================
NMEA_SECONDARY_ENABLED=true
NMEA_SECONDARY_IP=10.130.60.61
NMEA_SECONDARY_PORT=4002

# ============================================================================
# Storage Configuration
# ============================================================================
DATABASE_PATH=data/gnss_guard.db
LOGS_BASE_PATH=logs

# ============================================================================
# Web Server Configuration
# ============================================================================
# Enable/disable web dashboard
WEB_ENABLED=true

# Web server host (0.0.0.0 = all interfaces, 127.0.0.1 = localhost only)
WEB_HOST=0.0.0.0

# Web server port
WEB_PORT=8080

# Show 24h route on map (requires local historical data)
WEB_SHOW_ROUTE=true

# Demo mode: for demo units with pre-loaded historical data
# When enabled:
# - Data collection continues normally (real or injected)
# - Dashboard shows live status (current validation is stored)
# - Recent "live" records are auto-deleted to preserve historical data
# - NO server sync (validation not sent to cloud)
# - Route shows last 24h of historical data (excludes live session)
DEMO_UNIT=true

# Access the dashboard at:
# - http://localhost:8080
# - http://<server-ip>:8080
# - http://guard.lan:8080 (if guard.lan is configured in DNS/hosts)

# ============================================================================
# Data Retention Configuration
# ============================================================================
POSITIONS_RAW_RETENTION_DAYS=5
POSITIONS_VALIDATION_RETENTION_DAYS=5
LOG_RETENTION_DAYS=14

# ============================================================================
# Server Sync
# ============================================================================
SERVER_ENABLED=true
SERVER_URL=https://gnss.tototheo.com
SERVER_TOKEN=a25dee6101b944495a98f2a2c529b926ea01f36807ccb06b18240c7134ea467e
SERVER_SYNC_BATCH_SIZE=100
SERVER_SYNC_MAX_QUEUE=1000

# ssh -p 22 pi@10.130.60.253
# ssh -p 22 -L 8080:localhost:8080 pi@10.130.60.253

# Download gnss_guard.db
# scp pi@10.130.60.253:~/tm-gnss-guard/data/gnss_guard.db ./data/gnss_guard.db

# Upload gnss_guard.db
# scp ./data/gnss_guard.db pi@10.130.60.253:~/tm-gnss-guard/data/gnss_guard.db
backup-from-device/gnss-guard/tm-gnss-guard/__init__.py (new file, 6 lines)
@@ -0,0 +1,6 @@
"""
GNSS Guard - Multi-source GPS coordinate validation system
"""

__version__ = "1.0.0"
backup-from-device/gnss-guard/tm-gnss-guard/config.py (new file, 151 lines)
@@ -0,0 +1,151 @@
#!/usr/bin/env python3
"""
Configuration management for GNSS Guard
Loads configuration from .env or .env.prod files
"""

import os
from pathlib import Path
from typing import Dict, Any
from dotenv import load_dotenv


class Config:
    """Configuration manager for GNSS Guard"""

    @staticmethod
    def _get_int_env(key: str, default: int) -> int:
        """Get integer environment variable, handling empty strings"""
        value = os.getenv(key, "")
        if not value or value.strip() == "":
            return default
        try:
            return int(value)
        except ValueError:
            return default

    def __init__(self):
        # Determine environment file to load
        # Priority: 1) ENV=prod -> .env.prod, 2) .env.prod exists -> .env.prod, 3) .env
        base_path = Path(__file__).parent

        if os.getenv("ENV") == "prod":
            env_file = ".env.prod"
        elif (base_path / ".env.prod").exists():
            env_file = ".env.prod"
        else:
            env_file = ".env"

        # Load environment variables
        env_path = base_path / env_file
        if env_path.exists():
            load_dotenv(env_path)
        else:
            # Try loading from current directory as fallback
            load_dotenv()

        # Asset configuration
        self.asset_name = os.getenv("ASSET_NAME", "unknown")

        # Timing configuration
        self.iteration_period_seconds = self._get_int_env("ITERATION_PERIOD_SECONDS", 10)
        self.stale_threshold_seconds = self._get_int_env("STALE_THRESHOLD_SECONDS", 60)
        self.validation_threshold_meters = float(os.getenv("VALIDATION_THRESHOLD_METERS", "200"))
        self.startup_warmup_seconds = self._get_int_env("STARTUP_WARMUP_SECONDS", 5)

        # Data retention configuration
        self.positions_raw_retention_days = self._get_int_env("POSITIONS_RAW_RETENTION_DAYS", 14)
        self.positions_validation_retention_days = self._get_int_env("POSITIONS_VALIDATION_RETENTION_DAYS", 31)
        self.log_retention_days = self._get_int_env("LOG_RETENTION_DAYS", 14)

        # TM AIS GPS configuration
        self.tm_ais_url = os.getenv("TM_AIS_URL", "https://localhost:8443/location")
        # Trim whitespace from token (common issue with .env files)
        self.tm_ais_token = os.getenv("TM_AIS_TOKEN", "").strip()
        self.tm_ais_max_retries = self._get_int_env("TM_AIS_MAX_RETRIES", 3)

        # Starlink configuration
        self.starlink_ip = os.getenv("STARLINK_IP", "10.130.60.70")
        self.starlink_port = self._get_int_env("STARLINK_PORT", 9200)
        self.starlink_max_retries = self._get_int_env("STARLINK_MAX_RETRIES", 3)

        # NMEA Primary GPS configuration
        self.nmea_primary_ip = os.getenv("NMEA_PRIMARY_IP", "")
        self.nmea_primary_port = self._get_int_env("NMEA_PRIMARY_PORT", 0)

        # NMEA Secondary GPS configuration
        self.nmea_secondary_ip = os.getenv("NMEA_SECONDARY_IP", "")
        self.nmea_secondary_port = self._get_int_env("NMEA_SECONDARY_PORT", 0)

        # Database configuration
        self.database_path = Path(os.getenv("DATABASE_PATH", "data/gnss_guard.db"))

        # Logs configuration
        self.logs_base_path = Path(os.getenv("LOGS_BASE_PATH", "logs"))

        # Web server configuration
        self.web_enabled = os.getenv("WEB_ENABLED", "true").lower() in ("true", "1", "yes")
        self.web_host = os.getenv("WEB_HOST", "0.0.0.0")
        self.web_port = self._get_int_env("WEB_PORT", 8080)
        self.web_show_route = os.getenv("WEB_SHOW_ROUTE", "false").lower() in ("true", "1", "yes")

        # Demo mode - when enabled, route shows last 24h of available data instead of current time
        self.demo_unit = os.getenv("DEMO_UNIT", "false").lower() in ("true", "1", "yes")

        # Source enablement flags
        self.tm_ais_enabled = os.getenv("TM_AIS_ENABLED", "true").lower() in ("true", "1", "yes")
        self.starlink_enabled = os.getenv("STARLINK_ENABLED", "true").lower() in ("true", "1", "yes")
        self.nmea_primary_enabled = os.getenv("NMEA_PRIMARY_ENABLED", "false").lower() in ("true", "1", "yes")
        self.nmea_secondary_enabled = os.getenv("NMEA_SECONDARY_ENABLED", "false").lower() in ("true", "1", "yes")

        # NMEA verbose logging (log all NMEA sentences, not just GGA)
        self.nmea_verbose_logging = os.getenv("NMEA_VERBOSE_LOGGING", "false").lower() in ("true", "1", "yes")

        # Server sync configuration
        self.server_enabled = os.getenv("SERVER_ENABLED", "false").lower() in ("true", "1", "yes")
        self.server_url = os.getenv("SERVER_URL", "").strip()
        self.server_token = os.getenv("SERVER_TOKEN", "").strip()
        self.server_sync_batch_size = self._get_int_env("SERVER_SYNC_BATCH_SIZE", 100)
        self.server_sync_max_queue = self._get_int_env("SERVER_SYNC_MAX_QUEUE", 1000)

    def get_enabled_sources(self) -> list:
        """Get list of enabled source names"""
        sources = []
        if self.tm_ais_enabled:
            sources.append("tm_ais")
        if self.starlink_enabled:
            sources.extend(["starlink_location", "starlink_gps"])
        if self.nmea_primary_enabled:
            sources.append("nmea_primary")
        if self.nmea_secondary_enabled:
            sources.append("nmea_secondary")
        return sources

    def to_dict(self) -> Dict[str, Any]:
        """Convert configuration to dictionary"""
        return {
            "asset_name": self.asset_name,
            "iteration_period_seconds": self.iteration_period_seconds,
            "stale_threshold_seconds": self.stale_threshold_seconds,
            "validation_threshold_meters": self.validation_threshold_meters,
            "startup_warmup_seconds": self.startup_warmup_seconds,
            "positions_raw_retention_days": self.positions_raw_retention_days,
            "positions_validation_retention_days": self.positions_validation_retention_days,
            "log_retention_days": self.log_retention_days,
            "tm_ais_url": self.tm_ais_url,
            "tm_ais_enabled": self.tm_ais_enabled,
            "tm_ais_max_retries": self.tm_ais_max_retries,
            "starlink_ip": self.starlink_ip,
            "starlink_port": self.starlink_port,
            "starlink_enabled": self.starlink_enabled,
            "starlink_max_retries": self.starlink_max_retries,
            "nmea_primary_enabled": self.nmea_primary_enabled,
            "nmea_secondary_enabled": self.nmea_secondary_enabled,
            "database_path": str(self.database_path),
            "logs_base_path": str(self.logs_base_path),
            "web_enabled": self.web_enabled,
            "web_host": self.web_host,
            "web_port": self.web_port,
            "web_show_route": self.web_show_route,
        }
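Editor's note: config.py repeats the `os.getenv(...).lower() in ("true", "1", "yes")` idiom for every boolean flag. A refactor sketch mirroring the existing `_get_int_env` helper (hypothetical, not part of the diff):

```python
import os

def get_bool_env(key: str, default: bool) -> bool:
    """Boolean env lookup; accepts true/1/yes (case-insensitive), empty means default."""
    value = os.getenv(key)
    if value is None or value.strip() == "":
        return default
    return value.strip().lower() in ("true", "1", "yes")

# usage sketch: web_enabled = get_bool_env("WEB_ENABLED", True)
```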
backup-from-device/gnss-guard/tm-gnss-guard/deploy_server.sh (new executable file, 1070 lines)
File diff suppressed because it is too large
@@ -0,0 +1,46 @@
{
  "_comment": "Injected positions file for GNSS Guard - Office Lab",
  "_instructions": [
    "1. Set position values for sources you want to inject (only those sources will use injected data)",
    "2. Sources NOT in this file will be fetched from real sources normally",
    "3. Set a source to 'null' to simulate its absence (skip fetching for that source)",
    "4. Prefix a source key with '//' to comment it out (same as not including it)"
  ],
  "_fields": {
    "latitude": "REQUIRED - Latitude in decimal degrees (used for distance validation)",
    "longitude": "REQUIRED - Longitude in decimal degrees (used for distance validation)",
    "timestamp_unix": "OPTIONAL - Unix timestamp in seconds (defaults to current time if absent)",
    "altitude": "OPTIONAL - Altitude in meters (stored but NOT used for validation)",
    "position_uncertainty_m": "OPTIONAL - Position uncertainty in meters (stored but NOT used for validation, Starlink only)"
  },
  "nmea_primary": {
    "latitude": 36.11063,
    "longitude": 22.972875,
    "//timestamp_unix": 1768308542.0,
    "altitude": 14.0
  },
  "nmea_secondary": {
    "latitude": 36.11085833333333,
    "longitude": 22.572023333333334,
    "//timestamp_unix": 1732461600.0,
    "altitude": 13.2
  },
  "tm_ais": {
    "latitude": 36.110657,
    "longitude": 22.572672,
    "//timestamp_unix": 1732461600.0
  },
  "starlink_gps": {
    "latitude": 36.11055287599966,
    "longitude": 22.57289200819445,
    "//timestamp_unix": 1732461600.0,
    "altitude": 54.29000515150101
  },
  "starlink_location": {
    "latitude": 36.11055187009735,
    "longitude": 22.57289484169309,
    "//timestamp_unix": 1732461600.0,
    "altitude": 54.29000515150101,
    "position_uncertainty_m": 2.5
  }
}
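Editor's note: the `_instructions` block defines a small convention: keys starting with `_` are metadata, a `//` prefix comments a source out, and `null` simulates a source's absence. A minimal offline lint sketch for such a file, assuming it is saved locally as `injected_positions.json`:

```python
import json

REQUIRED = ("latitude", "longitude")

with open("injected_positions.json") as f:
    data = json.load(f)

for source, position in data.items():
    if source.startswith(("_", "//")):  # metadata or commented-out source
        continue
    if position is None:                # simulated absence: nothing to check
        continue
    missing = [k for k in REQUIRED if k not in position]
    if missing:
        print(f"{source}: missing required field(s): {', '.join(missing)}")
```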
backup-from-device/gnss-guard/tm-gnss-guard/main.py (new file, 678 lines)
@@ -0,0 +1,678 @@
#!/usr/bin/env python3
"""
GNSS Guard - Main orchestrator
Coordinates data collection from multiple GPS sources and validation
"""

import asyncio
import json
import logging
import os
import signal
import sys
import threading
import time
from datetime import datetime, timezone
from typing import Dict, Any, Optional

from config import Config
from sources.tm_ais_gps import TMAISGPSFetcher
from sources.starlink_gps import StarlinkGPSFetcher
from sources.nmea_gps import NMEAGPSCollector
from storage.database import Database
from storage.logger import StructuredLogger
from storage.cleanup import CleanupManager
from validation.coordinate_validator import CoordinateValidator
from web.server import WebServer
from services.server_sync import ServerSync
from services.buzzer import get_buzzer_service

logger = logging.getLogger("gnss_guard.main")


class GNSSGuard:
    """Main orchestrator for GNSS Guard system"""

    def __init__(self, config: Config):
        """Initialize GNSS Guard"""
        self.config = config
        self.running = False

        # Initialize components
        self.database = Database(config.database_path)
        self.structured_logger = StructuredLogger(
            config.logs_base_path,
            config.log_retention_days
        )

        # Path to injected positions file (in same directory as main.py)
        script_dir = os.path.dirname(os.path.abspath(__file__))
        self.injected_positions_path = os.path.join(script_dir, "injected_positions.json")

        # Initialize data sources
        self.tm_ais_fetcher = TMAISGPSFetcher(config) if config.tm_ais_enabled else None
        self.starlink_fetcher = StarlinkGPSFetcher(config) if config.starlink_enabled else None

        # Initialize NMEA collectors
        self.nmea_primary_collector = None
        if config.nmea_primary_enabled and config.nmea_primary_ip and config.nmea_primary_port > 0:
            self.nmea_primary_collector = NMEAGPSCollector(
                config,
                "nmea_primary",
                config.nmea_primary_ip,
                config.nmea_primary_port,
                structured_logger=self.structured_logger
            )

        self.nmea_secondary_collector = None
        if config.nmea_secondary_enabled and config.nmea_secondary_ip and config.nmea_secondary_port > 0:
            self.nmea_secondary_collector = NMEAGPSCollector(
                config,
                "nmea_secondary",
                config.nmea_secondary_ip,
                config.nmea_secondary_port,
                structured_logger=self.structured_logger
            )

        # Initialize validator
        expected_sources = config.get_enabled_sources()
        self.validator = CoordinateValidator(
            config.validation_threshold_meters,
            config.stale_threshold_seconds,
            expected_sources
        )

        # Initialize buzzer service for hardware alarm (must be before web server)
        # Buzzer sounds with 1 second on / 1 second off pattern during GNSS alerts
        self.buzzer_service = get_buzzer_service(on_duration=1.0, off_duration=1.0)

        # Track previous alert level to detect status changes
        # Alert levels: "healthy", "degraded", "at_risk"
        self._previous_alert_level = "healthy"

        # Initialize web server (if enabled)
        self.web_server = None
        self.web_thread = None
        if config.web_enabled:
            try:
                self.web_server = WebServer(config, self.database, self.buzzer_service)
                logger.info("Web server initialized")
            except Exception as e:
                logger.warning(f"Failed to initialize web server: {e}")
                self.web_server = None

        # Initialize cleanup manager
        # In demo mode, skip database cleanup since data isn't growing
        # (demo mode creates and deletes records, maintaining a fixed dataset)
        self.cleanup_manager = CleanupManager(
            database_path=config.database_path,
            logs_base_path=config.logs_base_path,
            positions_raw_retention_days=config.positions_raw_retention_days,
            positions_validation_retention_days=config.positions_validation_retention_days,
            logs_retention_days=config.log_retention_days,
            demo_mode=config.demo_unit
        )
        if config.demo_unit:
            logger.info(
                f"Cleanup manager initialized in DEMO mode (logs only: {config.log_retention_days}d)"
            )
        else:
            logger.info(
                f"Cleanup manager initialized (raw: {config.positions_raw_retention_days}d, "
                f"validation: {config.positions_validation_retention_days}d, logs: {config.log_retention_days}d)"
            )

        # Initialize server sync (if enabled)
        self.server_sync = None
        if config.server_enabled and config.server_url and config.server_token:
            try:
                self.server_sync = ServerSync(
                    database_path=config.database_path,
                    server_url=config.server_url,
                    server_token=config.server_token,
                    asset_name=config.asset_name,
                    batch_size=config.server_sync_batch_size,
                    max_queue_size=config.server_sync_max_queue
                )
                logger.info(f"Server sync enabled -> {config.server_url}")
            except Exception as e:
                logger.warning(f"Failed to initialize server sync: {e}")
                self.server_sync = None

        # Setup signal handlers
        signal.signal(signal.SIGINT, self._signal_handler)
        signal.signal(signal.SIGTERM, self._signal_handler)

    def _signal_handler(self, signum, frame):
        """Handle shutdown signals"""
        logger.info(f"Received signal {signum}, shutting down gracefully...")
        self.running = False

    def _load_injected_positions(self) -> Optional[Dict[str, Dict[str, Any]]]:
        """
        Load injected positions from JSON file if it exists

        Returns:
            Dictionary mapping source names to position dictionaries, or None if file doesn't exist
        """
        if not os.path.exists(self.injected_positions_path):
            return None

        try:
            with open(self.injected_positions_path, 'r') as f:
                data = json.load(f)

            # Validate and normalize positions
            injected = {}
            for source, position in data.items():
                # Skip metadata fields (those starting with underscore)
                if source.startswith("_"):
                    continue

                # Skip commented-out sources (those starting with //)
                if source.startswith("//"):
                    continue

                if position is None:
                    # Null value means this source should be absent
                    # Store it as None so we know to skip fetching for this source
                    injected[source] = None
                    continue

                # Ensure required fields are present
                if not isinstance(position, dict):
                    logger.warning(f"Invalid position format for {source} in injected_positions.json")
                    continue

                # Ensure source field matches the key
                position["source"] = source

                # Ensure timestamp_unix is set if timestamp is provided
                if "timestamp" in position and "timestamp_unix" not in position:
                    try:
                        ts = datetime.fromisoformat(position["timestamp"].replace("Z", "+00:00"))
                        if ts.tzinfo is None:
                            ts = ts.replace(tzinfo=timezone.utc)
                        position["timestamp_unix"] = ts.timestamp()
                    except Exception as e:
                        logger.warning(f"Failed to parse timestamp for {source}: {e}")
                        # Use current time as fallback
                        now = datetime.now(timezone.utc)
                        position["timestamp"] = now.isoformat()
                        position["timestamp_unix"] = now.timestamp()

                # Ensure timestamp is set if timestamp_unix is provided
                if "timestamp_unix" in position and "timestamp" not in position:
                    try:
                        ts = datetime.fromtimestamp(position["timestamp_unix"], tz=timezone.utc)
                        position["timestamp"] = ts.isoformat()
                    except Exception as e:
                        logger.warning(f"Failed to convert timestamp_unix for {source}: {e}")
                        position["timestamp"] = datetime.now(timezone.utc).isoformat()

                # Ensure both exist (use current time if neither provided)
                if "timestamp_unix" not in position:
                    now = datetime.now(timezone.utc)
                    position["timestamp"] = now.isoformat()
                    position["timestamp_unix"] = now.timestamp()

                injected[source] = position

            logger.info(f"Loaded {len(injected)} injected position(s) from {self.injected_positions_path}")
            return injected

        except json.JSONDecodeError as e:
            logger.error(f"Failed to parse injected_positions.json: {e}")
            return None
        except Exception as e:
            logger.error(f"Error loading injected positions: {e}")
            return None

    def _store_demo_validation(self, validation_result: Dict[str, Any]):
        """
        Store validation in DEMO_UNIT mode.
        Keeps only the latest validation record to show live status on dashboard,
        while preserving historical data for route display.

        Deletes any validation records from the last 5 minutes before inserting the new one.
        """
        import sqlite3 as sqlite3_module

        try:
            conn = sqlite3_module.connect(str(self.database.database_path), timeout=5.0)
            cursor = conn.cursor()

            # Delete recent "live" records (last 5 minutes) to prevent accumulation
            # This keeps historical data intact while allowing fresh dashboard display
            five_minutes_ago = time.time() - 300
            cursor.execute(
                "DELETE FROM positions_validation WHERE validation_timestamp_unix > ?",
                (five_minutes_ago,)
            )

            conn.commit()
            conn.close()

            # Now store the new validation record
            self.database.store_validation(validation_result)

        except Exception as e:
            logger.error(f"Error storing demo validation: {e}")

    def _handle_buzzer_alarm(self, is_valid: bool, missing_sources: list, stale_sources: list, distance_exceeded: bool):
        """
        Handle buzzer alarm based on validation status.

        Buzzer triggers when GNSS status is:
        - "at risk" (GPS jamming/spoofing detected - distance exceeds threshold)
        - "degraded" (sources missing or stale)
        - "no connection" (all sources missing)

        Buzzer stops when:
        - Status returns to healthy (validation passes)
        - User acknowledges the alarm via the dashboard button

        Buzzer restarts when:
        - Alert level changes (e.g., degraded -> at_risk or vice versa)

        Args:
            is_valid: Whether validation passed
            missing_sources: List of missing source names
            stale_sources: List of stale source names
            distance_exceeded: Whether coordinate distance exceeded threshold
        """
        try:
            # Determine current alert level
            # "at_risk" = GPS spoofing/jamming (distance exceeded)
            # "degraded" = sources missing or stale but no distance issue
            # "healthy" = validation passed
            if is_valid:
                current_alert_level = "healthy"
            elif distance_exceeded:
                current_alert_level = "at_risk"
            else:
                current_alert_level = "degraded"

            # Check if alert level changed
            alert_level_changed = current_alert_level != self._previous_alert_level

            if alert_level_changed:
                logger.info(f"Alert level changed: {self._previous_alert_level} -> {current_alert_level}")

                # Reset acknowledged state when alert level changes
                # This allows buzzer to restart even if previously acknowledged
                if self.buzzer_service.is_alarm_acknowledged():
                    logger.info("Resetting alarm acknowledged state (alert level changed)")
                    self.buzzer_service.reset_acknowledged()

                # Stop current alarm if running (will restart below if needed)
                if self.buzzer_service.is_alarm_active():
                    self.buzzer_service.stop_alarm()

            # Handle alarm based on current alert level
            if current_alert_level != "healthy":
                # Status is degraded or at risk
                # Start alarm if not already active and not acknowledged
                if not self.buzzer_service.is_alarm_active():
                    if not self.buzzer_service.is_alarm_acknowledged():
                        # Determine alarm reason for logging
                        if current_alert_level == "at_risk":
                            reason = "GPS jamming/spoofing detected (distance exceeded threshold)"
                        elif missing_sources:
                            reason = f"Sources missing: {', '.join(missing_sources)}"
                        elif stale_sources:
                            reason = f"Sources stale: {', '.join(stale_sources)}"
                        else:
                            reason = "Validation failed"

                        logger.warning(f"Starting buzzer alarm: {reason}")
                        self.structured_logger.warning("buzzer", f"Alarm started: {reason}")
                        self.buzzer_service.start_alarm()
                    else:
                        logger.debug("Alarm acknowledged, not restarting until alert level changes")
            else:
                # Status is healthy
                # Stop alarm if active
                if self.buzzer_service.is_alarm_active():
                    logger.info("Status returned to healthy, stopping buzzer alarm")
                    self.structured_logger.info("buzzer", "Alarm stopped (status healthy)")
                    self.buzzer_service.stop_alarm()

                # Reset acknowledged state when healthy
                if self.buzzer_service.is_alarm_acknowledged():
                    logger.debug("Resetting alarm acknowledged state (status healthy)")
                    self.buzzer_service.reset_acknowledged()

            # Track alert level for next iteration
            self._previous_alert_level = current_alert_level

        except Exception as e:
            logger.error(f"Error handling buzzer alarm: {e}")

    async def start(self):
        """Start GNSS Guard system"""
        logger.info("Starting GNSS Guard system")
        self.structured_logger.info("system", "GNSS Guard starting", {"config": self.config.to_dict()})

        # Start web server in separate thread
        if self.web_server:
            self.web_thread = threading.Thread(
                target=self.web_server.run,
                kwargs={
                    'host': self.config.web_host,
                    'port': self.config.web_port,
                    'debug': False
                },
                daemon=True
            )
            self.web_thread.start()
            logger.info(f"Web server started on {self.config.web_host}:{self.config.web_port}")

        # Log DEMO_UNIT mode if enabled
        if self.config.demo_unit:
            logger.info("DEMO_UNIT mode enabled - data collection active but database writes disabled")
            self.structured_logger.info("system", "DEMO_UNIT mode - no database writes")

        # Start NMEA collectors
        if self.nmea_primary_collector:
            await self.nmea_primary_collector.start()
            logger.info("Started NMEA primary collector")

        if self.nmea_secondary_collector:
            await self.nmea_secondary_collector.start()
            logger.info("Started NMEA secondary collector")

        # Startup warm-up period: wait for data sources to connect and receive initial data
        # This prevents false "missing" alerts on first validation after restart/deploy
        if self.config.startup_warmup_seconds > 0:
            logger.info(f"Waiting {self.config.startup_warmup_seconds}s for data sources to initialize...")
            self.structured_logger.info(
                "system",
                "Startup warm-up period",
                {"warmup_seconds": self.config.startup_warmup_seconds}
            )
            await asyncio.sleep(self.config.startup_warmup_seconds)
            logger.info("Warm-up complete, starting validation cycle")

        self.running = True

        # Main collection loop - ensure iterations start at regular intervals
        while self.running:
            iteration_start = time.time()

            try:
                await self._iteration()
            except Exception as e:
                logger.error(f"Error in main loop: {e}")
                self.structured_logger.error("system", f"Error in main loop: {e}")

            # Calculate how long the iteration took
            iteration_duration = time.time() - iteration_start

            # Sleep for the remaining time to maintain the iteration period
            sleep_time = self.config.iteration_period_seconds - iteration_duration

            if sleep_time > 0:
                logger.debug(f"Iteration took {iteration_duration:.2f}s, sleeping for {sleep_time:.2f}s")
                await asyncio.sleep(sleep_time)
            else:
                logger.warning(
                    f"Iteration took {iteration_duration:.2f}s, which exceeds the configured period "
                    f"of {self.config.iteration_period_seconds}s. Starting next iteration immediately."
                )
                # No sleep, start next iteration immediately

    async def _iteration(self):
        """Execute one iteration of data collection and validation"""
        # Run daily cleanup if needed (runs once per day)
        self.cleanup_manager.run_cleanup_if_needed()

        logger.info("Starting data collection iteration")
        positions = {}

        # Check for injected positions (per-source injection)
        injected_positions = self._load_injected_positions() or {}

        # Add injected positions (if any)
        if injected_positions:
            injected_sources = [s for s, p in injected_positions.items() if p is not None]
            if injected_sources:
                logger.info(f"Using injected positions for: {', '.join(injected_sources)}")

        # DEMO_UNIT mode: skip database writes
        skip_db_writes = self.config.demo_unit

        # Fetch from TM AIS GPS (skip if injected)
        if "tm_ais" not in injected_positions and self.tm_ais_fetcher:
            try:
                position = self.tm_ais_fetcher.fetch()
                if position:
                    positions[position["source"]] = position
                    if not skip_db_writes:
                        self.database.store_position(position)
                    self.structured_logger.info("tm_ais", "Fetched position", {"position": position})
            except Exception as e:
                logger.error(f"Error fetching TM AIS GPS: {e}")
                self.structured_logger.error("tm_ais", f"Fetch error: {e}")
        elif "tm_ais" in injected_positions:
            # Use injected position for tm_ais
            if injected_positions["tm_ais"] is not None:
                position = injected_positions["tm_ais"]
                positions[position["source"]] = position
                if not skip_db_writes:
                    self.database.store_position(position)
                self.structured_logger.info("tm_ais", "Injected position", {"position": position})

        # Fetch from Starlink GPS (always fetch, then override with injected if present)
        if self.starlink_fetcher:
            # Only fetch if at least one Starlink source is not injected
            if "starlink_location" not in injected_positions or "starlink_gps" not in injected_positions:
                logger.info("Fetching from Starlink GPS...")
                try:
                    starlink_positions = self.starlink_fetcher.fetch()
                    for position in starlink_positions:
                        # Only use fetched position if this source is not injected
                        if position["source"] not in injected_positions:
                            positions[position["source"]] = position
                            if not skip_db_writes:
                                self.database.store_position(position)
                            self.structured_logger.info(
                                position["source"],
                                "Fetched position",
                                {"position": position}
                            )
                except Exception as e:
                    logger.error(f"Error fetching Starlink GPS: {e}")
                    self.structured_logger.error("starlink", f"Fetch error: {e}")

        # Use injected positions for Starlink sources (if any)
        for starlink_source in ["starlink_location", "starlink_gps"]:
            if starlink_source in injected_positions and injected_positions[starlink_source] is not None:
                position = injected_positions[starlink_source]
                positions[position["source"]] = position
                if not skip_db_writes:
                    self.database.store_position(position)
                self.structured_logger.info(starlink_source, "Injected position", {"position": position})

        # Get latest positions from NMEA collectors (skip if injected)
        if "nmea_primary" not in injected_positions and self.nmea_primary_collector:
            try:
                position = await self.nmea_primary_collector.get_latest_position()
                if position:
                    positions[position["source"]] = position
                    if not skip_db_writes:
                        self.database.store_position(position)
                    self.structured_logger.info("nmea_primary", "Updated position", {"position": position})
            except Exception as e:
                logger.error(f"Error getting NMEA primary position: {e}")
                self.structured_logger.error("nmea_primary", f"Position error: {e}")
        elif "nmea_primary" in injected_positions:
            # Use injected position for nmea_primary
            if injected_positions["nmea_primary"] is not None:
                position = injected_positions["nmea_primary"]
                positions[position["source"]] = position
                if not skip_db_writes:
                    self.database.store_position(position)
                self.structured_logger.info("nmea_primary", "Injected position", {"position": position})

        if "nmea_secondary" not in injected_positions and self.nmea_secondary_collector:
            try:
                position = await self.nmea_secondary_collector.get_latest_position()
                if position:
                    positions[position["source"]] = position
                    if not skip_db_writes:
                        self.database.store_position(position)
                    self.structured_logger.info("nmea_secondary", "Updated position", {"position": position})
            except Exception as e:
                logger.error(f"Error getting NMEA secondary position: {e}")
                self.structured_logger.error("nmea_secondary", f"Position error: {e}")
        elif "nmea_secondary" in injected_positions:
            # Use injected position for nmea_secondary
            if injected_positions["nmea_secondary"] is not None:
                position = injected_positions["nmea_secondary"]
                positions[position["source"]] = position
                if not skip_db_writes:
                    self.database.store_position(position)
                self.structured_logger.info("nmea_secondary", "Injected position", {"position": position})

        # Run validation
        logger.info(f"Collected {len(positions)} positions, running validation")
        try:
            validation_result = self.validator.validate_positions(positions)

            if skip_db_writes:
                # DEMO_UNIT mode: store validation for live dashboard display
                # but delete recent "live" records to prevent accumulation
                # (keeps only last few minutes of live data, historical data untouched)
                self._store_demo_validation(validation_result)
            else:
                self.database.store_validation(validation_result)

                # Sync to server if enabled (only when not in DEMO_UNIT mode)
                if self.server_sync:
                    try:
                        if self.server_sync.sync_validation(validation_result):
                            logger.debug("Validation synced to server")
                        else:
                            logger.debug("Validation queued for later sync")
                    except Exception as e:
                        logger.warning(f"Server sync error: {e}")

            # Log validation result to terminal
            is_valid = validation_result["is_valid"]
            missing_sources = validation_result.get("sources_missing", [])
            stale_sources = validation_result.get("sources_stale", [])
            coordinate_differences = validation_result.get("coordinate_differences", {})
            validation_details = validation_result.get("validation_details", {})
            max_distance = validation_details.get("max_distance_meters", 0.0)

            if is_valid:
                logger.info("✓ Validation PASSED")
                if missing_sources:
                    logger.info(f"  Missing sources: {', '.join(missing_sources)}")
                if stale_sources:
                    logger.info(f"  Stale sources: {', '.join(stale_sources)}")
                if coordinate_differences:
                    logger.info(f"  Max distance difference: {max_distance:.2f}m")
                else:
                    logger.info("  All sources within threshold")
            else:
                logger.warning("✗ Validation FAILED")

                # Check if failure is due to distance (GPS jamming/spoofing alert)
                threshold = validation_details.get('threshold_meters', 0)
                if max_distance > threshold:
                    distance_km = max_distance / 1000.0
                    logger.warning("")
                    logger.warning("=" * 60)
                    logger.warning("🚨 GPS Jamming or Spoofing Alert! 🚨")
                    logger.warning(f"  Location Distance: {distance_km:.1f} km")
                    logger.warning("=" * 60)
                    logger.warning("")

                if missing_sources:
                    logger.warning(f"  Missing sources: {', '.join(missing_sources)}")
                if stale_sources:
                    logger.warning(f"  Stale sources: {', '.join(stale_sources)}")
                if coordinate_differences:
                    logger.warning(f"  Max distance difference: {max_distance:.2f}m (threshold: {threshold}m)")
                    # Log individual differences if there are any
                    for pair, diff_info in coordinate_differences.items():
                        logger.warning(f"    {pair}: {diff_info.get('distance_meters', 0):.2f}m")

            # Log to structured logger
            if is_valid:
                self.structured_logger.info(
                    "validation",
                    "Validation passed",
                    {"validation": validation_result}
                )
            else:
                self.structured_logger.warning(
                    "validation",
                    "Validation failed",
                    {"validation": validation_result}
                )

            # Handle buzzer alarm based on validation status
            # Alarm triggers when: degraded, at risk, or no connection (any validation failure)
            # Status changes:
            # - "at risk" (crit): has_alert AND distance exceeds threshold
            # - "degraded" (warn): validation failed but no distance alert
            # - "healthy": validation passed
            self._handle_buzzer_alarm(is_valid, missing_sources, stale_sources, max_distance > validation_details.get('threshold_meters', 0))

        except Exception as e:
            logger.error(f"Error during validation: {e}")
            self.structured_logger.error("validation", f"Validation error: {e}")

        logger.info("Iteration complete")

    async def stop(self):
        """Stop GNSS Guard system"""
        logger.info("Stopping GNSS Guard system")
        self.running = False

        # Stop buzzer service
        if self.buzzer_service:
            self.buzzer_service.shutdown()

        # Stop NMEA collectors
        if self.nmea_primary_collector:
            await self.nmea_primary_collector.stop()

        if self.nmea_secondary_collector:
            await self.nmea_secondary_collector.stop()

        # Log shutdown before closing logger
        self.structured_logger.info("system", "GNSS Guard stopped")

        # Close logger
        self.structured_logger.close()


async def main():
    """Main entry point"""
    # Setup logging
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    )

    # Load configuration
    config = Config()

    # Create and start GNSS Guard
    guard = GNSSGuard(config)

    try:
        await guard.start()
    except KeyboardInterrupt:
        logger.info("Received keyboard interrupt")
    finally:
        await guard.stop()


if __name__ == "__main__":
    asyncio.run(main())
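Editor's note: `CoordinateValidator` is imported above but its implementation is not part of this diff. The heart of cross-source validation is a great-circle distance check against `validation_threshold_meters`; a minimal haversine sketch of that idea (an illustration, not the repo's actual validator):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (decimal degrees)."""
    r = 6371000.0  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# e.g. compare two of the injected office-lab fixes against the 200 m threshold
d = haversine_m(36.110657, 22.572672, 36.11055287599966, 22.57289200819445)
print(f"{d:.1f} m; exceeds 200 m threshold: {d > 200}")
```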
backup-from-device/gnss-guard/tm-gnss-guard/requirements.txt (new file, 16 lines)
@@ -0,0 +1,16 @@
grpcio>=1.12.0
grpcio-tools>=1.20.0
protobuf>=3.6.0
yagrc>=1.1.1
typing-extensions>=4.3.0
requests>=2.25.0
python-dotenv>=0.19.0

# Web server dependencies
Flask>=2.3.0

# Visualization dependencies
pandas>=1.3.0
numpy>=1.21.0
folium>=0.12.0
@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAuFwehtR5QVRr/HAxmcrUvaMfj31HBhThtze/L7nwLLcpWwOo
VugvCkVD/GgOUBPagnUjlfZ+MTR35k70pOybw+TjDHtqMdu2RuM67Ns3u0sx2mIr
V5WZcc2zvsKyREd/uIVX8pe0VEvRpNoq420zdtY9J9Coy34grOLZlGsOELjnP+Hf
0jcsw1rMgfvoKWffuOJk4qqGVq0a7cta3JURsUS4YqSDqybobRP+fArWfxOBitqS
aNL78tMpnGr+wLykRkAbjulvZbibjr6N8/HjQKSYfxOlUNAci4K9QZaxGCdifgcz
MZwnhu96XDm1gIFXeAN5nNKHjRo1fI8R53wSHwIDAQABAoIBAHXqTYgVS/zR/0N1
ivP/vDQSqnP/P7cPEhM6r6jZ91jSSbwxybDUTon2JXbCIy1qlV7Nh1Y6UxoroeiH
ZYg64aHYurPYF+MN0TbjzWODDtFXVeqE0Y3yXDNiyu1e3+A2DuW5O7go+ajU2aDj
/Xx68ui2PGVD20JUSJfrfBimpFdipedFYw0obKEQ6L8c/AYWXSkCp9RXa+VAfJvB
epO5Fi0eciaB+rblH/r36gYRY+ebMU3upvBgZXtL52MYj8aHhUlR8P+iwoDyBm2l
eMJc5nH2M1iEfZ6I3PbPYL58oMwdxVw3Y/ZlxnidFQS9HRcBWYfOCnqZWPTxAf54
Rh0N1zECgYEA53q0qzEsUtEY04n3bl20D4emZM2c1Gojm5suOWT8RTqcsgZb2Yrl
bU5zy+EQjDUUXGbjUbgCYOHHg6JInI3R79rh6te+dg2w8aMTFG4NDeJ5p7WatpwT
ynqsVSj0B4Z3XwZhTpyoxnLr9vtsPKjA5UDEotBTxRfZHUHmfUnongcCgYEAy+Oe
pyf0vPOyHCWS0vSyySRnb7xtx6MvnfF5/kzRNmZME+NxoYo2Yn0ArMOLx1SAKZka
sCYcGVlonA8O6g4t9zW7b0mV/2LDax1zev1iq2rnVK+aU4y5RR06J2VwSZ5mRWCk
sExo4nWIJdiHi18ixtHDUSkxY4rnp01W0YWOZSkCgYA1M//IhSHR2xtgq4pCRKk5
FI2LB7MvI0IR5sXmDS7qXoFbbZi41HLM/8YfqxgZka2fW0qOIsPxLpOjzq3vxazl
+yIHzxSIn7b2ouuku3KmqVIa2OO5awAlfrKTVDlabW6MWbQN1HX6Prm7Z6hF/Odx
CcToQwet+kA9uELYsx8TCwKBgDuMdnjxtYw+TMXlv3U3nMQcis1apmGJas3hijTY
sL4HsK6aXkTE/k9TnQ/YaQnFx0ze96l85/YLY/84cq2viINMQTsmrdWSPesaBfFk
8h2IspnMU/GVB0OFXsfE27/UsKAQsuj+2B9UHniXPjdZiOmyuC4LLu6Y0kHN186I
CGfJAoGBAMqAMCMpfC8QZT5zQtzjOWV5iUvpsLwf5HikXw/U19uSW59jajGdiz7B
Y3Wt2jslrYS/BmMVDOfgQfXTFfNuZFR1a9fB93rY14zhQ33ChzBaQUp83qRmy6Ae
60aBUd+vBL/gV5sxdeOtCZSxZ+uPL4imk2L89efhPW7QiBXI6OQE
-----END RSA PRIVATE KEY-----
@@ -0,0 +1,49 @@
# Git
.git
.gitignore

# Python
__pycache__
*.py[cod]
*$py.class
*.so
.Python
*.egg-info
dist
build
.venv
venv

# Environment files (uploaded separately)
.env
.env.*
env.example

# Docker
Dockerfile
docker-compose*.yml
.dockerignore

# IDE
.vscode
.idea
*.swp
*.swo
*~

# OS
.DS_Store
Thumbs.db

# SSH keys
.cert/
*.pem

# Logs
*.log
logs/

# Data
data/
*.db
@@ -0,0 +1,34 @@
# Local server configuration (auto-generated)
# SQLite database for local testing

GNSS_SERVER_DATABASE_URL=sqlite:////Users/alexandershulman/projects2/tm-gnss-guard/server/data/server_local.db
GNSS_SERVER_WEB_USERNAME=test
GNSS_SERVER_WEB_PASSWORD=Tototheo.25!
GNSS_SERVER_SECRET_KEY=local-dev-secret-key-change-in-production
GNSS_SERVER_DEBUG=true
GNSS_SERVER_HOST=127.0.0.1
GNSS_SERVER_PORT=8000

# ============================================================================
# Telegram Bot Configuration (Optional)
# ============================================================================
# 1. Create bot: Open Telegram → Search @BotFather → /newbot
# 2. Get chat ID:
#    - Start chat with your bot, send any message
#    - Visit: https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates
#    - Find "chat":{"id":123456789} (positive for DM, negative for groups)
# 3. Fill in values below
#
# Each asset can override the chat_id to send to a different chat/group.
# ============================================================================

GNSS_SERVER_TELEGRAM_BOT_TOKEN=8319259186:AAGfg2tHPlnHduAPvsnODLPA1kaRDIsbx0A
GNSS_SERVER_TELEGRAM_CHAT_ID=-4863784324

# =============================================================================
# ASSET OFFLINE DETECTION
# =============================================================================

# Seconds without updates before an asset is considered offline (default: 120)
# Triggers Telegram notification when asset goes offline/online
GNSS_SERVER_ASSET_OFFLINE_SECONDS=120
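The comments above describe how to obtain a bot token and chat ID from @BotFather. A minimal sketch for verifying that a token/chat pair works, using the public Bot API `sendMessage` method via `requests`; the token and chat ID below are placeholders, not the values from this file:

```python
import requests

BOT_TOKEN = "123456:ABC-your-token-here"  # placeholder, from @BotFather
CHAT_ID = "-100123456789"                 # placeholder, negative for groups

def send_telegram_message(text: str) -> None:
    """Send a test message through the Telegram Bot API."""
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    resp = requests.post(url, json={"chat_id": CHAT_ID, "text": text}, timeout=10)
    resp.raise_for_status()
    print(resp.json()["ok"])  # True on success

send_telegram_message("GNSS Guard test notification")
```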
93  backup-from-device/gnss-guard/tm-gnss-guard/server/.env.prod  Normal file
@@ -0,0 +1,93 @@
# =============================================================================
# GNSS Guard Server Configuration
# =============================================================================

# =============================================================================
# SERVER SETTINGS
# =============================================================================

# Host to bind to (127.0.0.1 when behind Nginx proxy)
GNSS_SERVER_HOST=127.0.0.1

# Port to bind to
GNSS_SERVER_PORT=8000

# Enable debug mode (set to false in production)
GNSS_SERVER_DEBUG=false

# =============================================================================
# DATABASE (PostgreSQL RDS)
# =============================================================================

# Full database connection URL
# Format: postgresql://USER:PASSWORD@HOST:PORT/DATABASE
GNSS_SERVER_DATABASE_URL=postgresql://postgres:!ks-hUe8@gnss-guard.cn06uuuk8ttq.eu-west-1.rds.amazonaws.com:5432/gnss_guard

# =============================================================================
# SECURITY
# =============================================================================

# Secret key for session encryption (generate with: python -c "import secrets; print(secrets.token_urlsafe(32))")
GNSS_SERVER_SECRET_KEY=e0QnYxAvisgbOqzTIl-rlLyczsNOpP7hEc26ea22ikI

# Session expiration in minutes (default: 24 hours)
GNSS_SERVER_SESSION_EXPIRE_MINUTES=1440

# =============================================================================
# WEB UI AUTHENTICATION
# =============================================================================

# Username for web dashboard login
GNSS_SERVER_WEB_USERNAME=test

# Password for web dashboard login
GNSS_SERVER_WEB_PASSWORD=Tototheo.25!

# =============================================================================
# DOMAIN (for SSL/HTTPS)
# =============================================================================

# Server domain name (for Let's Encrypt SSL)
GNSS_SERVER_DOMAIN=gnss.tototheo.com

# =============================================================================
# VALIDATION
# =============================================================================

# Staleness threshold in seconds (data older than this is considered stale)
GNSS_SERVER_STALE_THRESHOLD_SECONDS=60

# =============================================================================
# DATA RETENTION
# =============================================================================

# Days to keep validation history (default: 90)
GNSS_SERVER_VALIDATION_HISTORY_DAYS=90

# Email for Let's Encrypt certificate notifications
LETSENCRYPT_EMAIL=alexander.s@tototheo.com

# ============================================================================
# Telegram Bot Configuration (Optional)
# ============================================================================
# 1. Create bot: Open Telegram → Search @BotFather → /newbot
# 2. Get chat ID:
#    - Start chat with your bot, send any message
#    - Visit: https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates
#    - Find "chat":{"id":123456789} (positive for DM, negative for groups)
# 3. Fill in values below
#
# Each asset can override the chat_id to send to a different chat/group.
# ============================================================================

GNSS_SERVER_TELEGRAM_BOT_TOKEN=8319259186:AAGfg2tHPlnHduAPvsnODLPA1kaRDIsbx0A
GNSS_SERVER_TELEGRAM_CHAT_ID=-4863784324

# =============================================================================
# ASSET OFFLINE DETECTION
# =============================================================================

# Seconds without updates before an asset is considered offline (default: 120)
# Triggers Telegram notification when asset goes offline/online
GNSS_SERVER_ASSET_OFFLINE_SECONDS=120
@@ -0,0 +1,40 @@
# GNSS Guard Server - Dockerfile
FROM python:3.11-slim

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Set working directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements first (for better caching)
COPY requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create non-root user for security
RUN useradd --create-home --shell /bin/bash appuser && \
    chown -R appuser:appuser /app
USER appuser

# Expose port
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD python -c "import requests; requests.get('http://localhost:8000/auth/check', timeout=5)" || exit 1

# Run uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
@@ -0,0 +1,4 @@
"""
GNSS Guard Server - Centralized monitoring server for multiple assets
"""
75  backup-from-device/gnss-guard/tm-gnss-guard/server/config.py  Normal file
@@ -0,0 +1,75 @@
#!/usr/bin/env python3
"""
Server configuration management for GNSS Guard Server
Loads configuration from environment variables
"""

import os
import sys
from pathlib import Path
from typing import Optional
from pydantic_settings import BaseSettings
from pydantic import field_validator


class ServerConfig(BaseSettings):
    """Server configuration loaded from environment variables"""

    # Server settings
    server_host: str = "0.0.0.0"
    server_port: int = 8000
    debug: bool = False

    # Database settings (PostgreSQL) - REQUIRED, no insecure default
    database_url: str

    # Security settings
    secret_key: str = "change-this-in-production-to-a-random-secret-key"
    session_expire_minutes: int = 1440  # 24 hours

    # Web UI authentication - REQUIRED, no insecure defaults
    # Must be set via environment variables GNSS_SERVER_WEB_USERNAME and GNSS_SERVER_WEB_PASSWORD
    web_username: str
    web_password: str

    @field_validator('web_password')
    @classmethod
    def password_strength(cls, v: str) -> str:
        """Ensure password meets minimum security requirements"""
        if len(v) < 10:
            raise ValueError('Password must be at least 10 characters long')
        if v.lower() in ['password', 'admin', 'test', '123456', 'tototheo']:
            raise ValueError('Password is too common/weak')
        return v

    # Validation settings
    stale_threshold_seconds: int = 60  # Data older than this is considered stale

    # Asset offline detection
    asset_offline_seconds: int = 120  # Consider asset offline after this many seconds without updates

    # Data retention
    validation_history_days: int = 90  # Keep 90 days of validation history

    # Domain for SSL (optional)
    server_domain: Optional[str] = None

    # Telegram notification settings (optional)
    telegram_bot_token: Optional[str] = None
    telegram_chat_id: Optional[str] = None  # Default chat ID for all assets

    @property
    def telegram_enabled(self) -> bool:
        """Check if Telegram notifications are configured"""
        return bool(self.telegram_bot_token and self.telegram_chat_id)

    class Config:
        env_file = ".env"
        env_prefix = "GNSS_SERVER_"
        case_sensitive = False


def get_config() -> ServerConfig:
    """Get server configuration instance"""
    return ServerConfig()
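Because of `env_prefix = "GNSS_SERVER_"` and `case_sensitive = False`, every field above maps onto a prefixed environment variable. A small sketch of how that mapping behaves, assuming pydantic-settings v2 (which the `field_validator` import implies) and that the `GNSS_SERVER_*` variables are otherwise unset:

```python
import os
from config import get_config  # the module shown above

# Field names are prefixed and upper-cased to form the variable names:
#   database_url -> GNSS_SERVER_DATABASE_URL
#   web_username -> GNSS_SERVER_WEB_USERNAME
os.environ["GNSS_SERVER_DATABASE_URL"] = "sqlite:///./data/dev.db"
os.environ["GNSS_SERVER_WEB_USERNAME"] = "dev_user"
os.environ["GNSS_SERVER_WEB_PASSWORD"] = "longer-than-ten"  # passes the 10-char validator

config = get_config()
print(config.database_url)      # sqlite:///./data/dev.db
print(config.telegram_enabled)  # False, token and chat ID are unset
```

If `database_url`, `web_username`, or `web_password` are missing, construction raises a validation error, which is the "server will NOT start" behavior the env templates refer to.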
105  backup-from-device/gnss-guard/tm-gnss-guard/server/database.py  Normal file
@@ -0,0 +1,105 @@
#!/usr/bin/env python3
"""
Database connection and session management for GNSS Guard Server
"""

import logging
from contextlib import contextmanager
from typing import Generator

from sqlalchemy import create_engine, event
from sqlalchemy.orm import sessionmaker, Session
from sqlalchemy.pool import QueuePool

from config import get_config
from models import Base

logger = logging.getLogger("gnss_guard.server.database")

# Global engine and session factory
_engine = None
_SessionLocal = None


def get_engine():
    """Get or create the database engine"""
    global _engine

    if _engine is None:
        config = get_config()

        # Check if using SQLite (local development)
        is_sqlite = config.database_url.startswith("sqlite")

        if is_sqlite:
            # SQLite-specific settings
            from sqlalchemy.pool import StaticPool
            _engine = create_engine(
                config.database_url,
                connect_args={"check_same_thread": False},
                poolclass=StaticPool,
                echo=config.debug,
            )
            logger.info(f"SQLite database engine created: {config.database_url}")
        else:
            # PostgreSQL with connection pooling
            _engine = create_engine(
                config.database_url,
                poolclass=QueuePool,
                pool_size=5,
                max_overflow=10,
                pool_pre_ping=True,  # Verify connections before using
                echo=config.debug,
            )
            logger.info(f"Database engine created for: {config.database_url.split('@')[-1]}")

    return _engine


def get_session_factory():
    """Get or create the session factory"""
    global _SessionLocal

    if _SessionLocal is None:
        engine = get_engine()
        _SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

    return _SessionLocal


def init_db():
    """Initialize database - create all tables"""
    engine = get_engine()
    Base.metadata.create_all(bind=engine)
    logger.info("Database tables created/verified")


def get_db() -> Generator[Session, None, None]:
    """
    Dependency for FastAPI to get database session.
    Yields a session and ensures it's closed after use.
    """
    SessionLocal = get_session_factory()
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


@contextmanager
def get_db_session() -> Generator[Session, None, None]:
    """
    Context manager for database sessions (for use outside FastAPI dependencies).
    """
    SessionLocal = get_session_factory()
    db = SessionLocal()
    try:
        yield db
        db.commit()
    except Exception:
        db.rollback()
        raise
    finally:
        db.close()
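A short usage sketch for `get_db_session`, which commits on clean exit and rolls back on exception. It assumes the `GNSS_SERVER_*` variables are already set, and uses the `Asset` model that appears later in this diff (in `models.py`); the asset name is a made-up example:

```python
from database import get_db_session, init_db
from models import Asset

init_db()  # create tables if they do not exist yet

# Commit happens automatically when the block exits without an exception
with get_db_session() as db:
    asset = Asset(
        name="demo-vessel",  # hypothetical example asset
        token_hash=Asset.hash_token(Asset.generate_token()),
    )
    db.add(asset)

with get_db_session() as db:
    print(db.query(Asset).filter_by(name="demo-vessel").one().id)
```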
@@ -0,0 +1,34 @@
# GNSS Guard Server - Development Docker Compose
# No nginx, no SSL - direct access to FastAPI on port 8000
#
# Usage:
#   cp env.example .env.dev
#   # Edit .env.dev (can use SQLite for dev: sqlite:///./data/gnss_guard.db)
#   docker compose -f docker-compose.dev.yml up -d

version: '3.8'

services:
  gnss-server:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: gnss-guard-server-dev
    restart: unless-stopped
    env_file:
      - .env.dev
    ports:
      - "8000:8000"
    volumes:
      # Mount source code for live reload (development only)
      - .:/app
    environment:
      - GNSS_SERVER_DEBUG=true
    command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
    healthcheck:
      # python:3.11-slim has no curl; use the same requests-based probe as production
      test: ["CMD", "python", "-c", "import requests; requests.get('http://localhost:8000/auth/check', timeout=5)"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
@@ -0,0 +1,76 @@
# GNSS Guard Server - Docker Compose with Nginx + SSL
#
# Usage:
#   1. cp env.example .env.prod
#   2. Edit .env.prod with your configuration
#   3. docker compose up -d
#   4. Run SSL setup: docker compose exec certbot certbot certonly ...
#
# For development (no SSL): use docker-compose.dev.yml

services:
  # ==========================================================================
  # GNSS Guard Server (FastAPI/Uvicorn)
  # ==========================================================================
  gnss-server:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: gnss-guard-server
    restart: unless-stopped
    env_file:
      - .env.prod
    expose:
      - "8000"
    networks:
      - gnss-network
    healthcheck:
      test: ["CMD", "python", "-c", "import requests; requests.get('http://localhost:8000/auth/check', timeout=5)"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

  # ==========================================================================
  # Nginx Reverse Proxy
  # ==========================================================================
  nginx:
    image: nginx:alpine
    container_name: gnss-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - certbot-etc:/etc/letsencrypt:ro
      - certbot-var:/var/lib/letsencrypt
      - certbot-webroot:/var/www/certbot
      # Mount nginx logs to host for fail2ban monitoring
      - /var/log/nginx:/var/log/nginx
    depends_on:
      - gnss-server
    networks:
      - gnss-network

  # ==========================================================================
  # Certbot (SSL Certificate Management)
  # ==========================================================================
  certbot:
    image: certbot/certbot
    container_name: gnss-certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - certbot-webroot:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

networks:
  gnss-network:
    driver: bridge

volumes:
  certbot-etc:
  certbot-var:
  certbot-webroot:
103  backup-from-device/gnss-guard/tm-gnss-guard/server/env.example  Normal file
@@ -0,0 +1,103 @@
# =============================================================================
# GNSS Guard Server Configuration
# =============================================================================
# Copy this file to .env.prod and configure for your environment
# Example: cp env.example .env.prod

# =============================================================================
# SERVER SETTINGS
# =============================================================================

# Host to bind to (127.0.0.1 when behind Nginx proxy)
GNSS_SERVER_HOST=127.0.0.1

# Port to bind to
GNSS_SERVER_PORT=8000

# Enable debug mode (set to false in production)
GNSS_SERVER_DEBUG=false

# =============================================================================
# DATABASE (PostgreSQL RDS) - REQUIRED!
# =============================================================================
# The server will NOT start without a valid database URL!

# Full database connection URL
# Format: postgresql://USER:PASSWORD@HOST:PORT/DATABASE
GNSS_SERVER_DATABASE_URL=postgresql://gnss_admin:your-password@your-rds-endpoint.rds.amazonaws.com:5432/gnss_guard

# =============================================================================
# SECURITY
# =============================================================================

# Secret key for session encryption (generate with: python -c "import secrets; print(secrets.token_urlsafe(32))")
GNSS_SERVER_SECRET_KEY=change-this-to-a-random-secret-key

# Session expiration in minutes (default: 24 hours)
GNSS_SERVER_SESSION_EXPIRE_MINUTES=1440

# =============================================================================
# WEB UI AUTHENTICATION (REQUIRED - no defaults!)
# =============================================================================
# These credentials are used to login to the web dashboard.
# The server will NOT start without these being set!

# Username for web dashboard login (REQUIRED)
GNSS_SERVER_WEB_USERNAME=your_username_here

# Password for web dashboard login (REQUIRED)
# Requirements (enforced by the password validator in config.py):
#   - At least 10 characters long
#   - Cannot be common passwords like 'password', 'admin', 'test'
# Generate a secure password: python -c "import secrets; print(secrets.token_urlsafe(16))"
GNSS_SERVER_WEB_PASSWORD=your_secure_password_here

# =============================================================================
# DOMAIN (for SSL/HTTPS)
# =============================================================================

# Server domain name (for Let's Encrypt SSL)
GNSS_SERVER_DOMAIN=gnss.yourdomain.com

# =============================================================================
# VALIDATION
# =============================================================================

# Staleness threshold in seconds (data older than this is considered stale)
GNSS_SERVER_STALE_THRESHOLD_SECONDS=60

# =============================================================================
# ASSET OFFLINE DETECTION
# =============================================================================

# Seconds without updates before an asset is considered offline (default: 120)
# Triggers Telegram notification when asset goes offline/online
GNSS_SERVER_ASSET_OFFLINE_SECONDS=120

# =============================================================================
# DATA RETENTION
# =============================================================================

# Days to keep validation history (default: 90)
GNSS_SERVER_VALIDATION_HISTORY_DAYS=90

# =============================================================================
# TELEGRAM NOTIFICATIONS (Optional)
# =============================================================================
# Server-side Telegram notifications for all assets.
# Each asset can override the chat_id to send to a different chat/group.

# Telegram bot token (from @BotFather)
GNSS_SERVER_TELEGRAM_BOT_TOKEN=

# Default Telegram chat ID (negative for groups)
# Individual assets can override this in the database
GNSS_SERVER_TELEGRAM_CHAT_ID=

# =============================================================================
# SSL (for Docker deployment with nginx + certbot)
# =============================================================================

# Email for Let's Encrypt certificate notifications
LETSENCRYPT_EMAIL=admin@yourdomain.com
@@ -0,0 +1,3 @@
# Keep this directory for importing client database files
# Place .db files here with format: {id}_{name}.db
# Example: 2_msc_charlotte.db
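A tiny sketch of parsing that `{id}_{name}.db` convention, splitting only on the first underscore so asset names may themselves contain underscores; the function name is illustrative, the actual import code is not in this diff:

```python
from pathlib import Path

def parse_import_filename(path: Path) -> tuple[int, str]:
    """Split '2_msc_charlotte.db' into (2, 'msc_charlotte')."""
    asset_id, _, name = path.stem.partition("_")
    return int(asset_id), name

print(parse_import_filename(Path("2_msc_charlotte.db")))  # (2, 'msc_charlotte')
```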
408  backup-from-device/gnss-guard/tm-gnss-guard/server/main.py  Normal file
@@ -0,0 +1,408 @@
#!/usr/bin/env python3
"""
FastAPI main application for GNSS Guard Server
Centralized monitoring server for multiple GNSS Guard assets
"""

import asyncio
import logging
import json
import random
from contextlib import asynccontextmanager
from datetime import datetime, timedelta
from pathlib import Path
from typing import Optional

from fastapi import FastAPI, Request, Depends, HTTPException
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from fastapi.responses import HTMLResponse, RedirectResponse, JSONResponse
from fastapi.middleware.cors import CORSMiddleware
from sqlalchemy.orm import Session
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
from slowapi.errors import RateLimitExceeded

from config import get_config
from database import init_db, get_db, get_session_factory
from routes import api, auth
from routes.auth import get_optional_user, get_current_user
from services.asset_service import AssetService
from services.telegram_service import get_telegram_service
from models import Asset, AssetNotificationState

# Initialize rate limiter
limiter = Limiter(key_func=get_remote_address)

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger("gnss_guard.server")

# Create FastAPI app
app = FastAPI(
    title="GNSS Guard Server",
    description="Centralized monitoring server for GNSS Guard assets",
    version="1.0.0"
)

# Setup rate limiting
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

# Add CORS middleware - restricted to same-origin only
# Since the dashboard is served from the same domain, we only need
# to allow requests from the same origin. This prevents CSRF attacks.
config = get_config()
allowed_origins = []
if config.server_domain:
    allowed_origins = [
        f"https://{config.server_domain}",
        f"http://{config.server_domain}",  # For initial setup before SSL
    ]

app.add_middleware(
    CORSMiddleware,
    allow_origins=allowed_origins,
    allow_credentials=True,
    allow_methods=["GET", "POST", "DELETE"],
    allow_headers=["Content-Type", "Authorization", "Cookie"],
)

# Setup static files and templates
static_path = Path(__file__).parent / "static"
templates_path = Path(__file__).parent / "templates"

if static_path.exists():
    app.mount("/static", StaticFiles(directory=str(static_path)), name="static")

templates = Jinja2Templates(directory=str(templates_path)) if templates_path.exists() else None

# Include routers
app.include_router(api.router)
app.include_router(auth.router)


# =============================================================================
# Health Check Endpoint (public, no auth required)
# =============================================================================

@app.get("/health")
async def health_check():
    """Health check endpoint - always accessible"""
    return {"status": "ok", "timestamp": datetime.utcnow().isoformat()}


async def check_offline_assets():
    """Background task to check for assets that have gone offline"""
    config = get_config()
    telegram_service = get_telegram_service()

    if not telegram_service.enabled:
        return

    threshold = datetime.utcnow() - timedelta(seconds=config.asset_offline_seconds)

    SessionLocal = get_session_factory()
    db = SessionLocal()
    try:
        # Find assets that are marked online but haven't reported recently
        states = db.query(AssetNotificationState).join(Asset).filter(
            AssetNotificationState.is_online == True,
            AssetNotificationState.last_validation_at != None,
            AssetNotificationState.last_validation_at < threshold,
            Asset.is_active == True,
            Asset.telegram_enabled == True
        ).all()

        for state in states:
            chat_id = state.asset.telegram_chat_id or telegram_service.default_chat_id
            if chat_id:
                logger.info(f"Asset '{state.asset.name}' detected as offline (last seen: {state.last_validation_at})")
                telegram_service.send_asset_offline_alert(
                    chat_id=chat_id,
                    asset_name=state.asset.name,
                    last_seen=state.last_validation_at,
                    offline_threshold_seconds=config.asset_offline_seconds
                )
            state.is_online = False

        if states:
            db.commit()

    except Exception as e:
        logger.error(f"Error checking offline assets: {e}")
        db.rollback()
    finally:
        db.close()


async def offline_checker_loop():
    """Background loop that periodically checks for offline assets"""
    while True:
        await asyncio.sleep(30)  # Check every 30 seconds
        try:
            await check_offline_assets()
        except Exception as e:
            logger.error(f"Error in offline checker loop: {e}")


@app.on_event("startup")
async def startup_event():
    """Initialize database and background tasks on startup"""
    logger.info("Starting GNSS Guard Server...")
    init_db()
    logger.info("Database initialized")

    # Start background task for offline detection
    asyncio.create_task(offline_checker_loop())
    logger.info("Offline asset checker started")


# =============================================================================
# Web UI Routes
# =============================================================================

@app.get("/", response_class=HTMLResponse)
async def index(request: Request, user: Optional[str] = Depends(get_optional_user)):
    """Main dashboard page"""
    if not user:
        return RedirectResponse(url="/login", status_code=302)

    if not templates:
        return HTMLResponse("<h1>GNSS Guard Server</h1><p>Templates not configured</p>")

    return templates.TemplateResponse("dashboard.html", {
        "request": request,
        "username": user,
        "cache_buster": random.randint(100000, 999999)
    })


@app.get("/login", response_class=HTMLResponse)
async def login_page(request: Request, user: Optional[str] = Depends(get_optional_user)):
    """Login page"""
    if user:
        return RedirectResponse(url="/", status_code=302)

    if not templates:
        return HTMLResponse("""
            <h1>GNSS Guard Server - Login</h1>
            <form method="post" action="/login">
                <input name="username" placeholder="Username"><br>
                <input name="password" type="password" placeholder="Password"><br>
                <button type="submit">Login</button>
            </form>
        """)

    return templates.TemplateResponse("login.html", {
        "request": request,
        "cache_buster": random.randint(100000, 999999)
    })


@app.get("/api/dashboard/assets")
async def dashboard_assets(
    user: str = Depends(get_current_user),
    db: Session = Depends(get_db)
):
    """Get all assets status for dashboard"""
    service = AssetService(db)
    return service.get_all_assets_status()


@app.get("/api/dashboard/asset/{asset_name}/status")
async def dashboard_asset_status(
    asset_name: str,
    at: Optional[float] = None,
    user: str = Depends(get_current_user),
    db: Session = Depends(get_db)
):
    """
    Get detailed status for a specific asset (for dashboard display).
    Matches the format expected by the client dashboard.

    Args:
        at: Optional Unix timestamp to get historical data at that time.
            If not provided, returns the latest data.
    """
    service = AssetService(db)
    asset = service.get_asset_by_name(asset_name)

    if not asset:
        raise HTTPException(status_code=404, detail=f"Asset '{asset_name}' not found")

    if at is not None:
        # Get historical validation at specified timestamp
        latest = service.get_validation_at_timestamp(asset.id, at)
    else:
        latest = service.get_latest_validation(asset.id)

    if not latest:
        return {
            "error": "No validation data available",
            "timestamp": datetime.utcnow().isoformat()
        }

    # Parse JSON fields
    sources_missing = json.loads(latest.sources_missing or "[]")
    sources_stale = json.loads(latest.sources_stale or "[]")
    coordinate_differences = json.loads(latest.coordinate_differences or "{}")
    source_coordinates = json.loads(latest.source_coordinates or "{}")
    validation_details = json.loads(latest.validation_details or "{}")

    # Get enabled sources from validation_details
    expected_sources = validation_details.get("expected_sources", [])

    # Build sources status (matching client format)
    source_display_names = {
        "nmea_primary": "Primary GPS",
        "nmea_secondary": "Secondary GPS",
        "tm_ais": "TM AIS GPS",
        "starlink_gps": "Starlink GPS",
        "starlink_location": "Starlink Location"
    }

    sources = {}
    all_source_names = ["nmea_primary", "nmea_secondary", "tm_ais", "starlink_gps", "starlink_location"]

    for source_name in all_source_names:
        display_name = source_display_names.get(source_name, source_name)

        if source_name not in expected_sources:
            sources[source_name] = {
                "display_name": display_name,
                "enabled": False,
                "status": "not_configured",
                "is_stale": False,
                "coordinates": None,
                "last_update": None,
                "last_update_unix": None
            }
            continue

        source_data = source_coordinates.get(source_name)
        is_stale = source_name in sources_stale

        if not source_data:
            sources[source_name] = {
                "display_name": display_name,
                "enabled": True,
                "status": "missing",
                "is_stale": is_stale,
                "coordinates": None,
                "last_update": None,
                "last_update_unix": None
            }
        else:
            status = "stale" if is_stale else "ok"
            sources[source_name] = {
                "display_name": display_name,
                "enabled": True,
                "status": status,
                "is_stale": is_stale,
                "coordinates": {
                    "latitude": source_data.get("latitude"),
                    "longitude": source_data.get("longitude")
                },
                "last_update": source_data.get("timestamp"),
                "last_update_unix": source_data.get("timestamp_unix")
            }

    # Calculate max distance
    threshold_meters = validation_details.get("threshold_meters", 200.0)
    max_distance_km = None
    max_distance_m = 0.0

    if not latest.is_valid and coordinate_differences:
        for diff_data in coordinate_differences.values():
            if isinstance(diff_data, dict):
                distance = diff_data.get("distance_meters", diff_data.get("distance_m", 0))
                if distance > max_distance_m:
                    max_distance_m = distance

        if max_distance_m > threshold_meters:
            max_distance_km = max_distance_m / 1000.0

    has_alert = (not latest.is_valid and max_distance_km is not None) or len(sources_missing) > 0

    # Find map center
    map_center = None
    for priority_source in ["nmea_primary", "tm_ais", "starlink_location"]:
        if sources.get(priority_source, {}).get("coordinates"):
            coords = sources[priority_source]["coordinates"]
            if coords.get("latitude") and coords.get("longitude"):
                map_center = coords
                break

    if not map_center:
        for source_data in sources.values():
            if source_data.get("coordinates"):
                coords = source_data["coordinates"]
                if coords.get("latitude") and coords.get("longitude"):
                    map_center = coords
                    break

    return {
        "timestamp": datetime.utcnow().isoformat(),
        "validation_timestamp": latest.validation_timestamp,
        "validation_timestamp_unix": latest.validation_timestamp_unix,
        "is_valid": latest.is_valid,
        "has_alert": has_alert,
        "max_distance_km": max_distance_km,
        "threshold_meters": threshold_meters,
        "sources": sources,
        "sources_stale": sources_stale,
        "map_center": map_center,
        "asset_name": asset_name
    }


@app.get("/api/dashboard/asset/{asset_name}/route")
async def dashboard_asset_route(
    asset_name: str,
    hours: int = 72,
    until: Optional[float] = None,
    user: str = Depends(get_current_user),
    db: Session = Depends(get_db)
):
    """
    Get route data for map visualization.

    Args:
        hours: Number of hours of history (default 72)
        until: Optional Unix timestamp to show route up to this time.
            If not provided, shows route up to current time.
    """
    service = AssetService(db)
    asset = service.get_asset_by_name(asset_name)

    if not asset:
        raise HTTPException(status_code=404, detail=f"Asset '{asset_name}' not found")

    return service.get_route_data(asset.id, hours, until_timestamp=until)


# =============================================================================
# Main entry point
# =============================================================================

def run_server():
    """Run the server using uvicorn"""
    import uvicorn
    config = get_config()

    uvicorn.run(
        "server.main:app",
        host=config.server_host,
        port=config.server_port,
        reload=config.debug,
        log_level="info"
    )


if __name__ == "__main__":
    run_server()
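A hedged sketch of polling the dashboard endpoints from a script. It assumes the `/login` route in `routes/auth.py` (not shown in this diff) accepts the JSON `LoginRequest` body defined in `models.py` and sets a session cookie; if it takes form data instead, swap `json=` for `data=`. The base URL, credentials, and asset name are placeholders:

```python
import requests

BASE = "http://localhost:8000"

session = requests.Session()
resp = session.post(
    f"{BASE}/login",
    json={"username": "your_username_here", "password": "your_secure_password_here"},
    timeout=10,
)
resp.raise_for_status()  # the session cookie now authenticates get_current_user

# All assets, then detailed status for one of them
print(session.get(f"{BASE}/api/dashboard/assets", timeout=10).json())

status = session.get(f"{BASE}/api/dashboard/asset/demo-vessel/status", timeout=10).json()
print(status.get("is_valid"), status.get("max_distance_km"))

# Historical view: pass ?at=<unix timestamp> to replay a past validation
```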
211  backup-from-device/gnss-guard/tm-gnss-guard/server/models.py  Normal file
@@ -0,0 +1,211 @@
#!/usr/bin/env python3
"""
SQLAlchemy and Pydantic models for GNSS Guard Server
"""

from datetime import datetime
from typing import Dict, Any, List, Optional
from sqlalchemy import Column, Integer, String, Float, Boolean, DateTime, ForeignKey, Text, Index
from sqlalchemy.orm import relationship, declarative_base
from pydantic import BaseModel, Field
import hashlib
import secrets

Base = declarative_base()


# =============================================================================
# SQLAlchemy Database Models
# =============================================================================

class Asset(Base):
    """Asset (client device) registered with the server"""
    __tablename__ = "assets"

    id = Column(Integer, primary_key=True, index=True)
    name = Column(String(255), unique=True, nullable=False, index=True)
    token_hash = Column(String(64), nullable=False)  # SHA-256 hash of token
    created_at = Column(DateTime, default=datetime.utcnow)
    is_active = Column(Boolean, default=True)
    description = Column(String(500), nullable=True)

    # Telegram notification settings (optional override for this asset)
    telegram_chat_id = Column(String(100), nullable=True)  # Override default chat ID
    telegram_enabled = Column(Boolean, default=True)  # Enable/disable notifications for this asset

    # Relationship to validation history
    validations = relationship("ValidationHistory", back_populates="asset", cascade="all, delete-orphan")

    # Relationship to notification state
    notification_state = relationship("AssetNotificationState", back_populates="asset", uselist=False, cascade="all, delete-orphan")

    @staticmethod
    def hash_token(token: str) -> str:
        """Hash a token using SHA-256"""
        return hashlib.sha256(token.encode()).hexdigest()

    @staticmethod
    def generate_token() -> str:
        """Generate a secure random token"""
        return secrets.token_urlsafe(32)

    def verify_token(self, token: str) -> bool:
        """Verify if provided token matches stored hash"""
        return self.token_hash == self.hash_token(token)


class AssetNotificationState(Base):
    """Tracks the previous notification state for each asset to detect changes"""
    __tablename__ = "asset_notification_state"

    id = Column(Integer, primary_key=True, index=True)
    asset_id = Column(Integer, ForeignKey("assets.id", ondelete="CASCADE"), unique=True, nullable=False)

    # Previous state (JSON arrays stored as text)
    prev_sources_missing = Column(Text, nullable=True)  # JSON array
    prev_sources_stale = Column(Text, nullable=True)  # JSON array
    prev_threshold_breached = Column(Boolean, default=False)

    # Last notification timestamp
    last_notification_at = Column(DateTime, nullable=True)

    # Asset online/offline tracking
    is_online = Column(Boolean, default=True)  # Whether asset is currently reporting
    last_validation_at = Column(DateTime, nullable=True)  # Last time we received validation data

    # Relationship
    asset = relationship("Asset", back_populates="notification_state")


class ValidationHistory(Base):
    """Historical validation records from assets"""
    __tablename__ = "validation_history"

    id = Column(Integer, primary_key=True, index=True)
    asset_id = Column(Integer, ForeignKey("assets.id", ondelete="CASCADE"), nullable=False)

    # Validation timestamps
    validation_timestamp = Column(String(50), nullable=False)  # ISO format
    validation_timestamp_unix = Column(Float, nullable=False, index=True)

    # Validation result
    is_valid = Column(Boolean, nullable=False)

    # JSON fields stored as text
    sources_missing = Column(Text, nullable=True)  # JSON array
    sources_stale = Column(Text, nullable=True)  # JSON array
    coordinate_differences = Column(Text, nullable=True)  # JSON object
    source_coordinates = Column(Text, nullable=True)  # JSON object
    validation_details = Column(Text, nullable=True)  # JSON object

    # Server-side metadata
    received_at = Column(DateTime, default=datetime.utcnow, index=True)

    # Relationship
    asset = relationship("Asset", back_populates="validations")

    # Indexes for common queries
    __table_args__ = (
        Index('ix_validation_asset_timestamp', 'asset_id', 'validation_timestamp_unix'),
    )


# =============================================================================
# Pydantic Request/Response Models
# =============================================================================

class AssetCreate(BaseModel):
    """Request model for creating a new asset"""
    name: str = Field(..., min_length=1, max_length=255)
    description: Optional[str] = Field(None, max_length=500)
    telegram_chat_id: Optional[str] = Field(None, max_length=100)  # Override default chat ID
    telegram_enabled: bool = True  # Enable notifications for this asset


class AssetResponse(BaseModel):
    """Response model for asset data"""
    id: int
    name: str
    is_active: bool
    created_at: datetime
    description: Optional[str] = None
    telegram_chat_id: Optional[str] = None
    telegram_enabled: bool = True

    class Config:
        from_attributes = True


class AssetWithToken(AssetResponse):
    """Response model for newly created asset (includes token)"""
    token: str  # Only returned when asset is created


class AssetImport(BaseModel):
    """Request model for importing an asset with a specific token"""
    name: str = Field(..., min_length=1, max_length=255)
    token: str = Field(..., min_length=32, max_length=128)
    description: Optional[str] = Field(None, max_length=500)
    telegram_chat_id: Optional[str] = Field(None, max_length=100)
    telegram_enabled: bool = True


class AssetBatchImport(BaseModel):
    """Request model for batch importing assets"""
    assets: List[AssetImport]


class ValidationSubmission(BaseModel):
    """Request model for submitting validation data"""
    validation_timestamp: str
    validation_timestamp_unix: float
    is_valid: bool
    sources_missing: List[str] = []
    sources_stale: List[str] = []
    coordinate_differences: Dict[str, Any] = {}
    source_coordinates: Dict[str, Any] = {}
    validation_details: Dict[str, Any] = {}


class ValidationBatchSubmission(BaseModel):
    """Request model for submitting multiple validation records"""
    records: List[ValidationSubmission]


class ValidationResponse(BaseModel):
    """Response model for validation data"""
    id: int
    asset_name: str
    validation_timestamp: str
    validation_timestamp_unix: float
    is_valid: bool
    sources_missing: List[str]
    sources_stale: List[str]
    coordinate_differences: Dict[str, Any]
    source_coordinates: Dict[str, Any]
    validation_details: Dict[str, Any]
    received_at: datetime

    class Config:
        from_attributes = True


class AssetStatus(BaseModel):
    """Current status of an asset (latest validation)"""
    asset_name: str
    is_online: bool  # Has reported in last 5 minutes
    last_seen: Optional[datetime] = None
    latest_validation: Optional[ValidationResponse] = None


class LoginRequest(BaseModel):
    """Request model for user login"""
    username: str
    password: str


class LoginResponse(BaseModel):
    """Response model for successful login"""
    message: str
    username: str
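The token scheme above stores only a SHA-256 hash; the plaintext token is returned once at creation (`AssetWithToken`) and never persisted. A minimal sketch of that round trip:

```python
from models import Asset

# Server side: create the asset, store only the hash
token = Asset.generate_token()           # returned to the client exactly once
asset = Asset(name="demo-vessel", token_hash=Asset.hash_token(token))

# Later, a client submits the token with its validation data
assert asset.verify_token(token)         # matches the stored hash
assert not asset.verify_token("wrong")   # any other token is rejected
```

One common hardening over plain `==` on the hex digests would be `hmac.compare_digest` for a constant-time comparison, though the timing exposure here is small since only hashes are compared.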
@@ -0,0 +1,101 @@
# GNSS Guard Server - Nginx Configuration
# This file is used for initial setup (HTTP only)
# After SSL setup, this file is replaced with the SSL configuration

upstream gnss_server {
    server gnss-server:8000;
}

# =============================================================================
# IP WHITELIST FOR DASHBOARD ACCESS
# =============================================================================
# These IPs can access the web dashboard and admin endpoints.
# The validation API endpoints (/api/v1/validation*) are open to all.
#
# To update: edit this file and run ./deploy_server.sh --restart
# =============================================================================

geo $ip_whitelist {
    default 0;

    # Office IPs - Whitelisted for dashboard access
    213.149.164.73 1;   # Socrates Office 5G
    87.228.228.45 1;    # Thaleias Office
    93.109.218.195 1;   # HQ Cyta
    65.18.217.50 1;     # HQ Cablenet
    93.109.218.196 1;   # HQ Cyta 2
    62.228.7.94 1;      # Socrates Home 3
    195.97.70.162 1;    # Piraeus Office

    # Localhost only (for internal health checks)
    127.0.0.1 1;
    # NOTE: Docker internal networks (10.0.0.0/8, 172.16.0.0/12) are NOT whitelisted
    # to prevent privilege escalation if an attacker gains container access
}

# HTTP server
server {
    listen 80;
    server_name _;

    # Let's Encrypt challenge location - always open
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # =========================================================================
    # PUBLIC ENDPOINTS - Open to all (asset token authentication)
    # =========================================================================

    # Validation API - accessible from anywhere (clients authenticate with tokens)
    location /api/v1/validation {
        proxy_pass http://gnss_server;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
    }

    # Health check endpoint - open
    location /health {
        proxy_pass http://gnss_server;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # =========================================================================
    # RESTRICTED ENDPOINTS - Office IPs only (session authentication)
    # =========================================================================

    # All other endpoints require IP whitelist
    location / {
        # Check IP whitelist
        # TEMPORARILY DISABLED - uncomment to re-enable IP whitelisting
        # if ($ip_whitelist = 0) {
        #     return 403;
        # }

        proxy_pass http://gnss_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
    }

    # Custom error page for 403
    error_page 403 /403.html;
    location = /403.html {
        internal;
        default_type text/html;
        return 403 '<!DOCTYPE html><html><head><title>Access Denied</title><style>body{font-family:sans-serif;display:flex;justify-content:center;align-items:center;height:100vh;margin:0;background:#060b10;color:#e5e9f5;}.container{text-align:center;}.title{font-size:48px;margin-bottom:20px;color:#c62828;}.msg{font-size:18px;color:#9aa3b8;}</style></head><body><div class="container"><div class="title">403</div><div class="msg">Access Denied<br>Your IP is not authorized to access this resource.</div></div></body></html>';
    }
}
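A small sketch for probing the split between open and whitelisted endpoints from outside. Note the `if ($ip_whitelist = 0)` guard is commented out above, so `/` currently answers from any IP; with it re-enabled, non-whitelisted addresses should see the custom 403 page. The domain is a placeholder:

```python
import requests

BASE = "http://gnss.example.com"  # placeholder domain

# /health is proxied without the whitelist check; expect 200 from anywhere
print(requests.get(f"{BASE}/health", timeout=10).status_code)

# / sits behind the geo whitelist once re-enabled: expect 403 from
# non-office IPs, 200 or a redirect to /login from whitelisted ones
print(requests.get(f"{BASE}/", timeout=10, allow_redirects=False).status_code)
```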
@@ -0,0 +1,148 @@
|
||||
# GNSS Guard Server - Nginx Configuration with SSL
|
||||
#
|
||||
# After obtaining SSL certificate, copy this file:
|
||||
# cp gnss-guard-ssl.conf.template gnss-guard-ssl.conf
|
||||
# Then edit and set your domain, and restart nginx
|
||||
|
||||
upstream gnss_server {
|
||||
server gnss-server:8000;
|
||||
}
|
||||
|
||||
# =============================================================================
|
||||
# IP WHITELIST FOR DASHBOARD ACCESS
|
||||
# =============================================================================
|
||||
# These IPs can access the web dashboard and admin endpoints.
|
||||
# The validation API endpoints (/api/v1/validation*) are open to all.
|
||||
#
|
||||
# To update: edit this file and run ./deploy_server.sh --restart
|
||||
# =============================================================================
|
||||
|
||||
geo $ip_whitelist {
|
||||
default 0;
|
||||
|
||||
# Office IPs - Whitelisted for dashboard access
|
||||
213.149.164.73 1; # Socrates Office 5G
|
||||
87.228.228.45 1; # Thaleias Office
|
||||
93.109.218.195 1; # HQ Cyta
|
||||
65.18.217.50 1; # HQ Cablenet
|
||||
93.109.218.196 1; # HQ Cyta 2
|
||||
62.228.7.94 1; # Socrates Home 3
|
||||
195.97.70.162 1; # Piraeus Office
|
||||
|
||||
# Localhost only (for internal health checks)
|
||||
127.0.0.1 1;
|
||||
# NOTE: Docker internal networks (10.0.0.0/8, 172.16.0.0/12) are NOT whitelisted
|
||||
# to prevent privilege escalation if an attacker gains container access
|
||||
}
|
||||
|
||||
# HTTP -> HTTPS redirect
|
||||
server {
|
||||
listen 80;
|
||||
server_name YOUR_DOMAIN_HERE;
|
||||
|
||||
location /.well-known/acme-challenge/ {
|
||||
root /var/www/certbot;
|
||||
}
|
||||
|
||||
location / {
|
||||
return 301 https://$host$request_uri;
|
||||
}
|
||||
}
|
||||
|
||||
# HTTPS server
|
||||
server {
|
||||
listen 443 ssl;
|
||||
http2 on;
|
||||
server_name YOUR_DOMAIN_HERE;
|
||||
|
||||
# SSL certificates (Let's Encrypt)
|
||||
ssl_certificate /etc/letsencrypt/live/YOUR_DOMAIN_HERE/fullchain.pem;
|
||||
ssl_certificate_key /etc/letsencrypt/live/YOUR_DOMAIN_HERE/privkey.pem;
|
||||
|
||||
# SSL configuration
|
||||
ssl_session_timeout 1d;
|
||||
ssl_session_cache shared:SSL:50m;
|
||||
ssl_session_tickets off;
|
||||
|
||||
# Modern TLS configuration
|
||||
ssl_protocols TLSv1.2 TLSv1.3;
|
||||
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
|
||||
ssl_prefer_server_ciphers off;
|
||||
|
||||
# HSTS - Force HTTPS for 2 years, include subdomains
|
||||
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;
|
||||
|
||||
# Content Security Policy - restrict resource loading
|
||||
# Allows: self, Leaflet from unpkg, map tiles, marker icons
|
||||
add_header Content-Security-Policy "default-src 'self'; script-src 'self' https://unpkg.com 'unsafe-inline'; style-src 'self' https://unpkg.com 'unsafe-inline'; img-src 'self' data: https://*.basemaps.cartocdn.com https://raw.githubusercontent.com https://cdnjs.cloudflare.com https://*.openstreetmap.org; font-src 'self'; connect-src 'self'; frame-ancestors 'self'; base-uri 'self'; form-action 'self'" always;
|
||||
|
||||
# =========================================================================
|
||||
# PUBLIC ENDPOINTS - Open to all (asset token authentication)
|
||||
# =========================================================================
|
||||
|
||||
    # Validation API - accessible from anywhere (clients authenticate with tokens)
    location /api/v1/validation {
        proxy_pass http://gnss_server;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
    }

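    # Example request against this endpoint (domain placeholder as above; the
    # token value is illustrative):
    #
    #     curl -X POST https://YOUR_DOMAIN_HERE/api/v1/validation \
    #          -H "Authorization: Bearer <asset-token>" \
    #          -H "Content-Type: application/json" \
    #          -d @validation.json
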
    # Health check endpoint - open
    location /health {
        proxy_pass http://gnss_server;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # =========================================================================
    # RESTRICTED ENDPOINTS - Office IPs only (session authentication)
    # =========================================================================

    # All other endpoints require IP whitelist
    location / {
        # Check IP whitelist
        # TEMPORARILY DISABLED - uncomment to re-enable IP whitelisting
        # if ($ip_whitelist = 0) {
        #     return 403;
        # }

        proxy_pass http://gnss_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_buffering off;
    }

    # Static files - also restricted
    location /static/ {
        # TEMPORARILY DISABLED - uncomment to re-enable IP whitelisting
        # if ($ip_whitelist = 0) {
        #     return 403;
        # }

        proxy_pass http://gnss_server/static/;
        proxy_cache_valid 200 1d;
        expires 1d;
        add_header Cache-Control "public, immutable";
    }

    # Custom error page for 403
    error_page 403 /403.html;
    location = /403.html {
        internal;
        default_type text/html;
        return 403 '<!DOCTYPE html><html><head><title>Access Denied</title><style>body{font-family:sans-serif;display:flex;justify-content:center;align-items:center;height:100vh;margin:0;background:#060b10;color:#e5e9f5;}.container{text-align:center;}.title{font-size:48px;margin-bottom:20px;color:#c62828;}.msg{font-size:18px;color:#9aa3b8;}</style></head><body><div class="container"><div class="title">403</div><div class="msg">Access Denied<br>Your IP is not authorized to access this resource.</div></div></body></html>';
    }
}
@@ -0,0 +1,40 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml application/json application/javascript application/xml;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    include /etc/nginx/conf.d/*.conf;
}

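# Caveat (standard nginx behavior, noted here as a reminder): add_header
# directives set in the http block above are inherited by a server/location
# only if that block defines no add_header of its own, so the HTTPS site
# config, which adds its own HSTS and CSP headers, must restate these
# security headers if it still wants them on those responses.
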
@@ -0,0 +1,28 @@
# GNSS Guard Server Dependencies

# Web framework
fastapi>=0.104.0
uvicorn[standard]>=0.24.0

# Database
sqlalchemy>=2.0.0
psycopg2-binary>=2.9.9  # PostgreSQL driver
alembic>=1.12.0         # Database migrations (optional)

# Configuration
pydantic>=2.5.0
pydantic-settings>=2.1.0
python-dotenv>=1.0.0

# Templates and static files
jinja2>=3.1.2
python-multipart>=0.0.6  # For form data

# Security
passlib[bcrypt]>=1.7.4  # Password hashing
slowapi>=0.1.9          # Rate limiting

# HTTP client (for health checks and Telegram API)
httpx>=0.25.0
requests>=2.31.0

@@ -0,0 +1,4 @@
"""
API routes for GNSS Guard Server
"""

488 backup-from-device/gnss-guard/tm-gnss-guard/server/routes/api.py Normal file
@@ -0,0 +1,488 @@
#!/usr/bin/env python3
"""
REST API endpoints for GNSS Guard Server
Handles validation data submission and retrieval
"""

import json
import logging
from datetime import datetime, timedelta, timezone
from typing import List, Optional

from fastapi import APIRouter, Depends, HTTPException, Header, Query
from sqlalchemy.orm import Session
from sqlalchemy import desc

from database import get_db
from models import (
    Asset, ValidationHistory, AssetNotificationState,
    ValidationSubmission, ValidationBatchSubmission,
    ValidationResponse, AssetStatus, AssetResponse, AssetCreate, AssetWithToken,
    AssetImport, AssetBatchImport
)
from routes.auth import get_current_user
from services.telegram_service import get_telegram_service

logger = logging.getLogger("gnss_guard.server.api")

router = APIRouter(prefix="/api/v1", tags=["api"])


# =============================================================================
# Asset Token Authentication Dependency
# =============================================================================

async def get_current_asset(
    authorization: str = Header(..., description="Bearer token for asset authentication"),
    db: Session = Depends(get_db)
) -> Asset:
    """
    Dependency to authenticate asset using Bearer token.
    Returns the authenticated asset or raises 401.
    """
    if not authorization.startswith("Bearer "):
        raise HTTPException(status_code=401, detail="Invalid authorization header format")

    token = authorization[7:]  # Remove "Bearer " prefix
    token_hash = Asset.hash_token(token)

    asset = db.query(Asset).filter(
        Asset.token_hash == token_hash,
        Asset.is_active == True
    ).first()

    if not asset:
        raise HTTPException(status_code=401, detail="Invalid or inactive token")

    return asset


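# Illustrative client-side counterpart (httpx is already in requirements.txt;
# the URL and token below are placeholders, not values from this repo):
#
#     import httpx
#     httpx.post(
#         "https://YOUR_DOMAIN_HERE/api/v1/validation",
#         headers={"Authorization": "Bearer <raw-asset-token>"},
#         json=payload_dict,
#     )
#
# Only Asset.hash_token(raw_token) is persisted, so a leaked database does not
# expose usable bearer tokens.

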
# =============================================================================
# Validation Endpoints (Asset Authentication Required)
# =============================================================================

@router.post("/validation", status_code=201)
|
||||
async def submit_validation(
|
||||
data: ValidationSubmission,
|
||||
asset: Asset = Depends(get_current_asset),
|
||||
db: Session = Depends(get_db)
|
||||
) -> dict:
|
||||
"""
|
||||
Submit a single validation record from an asset.
|
||||
Also triggers Telegram notifications if state changed.
|
||||
"""
|
||||
try:
|
||||
validation = ValidationHistory(
|
||||
asset_id=asset.id,
|
||||
validation_timestamp=data.validation_timestamp,
|
||||
validation_timestamp_unix=data.validation_timestamp_unix,
|
||||
is_valid=data.is_valid,
|
||||
sources_missing=json.dumps(data.sources_missing),
|
||||
sources_stale=json.dumps(data.sources_stale),
|
||||
coordinate_differences=json.dumps(data.coordinate_differences),
|
||||
source_coordinates=json.dumps(data.source_coordinates),
|
||||
validation_details=json.dumps(data.validation_details),
|
||||
)
|
||||
|
||||
db.add(validation)
|
||||
db.commit()
|
||||
|
||||
logger.info(f"Validation received from asset '{asset.name}' at {data.validation_timestamp}")
|
||||
|
||||
# Process Telegram notification (will only send if state changed)
|
||||
try:
|
||||
telegram_service = get_telegram_service()
|
||||
validation_data = {
|
||||
"sources_missing": data.sources_missing,
|
||||
"sources_stale": data.sources_stale,
|
||||
"validation_details": data.validation_details,
|
||||
"source_coordinates": data.source_coordinates,
|
||||
}
|
||||
telegram_service.process_validation(db, asset, validation_data)
|
||||
except Exception as e:
|
||||
logger.warning(f"Telegram notification error for {asset.name}: {e}")
|
||||
|
||||
return {
|
||||
"status": "success",
|
||||
"message": "Validation record saved",
|
||||
"id": validation.id
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error saving validation from {asset.name}: {e}")
|
||||
db.rollback()
|
||||
raise HTTPException(status_code=500, detail=str(e))
|
||||
|
||||
|
||||
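# Illustrative request body for POST /api/v1/validation (field names match the
# ValidationSubmission usage above; all values are made up):
#
#     {
#       "validation_timestamp": "2025-01-01T12:00:00Z",
#       "validation_timestamp_unix": 1735732800,
#       "is_valid": true,
#       "sources_missing": [],
#       "sources_stale": ["tm_ais"],
#       "coordinate_differences": {},
#       "source_coordinates": {"nmea_primary": {"latitude": 34.66, "longitude": 33.01}},
#       "validation_details": {"threshold_meters": 200, "max_distance_meters": 12.4}
#     }

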
@router.post("/validation/batch", status_code=201)
|
||||
async def submit_validation_batch(
|
||||
data: ValidationBatchSubmission,
|
||||
asset: Asset = Depends(get_current_asset),
|
||||
db: Session = Depends(get_db)
|
||||
) -> dict:
|
||||
"""
|
||||
Submit multiple validation records (for catching up after offline period).
|
||||
Only sends Telegram notification for the most recent record to avoid spam.
|
||||
"""
|
||||
try:
|
||||
saved_count = 0
|
||||
skipped_count = 0
|
||||
latest_record = None
|
||||
latest_timestamp = 0
|
||||
|
||||
for record in data.records:
|
||||
# Check if this timestamp already exists for this asset
|
||||
existing = db.query(ValidationHistory).filter(
|
||||
ValidationHistory.asset_id == asset.id,
|
||||
ValidationHistory.validation_timestamp_unix == record.validation_timestamp_unix
|
||||
).first()
|
||||
|
||||
if existing:
|
||||
skipped_count += 1
|
||||
continue
|
||||
|
||||
validation = ValidationHistory(
|
||||
asset_id=asset.id,
|
||||
validation_timestamp=record.validation_timestamp,
|
||||
validation_timestamp_unix=record.validation_timestamp_unix,
|
||||
is_valid=record.is_valid,
|
||||
sources_missing=json.dumps(record.sources_missing),
|
||||
sources_stale=json.dumps(record.sources_stale),
|
||||
coordinate_differences=json.dumps(record.coordinate_differences),
|
||||
source_coordinates=json.dumps(record.source_coordinates),
|
||||
validation_details=json.dumps(record.validation_details),
|
||||
)
|
||||
db.add(validation)
|
||||
saved_count += 1
|
||||
|
||||
# Track the most recent record for notification
|
||||
if record.validation_timestamp_unix > latest_timestamp:
|
||||
latest_timestamp = record.validation_timestamp_unix
|
||||
latest_record = record
|
||||
|
||||
db.commit()
|
||||
|
||||
logger.info(f"Batch validation from '{asset.name}': {saved_count} saved, {skipped_count} skipped")
|
||||
|
||||
# Process Telegram notification for the most recent record only
|
||||
if latest_record:
|
||||
try:
|
||||
telegram_service = get_telegram_service()
|
||||
validation_data = {
|
||||
"sources_missing": latest_record.sources_missing,
|
||||
"sources_stale": latest_record.sources_stale,
|
||||
"validation_details": latest_record.validation_details,
|
||||
"source_coordinates": latest_record.source_coordinates,
|
||||
}
|
||||
telegram_service.process_validation(db, asset, validation_data)
|
||||
except Exception as e:
|
||||
logger.warning(f"Telegram notification error for {asset.name}: {e}")
|
||||
|
||||
return {
|
||||
"status": "success",
|
||||
"saved": saved_count,
|
||||
"skipped": skipped_count
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error saving batch validation from {asset.name}: {e}")
|
||||
db.rollback()
|
||||
raise HTTPException(status_code=500, detail=str(e))
|
||||
|
||||
|
||||
# =============================================================================
# Read Endpoints (Session Authentication Required)
# =============================================================================

@router.get("/assets", response_model=List[AssetResponse])
|
||||
async def list_assets(
|
||||
user: str = Depends(get_current_user),
|
||||
db: Session = Depends(get_db)
|
||||
) -> List[AssetResponse]:
|
||||
"""
|
||||
List all registered assets.
|
||||
Requires user session authentication.
|
||||
"""
|
||||
assets = db.query(Asset).filter(Asset.is_active == True).all()
|
||||
return assets
|
||||
|
||||
|
||||
@router.get("/assets/{asset_name}/status")
|
||||
async def get_asset_status(
|
||||
asset_name: str,
|
||||
user: str = Depends(get_current_user),
|
||||
db: Session = Depends(get_db)
|
||||
) -> AssetStatus:
|
||||
"""
|
||||
Get current status of an asset (latest validation).
|
||||
Requires user session authentication.
|
||||
"""
|
||||
asset = db.query(Asset).filter(
|
||||
Asset.name == asset_name,
|
||||
Asset.is_active == True
|
||||
).first()
|
||||
|
||||
if not asset:
|
||||
raise HTTPException(status_code=404, detail=f"Asset '{asset_name}' not found")
|
||||
|
||||
# Get latest validation
|
||||
latest = db.query(ValidationHistory).filter(
|
||||
ValidationHistory.asset_id == asset.id
|
||||
).order_by(desc(ValidationHistory.validation_timestamp_unix)).first()
|
||||
|
||||
# Get online status from notification state (consistent with Telegram alerts)
|
||||
notification_state = db.query(AssetNotificationState).filter(
|
||||
AssetNotificationState.asset_id == asset.id
|
||||
).first()
|
||||
|
||||
is_online = notification_state.is_online if notification_state else False
|
||||
last_seen = notification_state.last_validation_at if notification_state else None
|
||||
|
||||
# Fall back to validation timestamp if no notification state
|
||||
if not last_seen and latest and latest.received_at:
|
||||
last_seen = latest.received_at
|
||||
|
||||
latest_validation = None
|
||||
if latest:
|
||||
latest_validation = ValidationResponse(
|
||||
id=latest.id,
|
||||
asset_name=asset.name,
|
||||
validation_timestamp=latest.validation_timestamp,
|
||||
validation_timestamp_unix=latest.validation_timestamp_unix,
|
||||
is_valid=latest.is_valid,
|
||||
sources_missing=json.loads(latest.sources_missing or "[]"),
|
||||
sources_stale=json.loads(latest.sources_stale or "[]"),
|
||||
coordinate_differences=json.loads(latest.coordinate_differences or "{}"),
|
||||
source_coordinates=json.loads(latest.source_coordinates or "{}"),
|
||||
validation_details=json.loads(latest.validation_details or "{}"),
|
||||
received_at=latest.received_at
|
||||
)
|
||||
|
||||
return AssetStatus(
|
||||
asset_name=asset.name,
|
||||
is_online=is_online,
|
||||
last_seen=last_seen,
|
||||
latest_validation=latest_validation
|
||||
)
|
||||
|
||||
|
||||
@router.get("/assets/{asset_name}/history")
|
||||
async def get_asset_history(
|
||||
asset_name: str,
|
||||
hours: int = Query(default=72, ge=1, le=168, description="Hours of history (max 168 = 7 days)"),
|
||||
user: str = Depends(get_current_user),
|
||||
db: Session = Depends(get_db)
|
||||
) -> List[ValidationResponse]:
|
||||
"""
|
||||
Get validation history for an asset (default: 72 hours).
|
||||
Requires user session authentication.
|
||||
"""
|
||||
asset = db.query(Asset).filter(
|
||||
Asset.name == asset_name,
|
||||
Asset.is_active == True
|
||||
).first()
|
||||
|
||||
if not asset:
|
||||
raise HTTPException(status_code=404, detail=f"Asset '{asset_name}' not found")
|
||||
|
||||
# Calculate cutoff timestamp
|
||||
cutoff = datetime.utcnow() - timedelta(hours=hours)
|
||||
cutoff_unix = cutoff.timestamp()
|
||||
|
||||
# Get validation history
|
||||
validations = db.query(ValidationHistory).filter(
|
||||
ValidationHistory.asset_id == asset.id,
|
||||
ValidationHistory.validation_timestamp_unix >= cutoff_unix
|
||||
).order_by(desc(ValidationHistory.validation_timestamp_unix)).all()
|
||||
|
||||
return [
|
||||
ValidationResponse(
|
||||
id=v.id,
|
||||
asset_name=asset.name,
|
||||
validation_timestamp=v.validation_timestamp,
|
||||
validation_timestamp_unix=v.validation_timestamp_unix,
|
||||
is_valid=v.is_valid,
|
||||
sources_missing=json.loads(v.sources_missing or "[]"),
|
||||
sources_stale=json.loads(v.sources_stale or "[]"),
|
||||
coordinate_differences=json.loads(v.coordinate_differences or "{}"),
|
||||
source_coordinates=json.loads(v.source_coordinates or "{}"),
|
||||
validation_details=json.loads(v.validation_details or "{}"),
|
||||
received_at=v.received_at
|
||||
)
|
||||
for v in validations
|
||||
]
|
||||
|
||||
|
||||
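# Example (illustrative asset name): GET /api/v1/assets/vessel-01/history?hours=24
# returns the last 24 hours as ValidationResponse objects, newest first; `hours`
# is clamped to 1..168 by the Query validator above.

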
# =============================================================================
# Admin Endpoints (Session Authentication Required)
# =============================================================================

@router.post("/admin/assets", response_model=AssetWithToken, status_code=201)
|
||||
async def create_asset(
|
||||
data: AssetCreate,
|
||||
user: str = Depends(get_current_user),
|
||||
db: Session = Depends(get_db)
|
||||
) -> AssetWithToken:
|
||||
"""
|
||||
Create a new asset and return its token.
|
||||
Requires user session authentication.
|
||||
"""
|
||||
# Check if asset already exists
|
||||
existing = db.query(Asset).filter(Asset.name == data.name).first()
|
||||
if existing:
|
||||
raise HTTPException(status_code=400, detail=f"Asset '{data.name}' already exists")
|
||||
|
||||
# Generate token
|
||||
token = Asset.generate_token()
|
||||
token_hash = Asset.hash_token(token)
|
||||
|
||||
asset = Asset(
|
||||
name=data.name,
|
||||
token_hash=token_hash,
|
||||
description=data.description,
|
||||
telegram_chat_id=data.telegram_chat_id,
|
||||
telegram_enabled=data.telegram_enabled
|
||||
)
|
||||
|
||||
db.add(asset)
|
||||
db.commit()
|
||||
db.refresh(asset)
|
||||
|
||||
logger.info(f"Created new asset: {data.name}")
|
||||
|
||||
# Return asset with the unhashed token (only shown once!)
|
||||
return AssetWithToken(
|
||||
id=asset.id,
|
||||
name=asset.name,
|
||||
is_active=asset.is_active,
|
||||
created_at=asset.created_at,
|
||||
description=asset.description,
|
||||
telegram_chat_id=asset.telegram_chat_id,
|
||||
telegram_enabled=asset.telegram_enabled,
|
||||
token=token
|
||||
)
|
||||
|
||||
|
||||
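# Illustrative creation call (all values are placeholders):
#
#     POST /api/v1/admin/assets
#     {"name": "vessel-01", "description": "Test unit",
#      "telegram_chat_id": null, "telegram_enabled": true}
#
# The response is the only time the raw token is returned; store it on the
# client, since the server keeps just the hash.

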
@router.delete("/admin/assets/{asset_name}")
async def deactivate_asset(
    asset_name: str,
    user: str = Depends(get_current_user),
    db: Session = Depends(get_db)
) -> dict:
    """
    Deactivate an asset (soft delete).
    Requires user session authentication.
    """
    asset = db.query(Asset).filter(Asset.name == asset_name).first()

    if not asset:
        raise HTTPException(status_code=404, detail=f"Asset '{asset_name}' not found")

    asset.is_active = False
    db.commit()

    logger.info(f"Deactivated asset: {asset_name}")

    return {"status": "success", "message": f"Asset '{asset_name}' deactivated"}


@router.post("/admin/assets/import", response_model=AssetResponse, status_code=201)
|
||||
async def import_asset(
|
||||
data: AssetImport,
|
||||
user: str = Depends(get_current_user),
|
||||
db: Session = Depends(get_db)
|
||||
) -> AssetResponse:
|
||||
"""
|
||||
Import an asset with a specific token.
|
||||
If asset exists, updates its token. If not, creates it.
|
||||
Requires user session authentication.
|
||||
"""
|
||||
# Hash the provided token
|
||||
token_hash = Asset.hash_token(data.token)
|
||||
|
||||
# Check if asset already exists
|
||||
existing = db.query(Asset).filter(Asset.name == data.name).first()
|
||||
|
||||
if existing:
|
||||
# Update existing asset's token
|
||||
existing.token_hash = token_hash
|
||||
existing.is_active = True
|
||||
if data.description:
|
||||
existing.description = data.description
|
||||
if data.telegram_chat_id is not None:
|
||||
existing.telegram_chat_id = data.telegram_chat_id
|
||||
existing.telegram_enabled = data.telegram_enabled
|
||||
db.commit()
|
||||
db.refresh(existing)
|
||||
logger.info(f"Updated token for existing asset: {data.name}")
|
||||
return existing
|
||||
else:
|
||||
# Create new asset with provided token
|
||||
asset = Asset(
|
||||
name=data.name,
|
||||
token_hash=token_hash,
|
||||
description=data.description,
|
||||
telegram_chat_id=data.telegram_chat_id,
|
||||
telegram_enabled=data.telegram_enabled
|
||||
)
|
||||
db.add(asset)
|
||||
db.commit()
|
||||
db.refresh(asset)
|
||||
logger.info(f"Imported new asset: {data.name}")
|
||||
return asset
|
||||
|
||||
|
||||
@router.post("/admin/assets/import/batch")
|
||||
async def import_assets_batch(
|
||||
data: AssetBatchImport,
|
||||
user: str = Depends(get_current_user),
|
||||
db: Session = Depends(get_db)
|
||||
) -> dict:
|
||||
"""
|
||||
Batch import assets with specific tokens.
|
||||
Creates new assets or updates existing ones.
|
||||
Requires user session authentication.
|
||||
"""
|
||||
created = 0
|
||||
updated = 0
|
||||
errors = []
|
||||
|
||||
for asset_data in data.assets:
|
||||
try:
|
||||
token_hash = Asset.hash_token(asset_data.token)
|
||||
existing = db.query(Asset).filter(Asset.name == asset_data.name).first()
|
||||
|
||||
if existing:
|
||||
existing.token_hash = token_hash
|
||||
existing.is_active = True
|
||||
if asset_data.description:
|
||||
existing.description = asset_data.description
|
||||
if asset_data.telegram_chat_id is not None:
|
||||
existing.telegram_chat_id = asset_data.telegram_chat_id
|
||||
existing.telegram_enabled = asset_data.telegram_enabled
|
||||
updated += 1
|
||||
logger.info(f"Updated token for asset: {asset_data.name}")
|
||||
else:
|
||||
asset = Asset(
|
||||
name=asset_data.name,
|
||||
token_hash=token_hash,
|
||||
description=asset_data.description,
|
||||
telegram_chat_id=asset_data.telegram_chat_id,
|
||||
telegram_enabled=asset_data.telegram_enabled
|
||||
)
|
||||
db.add(asset)
|
||||
created += 1
|
||||
logger.info(f"Created asset: {asset_data.name}")
|
||||
except Exception as e:
|
||||
errors.append({"name": asset_data.name, "error": str(e)})
|
||||
logger.error(f"Failed to import asset {asset_data.name}: {e}")
|
||||
|
||||
db.commit()
|
||||
|
||||
return {
|
||||
"status": "success",
|
||||
"created": created,
|
||||
"updated": updated,
|
||||
"errors": errors
|
||||
}
|
||||
|
||||
@@ -0,0 +1,150 @@
#!/usr/bin/env python3
"""
Authentication routes for GNSS Guard Server
Handles user session authentication for the web UI
"""

import logging
from datetime import datetime, timedelta
from typing import Optional

from fastapi import APIRouter, Depends, HTTPException, Response, Request
from fastapi.responses import RedirectResponse
from pydantic import BaseModel
from slowapi import Limiter
from slowapi.util import get_remote_address

from config import get_config

logger = logging.getLogger("gnss_guard.server.auth")

router = APIRouter(tags=["auth"])

# Rate limiter instance (uses app.state.limiter set in main.py)
limiter = Limiter(key_func=get_remote_address)

# Simple in-memory session storage (for single-user scenario)
# In production with multiple servers, use Redis or database
_sessions: dict = {}


class LoginRequest(BaseModel):
    username: str
    password: str


def create_session(username: str) -> str:
    """Create a new session and return session ID"""
    import secrets
    session_id = secrets.token_urlsafe(32)
    config = get_config()

    _sessions[session_id] = {
        "username": username,
        "created_at": datetime.utcnow(),
        "expires_at": datetime.utcnow() + timedelta(minutes=config.session_expire_minutes)
    }

    return session_id


def validate_session(session_id: str) -> Optional[str]:
    """Validate session and return username if valid"""
    if not session_id or session_id not in _sessions:
        return None

    session = _sessions[session_id]
    if datetime.utcnow() > session["expires_at"]:
        del _sessions[session_id]
        return None

    return session["username"]


def get_current_user(request: Request) -> str:
    """
    Dependency to get current authenticated user.
    Raises 401 if not authenticated.
    """
    session_id = request.cookies.get("session_id")
    username = validate_session(session_id)

    if not username:
        raise HTTPException(
            status_code=401,
            detail="Not authenticated",
            headers={"WWW-Authenticate": "Bearer"}
        )

    return username


def get_optional_user(request: Request) -> Optional[str]:
    """
    Dependency to get current user if authenticated, None otherwise.
    """
    session_id = request.cookies.get("session_id")
    return validate_session(session_id)


@router.post("/login")
|
||||
@limiter.limit("5/minute") # Rate limit: 5 login attempts per minute per IP
|
||||
async def login(request: Request, data: LoginRequest, response: Response):
|
||||
"""
|
||||
Login endpoint - validates credentials and sets session cookie.
|
||||
Rate limited to prevent brute force attacks.
|
||||
"""
|
||||
config = get_config()
|
||||
|
||||
# Verify credentials against hardcoded user
|
||||
if data.username != config.web_username or data.password != config.web_password:
|
||||
logger.warning(f"Failed login attempt for user: {data.username} from IP: {request.client.host}")
|
||||
raise HTTPException(status_code=401, detail="Invalid credentials")
|
||||
|
||||
# Create session
|
||||
session_id = create_session(data.username)
|
||||
|
||||
# Set session cookie
|
||||
# secure=True ensures cookie only sent over HTTPS
|
||||
response.set_cookie(
|
||||
key="session_id",
|
||||
value=session_id,
|
||||
httponly=True,
|
||||
secure=True, # Only send over HTTPS
|
||||
samesite="lax",
|
||||
max_age=config.session_expire_minutes * 60
|
||||
)
|
||||
|
||||
logger.info(f"User logged in: {data.username}")
|
||||
|
||||
return {"message": "Login successful", "username": data.username}
|
||||
|
||||
|
||||
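# Illustrative login flow (placeholder host and credentials; httpx is already
# a dependency and its Client persists cookies between requests):
#
#     import httpx
#     client = httpx.Client(base_url="https://YOUR_DOMAIN_HERE")
#     client.post("/login", json={"username": "admin", "password": "..."})
#     client.get("/api/v1/assets")  # session_id cookie sent automatically

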
@router.post("/logout")
|
||||
async def logout(request: Request, response: Response):
|
||||
"""
|
||||
Logout endpoint - clears session.
|
||||
"""
|
||||
session_id = request.cookies.get("session_id")
|
||||
|
||||
if session_id and session_id in _sessions:
|
||||
del _sessions[session_id]
|
||||
|
||||
response.delete_cookie("session_id")
|
||||
|
||||
return {"message": "Logged out successfully"}
|
||||
|
||||
|
||||
@router.get("/auth/check")
|
||||
async def check_auth(request: Request):
|
||||
"""
|
||||
Check if current session is authenticated.
|
||||
"""
|
||||
session_id = request.cookies.get("session_id")
|
||||
username = validate_session(session_id)
|
||||
|
||||
if username:
|
||||
return {"authenticated": True, "username": username}
|
||||
else:
|
||||
return {"authenticated": False}
|
||||
|
||||
@@ -0,0 +1,4 @@
"""
Services for GNSS Guard Server
"""

@@ -0,0 +1,225 @@
#!/usr/bin/env python3
"""
Asset management service for GNSS Guard Server
"""

import json
import logging
from datetime import datetime, timedelta, timezone
from typing import List, Optional, Dict, Any

from sqlalchemy.orm import Session
from sqlalchemy import desc, func

from models import Asset, ValidationHistory, AssetNotificationState

logger = logging.getLogger("gnss_guard.server.asset_service")


class AssetService:
    """Service for asset-related operations"""

    def __init__(self, db: Session):
        self.db = db

    def get_all_assets(self, include_inactive: bool = False) -> List[Asset]:
        """Get all assets"""
        query = self.db.query(Asset)
        if not include_inactive:
            query = query.filter(Asset.is_active == True)
        return query.all()

    def get_asset_by_name(self, name: str) -> Optional[Asset]:
        """Get asset by name"""
        return self.db.query(Asset).filter(Asset.name == name).first()

    def get_asset_by_token(self, token: str) -> Optional[Asset]:
        """Get active asset by token"""
        token_hash = Asset.hash_token(token)
        return self.db.query(Asset).filter(
            Asset.token_hash == token_hash,
            Asset.is_active == True
        ).first()

    def get_latest_validation(self, asset_id: int) -> Optional[ValidationHistory]:
        """Get the latest validation record for an asset"""
        return self.db.query(ValidationHistory).filter(
            ValidationHistory.asset_id == asset_id
        ).order_by(desc(ValidationHistory.validation_timestamp_unix)).first()

    def get_validation_at_timestamp(
        self,
        asset_id: int,
        target_timestamp: float
    ) -> Optional[ValidationHistory]:
        """
        Get the validation record closest to (but not after) the specified timestamp.
        This is useful for viewing historical data at a specific point in time.
        """
        return self.db.query(ValidationHistory).filter(
            ValidationHistory.asset_id == asset_id,
            ValidationHistory.validation_timestamp_unix <= target_timestamp
        ).order_by(desc(ValidationHistory.validation_timestamp_unix)).first()

    def get_validation_history(
        self,
        asset_id: int,
        hours: int = 72,
        limit: Optional[int] = None
    ) -> List[ValidationHistory]:
        """Get validation history for an asset"""
        # Timezone-aware UTC: .timestamp() on a naive datetime would assume
        # server-local time
        cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
        cutoff_unix = cutoff.timestamp()

        query = self.db.query(ValidationHistory).filter(
            ValidationHistory.asset_id == asset_id,
            ValidationHistory.validation_timestamp_unix >= cutoff_unix
        ).order_by(desc(ValidationHistory.validation_timestamp_unix))

        if limit:
            query = query.limit(limit)

        return query.all()

    def get_all_assets_status(self) -> List[Dict[str, Any]]:
        """Get status summary for all active assets"""
        assets = self.get_all_assets()
        statuses = []

        for asset in assets:
            latest = self.get_latest_validation(asset.id)

            # Get online status from notification state (consistent with Telegram alerts)
            notification_state = self.db.query(AssetNotificationState).filter(
                AssetNotificationState.asset_id == asset.id
            ).first()

            is_online = notification_state.is_online if notification_state else False
            last_seen = notification_state.last_validation_at if notification_state else None

            # Fall back to validation timestamp if no notification state
            if not last_seen and latest and latest.received_at:
                last_seen = latest.received_at

            is_valid = None
            has_distance_alert = False  # True if distance threshold exceeded

            if latest:
                is_valid = latest.is_valid

                # Check if there's a distance alert (AT RISK vs DEGRADED)
                if not is_valid:
                    validation_details = json.loads(latest.validation_details or "{}")
                    coordinate_differences = json.loads(latest.coordinate_differences or "{}")
                    threshold = validation_details.get("threshold_meters", 200)
                    max_distance = validation_details.get("max_distance_meters", 0)

                    # Also check coordinate_differences for max distance
                    if not max_distance and coordinate_differences:
                        for diff_data in coordinate_differences.values():
                            if isinstance(diff_data, dict):
                                dist = diff_data.get("distance_meters", 0)
                                if dist > max_distance:
                                    max_distance = dist

                    has_distance_alert = max_distance > threshold

            statuses.append({
                "name": asset.name,
                "is_online": is_online,
                "is_valid": is_valid,
                "has_distance_alert": has_distance_alert,
                "last_seen": last_seen.isoformat() if last_seen else None,
                "description": asset.description
            })

        return statuses

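    # Illustrative element of the returned list (values made up):
    #
    #     {"name": "vessel-01", "is_online": True, "is_valid": False,
    #      "has_distance_alert": True, "last_seen": "2025-01-01T12:00:00",
    #      "description": "Test unit"}
    #
    # has_distance_alert is what lets the dashboard distinguish AT RISK
    # (threshold breached) from merely DEGRADED (sources missing/stale).
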
    def get_route_data(
        self,
        asset_id: int,
        hours: int = 72,
        until_timestamp: Optional[float] = None
    ) -> List[Dict[str, Any]]:
        """
        Get route data for map visualization.
        Returns list of points with coordinates and validation status.

        Args:
            asset_id: The asset ID
            hours: Number of hours of history to retrieve
            until_timestamp: Optional Unix timestamp to show route up to this time.
                If provided, returns `hours` of history ending at this timestamp.
        """
        if until_timestamp is not None:
            # Get history ending at the specified timestamp
            cutoff_unix = until_timestamp - (hours * 3600)
            validations = self.db.query(ValidationHistory).filter(
                ValidationHistory.asset_id == asset_id,
                ValidationHistory.validation_timestamp_unix >= cutoff_unix,
                ValidationHistory.validation_timestamp_unix <= until_timestamp
            ).order_by(desc(ValidationHistory.validation_timestamp_unix)).all()
        else:
            validations = self.get_validation_history(asset_id, hours)
        route_points = []

        for v in validations:
            source_coordinates = json.loads(v.source_coordinates or "{}")

            # Get primary coordinate (prefer nmea_primary, then tm_ais, then any)
            coord = None
            for source in ["nmea_primary", "tm_ais", "starlink_location"]:
                if source in source_coordinates:
                    coord = source_coordinates[source]
                    break

            if not coord and source_coordinates:
                # Use first available
                coord = list(source_coordinates.values())[0]

            if coord and coord.get("latitude") and coord.get("longitude"):
                # Determine status color
                sources_missing = json.loads(v.sources_missing or "[]")
                sources_stale = json.loads(v.sources_stale or "[]")
                validation_details = json.loads(v.validation_details or "{}")

                threshold = validation_details.get("threshold_meters", 200)
                max_distance = validation_details.get("max_distance_meters", 0)

                if not v.is_valid and max_distance > threshold:
                    status = "alert"  # Red - distance exceeded
                elif sources_missing or sources_stale:
                    status = "degraded"  # Orange - missing/stale
                else:
                    status = "valid"  # Green - all OK

                route_points.append({
                    "id": v.id,
                    "timestamp": v.validation_timestamp,
                    "timestamp_unix": v.validation_timestamp_unix,
                    "latitude": coord["latitude"],
                    "longitude": coord["longitude"],
                    "status": status,
                    "is_valid": v.is_valid,
                    "sources_missing": sources_missing,
                    "sources_stale": sources_stale,
                    "max_distance_m": max_distance,
                    "threshold_m": threshold
                })

        return route_points

    def cleanup_old_validations(self, days: int = 90) -> int:
        """Remove validation records older than specified days"""
        # Timezone-aware UTC, for the same .timestamp() reason as above
        cutoff = datetime.now(timezone.utc) - timedelta(days=days)
        cutoff_unix = cutoff.timestamp()

        deleted = self.db.query(ValidationHistory).filter(
            ValidationHistory.validation_timestamp_unix < cutoff_unix
        ).delete()

        self.db.commit()

        logger.info(f"Cleaned up {deleted} old validation records")
        return deleted

@@ -0,0 +1,366 @@
#!/usr/bin/env python3
"""
Server-side Telegram Notification Service for GNSS Guard

Sends alerts to Telegram for GPS validation state changes:
- Sources becoming missing or recovering
- Sources becoming stale or recovering
- Distance threshold breaches (possible jamming/spoofing)
"""

import json
import logging
import requests
from datetime import datetime
from typing import Dict, Any, List, Optional, Set

from sqlalchemy.orm import Session

from config import get_config
from models import Asset, AssetNotificationState

logger = logging.getLogger("gnss_guard.server.telegram")


class TelegramService:
    """Server-side Telegram notification service"""

    def __init__(self):
        """Initialize Telegram service with config"""
        config = get_config()
        self.bot_token = config.telegram_bot_token
        self.default_chat_id = config.telegram_chat_id
        self.enabled = config.telegram_enabled

        if self.enabled:
            self.api_url = f"https://api.telegram.org/bot{self.bot_token}"
            logger.info("Telegram service initialized")
        else:
            self.api_url = None
            logger.info("Telegram service disabled (no bot token or chat ID configured)")

    @staticmethod
    def escape_html(text: str) -> str:
        """Escape HTML special characters for Telegram HTML parsing"""
        text = str(text)
        text = text.replace('&', '&amp;')
        text = text.replace('<', '&lt;')
        text = text.replace('>', '&gt;')
        return text

    def _send_message(self, chat_id: str, message: str) -> bool:
        """Send a message to Telegram"""
        if not self.enabled:
            return False

        try:
            url = f"{self.api_url}/sendMessage"
            payload = {
                "chat_id": chat_id,
                "text": message,
                "parse_mode": "HTML",
                "disable_web_page_preview": True
            }

            response = requests.post(url, json=payload, timeout=10)

            if response.status_code == 200:
                return True
            else:
                logger.error(f"Telegram API error: {response.status_code} - {response.text}")
                return False

        except Exception as e:
            logger.error(f"Failed to send Telegram message: {e}")
            return False

    def _get_chat_id_for_asset(self, asset: Asset) -> Optional[str]:
        """Get the chat ID to use for an asset (asset-specific or default)"""
        if not asset.telegram_enabled:
            return None
        return asset.telegram_chat_id or self.default_chat_id

    def process_validation(
        self,
        db: Session,
        asset: Asset,
        validation_data: Dict[str, Any]
    ) -> bool:
        """
        Process a validation submission and send notification if state changed.
        Also handles online/offline state transitions.

        Args:
            db: Database session
            asset: Asset that submitted the validation
            validation_data: Validation data from the submission

        Returns:
            bool: True if notification was sent
        """
        chat_id = self._get_chat_id_for_asset(asset)

        # Get or create notification state for this asset
        state = db.query(AssetNotificationState).filter(
            AssetNotificationState.asset_id == asset.id
        ).first()

        if not state:
            state = AssetNotificationState(asset_id=asset.id)
            db.add(state)
            db.flush()

        notification_sent = False
        now = datetime.utcnow()

        # Check if asset was offline and is now back online
        was_offline = state.is_online == False and state.last_validation_at is not None

        if was_offline and self.enabled and chat_id:
            # Calculate how long it was offline
            offline_duration = (now - state.last_validation_at).total_seconds() if state.last_validation_at else None

            notification_sent = self.send_asset_online_alert(
                chat_id=chat_id,
                asset_name=asset.name,
                offline_duration_seconds=offline_duration
            )

        # Update online status and last validation time
        state.is_online = True
        state.last_validation_at = now

        # Skip further processing if Telegram is disabled
        if not self.enabled or not chat_id:
            db.commit()
            return notification_sent

        # Parse current state from validation
        sources_missing = set(validation_data.get("sources_missing", []))
        sources_stale = set(validation_data.get("sources_stale", []))
        validation_details = validation_data.get("validation_details", {})
        threshold = validation_details.get("threshold_meters", 0)
        max_distance = validation_details.get("max_distance_meters", 0)
        threshold_breached = max_distance > threshold if max_distance and threshold else False

        # Parse previous state
        prev_missing = set(json.loads(state.prev_sources_missing or "[]"))
        prev_stale = set(json.loads(state.prev_sources_stale or "[]"))
        prev_threshold_breached = state.prev_threshold_breached or False

        # Detect changes
        missing_added = sources_missing - prev_missing
        missing_removed = prev_missing - sources_missing
        stale_added = sources_stale - prev_stale
        stale_removed = prev_stale - sources_stale
        threshold_changed = threshold_breached != prev_threshold_breached

        has_state_change = (
            missing_added or missing_removed or
            stale_added or stale_removed or
            threshold_changed
        )

        if has_state_change:
            logger.info(f"State change detected for {asset.name}")

            # Build and send notification
            source_coordinates = validation_data.get("source_coordinates", {})

            message = self._build_state_change_message(
                asset_name=asset.name,
                missing_added=missing_added,
                missing_removed=missing_removed,
                stale_added=stale_added,
                stale_removed=stale_removed,
                threshold_breached=threshold_breached,
                prev_threshold_breached=prev_threshold_breached,
                max_distance_meters=max_distance,
                threshold_meters=threshold,
                source_coordinates=source_coordinates
            )

            if self._send_message(chat_id, message):
                state.last_notification_at = now
                logger.info(f"Notification sent for {asset.name}")
                notification_sent = True

        # Update state
        state.prev_sources_missing = json.dumps(list(sources_missing))
        state.prev_sources_stale = json.dumps(list(sources_stale))
        state.prev_threshold_breached = threshold_breached

        db.commit()

        return notification_sent

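    # Worked example of the change detection above (values illustrative): if
    # prev_missing == {"tm_ais"} and the new submission reports
    # sources_missing == set(), then missing_removed == {"tm_ais"} and a
    # single RECOVERED notification is sent; a submission identical to the
    # stored previous state sends nothing.
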
    def _build_state_change_message(
        self,
        asset_name: str,
        missing_added: Set[str],
        missing_removed: Set[str],
        stale_added: Set[str],
        stale_removed: Set[str],
        threshold_breached: bool,
        prev_threshold_breached: bool,
        max_distance_meters: float,
        threshold_meters: float,
        source_coordinates: Dict[str, Any]
    ) -> str:
        """Build the state change notification message"""
        timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S UTC")

        # Determine if this is a degradation or recovery
        is_degradation = missing_added or stale_added or (threshold_breached and not prev_threshold_breached)
        is_recovery = missing_removed or stale_removed or (not threshold_breached and prev_threshold_breached)

        if is_degradation and not is_recovery:
            emoji = "🚨"
            title = "GNSS STATE DEGRADED"
        elif is_recovery and not is_degradation:
            emoji = "✅"
            title = "GNSS STATE RECOVERED"
        else:
            emoji = "⚠️"
            title = "GNSS STATE CHANGED"

        message = (
            f"{emoji} <b>{title}</b>\n\n"
            f"📍 <b>Asset:</b> {self.escape_html(asset_name)}\n"
            f"⏰ <b>Time:</b> {timestamp}\n\n"
        )

        # Missing sources changes
        if missing_added:
            message += f"❌ <b>Sources now MISSING:</b> {', '.join(sorted(missing_added))}\n"
        if missing_removed:
            message += f"✅ <b>Sources RECOVERED (was missing):</b> {', '.join(sorted(missing_removed))}\n"

        # Stale sources changes
        if stale_added:
            message += f"⏱️ <b>Sources now STALE:</b> {', '.join(sorted(stale_added))}\n"
        if stale_removed:
            message += f"✅ <b>Sources RECOVERED (was stale):</b> {', '.join(sorted(stale_removed))}\n"

        # Threshold breach changes
        if threshold_breached and not prev_threshold_breached:
            message += (
                f"\n🚨 <b>DISTANCE THRESHOLD BREACHED!</b>\n"
                f"   Max distance: {max_distance_meters:.1f}m (threshold: {threshold_meters:.1f}m)\n"
                f"   ⚠️ Possible GPS jamming or spoofing!\n"
            )
        elif not threshold_breached and prev_threshold_breached:
            message += (
                f"\n✅ <b>Distance threshold OK</b>\n"
                f"   Max distance: {max_distance_meters:.1f}m (threshold: {threshold_meters:.1f}m)\n"
            )

        # Current coordinates summary
        if source_coordinates:
            message += f"\n📍 <b>Current Coordinates:</b>\n"
            for source, coords in source_coordinates.items():
                lat = coords.get("latitude", "N/A")
                lon = coords.get("longitude", "N/A")
                message += f"  • {self.escape_html(source)}: {lat}, {lon}\n"

        return message

    def send_asset_offline_alert(
        self,
        chat_id: str,
        asset_name: str,
        last_seen: datetime,
        offline_threshold_seconds: int = 120
    ) -> bool:
        """Send notification when an asset goes offline (no updates received)"""
        if not self.enabled:
            return False

        timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S UTC")
        last_seen_str = last_seen.strftime("%Y-%m-%d %H:%M:%S UTC") if last_seen else "Unknown"

        message = (
            f"📴 <b>ASSET OFFLINE</b>\n\n"
            f"📍 <b>Asset:</b> {self.escape_html(asset_name)}\n"
            f"⏰ <b>Detected at:</b> {timestamp}\n"
            f"🕐 <b>Last seen:</b> {last_seen_str}\n\n"
            f"⚠️ No updates received for over {offline_threshold_seconds} seconds.\n"
            f"Check client connectivity and service status."
        )

        result = self._send_message(chat_id, message)
        if result:
            logger.info(f"Offline alert sent for {asset_name}")
        return result

    def send_asset_online_alert(
        self,
        chat_id: str,
        asset_name: str,
        offline_duration_seconds: Optional[float] = None
    ) -> bool:
        """Send notification when an asset comes back online"""
        if not self.enabled:
            return False

        timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S UTC")

        duration_str = ""
        if offline_duration_seconds:
            if offline_duration_seconds < 60:
                duration_str = f"{int(offline_duration_seconds)} seconds"
            elif offline_duration_seconds < 3600:
                duration_str = f"{int(offline_duration_seconds / 60)} minutes"
            else:
                hours = offline_duration_seconds / 3600
                duration_str = f"{hours:.1f} hours"

        message = (
            f"📶 <b>ASSET BACK ONLINE</b>\n\n"
            f"📍 <b>Asset:</b> {self.escape_html(asset_name)}\n"
            f"⏰ <b>Time:</b> {timestamp}\n"
        )

        if duration_str:
            message += f"⏱️ <b>Was offline for:</b> {duration_str}\n"

        message += f"\n✅ Asset is now reporting normally."

        result = self._send_message(chat_id, message)
        if result:
            logger.info(f"Online alert sent for {asset_name}")
        return result

    def test_connection(self) -> bool:
        """Test Telegram bot connection"""
        if not self.enabled:
            return False

        try:
            url = f"{self.api_url}/getMe"
            response = requests.get(url, timeout=10)

            if response.status_code == 200:
                bot_info = response.json()
                logger.info(f"Telegram bot connected: @{bot_info['result']['username']}")
                return True
            else:
                logger.error(f"Telegram connection failed: {response.status_code}")
                return False

        except Exception as e:
            logger.error(f"Telegram connection error: {e}")
            return False


# Singleton instance
_telegram_service: Optional[TelegramService] = None


def get_telegram_service() -> TelegramService:
    """Get the singleton Telegram service instance"""
    global _telegram_service
    if _telegram_service is None:
        _telegram_service = TelegramService()
    return _telegram_service

976 backup-from-device/gnss-guard/tm-gnss-guard/server/static/app.js Normal file
@@ -0,0 +1,976 @@
/**
 * GNSS Guard Server - Dashboard JavaScript
 * Multi-asset monitoring with 72h route visualization
 */

// Global state
let map = null;
let currentAsset = null;
let currentData = null;
let assets = [];
let routeMarkers = [];
let sourceMarkers = {};
let showRouteEnabled = true;
let lastFetchSucceeded = false;
let lastValidationTimestamp = null;
let isInitialMapLoad = true; // Only fit bounds on initial load or asset change

// Time mode state
let timeMode = 'now'; // 'now' or 'select'
let selectedTimestamp = null; // Unix timestamp when in 'select' mode
let autoRefreshInterval = null;

// =============================================================================
// AUTO-REFRESH PAGE (every 1 hour to pick up deployments)
// =============================================================================
const PAGE_LOAD_TIME = Date.now();
const AUTO_REFRESH_INTERVAL_MS = 60 * 60 * 1000; // 1 hour
let lastVisibilityCheck = Date.now();

function checkAutoRefresh() {
    const elapsed = Date.now() - PAGE_LOAD_TIME;
    if (elapsed >= AUTO_REFRESH_INTERVAL_MS) {
        console.log('Auto-refreshing page after 1 hour...');
        window.location.reload();
    }
}

// Check for refresh on visibility change (tab becomes active)
document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'visible') {
        const now = Date.now();
        // Only check if at least 10 seconds since last check (prevents rapid refreshes)
        if (now - lastVisibilityCheck > 10000) {
            lastVisibilityCheck = now;
            checkAutoRefresh();
        }
    }
});

// Periodic check every 5 minutes while tab is active
setInterval(checkAutoRefresh, 5 * 60 * 1000);

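// Net effect (summary): a kiosk tab left open reloads at most once per hour,
// triggered either by the 5-minute timer or by the tab regaining visibility
// (debounced to one check per 10 s).
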
// Marker icons for sources
const iconPrimary = makeIcon('violet');
const iconSecondary = makeIcon('grey');
const iconAis = makeIcon('blue');
const iconStarlinkGps = makeIcon('yellow');
const iconStarlinkLocation = makeIcon('green');

const sourceConfig = {
    'nmea_primary': { icon: iconPrimary, name: 'Primary GPS' },
    'nmea_secondary': { icon: iconSecondary, name: 'Secondary GPS' },
    'tm_ais': { icon: iconAis, name: 'TM AIS GPS' },
    'starlink_gps': { icon: iconStarlinkGps, name: 'Starlink GPS' },
    'starlink_location': { icon: iconStarlinkLocation, name: 'Starlink Location' }
};

// Initialize on DOM ready
document.addEventListener('DOMContentLoaded', () => {
    initMap();
    initTabs();
    initTimePicker();
    loadAssets();

    // Auto-refresh every 10 seconds (only when in 'now' mode)
    startAutoRefresh();
});

// =============================================================================
// MAP INITIALIZATION
// =============================================================================

function initMap() {
    map = L.map('map', { zoomControl: true }).setView([34.665151, 33.016326], 11);

    // CartoDB Dark tiles
    L.tileLayer('https://{s}.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}{r}.png', {
        maxZoom: 19,
        attribution: '© OpenStreetMap & CARTO'
    }).addTo(map);

    // Recalculate marker offsets when zoom changes
    map.on('zoomend', () => {
        if (currentData) {
            updateMap(currentData);
        }
    });
}

function makeIcon(color) {
    return new L.Icon({
        iconUrl: `https://raw.githubusercontent.com/pointhi/leaflet-color-markers/master/img/marker-icon-${color}.png`,
        shadowUrl: 'https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.9.4/images/marker-shadow.png',
        iconSize: [25, 41],
        iconAnchor: [12, 41],
        popupAnchor: [1, -34],
        shadowSize: [41, 41]
    });
}

// =============================================================================
// TABS (Mobile)
// =============================================================================

function initTabs() {
    const tabButtons = document.querySelectorAll('.tab-btn');
    tabButtons.forEach(btn => {
        btn.addEventListener('click', () => {
            const tabName = btn.dataset.tab;

            // Update button states
            tabButtons.forEach(b => b.classList.remove('active'));
            btn.classList.add('active');

            // Update tab content
            document.querySelectorAll('.tab-content').forEach(tab => {
                tab.classList.remove('active');
            });
            document.getElementById(`tab-${tabName}`).classList.add('active');

            // Invalidate map size when showing map tab
            if (tabName === 'map' && map) {
                setTimeout(() => map.invalidateSize(), 100);
            }
        });
    });
}

// =============================================================================
// TIME SELECTOR
// =============================================================================

function initTimePicker() {
    // Set default datetime value to now
    const now = new Date();
    const localDatetime = formatDatetimeLocal(now);

    const desktopPicker = document.getElementById('selectedDatetime');
    const mobilePicker = document.getElementById('mobileSelectedDatetime');

    if (desktopPicker) desktopPicker.value = localDatetime;
    if (mobilePicker) mobilePicker.value = localDatetime;
}

function formatDatetimeLocal(date) {
    // Format date as YYYY-MM-DDTHH:mm for datetime-local input
    const year = date.getFullYear();
    const month = String(date.getMonth() + 1).padStart(2, '0');
    const day = String(date.getDate()).padStart(2, '0');
    const hours = String(date.getHours()).padStart(2, '0');
    const minutes = String(date.getMinutes()).padStart(2, '0');
    return `${year}-${month}-${day}T${hours}:${minutes}`;
}

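// Example (illustrative): formatDatetimeLocal(new Date(2025, 0, 31, 14, 5))
// returns "2025-01-31T14:05", the exact shape <input type="datetime-local">
// expects for its value.
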
function setTimeMode(mode) {
    timeMode = mode;

    // Update radio buttons (sync both desktop and mobile)
    document.querySelectorAll('input[name="timeMode"], input[name="timeModeM"]').forEach(radio => {
        radio.checked = radio.value === mode;
    });

    // Show/hide datetime picker
    const pickers = ['datetimePicker', 'mobileDatetimePicker'];
    const displays = ['selectedTimeDisplay', 'mobileSelectedTimeDisplay'];

    pickers.forEach(id => {
        const el = document.getElementById(id);
        if (el) el.classList.toggle('hidden', mode === 'now');
    });

    if (mode === 'now') {
        // Hide the selected time display when switching to 'now'
        displays.forEach(id => {
            const el = document.getElementById(id);
            if (el) el.classList.add('hidden');
        });

        // Clear selected timestamp
        selectedTimestamp = null;

        // Reset map to fit bounds when switching to 'now'
        isInitialMapLoad = true;

        // Restart auto-refresh and fetch current data
        startAutoRefresh();
        fetchData();
        loadRouteData();
    } else {
        // Stop auto-refresh when viewing historical data
        stopAutoRefresh();
    }
}

function onDatetimeChange() {
    // Sync desktop and mobile pickers
    const desktopPicker = document.getElementById('selectedDatetime');
    const mobilePicker = document.getElementById('mobileSelectedDatetime');

    // Get the value from whichever picker was changed
    const value = desktopPicker?.value || mobilePicker?.value;

    if (desktopPicker) desktopPicker.value = value;
    if (mobilePicker) mobilePicker.value = value;
}

function applySelectedTime() {
|
||||
const desktopPicker = document.getElementById('selectedDatetime');
|
||||
const value = desktopPicker?.value;
|
||||
|
||||
if (!value) {
|
||||
alert('Please select a date and time');
|
||||
return;
|
||||
}
|
||||
|
||||
// Convert to Unix timestamp
|
||||
const date = new Date(value);
|
||||
selectedTimestamp = date.getTime() / 1000;
|
||||
|
||||
// Update display
|
||||
const displayText = date.toLocaleString('en-US', {
|
||||
month: 'short',
|
||||
day: 'numeric',
|
||||
year: 'numeric',
|
||||
hour: '2-digit',
|
||||
minute: '2-digit',
|
||||
hour12: false
|
||||
});
|
||||
|
||||
const displays = ['selectedTimeDisplay', 'mobileSelectedTimeDisplay'];
|
||||
const textEls = ['selectedTimeText', 'mobileSelectedTimeText'];
|
||||
|
||||
displays.forEach(id => {
|
||||
const el = document.getElementById(id);
|
||||
if (el) el.classList.remove('hidden');
|
||||
});
|
||||
|
||||
textEls.forEach(id => {
|
||||
const el = document.getElementById(id);
|
||||
if (el) el.textContent = displayText;
|
||||
});
|
||||
|
||||
// Reset map to fit bounds when applying new time
|
||||
isInitialMapLoad = true;
|
||||
|
||||
// Fetch historical data
|
||||
fetchData();
|
||||
loadRouteData();
|
||||
|
||||
logEvent('info', `Viewing data at ${displayText}`);
|
||||
}

function startAutoRefresh() {
    if (autoRefreshInterval) return; // Already running
    autoRefreshInterval = setInterval(() => {
        if (timeMode === 'now') {
            fetchData();
        }
    }, 10000);
}

function stopAutoRefresh() {
    if (autoRefreshInterval) {
        clearInterval(autoRefreshInterval);
        autoRefreshInterval = null;
    }
}

function resetTimeMode() {
    // Reset to 'now' mode (called when switching assets)
    timeMode = 'now';
    selectedTimestamp = null;

    // Update UI
    document.querySelectorAll('input[name="timeMode"], input[name="timeModeM"]').forEach(radio => {
        radio.checked = radio.value === 'now';
    });

    const pickers = ['datetimePicker', 'mobileDatetimePicker'];
    const displays = ['selectedTimeDisplay', 'mobileSelectedTimeDisplay'];

    pickers.forEach(id => {
        const el = document.getElementById(id);
        if (el) el.classList.add('hidden');
    });

    displays.forEach(id => {
        const el = document.getElementById(id);
        if (el) el.classList.add('hidden');
    });

    // Reset datetime picker to current time
    const now = new Date();
    const localDatetime = formatDatetimeLocal(now);
    const desktopPicker = document.getElementById('selectedDatetime');
    const mobilePicker = document.getElementById('mobileSelectedDatetime');
    if (desktopPicker) desktopPicker.value = localDatetime;
    if (mobilePicker) mobilePicker.value = localDatetime;

    // Restart auto-refresh
    startAutoRefresh();
}

// =============================================================================
// ASSET MANAGEMENT
// =============================================================================

async function loadAssets() {
    try {
        const response = await fetch('/api/dashboard/assets');
        if (!response.ok) throw new Error('Failed to load assets');

        assets = await response.json();
        renderAssetList();
        populateMobileDropdown();

        // Auto-select last asset if available (most recently added)
        if (assets.length > 0) {
            selectAsset(assets[assets.length - 1].name);
        }
    } catch (error) {
        console.error('Error loading assets:', error);
        document.getElementById('assetList').innerHTML =
            '<div class="asset-loading">Failed to load assets</div>';
    }
}

function renderAssetList() {
    const container = document.getElementById('assetList');

    if (assets.length === 0) {
        container.innerHTML = '<div class="asset-loading">No assets registered</div>';
        return;
    }

    container.innerHTML = assets.map(asset => {
        // Determine status class:
        // - online + valid = green (online)
        // - online + invalid + distance alert = red (alert)
        // - online + invalid + no distance alert = amber (degraded)
        // - offline = gray (no class)
        let statusClass = '';
        if (asset.is_online) {
            if (asset.is_valid === true) {
                statusClass = 'online'; // green
            } else if (asset.is_valid === false) {
                statusClass = asset.has_distance_alert ? 'alert' : 'degraded'; // red or amber
            } else {
                statusClass = 'online'; // null/unknown - assume ok
            }
        }

        const isActive = currentAsset === asset.name;

        return `
            <div class="asset-item ${isActive ? 'active' : ''} ${!asset.is_online ? 'offline' : ''}"
                 onclick="selectAsset('${asset.name}')">
                <div class="asset-name">${asset.name}</div>
                <div class="asset-status">
                    <span class="status-dot ${statusClass}"></span>
                    <span>${asset.is_online ? 'Online' : 'Offline'}</span>
                </div>
            </div>
        `;
    }).join('');
}

function populateMobileDropdown() {
    const select = document.getElementById('mobileAssetSelect');
    select.innerHTML = '<option value="">Select Asset...</option>' +
        assets.map(asset => `<option value="${asset.name}">${asset.name}</option>`).join('');

    if (currentAsset) {
        select.value = currentAsset;
    }
}

function selectAsset(assetName) {
    if (!assetName) return;

    currentAsset = assetName;

    // Update UI
    renderAssetList();
    document.getElementById('mobileAssetSelect').value = assetName;

    // Reset time mode to 'now' when switching assets
    resetTimeMode();

    // Clear current data and fetch new
    currentData = null;
    clearSourceMarkers();
    clearRouteMarkers();
    isInitialMapLoad = true; // Reset to fit bounds for new asset

    // Show loading state immediately while fetching
    showLoadingState();

    fetchData();
    loadRouteData();
}

// =============================================================================
// DATA FETCHING
// =============================================================================

async function fetchData() {
    if (!currentAsset) return;

    try {
        // Build URL with optional timestamp parameter
        let url = `/api/dashboard/asset/${currentAsset}/status`;
        if (timeMode === 'select' && selectedTimestamp) {
            url += `?at=${selectedTimestamp}`;
        }

        const response = await fetch(url);

        if (!response.ok) {
            showDegradedState(`Server error: ${response.status}`);
            return;
        }

        const data = await response.json();

        if (data.error) {
            showDegradedState(data.error);
            return;
        }

        currentData = data;
        lastFetchSucceeded = true;

        updateUI(data);
        updateMap(data);

        // Log event if validation timestamp changed (only in 'now' mode)
        if (timeMode === 'now' && data.validation_timestamp !== lastValidationTimestamp) {
            lastValidationTimestamp = data.validation_timestamp;
            if (data.has_alert && !data.is_valid && data.max_distance_km !== null) {
                logEvent('crit', `Alert: distance ${data.max_distance_km.toFixed(1)} km`);
            } else if (!data.is_valid) {
                logEvent('warn', 'Validation issue detected');
            } else {
                logEvent('info', 'Cloud status OK');
            }
        }

    } catch (error) {
        console.error('Fetch error:', error);
        showDegradedState('Connection failed: ' + error.message);
    }
}
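
// Illustrative request shapes for the status endpoint used above:
//   live view:        GET /api/dashboard/asset/<name>/status
//   historical view:  GET /api/dashboard/asset/<name>/status?at=<unix seconds>
// (<name> is the asset name selected in the sidebar.)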

async function loadRouteData() {
    if (!currentAsset) return;

    try {
        // Build URL with optional until parameter
        let url = `/api/dashboard/asset/${currentAsset}/route?hours=72`;
        if (timeMode === 'select' && selectedTimestamp) {
            url += `&until=${selectedTimestamp}`;
        }

        const response = await fetch(url);
        if (!response.ok) return;

        const routeData = await response.json();
        renderRoute(routeData);

    } catch (error) {
        console.error('Error loading route:', error);
    }
}

// =============================================================================
// UI UPDATES
// =============================================================================

/**
 * Update both GNSS status pills (desktop and mobile)
 */
function updateStatusPills(status, text) {
    const pills = [
        document.getElementById('desktopStatusPill'),
        document.getElementById('mobileStatusPill')
    ];

    pills.forEach(pill => {
        if (!pill) return;
        pill.classList.remove('ok', 'warn', 'crit');
        pill.textContent = text;
        if (status) {
            pill.classList.add(status);
        }
    });
}

function updateUI(data) {
    // Update GNSS status pills
    if (data.has_alert && data.max_distance_km !== null) {
        updateStatusPills('crit', 'GNSS Integrity: At Risk');
    } else if (!data.is_valid) {
        updateStatusPills('warn', 'GNSS Integrity: Degraded');
    } else {
        updateStatusPills('ok', 'GNSS Integrity: Stable');
    }

    // Update alert banner
    const alertBanner = document.getElementById('alertBanner');
    const alertDistance = document.getElementById('alert-distance-value');

    if (data.has_alert && data.max_distance_km !== null) {
        alertBanner.classList.remove('hidden');
        alertDistance.textContent = `${data.max_distance_km.toFixed(1)} km`;
    } else {
        alertBanner.classList.add('hidden');
    }

    // Update sources - pass distance alert state
    const hasDistanceAlert = data.has_alert && data.max_distance_km !== null;
    renderSources(data.sources, hasDistanceAlert);
}

function renderSources(sources, hasDistanceAlert = false) {
    const container = document.getElementById('sourcesContainer');
    const sourceOrder = ['nmea_primary', 'nmea_secondary', 'tm_ais', 'starlink_gps', 'starlink_location'];

    container.innerHTML = sourceOrder.map(sourceName => {
        const source = sources[sourceName];
        if (!source) return '';

        let cardClass = 'ok';
        let badgeClass = 'badge-healthy';
        let badgeText = 'HEALTHY';
        let coordsText = 'Loading...';
        let updateText = '-';
        let updateClass = '';

        if (!source.enabled) {
            cardClass = 'offline';
            badgeClass = 'badge-offline';
            badgeText = 'NOT CONFIGURED';
            coordsText = 'No data source configured.';
        } else if (source.status === 'missing') {
            cardClass = 'crit';
            badgeClass = 'badge-danger';
            badgeText = 'MISSING';
            coordsText = 'No coordinates received.';
            updateClass = 'stale-text';
        } else if (source.status === 'stale' || source.is_stale) {
            cardClass = 'stale';
            badgeClass = 'badge-stale';
            badgeText = 'STALE';
            if (source.coordinates) {
                coordsText = `${source.coordinates.latitude.toFixed(6)}, ${source.coordinates.longitude.toFixed(6)}`;
            }
            updateClass = 'stale-text';
        } else {
            if (source.coordinates) {
                coordsText = `${source.coordinates.latitude.toFixed(6)}, ${source.coordinates.longitude.toFixed(6)}`;
            }

            // If distance alert and source has coordinates, mark as AT RISK
            if (hasDistanceAlert && source.coordinates) {
                cardClass = 'crit';
                badgeClass = 'badge-danger';
                badgeText = 'AT RISK';
            }
        }

        if (source.last_update_unix) {
            updateText = formatRelativeTime(source.last_update_unix);
        }

        return `
            <div class="card ${cardClass}">
                <div class="card-header">
                    <div class="card-title">${source.display_name}</div>
                    <div class="badge ${badgeClass}">${badgeText}</div>
                </div>
                <div class="card-line"><strong>Lat/Lon</strong>: ${coordsText}</div>
                <div class="card-line"><strong>Updated</strong>: <span class="${updateClass}">${updateText}</span></div>
            </div>
        `;
    }).join('');
}

/**
 * Show loading state while fetching data for a new asset
 */
function showLoadingState() {
    // Show neutral loading status
    updateStatusPills(null, 'GNSS Integrity: Loading...');

    // Hide alert banner
    document.getElementById('alertBanner').classList.add('hidden');

    // Show placeholder source cards
    renderPlaceholderSources('loading');
}

/**
 * Show state when asset has never pushed any validation data
 */
function showNoDataState() {
    lastFetchSucceeded = false;

    // Show neutral "no data" status
    updateStatusPills(null, 'GNSS Integrity: No Data');

    // Hide alert banner
    document.getElementById('alertBanner').classList.add('hidden');

    // Show placeholder source cards indicating awaiting data
    renderPlaceholderSources('nodata');

    logEvent('warn', 'Asset has not pushed any validation data yet');
}

/**
 * Render placeholder cards for all sources
 * @param {string} mode - 'loading' or 'nodata'
 */
function renderPlaceholderSources(mode) {
    const container = document.getElementById('sourcesContainer');
    const sourceNames = {
        'nmea_primary': 'Primary GPS',
        'nmea_secondary': 'Secondary GPS',
        'tm_ais': 'TM AIS GPS',
        'starlink_gps': 'Starlink GPS',
        'starlink_location': 'Starlink Location'
    };
    const sourceOrder = ['nmea_primary', 'nmea_secondary', 'tm_ais', 'starlink_gps', 'starlink_location'];

    const isLoading = mode === 'loading';
    const badgeText = isLoading ? 'LOADING' : 'AWAITING';
    const coordsText = isLoading ? 'Loading...' : 'Awaiting first update...';
    const updateText = isLoading ? '...' : '—';

    container.innerHTML = sourceOrder.map(sourceName => {
        return `
            <div class="card">
                <div class="card-header">
                    <div class="card-title">${sourceNames[sourceName]}</div>
                    <div class="badge badge-offline">${badgeText}</div>
                </div>
                <div class="card-line"><strong>Lat/Lon</strong>: ${coordsText}</div>
                <div class="card-line"><strong>Updated</strong>: <span>${updateText}</span></div>
            </div>
        `;
    }).join('');
}

function showDegradedState(errorMessage) {
    lastFetchSucceeded = false;

    // Check if this is a "no data" error
    if (errorMessage && errorMessage.includes('No validation data')) {
        showNoDataState();
        return;
    }

    // Update status pills to degraded state
    updateStatusPills('warn', 'GNSS Integrity: Degraded');

    // Mark all update times as stale
    document.querySelectorAll('.card-line').forEach(line => {
        if (line.textContent.includes('Updated')) {
            const span = line.querySelector('span');
            if (span) span.classList.add('stale-text');
        }
    });

    logEvent('crit', errorMessage);
}

// =============================================================================
// MAP UPDATES
// =============================================================================

// Calculate offset for markers to spread them in a circle when close together
function calculateMarkerOffsets(sourceCoords, zoomLevel) {
    if (Object.keys(sourceCoords).length <= 1) {
        // Single marker, no offset needed
        const result = {};
        for (const [name, coord] of Object.entries(sourceCoords)) {
            result[name] = { lat: coord.lat, lon: coord.lon, offsetLat: 0, offsetLon: 0 };
        }
        return result;
    }

    // Calculate centroid
    let sumLat = 0, sumLon = 0, count = 0;
    for (const coord of Object.values(sourceCoords)) {
        sumLat += coord.lat;
        sumLon += coord.lon;
        count++;
    }
    const centroidLat = sumLat / count;
    const centroidLon = sumLon / count;

    // Check if markers are close together (within ~50 meters)
    const closeThreshold = 0.0005; // ~50m in degrees
    let maxDist = 0;
    for (const coord of Object.values(sourceCoords)) {
        const dist = Math.sqrt(
            Math.pow(coord.lat - centroidLat, 2) +
            Math.pow(coord.lon - centroidLon, 2)
        );
        maxDist = Math.max(maxDist, dist);
    }

    // If markers are spread out enough, don't offset
    if (maxDist > closeThreshold) {
        const result = {};
        for (const [name, coord] of Object.entries(sourceCoords)) {
            result[name] = { lat: coord.lat, lon: coord.lon, offsetLat: 0, offsetLon: 0 };
        }
        return result;
    }

    // Calculate offset radius based on zoom level (smaller offset when zoomed in):
    // ~30m at zoom 15, doubling for each level zoomed out (~1 km at zoom 10)
    const baseOffset = 0.0003; // ~30m base offset
    const zoomFactor = Math.pow(2, 15 - Math.min(zoomLevel, 18));
    const offsetRadius = baseOffset * zoomFactor;

    // Arrange markers in a circle around centroid
    const result = {};
    const sourceNames = Object.keys(sourceCoords);
    const angleStep = (2 * Math.PI) / sourceNames.length;

    sourceNames.forEach((name, index) => {
        const angle = angleStep * index - Math.PI / 2; // Start from top
        const offsetLat = offsetRadius * Math.cos(angle);
        const offsetLon = offsetRadius * Math.sin(angle) * 1.5; // Adjust for latitude distortion

        result[name] = {
            lat: centroidLat + offsetLat,
            lon: centroidLon + offsetLon,
            offsetLat: offsetLat,
            offsetLon: offsetLon,
            originalLat: sourceCoords[name].lat,
            originalLon: sourceCoords[name].lon
        };
    });

    return result;
}
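
// Worked example (illustrative): at zoom 13, zoomFactor = 2^(15-13) = 4, so
// offsetRadius = 0.0003 * 4 = 0.0012 deg of latitude (~130 m). Three overlapping
// sources are then placed 120 degrees apart on that circle, starting at the top.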

function updateMap(data) {
    clearSourceMarkers();

    const sources = data.sources || {};
    const allCoords = [];
    const sourceCoords = {};

    // First pass: collect all valid coordinates
    // (compared against null/undefined so 0 deg latitude/longitude still counts)
    Object.entries(sources).forEach(([sourceName, sourceData]) => {
        if (sourceData.coordinates && sourceData.coordinates.latitude != null && sourceData.coordinates.longitude != null) {
            const lat = sourceData.coordinates.latitude;
            const lon = sourceData.coordinates.longitude;
            if (sourceConfig[sourceName]) {
                sourceCoords[sourceName] = { lat, lon };
                allCoords.push([lat, lon]);
            }
        }
    });

    // Calculate offsets for overlapping markers
    const zoomLevel = map.getZoom() || 13;
    const offsetPositions = calculateMarkerOffsets(sourceCoords, zoomLevel);

    // Second pass: add markers with calculated positions
    Object.entries(sources).forEach(([sourceName, sourceData]) => {
        if (sourceData.coordinates && sourceData.coordinates.latitude != null && sourceData.coordinates.longitude != null) {
            const config = sourceConfig[sourceName];
            const position = offsetPositions[sourceName];

            if (config && position) {
                // Build popup with original coordinates
                const origLat = sourceData.coordinates.latitude;
                const origLon = sourceData.coordinates.longitude;
                const popupContent = `<b>${config.name}</b><br>Lat: ${origLat.toFixed(6)}<br>Lon: ${origLon.toFixed(6)}`;

                const marker = L.marker([position.lat, position.lon], { icon: config.icon })
                    .bindPopup(popupContent)
                    .addTo(map);
                sourceMarkers[sourceName] = marker;
            }
        }
    });

    // Fit map to show all markers (only on initial load or asset change, not on refresh)
    if (isInitialMapLoad) {
        if (allCoords.length > 0) {
            const bounds = L.latLngBounds(allCoords);
            map.fitBounds(bounds, {
                padding: [50, 50], // Add padding around markers
                maxZoom: 15 // Don't zoom in too much when markers are close
            });
        } else if (currentData && currentData.map_center && currentData.map_center.latitude != null && currentData.map_center.longitude != null) {
            // Fallback to center if no markers
            map.setView([currentData.map_center.latitude, currentData.map_center.longitude], 13);
        }
        isInitialMapLoad = false; // Don't auto-zoom on subsequent refreshes
    }
}

function clearSourceMarkers() {
    Object.values(sourceMarkers).forEach(marker => {
        map.removeLayer(marker);
    });
    sourceMarkers = {};
}

// =============================================================================
// ROUTE VISUALIZATION
// =============================================================================

function renderRoute(routeData) {
    clearRouteMarkers();

    if (!showRouteEnabled || !routeData || routeData.length === 0) return;

    // Create small circle markers for route points
    routeData.forEach(point => {
        let color;
        let statusText;

        switch (point.status) {
            case 'valid':
                color = '#1fad3a';
                statusText = 'Valid';
                break;
            case 'degraded':
                color = '#ffa726';
                statusText = 'Degraded';
                break;
            case 'alert':
                color = '#c62828';
                statusText = 'Alert';
                break;
            default:
                color = '#9aa3b8';
                statusText = 'Unknown';
        }

        const marker = L.circleMarker([point.latitude, point.longitude], {
            radius: 5,
            fillColor: color,
            color: color,
            weight: 1,
            opacity: 0.8,
            fillOpacity: 0.6
        }).addTo(map);

        // Create detailed popup
        const popupContent = `
            <div class="route-popup">
                <div class="popup-header">${formatTimestamp(point.timestamp)}</div>
                <div class="popup-row"><strong>Status:</strong> <span class="status-${point.status}">${statusText}</span></div>
                <div class="popup-row"><strong>Lat/Lon:</strong> ${point.latitude.toFixed(6)}, ${point.longitude.toFixed(6)}</div>
                ${point.sources_missing?.length ? `<div class="popup-row"><strong>Missing:</strong> ${point.sources_missing.join(', ')}</div>` : ''}
                ${point.sources_stale?.length ? `<div class="popup-row"><strong>Stale:</strong> ${point.sources_stale.join(', ')}</div>` : ''}
                ${point.max_distance_m > point.threshold_m ? `<div class="popup-row"><strong>Distance:</strong> ${(point.max_distance_m/1000).toFixed(2)} km</div>` : ''}
            </div>
        `;

        marker.bindPopup(popupContent);
        routeMarkers.push(marker);
    });
}

function clearRouteMarkers() {
    routeMarkers.forEach(marker => {
        map.removeLayer(marker);
    });
    routeMarkers = [];
}

function toggleRoute() {
    showRouteEnabled = document.getElementById('showRoute').checked;
    if (showRouteEnabled) {
        loadRouteData();
    } else {
        clearRouteMarkers();
    }
}

// =============================================================================
// EVENT LOGGING
// =============================================================================

function logEvent(level, message) {
    const log = document.getElementById('eventLog');
    const now = new Date();
    const time = now.toTimeString().slice(0, 8);

    const levelMap = {
        'info': 'INFO',
        'warn': 'WARN',
        'crit': 'CRIT'
    };

    const event = document.createElement('div');
    event.className = `event level-${level}`;
    event.innerHTML = `<span class="level">${levelMap[level]}</span> [${time}] ${message}`;

    // Insert after title
    const title = log.querySelector('.event-log-title');
    if (title.nextSibling) {
        log.insertBefore(event, title.nextSibling);
    } else {
        log.appendChild(event);
    }

    // Keep only 3 events. Re-query on each pass: querySelectorAll returns a
    // static NodeList, so a length captured once would never shrink and the
    // loop would remove every event instead of just the overflow.
    while (log.querySelectorAll('.event').length > 3) {
        const lastEvent = log.querySelector('.event:last-of-type');
        if (lastEvent) lastEvent.remove();
        else break;
    }
}
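
// Example rendered entry (illustrative): "WARN [14:02:31] Validation issue detected"
// -- newest entries are inserted directly below the "Event Stream" title.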

// =============================================================================
// UTILITIES
// =============================================================================

function formatRelativeTime(unixTimestamp) {
    const now = Date.now() / 1000;
    const diff = now - unixTimestamp;

    if (diff < 60) return `${Math.floor(diff)}s ago`;
    if (diff < 3600) return `${Math.floor(diff / 60)}m ago`;
    if (diff < 86400) return `${Math.floor(diff / 3600)}h ago`;
    return `${Math.floor(diff / 86400)}d ago`;
}
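
// Illustrative: a timestamp 90 seconds old falls in the minutes bucket and
// renders as "1m ago"; one 7200 seconds old renders as "2h ago".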

function formatTimestamp(isoString) {
    const date = new Date(isoString);
    return date.toLocaleString('en-US', {
        month: 'short',
        day: 'numeric',
        hour: '2-digit',
        minute: '2-digit',
        hour12: false
    });
}

// =============================================================================
// AUTHENTICATION
// =============================================================================

async function logout() {
    try {
        await fetch('/logout', { method: 'POST' });
        window.location.href = '/login';
    } catch (error) {
        console.error('Logout error:', error);
        window.location.href = '/login';
    }
}
1023
backup-from-device/gnss-guard/tm-gnss-guard/server/static/style.css
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,160 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>TM GNSS Guard Cloud</title>

    <!-- Leaflet CSS -->
    <link rel="stylesheet" href="https://unpkg.com/leaflet@1.9.4/dist/leaflet.css" />

    <link rel="stylesheet" href="/static/style.css?v={{ cache_buster }}">
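    <!-- Assumption: cache_buster is a server-side template value (e.g. a deploy
         timestamp) appended so browsers re-fetch the CSS/JS after each deploy. -->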
</head>
<body>

    <!-- HEADER -->
    <div class="header">
        <div class="header-left">
            <div class="header-title">TM GNSS Guard</div>
            <div class="header-sub">Multi-Asset Monitoring Cloud</div>
        </div>
        <div class="header-right">
            <div class="user-menu">
                <span class="user-name">{{ username }}</span>
                <button class="logout-btn" onclick="logout()">Logout</button>
            </div>
        </div>
    </div>

    <!-- ALERT BANNER (dynamic) -->
    <div class="alert-banner alert-critical hidden" id="alertBanner">
        <div class="alert-indicator" id="alertIndicator"></div>
        <div id="alertText">GPS Jamming or Spoofing Alert! Location Distance: <span id="alert-distance-value">-</span></div>
    </div>

    <!-- MOBILE ASSET DROPDOWN -->
    <div class="mobile-asset-dropdown" id="mobileAssetDropdown">
        <select id="mobileAssetSelect" onchange="selectAsset(this.value)">
            <option value="">Select Asset...</option>
        </select>
    </div>

    <!-- MOBILE TIME SELECTOR -->
    <div class="mobile-time-selector" id="mobileTimeSelector">
        <div class="time-radio-group">
            <label class="time-radio">
                <input type="radio" name="timeModeM" value="now" checked onchange="setTimeMode('now')">
                <span>Now</span>
            </label>
            <label class="time-radio">
                <input type="radio" name="timeModeM" value="select" onchange="setTimeMode('select')">
                <span>Select Day/Time</span>
            </label>
        </div>
        <div class="datetime-picker hidden" id="mobileDatetimePicker">
            <input type="datetime-local" id="mobileSelectedDatetime" onchange="onDatetimeChange()">
            <button class="apply-time-btn" onclick="applySelectedTime()">Apply</button>
        </div>
        <div class="selected-time-display hidden" id="mobileSelectedTimeDisplay">
            Viewing: <span id="mobileSelectedTimeText"></span>
        </div>
    </div>

    <!-- MOBILE GNSS STATUS (visible only in mobile view) -->
    <div class="mobile-gnss-status" id="mobileGnssStatus">
        <div class="status-pill" id="mobileStatusPill">GNSS Integrity: —</div>
    </div>

    <!-- MOBILE TAB BAR (only visible in portrait mode) -->
    <div class="mobile-tabs">
        <button class="tab-btn active" data-tab="status">Status</button>
        <button class="tab-btn" data-tab="map">Map</button>
    </div>

    <!-- MAIN LAYOUT -->
    <div class="layout">
        <!-- ASSET PANEL (desktop only) -->
        <div class="asset-panel" id="assetPanel">
            <div class="panel-title">Assets</div>
            <div class="asset-list" id="assetList">
                <!-- Assets populated by JavaScript -->
                <div class="asset-loading">Loading assets...</div>
            </div>

            <!-- TIME SELECTOR -->
            <div class="time-selector" id="timeSelector">
                <div class="panel-title">Time</div>
                <div class="time-radio-group">
                    <label class="time-radio">
                        <input type="radio" name="timeMode" value="now" checked onchange="setTimeMode('now')">
                        <span>Now</span>
                    </label>
                    <label class="time-radio">
                        <input type="radio" name="timeMode" value="select" onchange="setTimeMode('select')">
                        <span>Select Day/Time</span>
                    </label>
                </div>
                <div class="datetime-picker hidden" id="datetimePicker">
                    <input type="datetime-local" id="selectedDatetime" onchange="onDatetimeChange()">
                    <button class="apply-time-btn" onclick="applySelectedTime()">Apply</button>
                </div>
                <div class="selected-time-display hidden" id="selectedTimeDisplay">
                    Viewing: <span id="selectedTimeText"></span>
                </div>
            </div>
        </div>

        <!-- STATUS TAB CONTENT (Sources + Event Log) -->
        <div class="tab-content tab-status active" id="tab-status">
            <div class="left-panel">
                <!-- DESKTOP GNSS STATUS (visible only in desktop view) -->
                <div class="desktop-gnss-status" id="desktopGnssStatus">
                    <div class="status-pill" id="desktopStatusPill">GNSS Integrity: —</div>
                </div>
                <div class="panel-title">GNSS Sources</div>
                <div id="sourcesContainer">
                    <div class="no-asset-selected">Select an asset to view GNSS sources</div>
                </div>
            </div>

            <!-- EVENT LOG -->
            <div class="event-log" id="eventLog">
                <div class="event-log-title">Event Stream</div>
            </div>

            <!-- COPYRIGHT -->
            <div class="copyright">Tototheo Global © 2025</div>
        </div>

        <!-- MAP TAB CONTENT -->
        <div class="tab-content tab-map" id="tab-map">
            <div class="map-panel">
                <div id="map"></div>
                <div class="map-overlay-legend">
                    <div class="legend-section">Sources</div>
                    <div><span class="legend-dot legend-primary"></span>Primary GPS</div>
                    <div><span class="legend-dot legend-secondary"></span>Secondary GPS</div>
                    <div><span class="legend-dot legend-ais"></span>TM AIS GPS</div>
                    <div><span class="legend-dot legend-starlink-gps"></span>Starlink GPS</div>
                    <div><span class="legend-dot legend-starlink-location"></span>Starlink Location</div>
                    <div class="legend-section">72h Route</div>
                    <div><span class="legend-dot legend-valid"></span>Valid</div>
                    <div><span class="legend-dot legend-degraded"></span>Degraded</div>
                    <div><span class="legend-dot legend-alert"></span>Alert</div>
                </div>
                <div class="map-route-toggle">
                    <label>
                        <input type="checkbox" id="showRoute" checked onchange="toggleRoute()">
                        Show 72h Route
                    </label>
                </div>
            </div>
        </div>
    </div>

    <!-- Leaflet JS -->
    <script src="https://unpkg.com/leaflet@1.9.4/dist/leaflet.js"></script>
    <script src="/static/app.js?v={{ cache_buster }}"></script>
</body>
</html>
@@ -0,0 +1,71 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Login - GNSS Guard Cloud</title>
    <link rel="stylesheet" href="/static/style.css?v={{ cache_buster }}">
</head>
<body class="login-page">

    <div class="login-container">
        <div class="login-box">
            <div class="login-header">
                <div class="login-title">TM GNSS Guard</div>
                <div class="login-subtitle">Cloud Dashboard</div>
            </div>

            <form id="loginForm" class="login-form">
                <div class="form-group">
                    <label for="username">Username</label>
                    <input type="text" id="username" name="username" required autocomplete="username">
                </div>

                <div class="form-group">
                    <label for="password">Password</label>
                    <input type="password" id="password" name="password" required autocomplete="current-password">
                </div>

                <div class="form-error hidden" id="loginError">Invalid credentials</div>

                <button type="submit" class="login-btn">Sign In</button>
            </form>

            <div class="login-footer">
                Tototheo Global © 2025
            </div>
        </div>
    </div>

    <script>
        document.getElementById('loginForm').addEventListener('submit', async (e) => {
            e.preventDefault();

            const username = document.getElementById('username').value;
            const password = document.getElementById('password').value;
            const errorEl = document.getElementById('loginError');

            errorEl.classList.add('hidden');

            try {
                const response = await fetch('/login', {
                    method: 'POST',
                    headers: { 'Content-Type': 'application/json' },
                    body: JSON.stringify({ username, password })
                });

                if (response.ok) {
                    window.location.href = '/';
                } else {
                    errorEl.classList.remove('hidden');
                    errorEl.textContent = 'Invalid username or password';
                }
            } catch (error) {
                errorEl.classList.remove('hidden');
                errorEl.textContent = 'Connection error. Please try again.';
            }
        });
    </script>
</body>
</html>
@@ -0,0 +1,4 @@
"""
Services for GNSS Guard client
"""
258
backup-from-device/gnss-guard/tm-gnss-guard/services/buzzer.py
Normal file
@@ -0,0 +1,258 @@
#!/usr/bin/env python3
"""
Buzzer Service for reTerminal DM4
Controls the hardware buzzer using the Linux LED subsystem
"""

import logging
import os
import subprocess
import threading
import time
from typing import Optional

logger = logging.getLogger("gnss_guard.buzzer")

# Buzzer control path (Linux LED subsystem)
BUZZER_PATH = '/sys/class/leds/usr-buzzer/brightness'
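
# Note: any non-zero brightness value turns the buzzer on, 0 turns it off
# (get_status() below treats both '1' and '255' as ON). Manual test from a
# shell, assuming the same sysfs path on your unit:
#   echo 1 | sudo tee /sys/class/leds/usr-buzzer/brightness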

class BuzzerService:
    """
    Service to control the hardware buzzer on reTerminal DM4.

    The buzzer is controlled via the Linux LED subsystem:
    - Write "1" to turn ON
    - Write "0" to turn OFF

    Supports alarm patterns (on/off cycling) that run in a background thread.
    """

    def __init__(self, on_duration: float = 1.0, off_duration: float = 1.0):
        """
        Initialize the buzzer service.

        Args:
            on_duration: Duration in seconds for buzzer ON during alarm pattern
            off_duration: Duration in seconds for buzzer OFF during alarm pattern
        """
        self.on_duration = on_duration
        self.off_duration = off_duration

        # Alarm state
        self._alarm_active = False
        self._alarm_acknowledged = False
        self._alarm_thread: Optional[threading.Thread] = None
        self._stop_event = threading.Event()

        # Check if buzzer is available
        self._buzzer_available = os.path.exists(BUZZER_PATH)
        if not self._buzzer_available:
            logger.warning(f"Buzzer not available at {BUZZER_PATH} - running in simulation mode")
        else:
            logger.info(f"Buzzer service initialized (path: {BUZZER_PATH})")
            # Ensure buzzer is off on startup
            self.buzzer_off()

    def _write_buzzer(self, value: str) -> bool:
        """
        Write value to buzzer control file.

        Args:
            value: "1" for ON, "0" for OFF

        Returns:
            True if successful, False otherwise
        """
        if not self._buzzer_available:
            logger.debug(f"Buzzer simulation: {'ON' if value == '1' else 'OFF'}")
            return True

        try:
            # Use sudo tee to write to the sysfs file (requires sudo permissions)
            result = subprocess.run(
                ['sudo', 'tee', BUZZER_PATH],
                input=value,
                text=True,
                stdout=subprocess.DEVNULL,
                stderr=subprocess.PIPE,
                timeout=2.0
            )
            if result.returncode != 0:
                logger.error(f"Failed to write to buzzer: {result.stderr}")
                return False
            return True
        except subprocess.TimeoutExpired:
            logger.error("Timeout writing to buzzer")
            return False
        except Exception as e:
            logger.error(f"Error writing to buzzer: {e}")
            return False

    def buzzer_on(self) -> bool:
        """Turn buzzer ON"""
        return self._write_buzzer('1')

    def buzzer_off(self) -> bool:
        """Turn buzzer OFF"""
        return self._write_buzzer('0')

    def get_status(self) -> str:
        """
        Get current buzzer status.

        Returns:
            "ON", "OFF", "UNKNOWN", or "SIMULATED" when no hardware buzzer is present
        """
        if not self._buzzer_available:
            return "SIMULATED"

        try:
            with open(BUZZER_PATH, 'r') as f:
                value = f.read().strip()
            return "ON" if value in ['1', '255'] else "OFF"
        except Exception as e:
            logger.error(f"Error reading buzzer status: {e}")
            return "UNKNOWN"

    def _alarm_loop(self):
        """
        Background thread loop for alarm pattern (1 second on, 1 second off).
        Runs until alarm is acknowledged or stopped.
        """
        logger.info("Alarm pattern started")

        while not self._stop_event.is_set() and not self._alarm_acknowledged:
            # Buzzer ON
            self.buzzer_on()

            # Wait for on_duration or until stopped
            if self._stop_event.wait(self.on_duration):
                break
            if self._alarm_acknowledged:
                break

            # Buzzer OFF
            self.buzzer_off()

            # Wait for off_duration or until stopped
            if self._stop_event.wait(self.off_duration):
                break

        # Ensure buzzer is off when alarm stops
        self.buzzer_off()
        self._alarm_active = False
        logger.info("Alarm pattern stopped")

    def start_alarm(self) -> bool:
        """
        Start the alarm pattern (1 second on, 1 second off).

        Returns:
            True if alarm started, False if already running
        """
        if self._alarm_active and self._alarm_thread and self._alarm_thread.is_alive():
            logger.debug("Alarm already active")
            return False

        # Reset state
        self._alarm_acknowledged = False
        self._alarm_active = True
        self._stop_event.clear()

        # Start alarm thread
        self._alarm_thread = threading.Thread(target=self._alarm_loop, daemon=True)
        self._alarm_thread.start()

        logger.info("Alarm started")
        return True

    def stop_alarm(self) -> bool:
        """
        Stop the alarm pattern.

        Returns:
            True if alarm was stopped, False if not running
        """
        if not self._alarm_active:
            return False

        self._stop_event.set()

        # Wait for thread to finish
        if self._alarm_thread and self._alarm_thread.is_alive():
            self._alarm_thread.join(timeout=3.0)

        # Ensure buzzer is off
        self.buzzer_off()
        self._alarm_active = False

        logger.info("Alarm stopped")
        return True

    def acknowledge_alarm(self) -> bool:
        """
        Acknowledge the alarm, stopping the buzzer.

        Returns:
            True if alarm was acknowledged, False if no alarm active
        """
        if not self._alarm_active:
            logger.debug("No active alarm to acknowledge")
            return False

        self._alarm_acknowledged = True
        self._stop_event.set()

        # Wait for thread to finish
        if self._alarm_thread and self._alarm_thread.is_alive():
            self._alarm_thread.join(timeout=3.0)

        # Ensure buzzer is off
        self.buzzer_off()
        self._alarm_active = False

        logger.info("Alarm acknowledged")
        return True

    def is_alarm_active(self) -> bool:
        """Check if alarm is currently active"""
        return self._alarm_active

    def is_alarm_acknowledged(self) -> bool:
        """Check if current alarm has been acknowledged"""
        return self._alarm_acknowledged

    def reset_acknowledged(self):
        """
        Reset the acknowledged state.
        Called when status returns to healthy, allowing new alarms to trigger.
        """
        self._alarm_acknowledged = False

    def shutdown(self):
        """Shutdown the buzzer service, ensuring buzzer is off"""
        self.stop_alarm()
        self.buzzer_off()
        logger.info("Buzzer service shutdown")


# Global buzzer service instance (singleton pattern)
_buzzer_instance: Optional[BuzzerService] = None


def get_buzzer_service(on_duration: float = 1.0, off_duration: float = 1.0) -> BuzzerService:
    """
    Get or create the global buzzer service instance.

    Args:
        on_duration: Duration in seconds for buzzer ON during alarm
        off_duration: Duration in seconds for buzzer OFF during alarm

    Returns:
        BuzzerService instance
    """
    global _buzzer_instance
    if _buzzer_instance is None:
        _buzzer_instance = BuzzerService(on_duration, off_duration)
    return _buzzer_instance
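
# --- Usage sketch (illustrative, not part of the service) --------------------
# Assumes an alert detected elsewhere triggers the pattern and an operator
# acknowledgement silences it. Runs in simulation mode without the hardware.
if __name__ == "__main__":
    buzzer = get_buzzer_service(on_duration=1.0, off_duration=1.0)
    buzzer.start_alarm()           # 1 s on / 1 s off in a background thread
    time.sleep(5)                  # pattern keeps cycling while we wait
    buzzer.acknowledge_alarm()     # operator ack stops the buzzer
    buzzer.shutdown()              # always leaves the buzzer off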
@@ -0,0 +1,427 @@
#!/usr/bin/env python3
"""
Server Sync Service for GNSS Guard Client

Syncs validation data to the central GNSS Guard Server.
Features:
- Immediate sync on each validation
- Offline queue for failed syncs
- Batch catchup for queued records
"""

import json
import logging
import sqlite3
import time
from datetime import datetime
from pathlib import Path
from typing import Dict, Any, List, Optional
import requests

logger = logging.getLogger("gnss_guard.server_sync")


class ServerSync:
    """
    Syncs validation data to the central GNSS Guard Server.

    Features:
    - Sends validation results to server after each iteration
    - Queues failed requests for retry
    - Batch sends queued records on successful connection
    """

    def __init__(
        self,
        database_path: Path,
        server_url: str,
        server_token: str,
        asset_name: str,
        batch_size: int = 100,
        max_queue_size: int = 1000
    ):
        """
        Initialize server sync service.

        Args:
            database_path: Path to SQLite database (for sync queue)
            server_url: Base URL of GNSS Guard Server
            server_token: Authentication token for this asset
            asset_name: Name of this asset
            batch_size: Max records to send in batch catchup
            max_queue_size: Max records to keep in queue
        """
        self.database_path = database_path
        self.server_url = server_url.rstrip('/')
        self.server_token = server_token
        self.asset_name = asset_name
        self.batch_size = batch_size
        self.max_queue_size = max_queue_size

        # Request timeout (seconds)
        self.timeout = 10

        # Initialize sync queue table
        self._init_sync_queue_table()

        logger.info(f"Server sync initialized for asset '{asset_name}' -> {server_url}")

    def _init_sync_queue_table(self):
        """Create sync_queue table if it doesn't exist"""
        try:
            conn = sqlite3.connect(str(self.database_path), timeout=5.0)
            cursor = conn.cursor()

            cursor.execute("""
                CREATE TABLE IF NOT EXISTS sync_queue (
                    id INTEGER PRIMARY KEY AUTOINCREMENT,
                    validation_timestamp_unix REAL NOT NULL,
                    payload TEXT NOT NULL,
                    created_at TEXT NOT NULL,
                    attempts INTEGER DEFAULT 0,
                    last_attempt_at TEXT,
                    UNIQUE(validation_timestamp_unix)
                )
            """)

            cursor.execute("""
                CREATE INDEX IF NOT EXISTS idx_sync_queue_timestamp
                ON sync_queue(validation_timestamp_unix)
            """)

            conn.commit()
            conn.close()

            logger.debug("Sync queue table initialized")

        except Exception as e:
            logger.error(f"Failed to initialize sync queue table: {e}")

    def _get_headers(self) -> Dict[str, str]:
        """Get request headers with authentication"""
        return {
            "Authorization": f"Bearer {self.server_token}",
            "Content-Type": "application/json"
        }

    def sync_validation(self, validation_result: Dict[str, Any]) -> bool:
        """
        Sync a validation result to the server.

        If sync fails, the record is queued for later retry.
        If sync succeeds, attempt to send any queued records.

        Args:
            validation_result: Validation result from CoordinateValidator

        Returns:
            bool: True if sync succeeded, False if queued
        """
        # Prepare payload
        payload = {
            "validation_timestamp": validation_result.get("validation_timestamp"),
            "validation_timestamp_unix": validation_result.get("validation_timestamp_unix"),
            "is_valid": validation_result.get("is_valid", False),
            "sources_missing": validation_result.get("sources_missing", []),
            "sources_stale": validation_result.get("sources_stale", []),
            "coordinate_differences": validation_result.get("coordinate_differences", {}),
            "source_coordinates": validation_result.get("source_coordinates", {}),
            "validation_details": validation_result.get("validation_details", {}),
        }

        # Try to send
        success = self._send_validation(payload)

        if success:
            # On success, try to send queued records
            self._process_queue()
        else:
            # On failure, queue the record
            self._queue_record(payload)

        return success

    def _send_validation(self, payload: Dict[str, Any]) -> bool:
        """
        Send a single validation record to the server.

        Args:
            payload: Validation data to send

        Returns:
            bool: True if successful
        """
        try:
            url = f"{self.server_url}/api/v1/validation"

            response = requests.post(
                url,
                json=payload,
                headers=self._get_headers(),
                timeout=self.timeout
            )

            if response.status_code == 201:
                logger.debug("Validation synced to server")
                return True
            elif response.status_code == 401:
                logger.error("Server auth failed - check SERVER_TOKEN")
                return False
            else:
                logger.warning(f"Server returned {response.status_code}: {response.text[:200]}")
                return False

        except requests.exceptions.Timeout:
            logger.warning("Server request timed out")
            return False
        except requests.exceptions.ConnectionError:
            logger.warning(f"Cannot connect to server at {self.server_url}")
            return False
        except Exception as e:
            logger.error(f"Server sync error: {e}")
            return False

    def _send_batch(self, records: List[Dict[str, Any]]) -> bool:
        """
        Send a batch of validation records to the server.

        Args:
            records: List of validation payloads

        Returns:
            bool: True if successful
        """
        try:
            url = f"{self.server_url}/api/v1/validation/batch"

            response = requests.post(
                url,
                json={"records": records},
                headers=self._get_headers(),
                timeout=self.timeout * 3  # Longer timeout for batch
            )

            if response.status_code == 201:
                result = response.json()
                logger.info(f"Batch sync: {result.get('saved', 0)} saved, {result.get('skipped', 0)} skipped")
                return True
            else:
                logger.warning(f"Batch sync failed: {response.status_code}")
                return False

        except Exception as e:
            logger.error(f"Batch sync error: {e}")
            return False

    def _queue_record(self, payload: Dict[str, Any]):
        """
        Add a validation record to the sync queue.

        Args:
            payload: Validation data to queue
        """
        try:
            conn = sqlite3.connect(str(self.database_path), timeout=5.0)
            cursor = conn.cursor()

            # Check queue size and remove oldest if full
            cursor.execute("SELECT COUNT(*) FROM sync_queue")
            count = cursor.fetchone()[0]

            if count >= self.max_queue_size:
                # Remove oldest records to make room
                remove_count = count - self.max_queue_size + 10
                cursor.execute("""
                    DELETE FROM sync_queue
                    WHERE id IN (
                        SELECT id FROM sync_queue
                        ORDER BY validation_timestamp_unix ASC
                        LIMIT ?
                    )
                """, (remove_count,))
                logger.warning(f"Sync queue full, removed {remove_count} oldest records")

            # Insert new record
            cursor.execute("""
                INSERT OR IGNORE INTO sync_queue
                (validation_timestamp_unix, payload, created_at)
                VALUES (?, ?, ?)
            """, (
                payload["validation_timestamp_unix"],
                json.dumps(payload),
                datetime.utcnow().isoformat()
            ))

            conn.commit()
            conn.close()

            logger.debug("Queued validation record for later sync")

        except Exception as e:
            logger.error(f"Failed to queue record: {e}")

    def _process_queue(self):
        """Process queued records after successful connection"""
        try:
            conn = sqlite3.connect(str(self.database_path), timeout=5.0)
            cursor = conn.cursor()

            # Get queued records (oldest first)
            cursor.execute("""
                SELECT id, payload FROM sync_queue
                ORDER BY validation_timestamp_unix ASC
                LIMIT ?
            """, (self.batch_size,))

            rows = cursor.fetchall()
            conn.close()

            if not rows:
                return

            logger.info(f"Processing {len(rows)} queued records")

            # Parse payloads
            records = []
            record_ids = []
            for row_id, payload_json in rows:
                try:
                    records.append(json.loads(payload_json))
                    record_ids.append(row_id)
                except json.JSONDecodeError:
                    record_ids.append(row_id)  # Still mark for deletion if corrupt

            if not records:
                return

            # Send batch
            if self._send_batch(records):
                # Remove sent records from queue
                self._remove_from_queue(record_ids)
            else:
                # Update attempt count
                self._update_attempt_count(record_ids)

        except Exception as e:
            logger.error(f"Error processing queue: {e}")

    def _remove_from_queue(self, record_ids: List[int]):
        """Remove successfully sent records from queue"""
        if not record_ids:
            return

        try:
            conn = sqlite3.connect(str(self.database_path), timeout=5.0)
            cursor = conn.cursor()

            placeholders = ','.join('?' * len(record_ids))
            cursor.execute(f"DELETE FROM sync_queue WHERE id IN ({placeholders})", record_ids)

            conn.commit()
            conn.close()

            logger.debug(f"Removed {len(record_ids)} records from sync queue")

        except Exception as e:
            logger.error(f"Failed to remove records from queue: {e}")

    def _update_attempt_count(self, record_ids: List[int]):
        """Update attempt count for failed records"""
        if not record_ids:
            return

        try:
            conn = sqlite3.connect(str(self.database_path), timeout=5.0)
            cursor = conn.cursor()

            now = datetime.utcnow().isoformat()
            placeholders = ','.join('?' * len(record_ids))
            cursor.execute(f"""
                UPDATE sync_queue
                SET attempts = attempts + 1, last_attempt_at = ?
                WHERE id IN ({placeholders})
            """, [now] + record_ids)

            conn.commit()
            conn.close()

        except Exception as e:
            logger.error(f"Failed to update attempt count: {e}")

    def get_queue_status(self) -> Dict[str, Any]:
        """
        Get current sync queue status.

        Returns:
            Dictionary with queue stats
        """
        try:
            conn = sqlite3.connect(str(self.database_path), timeout=5.0)
            cursor = conn.cursor()

            cursor.execute("SELECT COUNT(*) FROM sync_queue")
            count = cursor.fetchone()[0]

            cursor.execute("SELECT MIN(validation_timestamp_unix), MAX(validation_timestamp_unix) FROM sync_queue")
            oldest, newest = cursor.fetchone()

            conn.close()

            return {
                "queued_count": count,
                "oldest_timestamp": oldest,
                "newest_timestamp": newest,
                "queue_full": count >= self.max_queue_size
            }

        except Exception as e:
            logger.error(f"Failed to get queue status: {e}")
            return {"error": str(e)}

    def force_sync(self) -> bool:
        """
        Force a sync of all queued records.

        Returns:
            bool: True if all records synced successfully
        """
        logger.info("Starting forced sync of queued records")

        try:
            conn = sqlite3.connect(str(self.database_path), timeout=5.0)
            cursor = conn.cursor()
            cursor.execute("SELECT COUNT(*) FROM sync_queue")
            total = cursor.fetchone()[0]
            conn.close()

            if total == 0:
                logger.info("No records to sync")
                return True

            synced = 0
            while True:
                # Check if queue is empty
                status = self.get_queue_status()
                if status.get("queued_count", 0) == 0:
                    break

                # Process a batch
                before_count = status["queued_count"]
                self._process_queue()

                # Check if we made progress
                after_status = self.get_queue_status()
                if after_status.get("queued_count", 0) >= before_count:
                    # No progress, connection likely failed
                    logger.warning("Sync stalled, connection issue")
                    break

                synced += before_count - after_status.get("queued_count", 0)

            logger.info(f"Force sync completed: {synced}/{total} records synced")
            return synced == total

        except Exception as e:
            logger.error(f"Force sync error: {e}")
            return False
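
# Usage sketch (illustrative; URL and token are placeholders, not real values):
#   sync = ServerSync(
#       database_path=Path("gnss_guard.db"),
#       server_url="https://guard.example",
#       server_token="<asset-token>",
#       asset_name="vessel-01",
#   )
#   ok = sync.sync_validation(validation_result)  # False -> queued locally
#   print(sync.get_queue_status())                # e.g. {"queued_count": 3, ...}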
@@ -0,0 +1,4 @@
"""
Data source fetchers for GNSS Guard
"""
733
backup-from-device/gnss-guard/tm-gnss-guard/sources/nmea_gps.py
Normal file
@@ -0,0 +1,733 @@
#!/usr/bin/env python3
"""
NMEA GPS data collector
Continuously collects GPS coordinates from NMEA devices via TCP connection
Filters for GGA sentences only and maintains latest position per source
"""

import asyncio
import logging
import os
import time
from datetime import datetime, timezone
from typing import Dict, Any, Optional, List
from queue import Queue

from config import Config
from storage.logger import StructuredLogger

logger = logging.getLogger("gnss_guard.nmea_gps")


def strip_telnet_iac(data: bytes, diagnostic_mode: bool = False) -> bytes:
    """Remove Telnet IAC (Interpret As Command) sequences from data stream.

    Telnet IAC sequences are 0xFF followed by command bytes:
    - 0xFF 0xFB (WILL)
    - 0xFF 0xFC (WONT)
    - 0xFF 0xFD (DO)
    - 0xFF 0xFE (DONT)
    - 0xFF 0xFF (IAC escape - becomes single 0xFF)

    These sequences are negotiation bytes and should be stripped before
    processing NMEA data.
    """
    if not data:
        return data

    result = bytearray()
    i = 0

    while i < len(data):
        if data[i] == 0xFF:  # IAC byte
            if i + 1 < len(data):
                cmd = data[i + 1]

                # IAC IAC (0xFF 0xFF) is escaped IAC - keep single 0xFF
                if cmd == 0xFF:
                    result.append(0xFF)
                    i += 2
                    continue

                # IAC command sequences (WILL/WONT/DO/DONT)
                if cmd in (0xFB, 0xFC, 0xFD, 0xFE):
                    if diagnostic_mode:
                        cmd_names = {0xFB: "WILL", 0xFC: "WONT", 0xFD: "DO", 0xFE: "DONT"}
                        logger.debug(f"[DIAGNOSTIC] Telnet IAC: 0xFF 0x{cmd:02X} ({cmd_names.get(cmd, 'UNKNOWN')})")

                    i += 2  # Skip IAC + command

                    # Some commands have an option byte
                    if i < len(data):
                        opt = data[i]
                        if diagnostic_mode:
                            logger.debug(f"[DIAGNOSTIC] Option: 0x{opt:02X}")
                        i += 1
                else:
                    # Unknown IAC command - skip it
                    if diagnostic_mode:
                        logger.debug(f"[DIAGNOSTIC] Telnet IAC: 0xFF 0x{cmd:02X} (unknown, skipped)")
                    i += 2
            else:
                # Incomplete IAC at end of buffer - skip it
                i += 1
        else:
            result.append(data[i])
            i += 1

    return bytes(result)
|
||||
|
||||
|
||||
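# Illustrative use of strip_telnet_iac() (example bytes are assumed, not taken
# from a real device):
#
#     raw = b"\xff\xfd\x01$GPGGA,123519,...*47\r\n"   # IAC DO <option> + NMEA data
#     strip_telnet_iac(raw)  # -> b"$GPGGA,123519,...*47\r\n"
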
class NMEAParser:
    """Parser for NMEA 0183 sentences"""

    @staticmethod
    def validate_checksum(sentence: str) -> bool:
        """Validate NMEA sentence checksum"""
        if "*" not in sentence:
            return False

        try:
            data, checksum = sentence.split("*")
            calculated = 0
            for char in data[1:]:  # Skip the '$'
                calculated ^= ord(char)
            return format(calculated, "02X") == checksum.upper()
        except (ValueError, IndexError):
            return False

    @staticmethod
    def parse_sentence(sentence: str) -> Dict[str, Any]:
        """Parse an NMEA sentence into structured data"""
        sentence = sentence.strip()

        if not sentence.startswith("$"):
            return {"error": "Invalid sentence format"}

        # Validate checksum
        checksum_valid = NMEAParser.validate_checksum(sentence)

        try:
            # Remove checksum if present
            if "*" in sentence:
                sentence = sentence.split("*")[0]

            # Split into fields
            fields = sentence[1:].split(",")  # Remove $ and split

            if len(fields) < 1:
                return {"error": "Empty sentence"}

            # Extract talker ID and sentence type
            identifier = fields[0]
            if len(identifier) >= 5:
                # Handle special cases like SHEROT (should be S + HEROT)
                if identifier.startswith("SHEROT"):
                    talker_id = "S"
                    sentence_type = "HEROT"
                else:
                    talker_id = identifier[:2]
                    sentence_type = identifier[2:]
            else:
                talker_id = "UN"
                sentence_type = identifier

            parsed_data = {
                "sentence_type": sentence_type,
                "talker_id": talker_id,
                "checksum_valid": checksum_valid,
                "fields": fields[1:] if len(fields) > 1 else [],
            }

            # Parse specific sentence types for enhanced data extraction;
            # non-GGA sentences keep only the basic parsing above
            if sentence_type == "GGA":
                parsed_data.update(NMEAParser.parse_gga(fields))

            return parsed_data

        except Exception as e:
            return {"error": f"Parse error: {str(e)}"}

    @staticmethod
    def parse_gga(fields: List[str]) -> Dict[str, Any]:
        """Parse a GGA (Global Positioning System Fix Data) sentence"""
        result = {}
        try:
            # Time
            if fields[1]:
                result["time"] = fields[1]

            # Latitude
            if fields[2] and fields[3]:
                lat_deg = float(fields[2][:2])
                lat_min = float(fields[2][2:])
                latitude = lat_deg + lat_min / 60
                if fields[3] == "S":
                    latitude = -latitude
                result["latitude"] = latitude

            # Longitude
            if fields[4] and fields[5]:
                lon_deg = float(fields[4][:3])
                lon_min = float(fields[4][3:])
                longitude = lon_deg + lon_min / 60
                if fields[5] == "W":
                    longitude = -longitude
                result["longitude"] = longitude

            # Quality and satellites
            if len(fields) > 6 and fields[6]:
                result["quality"] = int(fields[6])
            if len(fields) > 7 and fields[7]:
                result["satellites"] = int(fields[7])
            if len(fields) > 8 and fields[8]:
                result["hdop"] = float(fields[8])
            if len(fields) > 9 and fields[9]:
                result["altitude"] = float(fields[9])

            return result
        except (ValueError, IndexError):
            return {}
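# Illustrative parse using the canonical GGA example sentence (values assumed,
# not from the original source):
#
#     NMEAParser.parse_sentence(
#         "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
#     )
#     # -> {"sentence_type": "GGA", "talker_id": "GP", "checksum_valid": True,
#     #     "latitude": 48.1173, "longitude": 11.5166..., "quality": 1,
#     #     "satellites": 8, "hdop": 0.9, "altitude": 545.4, ...}
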
class DeviceConnection:
    """Handles the connection to a single NMEA device"""

    def __init__(
        self,
        device_config: Dict[str, Any],
        data_queue: Queue,
        parser: NMEAParser,
        vessel_info: Dict[str, Any],
        diagnostic_mode: bool = False,
        structured_logger: Optional[StructuredLogger] = None,
        source_name: Optional[str] = None,
        verbose_logging: bool = False,
    ):
        self.device_config = device_config
        self.data_queue = data_queue
        self.parser = parser
        self.vessel_info = vessel_info
        self.diagnostic_mode = diagnostic_mode
        self.structured_logger = structured_logger
        self.source_name = source_name or device_config.get("id", "unknown")
        self.verbose_logging = verbose_logging
        self.running = False
        self.sequence_number = 1
        self.sentences_received = 0
        self.last_sentence_log_time = time.time()

    async def connect_and_collect(self):
        """Connect to the device and start collecting data"""
        self.running = True
        device_ip = self.device_config["ip"]
        device_port = self.device_config["port"]
        device_id = self.device_config["id"]

        logger.info(f"Starting connection to device {device_id} at {device_ip}:{device_port}")
        if self.structured_logger:
            self.structured_logger.info(
                self.source_name,
                f"Starting connection to device {device_id}",
                {"device_ip": device_ip, "device_port": device_port}
            )

        if self.diagnostic_mode or self.verbose_logging:
            logger.info(f"[DEBUG] Enhanced connection logging enabled for device {device_id}")
            logger.info(f"[DEBUG] Target: {device_ip}:{device_port}")

        while self.running:
            try:
                # Connect to the device with a timeout
                connection_timeout = 10  # 10 seconds timeout for the connection
                if self.verbose_logging:
                    logger.info(f"[DEBUG] Attempting TCP connection to {device_ip}:{device_port} (timeout: {connection_timeout}s)...")

                try:
                    reader, writer = await asyncio.wait_for(
                        asyncio.open_connection(device_ip, device_port),
                        timeout=connection_timeout
                    )
                except asyncio.TimeoutError:
                    logger.error(f"Connection TIMEOUT for device {device_id} at {device_ip}:{device_port} (no response in {connection_timeout}s)")
                    if self.verbose_logging:
                        logger.error(f"[DEBUG] TCP connection attempt timed out after {connection_timeout} seconds")
                        logger.error("[DEBUG] Possible causes: wrong IP, firewall blocking, device offline, network issue")
                    if self.structured_logger:
                        self.structured_logger.error(
                            self.source_name,
                            f"Connection timeout for device {device_id}",
                            {"device_ip": device_ip, "device_port": device_port, "timeout": connection_timeout}
                        )
                    if self.running:
                        reconnect_delay = self.device_config.get("reconnect_delay", 5)
                        logger.info(f"Retrying connection to device {device_id} in {reconnect_delay} seconds...")
                        await asyncio.sleep(reconnect_delay)
                    continue

                # Log socket details if verbose
                if self.verbose_logging:
                    sock = writer.get_extra_info('socket')
                    if sock:
                        local_addr = sock.getsockname()
                        peer_addr = sock.getpeername()
                        logger.info(f"[DEBUG] TCP connection established: local={local_addr} -> remote={peer_addr}")

                logger.info(f"Connected to device {device_id} at {device_ip}:{device_port}")
                if self.structured_logger:
                    self.structured_logger.info(
                        self.source_name,
                        f"Connected to device {device_id}",
                        {"device_ip": device_ip, "device_port": device_port}
                    )

                # Buffer for accumulating data and extracting complete lines
                buffer = b""

                # Keep the connection alive and read continuously
                while self.running:
                    try:
                        # Read raw bytes from the device with a timeout
                        data = await asyncio.wait_for(reader.read(4096), timeout=30.0)

                        if not data:
                            logger.warning(f"No data received from device {device_id}, connection may be closed")
                            if self.verbose_logging:
                                logger.warning("[DEBUG] TCP read returned empty data - server closed connection or EOF")
                            if self.structured_logger:
                                self.structured_logger.warning(
                                    self.source_name,
                                    f"No data received from device {device_id}, connection may be closed"
                                )
                            break

                        # Strip Telnet IAC sequences before processing
                        cleaned_data = strip_telnet_iac(data, self.diagnostic_mode)

                        # Log data reception periodically (every 10 seconds) to show activity
                        current_time = time.time()
                        if current_time - self.last_sentence_log_time >= 10:
                            logger.debug(f"Received {len(cleaned_data)} bytes from {device_id} (total sentences: {self.sentences_received})")
                            self.last_sentence_log_time = current_time

                        # Add the cleaned data to the buffer
                        buffer += cleaned_data

                        # Process complete lines from the buffer
                        while b"\n" in buffer or b"\r" in buffer:
                            # Find the line ending (CRLF, LF, or CR)
                            if b"\r\n" in buffer:
                                line_end = buffer.find(b"\r\n")
                                line = buffer[:line_end]
                                buffer = buffer[line_end + 2:]
                            elif b"\n" in buffer:
                                line_end = buffer.find(b"\n")
                                line = buffer[:line_end]
                                buffer = buffer[line_end + 1:]
                            elif b"\r" in buffer:
                                line_end = buffer.find(b"\r")
                                line = buffer[:line_end]
                                buffer = buffer[line_end + 1:]
                            else:
                                break

                            # Decode and process the NMEA sentence
                            try:
                                line_str = line.decode("ascii", errors="ignore").strip()
                                if line_str.startswith("$"):
                                    self.sentences_received += 1
                                    # Log the first sentence and every 10th sentence to show
                                    # activity (verbose logging is handled in the processing task)
                                    if not self.verbose_logging:
                                        if self.sentences_received == 1:
                                            logger.info(f"NMEA {device_id}: First sentence received: {line_str[:80]}")
                                        elif self.sentences_received % 10 == 0:
                                            logger.debug(f"NMEA {device_id}: Received sentence #{self.sentences_received}: {line_str[:50]}...")
                                    await self.process_nmea_sentence(line_str, device_ip, device_port, device_id)
                            except Exception as e:
                                logger.debug(f"Error decoding line: {e}")

                        # Small delay to avoid overwhelming the system
                        read_delay = float(os.getenv("READ_DELAY_SECONDS", "0.1"))
                        await asyncio.sleep(read_delay)

                    except asyncio.TimeoutError:
                        logger.warning(f"Timeout reading from device {device_id} (30s no data)")
                        if self.verbose_logging:
                            logger.warning("[DEBUG] Read timeout - device may be disconnected or not sending data")
                            logger.warning(f"[DEBUG] Total sentences received this session: {self.sentences_received}")
                        if self.structured_logger:
                            self.structured_logger.warning(
                                self.source_name,
                                f"Timeout reading from device {device_id}"
                            )
                        continue
                    except Exception as e:
                        logger.error(f"Error reading from device {device_id}: {e}")
                        if self.verbose_logging:
                            logger.error(f"[DEBUG] Read error type: {type(e).__name__}")
                            logger.error(f"[DEBUG] Read error details: {e}")
                        if self.structured_logger:
                            self.structured_logger.error(
                                self.source_name,
                                f"Error reading from device {device_id}",
                                {"error": str(e)}
                            )
                        break

                writer.close()
                await writer.wait_closed()
                logger.info(f"Disconnected from device {device_id}")
                if self.structured_logger:
                    self.structured_logger.info(
                        self.source_name,
                        f"Disconnected from device {device_id}"
                    )

            except ConnectionRefusedError as e:
                logger.error(f"Connection REFUSED for device {device_id} at {device_ip}:{device_port} - is the device running?")
                if self.verbose_logging:
                    logger.error(f"[DEBUG] ConnectionRefusedError: {e}")
                    logger.error("[DEBUG] This usually means: port is closed, no service listening, or firewall blocking")
                if self.structured_logger:
                    self.structured_logger.error(
                        self.source_name,
                        f"Connection refused for device {device_id}",
                        {"error": str(e), "device_ip": device_ip, "device_port": device_port}
                    )

                if self.running:
                    reconnect_delay = self.device_config.get("reconnect_delay", 5)
                    logger.info(f"Retrying connection to device {device_id} in {reconnect_delay} seconds...")
                    await asyncio.sleep(reconnect_delay)

            except OSError as e:
                # Catch network-level errors (no route, network unreachable, etc.)
                logger.error(f"Network error for device {device_id} at {device_ip}:{device_port}: {e}")
                if self.verbose_logging:
                    logger.error(f"[DEBUG] OSError: {e}")
                    logger.error(f"[DEBUG] Error code: {e.errno if hasattr(e, 'errno') else 'N/A'}")
                    logger.error("[DEBUG] This may indicate: wrong IP, network unreachable, or routing issue")
                if self.structured_logger:
                    self.structured_logger.error(
                        self.source_name,
                        f"Network error for device {device_id}",
                        {"error": str(e), "device_ip": device_ip, "device_port": device_port}
                    )

                if self.running:
                    reconnect_delay = self.device_config.get("reconnect_delay", 5)
                    logger.info(f"Retrying connection to device {device_id} in {reconnect_delay} seconds...")
                    await asyncio.sleep(reconnect_delay)

            except asyncio.TimeoutError:
                logger.error(f"Connection TIMEOUT for device {device_id} at {device_ip}:{device_port}")
                if self.verbose_logging:
                    logger.error("[DEBUG] Connection attempt timed out")
                    logger.error("[DEBUG] This may indicate: wrong IP, firewall, or device not responding")
                if self.structured_logger:
                    self.structured_logger.error(
                        self.source_name,
                        f"Connection timeout for device {device_id}",
                        {"device_ip": device_ip, "device_port": device_port}
                    )

                if self.running:
                    reconnect_delay = self.device_config.get("reconnect_delay", 5)
                    logger.info(f"Retrying connection to device {device_id} in {reconnect_delay} seconds...")
                    await asyncio.sleep(reconnect_delay)

            except Exception as e:
                logger.error(f"Connection error for device {device_id}: {e}")
                if self.verbose_logging:
                    logger.error(f"[DEBUG] Exception type: {type(e).__name__}")
                    logger.error(f"[DEBUG] Exception details: {e}")
                if self.structured_logger:
                    self.structured_logger.error(
                        self.source_name,
                        f"Connection error for device {device_id}",
                        {"error": str(e), "error_type": type(e).__name__, "device_ip": device_ip, "device_port": device_port}
                    )

                if self.running:
                    reconnect_delay = self.device_config.get("reconnect_delay", 5)
                    logger.info(f"Retrying connection to device {device_id} in {reconnect_delay} seconds...")
                    if self.structured_logger:
                        self.structured_logger.info(
                            self.source_name,
                            f"Retrying connection to device {device_id}",
                            {"reconnect_delay": reconnect_delay}
                        )
                    await asyncio.sleep(reconnect_delay)

    async def process_nmea_sentence(self, sentence: str, source_ip: str, source_port: int, device_id: str):
        """Process a single NMEA sentence"""
        try:
            start_time = time.time()

            # Parse the sentence
            parsed_data = self.parser.parse_sentence(sentence)

            # Create the record
            now = datetime.now(timezone.utc)
            record = {
                "timestamp": now.isoformat(),
                "timestamp_unix": now.timestamp() * 1000,  # milliseconds
                "vessel": self.vessel_info,
                "source_ip": source_ip,
                "source_port": source_port,
                "device_id": device_id,
                "raw_nmea": sentence,
                "parsed_data": parsed_data,
                "validation": {
                    "checksum_valid": parsed_data.get("checksum_valid", False),
                    "parse_successful": "error" not in parsed_data,
                    "errors": ([parsed_data.get("error")] if "error" in parsed_data else []),
                },
                "collection_metadata": {
                    "collector_version": "1.0.0",
                    "processing_delay_ms": int((time.time() - start_time) * 1000),
                    "sequence_number": self.sequence_number,
                },
            }

            self.sequence_number += 1

            # Add to the queue for processing
            self.data_queue.put(record)

        except Exception as e:
            logger.error(f"Error processing NMEA sentence from device {device_id}: {e}")

    def stop(self):
        """Stop the device connection"""
        self.running = False
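# Illustrative framing behavior (example bytes assumed, not from the original
# source): the read loop above tolerates mixed line endings, so a buffer of
#     b"$GPGGA,...*hh\r\n$GPVTG,...*hh\n$GPGLL,...*hh\r"
# yields all three sentences whether each is CRLF-, LF-, or CR-terminated, and
# any trailing partial sentence stays in the buffer until more bytes arrive.
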
class NMEAGPSCollector:
    """Collector for NMEA GPS coordinates from vessel GPS devices"""

    def __init__(
        self,
        config: Config,
        source_name: str,
        device_ip: str,
        device_port: int,
        structured_logger: Optional[StructuredLogger] = None
    ):
        """
        Initialize the NMEA GPS collector

        Args:
            config: Configuration object
            source_name: Source identifier (e.g., "nmea_primary", "nmea_secondary")
            device_ip: IP address of the NMEA device
            device_port: Port of the NMEA device
            structured_logger: Optional StructuredLogger instance for JSON logging
        """
        self.config = config
        self.source_name = source_name
        self.device_ip = device_ip
        self.device_port = device_port
        self.structured_logger = structured_logger
        self.latest_position: Optional[Dict[str, Any]] = None
        self.lock = asyncio.Lock()

        self.parser = NMEAParser()
        self.data_queue = Queue()
        self.device_config = {
            "id": source_name,
            "ip": device_ip,
            "port": device_port,
            "reconnect_delay": 5
        }
        self.vessel_info = {"serial": source_name}
        self.connection = None
        self.running = False
        self.gga_count_period = 0
        self.last_activity_log_time = time.time()

    async def start(self):
        """Start the NMEA collector as an async task"""
        if not self.device_ip or self.device_port == 0:
            logger.warning(f"NMEA collector {self.source_name} not configured (missing IP/port)")
            if self.structured_logger:
                self.structured_logger.warning(
                    self.source_name,
                    "NMEA collector not configured",
                    {"reason": "missing IP/port"}
                )
            return

        self.running = True

        # Log verbose mode settings
        if self.config.nmea_verbose_logging:
            logger.info(f"[DEBUG] ========== NMEA DEBUG MODE ENABLED for {self.source_name} ==========")
            logger.info("[DEBUG] Device configuration:")
            logger.info(f"[DEBUG]   IP: {self.device_ip}")
            logger.info(f"[DEBUG]   Port: {self.device_port}")
            logger.info(f"[DEBUG]   Source name: {self.source_name}")
            logger.info("[DEBUG] Will show: connection attempts, TCP details, all NMEA sentences, errors")

        self.connection = DeviceConnection(
            device_config=self.device_config,
            data_queue=self.data_queue,
            parser=self.parser,
            vessel_info=self.vessel_info,
            diagnostic_mode=self.config.nmea_verbose_logging,  # Enable diagnostic mode when verbose
            structured_logger=self.structured_logger,
            source_name=self.source_name,
            verbose_logging=self.config.nmea_verbose_logging
        )

        # Start the connection task
        asyncio.create_task(self._connection_task())
        # Start the processing task
        asyncio.create_task(self._processing_task())

    async def _connection_task(self):
        """Task that manages the device connection"""
        await self.connection.connect_and_collect()

    async def _processing_task(self):
        """Task that processes NMEA sentences from the queue"""
        while self.running:
            try:
                # Check if the queue has items (non-blocking)
                try:
                    record = self.data_queue.get_nowait()
                except Empty:
                    # Queue is empty: log a periodic activity summary
                    # (every 30 seconds), then sleep and continue
                    current_time = time.time()
                    if current_time - self.last_activity_log_time >= 30:
                        if self.gga_count_period > 0:
                            # Only log the activity summary if verbose logging is enabled
                            if self.config.nmea_verbose_logging:
                                logger.info(f"NMEA {self.source_name}: Activity - {self.gga_count_period} GGA sentences processed in last 30s")
                        else:
                            # Always warn if no GGA sentences were received (important for diagnostics)
                            logger.warning(f"NMEA {self.source_name}: No GGA sentences received in last 30s (checking connection...)")
                        self.gga_count_period = 0
                        self.last_activity_log_time = current_time
                    await asyncio.sleep(0.1)
                    continue

                # Process only GGA sentences
                parsed_data = record.get("parsed_data", {})
                sentence_type = parsed_data.get("sentence_type", "")

                # Log all sentences if verbose logging is enabled
                if self.config.nmea_verbose_logging:
                    raw_nmea = record.get("raw_nmea", "")
                    logger.info(f"NMEA {self.source_name}: [{sentence_type}] {raw_nmea[:100]}")

                if sentence_type == "GGA":
                    self.gga_count_period += 1
                    # Only log the GGA count if verbose logging is enabled
                    if self.config.nmea_verbose_logging:
                        logger.info(f"NMEA {self.source_name}: Received GGA sentence (total this period: {self.gga_count_period})")
                    await self._process_gga(record)
                else:
                    # Log non-GGA sentences at debug level (unless verbose logging is enabled)
                    if not self.config.nmea_verbose_logging:
                        logger.debug(f"Received {sentence_type} sentence from {self.source_name} (not processing)")

            except Exception as e:
                logger.error(f"Error in NMEA processing task for {self.source_name}: {e}")
                if self.structured_logger:
                    self.structured_logger.error(
                        self.source_name,
                        "Error in NMEA processing task",
                        {"error": str(e)}
                    )
                await asyncio.sleep(1.0)

    async def _process_gga(self, record: Dict[str, Any]):
        """Process a GGA sentence and update the latest position"""
        try:
            parsed_data = record.get("parsed_data", {})

            # Extract coordinates from the parsed GGA data
            latitude = parsed_data.get("latitude")
            longitude = parsed_data.get("longitude")
            altitude = parsed_data.get("altitude")

            if latitude is None or longitude is None:
                logger.debug(f"GGA sentence from {self.source_name} missing coordinates")
                return

            # Get the timestamp
            timestamp_str = record.get("timestamp", "")
            try:
                timestamp = datetime.fromisoformat(timestamp_str.replace("Z", "+00:00"))
            except Exception:
                timestamp = datetime.now(timezone.utc)

            # Update the latest position
            async with self.lock:
                self.latest_position = {
                    "source": self.source_name,
                    "latitude": float(latitude),
                    "longitude": float(longitude),
                    "altitude": float(altitude) if altitude is not None else None,
                    "timestamp": timestamp.isoformat(),
                    "timestamp_unix": timestamp.timestamp(),
                    "supplementary_data": {
                        "satellites": parsed_data.get("satellites"),
                        "quality": parsed_data.get("quality"),
                        "hdop": parsed_data.get("hdop"),
                        "time": parsed_data.get("time"),
                        "raw_nmea": record.get("raw_nmea"),
                    }
                }

            # Log the successful position update only if verbose logging is enabled
            if self.config.nmea_verbose_logging:
                alt_str = f"{altitude:.1f}m" if altitude is not None else "N/A"
                logger.info(
                    f"NMEA {self.source_name}: Updated position - "
                    f"Lat: {latitude:.6f}, Lon: {longitude:.6f}, "
                    f"Alt: {alt_str}, Satellites: {parsed_data.get('satellites', 'N/A')}, "
                    f"Quality: {parsed_data.get('quality', 'N/A')}"
                )
                if self.structured_logger:
                    self.structured_logger.info(
                        self.source_name,
                        "Position updated from GGA sentence",
                        {
                            "latitude": latitude,
                            "longitude": longitude,
                            "altitude": altitude,
                            "satellites": parsed_data.get("satellites"),
                            "quality": parsed_data.get("quality"),
                            "hdop": parsed_data.get("hdop")
                        }
                    )

        except Exception as e:
            logger.error(f"Error processing GGA sentence from {self.source_name}: {e}")
            if self.structured_logger:
                self.structured_logger.error(
                    self.source_name,
                    "Error processing GGA sentence",
                    {"error": str(e)}
                )

    async def get_latest_position(self) -> Optional[Dict[str, Any]]:
        """Get the latest position from this collector"""
        async with self.lock:
            if self.latest_position:
                # Return a copy to avoid race conditions
                return dict(self.latest_position)
            return None

    async def stop(self):
        """Stop the collector"""
        self.running = False
        if self.connection:
            self.connection.stop()
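
A minimal driver sketch for this collector (hypothetical wiring; the IP and port are illustrative, and `Config` is assumed to expose `nmea_verbose_logging` as used above):

```python
import asyncio
from config import Config
from sources.nmea_gps import NMEAGPSCollector

async def main():
    # 10110 is the conventional NMEA-over-TCP port; adjust for the real device.
    collector = NMEAGPSCollector(Config(), "nmea_primary", "192.168.1.50", 10110)
    await collector.start()            # spawns connection + processing tasks
    await asyncio.sleep(10)            # let some GGA sentences arrive
    print(await collector.get_latest_position())
    await collector.stop()

asyncio.run(main())
```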
@@ -0,0 +1,134 @@
#!/usr/bin/env python3
"""
Starlink GPS data fetcher
Fetches GPS coordinates from the Starlink terminal via gRPC
Reuses logic from _old_project/starlink_location.py
"""

import sys
import logging
from pathlib import Path
from datetime import datetime, timezone
from typing import Dict, Any, Optional, List
from config import Config

logger = logging.getLogger("gnss_guard.starlink_gps")

# Add starlink-grpc-tools to the import path
starlink_tools_path = Path(__file__).parent.parent / "starlink-grpc-tools"
if str(starlink_tools_path) not in sys.path:
    sys.path.insert(0, str(starlink_tools_path))

try:
    import starlink_grpc
except ImportError:
    logger.error("Failed to import starlink_grpc. Make sure starlink-grpc-tools is available.")
    starlink_grpc = None


class StarlinkGPSFetcher:
    """Fetcher for Starlink GPS coordinates"""

    def __init__(self, config: Config):
        self.config = config
        self.target_ip = f"{config.starlink_ip}:{config.starlink_port}"

    def fetch(self) -> List[Dict[str, Any]]:
        """
        Fetch GPS coordinates from the Starlink terminal

        Returns:
            List of dictionaries with position data (starlink_location and starlink_gps).
            Returns an empty list if the fetch fails.
        """
        if not self.config.starlink_enabled:
            return []

        if starlink_grpc is None:
            logger.error("starlink_grpc module not available")
            return []

        max_retries = self.config.starlink_max_retries
        results = []

        for attempt in range(1, max_retries + 1):
            try:
                # Create the channel context
                context = starlink_grpc.ChannelContext(target=self.target_ip)

                # Get location data
                try:
                    raw_location = starlink_grpc.get_location(context)
                    location_info = starlink_grpc.location_data(context)

                    # Extract Starlink Location coordinates
                    if location_info.get("latitude") is not None and location_info.get("longitude") is not None:
                        timestamp = datetime.now(timezone.utc)
                        position_uncertainty = None
                        if hasattr(raw_location, 'sigma_m'):
                            try:
                                position_uncertainty = float(raw_location.sigma_m)
                            except (ValueError, TypeError):
                                pass

                        results.append({
                            "source": "starlink_location",
                            "latitude": float(location_info.get("latitude")),
                            "longitude": float(location_info.get("longitude")),
                            "altitude": float(location_info.get("altitude", 0)),
                            "position_uncertainty_m": position_uncertainty,
                            "timestamp": timestamp.isoformat(),
                            "timestamp_unix": timestamp.timestamp(),
                            "supplementary_data": {
                                "location_source": str(raw_location.source) if hasattr(raw_location, 'source') else None,
                                "horizontal_speed_mps": raw_location.horizontal_speed_mps if hasattr(raw_location, 'horizontal_speed_mps') else None,
                                "vertical_speed_mps": raw_location.vertical_speed_mps if hasattr(raw_location, 'vertical_speed_mps') else None,
                            }
                        })

                    # Extract Starlink GPS (LLA) coordinates
                    if hasattr(raw_location, 'lla'):
                        lla = raw_location.lla
                        lla_data = {}
                        for attr in dir(lla):
                            if not attr.startswith('_') and not callable(getattr(lla, attr)):
                                try:
                                    lla_data[attr] = getattr(lla, attr)
                                except Exception:
                                    pass

                        if lla_data.get('lat') is not None and lla_data.get('lon') is not None:
                            timestamp = datetime.now(timezone.utc)
                            results.append({
                                "source": "starlink_gps",
                                "latitude": float(lla_data.get('lat')),
                                "longitude": float(lla_data.get('lon')),
                                "altitude": float(lla_data.get('alt', 0)),
                                "timestamp": timestamp.isoformat(),
                                "timestamp_unix": timestamp.timestamp(),
                                "supplementary_data": {
                                    **{k: v for k, v in lla_data.items() if k not in ['lat', 'lon', 'alt', 'DESCRIPTOR']}
                                }
                            })

                except starlink_grpc.GrpcError as e:
                    if attempt < max_retries:
                        logger.debug(f"Starlink GPS fetch attempt {attempt}/{max_retries} failed: {e}, retrying...")
                        continue
                    else:
                        logger.error(f"Failed to fetch Starlink location data after {max_retries} attempts: {e}")
                        return []

                # Success - return the results
                return results

            except Exception as e:
                if attempt < max_retries:
                    logger.debug(f"Starlink GPS fetch attempt {attempt}/{max_retries} failed: {e}, retrying...")
                    continue
                else:
                    logger.error(f"Unexpected error fetching Starlink GPS data after {max_retries} attempts: {e}")
                    return []

        return []
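
A sketch of how this fetcher might be polled. The `Config` attribute names come from the constructor and `fetch()` above; the polling interval is illustrative:

```python
import time
from config import Config
from sources.starlink_gps import StarlinkGPSFetcher

fetcher = StarlinkGPSFetcher(Config())
while True:
    for position in fetcher.fetch():   # returns [] on failure, so the loop just idles
        print(position["source"], position["latitude"], position["longitude"])
    time.sleep(5)                      # illustrative poll interval
```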
@@ -0,0 +1,156 @@
#!/usr/bin/env python3
"""
TM AIS GPS data fetcher
Fetches GPS coordinates from the TM AIS GPS antenna via HTTP API
"""

import logging
import requests
from datetime import datetime, timezone
from typing import Dict, Any, Optional
from config import Config

# Suppress SSL warnings for self-signed certificates
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

logger = logging.getLogger("gnss_guard.tm_ais_gps")


class TMAISGPSFetcher:
    """Fetcher for TM AIS GPS coordinates"""

    def __init__(self, config: Config):
        self.config = config
        self.url = config.tm_ais_url
        self.token = config.tm_ais_token  # Already trimmed in Config
        self.last_fetch_failed = False

        # Warn if the token is empty
        if not self.token:
            logger.warning("TM AIS GPS token is empty - authentication will fail")

    def fetch(self) -> Optional[Dict[str, Any]]:
        """
        Fetch GPS coordinates from the TM AIS GPS antenna

        Returns:
            Dictionary with position data, or None if the fetch fails
        """
        if not self.config.tm_ais_enabled:
            return None

        headers = {"Authorization": f"Bearer {self.token}"}
        max_retries = self.config.tm_ais_max_retries
        last_error = None

        # Log request details (mask the token for security)
        token_preview = f"{self.token[:4]}..." if len(self.token) > 4 else "***"
        logger.debug(f"TM AIS GPS request: URL={self.url}, Token={token_preview}")

        # Try up to max_retries times
        for attempt_number in range(1, max_retries + 1):
            logger.info(f"TM AIS GPS fetch attempt {attempt_number}/{max_retries}")

            try:
                # Disable SSL verification for self-signed certificates (equivalent to curl -k)
                response = requests.get(
                    self.url,
                    headers=headers,
                    verify=False,  # Equivalent to the curl -k flag
                    timeout=5.0
                )

                # Log the response status for debugging
                logger.debug(f"TM AIS GPS response status: {response.status_code}")

                response.raise_for_status()

                data = response.json()

                # Extract coordinates
                latitude = data.get("latitude")
                longitude = data.get("longitude")
                gps_timestamp = data.get("gps_timestamp")
                response_timestamp = data.get("response_timestamp")

                if latitude is None or longitude is None:
                    logger.warning("TM AIS GPS response missing latitude or longitude")
                    self.last_fetch_failed = True
                    return None

                # Parse timestamps and convert to UTC
                gps_ts = None
                if gps_timestamp:
                    try:
                        # Parse the timestamp (handles both Z and timezone offsets)
                        parsed_ts = datetime.fromisoformat(gps_timestamp.replace("Z", "+00:00"))
                        # Convert to UTC if timezone-aware, otherwise assume UTC
                        if parsed_ts.tzinfo is not None:
                            gps_ts = parsed_ts.astimezone(timezone.utc)
                        else:
                            gps_ts = parsed_ts.replace(tzinfo=timezone.utc)
                    except Exception as e:
                        logger.debug(f"Failed to parse GPS timestamp: {e}")

                response_ts = datetime.now(timezone.utc)
                if response_timestamp:
                    try:
                        # Parse the timestamp (handles both Z and timezone offsets)
                        parsed_ts = datetime.fromisoformat(response_timestamp.replace("Z", "+00:00"))
                        # Convert to UTC if timezone-aware, otherwise assume UTC
                        if parsed_ts.tzinfo is not None:
                            response_ts = parsed_ts.astimezone(timezone.utc)
                        else:
                            response_ts = parsed_ts.replace(tzinfo=timezone.utc)
                    except Exception as e:
                        logger.debug(f"Failed to parse response timestamp: {e}")

                # Success - reset the failure flag
                if self.last_fetch_failed:
                    logger.info("TM AIS GPS connection restored")
                self.last_fetch_failed = False

                return {
                    "source": "tm_ais",
                    "latitude": float(latitude),
                    "longitude": float(longitude),
                    "altitude": None,
                    "timestamp": gps_ts.isoformat() if gps_ts else response_ts.isoformat(),
                    "timestamp_unix": (gps_ts or response_ts).timestamp(),
                    "supplementary_data": {
                        "gps_timestamp": gps_timestamp,
                        "response_timestamp": response_timestamp,
                    }
                }

            except requests.exceptions.HTTPError as e:
                # Log the response body for 401 errors to help debug authentication issues
                if hasattr(e.response, 'status_code') and e.response.status_code == 401:
                    try:
                        error_body = e.response.text[:200]  # Limit to the first 200 chars
                        logger.debug(f"TM AIS GPS 401 response body: {error_body}")
                    except Exception:
                        pass
                last_error = str(e)
                logger.info(f"TM AIS GPS attempt {attempt_number}/{max_retries} failed: {e}")
                # Continue to the next attempt
            except requests.exceptions.RequestException as e:
                last_error = str(e)
                logger.info(f"TM AIS GPS attempt {attempt_number}/{max_retries} failed: {e}")
                # Continue to the next attempt
            except Exception as e:
                last_error = str(e)
                logger.info(f"TM AIS GPS attempt {attempt_number}/{max_retries} unexpected error: {e}")
                # Continue to the next attempt

        # All attempts failed
        if not self.last_fetch_failed:
            logger.error(f"Failed to fetch TM AIS GPS data after {max_retries} attempts. Last error: {last_error}")
        else:
            logger.debug(f"TM AIS GPS still unavailable after {max_retries} attempts")

        self.last_fetch_failed = True
        return None
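
The two timestamp-parsing branches in `fetch()` are identical; if this module is touched again, a small helper could remove the duplication. A sketch (the name `_parse_utc_timestamp` is hypothetical, not part of the original file):

```python
from datetime import datetime, timezone
from typing import Optional

def _parse_utc_timestamp(value: Optional[str]) -> Optional[datetime]:
    """Parse an ISO-8601 string (with 'Z' or an offset) into an aware UTC datetime."""
    if not value:
        return None
    try:
        parsed = datetime.fromisoformat(value.replace("Z", "+00:00"))
    except ValueError:
        return None
    if parsed.tzinfo is not None:
        return parsed.astimezone(timezone.utc)
    return parsed.replace(tzinfo=timezone.utc)
```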
@@ -0,0 +1,53 @@
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

name: Create and publish a Docker image to GitHub Packages Repository

on: workflow_dispatch

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
        with:
          platforms: 'arm64'

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
@@ -0,0 +1,32 @@
FROM python:3.9
LABEL maintainer="neurocis <neurocis@neurocis.me>"

RUN true && \
\
ARCH=`uname -m`; \
if [ "$ARCH" = "armv7l" ]; then \
    NOBIN_OPT="--no-binary=grpcio"; \
else \
    NOBIN_OPT=""; \
fi; \
# Install python prerequisites
pip3 install --no-cache-dir $NOBIN_OPT \
    croniter==2.0.5 pytz==2024.1 six==1.16.0 \
    grpcio==1.62.2 \
    influxdb==5.3.2 certifi==2024.2.2 charset-normalizer==3.3.2 idna==3.7 \
    msgpack==1.0.8 requests==2.31.0 urllib3==2.2.1 \
    influxdb-client==1.42.0 reactivex==4.0.4 \
    paho-mqtt==2.0.0 \
    pypng==0.20220715.0 \
    python-dateutil==2.9.0 \
    typing_extensions==4.11.0 \
    yagrc==1.1.2 grpcio-reflection==1.62.2 protobuf==4.25.3

COPY dish_*.py loop_util.py starlink_*.py entrypoint.sh /app/
WORKDIR /app

ENTRYPOINT ["/bin/sh", "/app/entrypoint.sh"]
CMD ["dish_grpc_influx.py status alert_detail"]

# docker run -d --name='starlink-grpc-tools' -e INFLUXDB_HOST=192.168.1.34 -e INFLUXDB_PORT=8086 -e INFLUXDB_DB=starlink
#  --net='br0' --ip='192.168.1.39' ghcr.io/sparky8512/starlink-grpc-tools dish_grpc_influx.py status alert_detail
@@ -0,0 +1,24 @@
This is free and unencumbered software released into the public domain.

Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.

In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

For more information, please refer to <https://unlicense.org>
@@ -0,0 +1,818 @@
|
||||
{
|
||||
"__inputs": [
|
||||
{
|
||||
"name": "VAR_DS_INFLUXDB",
|
||||
"type": "constant",
|
||||
"label": "InfluxDB DataSource",
|
||||
"value": "InfluxDB-starlinkstats",
|
||||
"description": ""
|
||||
},
|
||||
{
|
||||
"name": "VAR_TBL_STATS",
|
||||
"type": "constant",
|
||||
"label": "Table name for Statistics",
|
||||
"value": "spacex.starlink.user_terminal.status",
|
||||
"description": ""
|
||||
}
|
||||
],
|
||||
"__requires": [
|
||||
{
|
||||
"type": "grafana",
|
||||
"id": "grafana",
|
||||
"name": "Grafana",
|
||||
"version": "7.3.6"
|
||||
},
|
||||
{
|
||||
"type": "panel",
|
||||
"id": "graph",
|
||||
"name": "Graph",
|
||||
"version": ""
|
||||
},
|
||||
{
|
||||
"type": "datasource",
|
||||
"id": "influxdb",
|
||||
"name": "InfluxDB",
|
||||
"version": "1.0.0"
|
||||
},
|
||||
{
|
||||
"type": "panel",
|
||||
"id": "table",
|
||||
"name": "Table",
|
||||
"version": ""
|
||||
}
|
||||
],
|
||||
"annotations": {
|
||||
"list": [
|
||||
{
|
||||
"builtIn": 1,
|
||||
"datasource": "-- Grafana --",
|
||||
"enable": true,
|
||||
"hide": true,
|
||||
"iconColor": "rgba(0, 211, 255, 1)",
|
||||
"name": "Annotations & Alerts",
|
||||
"type": "dashboard"
|
||||
}
|
||||
]
|
||||
},
|
||||
"editable": true,
|
||||
"gnetId": null,
|
||||
"graphTooltip": 0,
|
||||
"id": null,
|
||||
"iteration": 1610413551748,
|
||||
"links": [],
|
||||
"panels": [
|
||||
{
|
||||
"aliasColors": {},
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "$DS_INFLUXDB",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
},
|
||||
"overrides": []
|
||||
},
|
||||
"fill": 1,
|
||||
"fillGradient": 0,
|
||||
"gridPos": {
|
||||
"h": 11,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 0
|
||||
},
|
||||
"hiddenSeries": false,
|
||||
"id": 4,
|
||||
"legend": {
|
||||
"alignAsTable": true,
|
||||
"avg": true,
|
||||
"current": true,
|
||||
"hideZero": false,
|
||||
"max": true,
|
||||
"min": false,
|
||||
"rightSide": false,
|
||||
"show": true,
|
||||
"total": false,
|
||||
"values": true
|
||||
},
|
||||
"lines": true,
|
||||
"linewidth": 1,
|
||||
"nullPointMode": "null",
|
||||
"options": {
|
||||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "7.3.6",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
"seriesOverrides": [],
|
||||
"spaceLength": 10,
|
||||
"stack": false,
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"groupBy": [],
|
||||
"measurement": "/^$TBL_STATS$/",
|
||||
"orderByTime": "ASC",
|
||||
"policy": "default",
|
||||
"queryType": "randomWalk",
|
||||
"refId": "A",
|
||||
"resultFormat": "time_series",
|
||||
"select": [
|
||||
[
|
||||
{
|
||||
"params": [
|
||||
"downlink_throughput_bps"
|
||||
],
|
||||
"type": "field"
|
||||
},
|
||||
{
|
||||
"params": [
|
||||
"bps Down"
|
||||
],
|
||||
"type": "alias"
|
||||
}
|
||||
],
|
||||
[
|
||||
{
|
||||
"params": [
|
||||
"uplink_throughput_bps"
|
||||
],
|
||||
"type": "field"
|
||||
},
|
||||
{
|
||||
"params": [
|
||||
"bps Up"
|
||||
],
|
||||
"type": "alias"
|
||||
}
|
||||
]
|
||||
],
|
||||
"tags": []
|
||||
}
|
||||
],
|
||||
"thresholds": [],
|
||||
"timeFrom": null,
|
||||
"timeRegions": [],
|
||||
"timeShift": null,
|
||||
"title": "Actual Throughput",
|
||||
"tooltip": {
|
||||
"shared": true,
|
||||
"sort": 0,
|
||||
"value_type": "individual"
|
||||
},
|
||||
"type": "graph",
|
||||
"xaxis": {
|
||||
"buckets": null,
|
||||
"mode": "time",
|
||||
"name": null,
|
||||
"show": true,
|
||||
"values": []
|
||||
},
|
||||
"yaxes": [
|
||||
{
|
||||
"$$hashKey": "object:1099",
|
||||
"format": "short",
|
||||
"label": null,
|
||||
"logBase": 1,
|
||||
"max": null,
|
||||
"min": null,
|
||||
"show": true
|
||||
},
|
||||
{
|
||||
"$$hashKey": "object:1100",
|
||||
"format": "short",
|
||||
"label": null,
|
||||
"logBase": 1,
|
||||
"max": null,
|
||||
"min": null,
|
||||
"show": true
|
||||
}
|
||||
],
|
||||
"yaxis": {
|
||||
"align": false,
|
||||
"alignLevel": null
|
||||
}
|
||||
},
|
||||
{
|
||||
"aliasColors": {},
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "$DS_INFLUXDB",
|
||||
"description": "",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
},
|
||||
"overrides": []
|
||||
},
|
||||
"fill": 1,
|
||||
"fillGradient": 0,
|
||||
"gridPos": {
|
||||
"h": 11,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 0
|
||||
},
|
||||
"hiddenSeries": false,
|
||||
"id": 2,
|
||||
"legend": {
|
||||
"alignAsTable": true,
|
||||
"avg": true,
|
||||
"current": true,
|
||||
"max": true,
|
||||
"min": true,
|
||||
"show": true,
|
||||
"total": false,
|
||||
"values": true
|
||||
},
|
||||
"lines": true,
|
||||
"linewidth": 1,
|
||||
"nullPointMode": "null",
|
||||
"options": {
|
||||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "7.3.6",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
"seriesOverrides": [],
|
||||
"spaceLength": 10,
|
||||
"stack": false,
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"groupBy": [],
|
||||
"measurement": "/^$TBL_STATS$/",
|
||||
"orderByTime": "ASC",
|
||||
"policy": "default",
|
||||
"queryType": "randomWalk",
|
||||
"refId": "A",
|
||||
"resultFormat": "time_series",
|
||||
"select": [
|
||||
[
|
||||
{
|
||||
"params": [
|
||||
"pop_ping_latency_ms"
|
||||
],
|
||||
"type": "field"
|
||||
},
|
||||
{
|
||||
"params": [
|
||||
"Ping Latency"
|
||||
],
|
||||
"type": "alias"
|
||||
}
|
||||
],
|
||||
[
|
||||
{
|
||||
"params": [
|
||||
"pop_ping_drop_rate"
|
||||
],
|
||||
"type": "field"
|
||||
},
|
||||
{
|
||||
"params": [
|
||||
"Drop Rate"
|
||||
],
|
||||
"type": "alias"
|
||||
}
|
||||
],
|
||||
[
|
||||
{
|
||||
"params": [
|
||||
"fraction_obstructed"
|
||||
],
|
||||
"type": "field"
|
||||
},
|
||||
{
|
||||
"params": [
|
||||
"*100"
|
||||
],
|
||||
"type": "math"
|
||||
},
|
||||
{
|
||||
"params": [
|
||||
"Percent Obstructed"
|
||||
],
|
||||
"type": "alias"
|
||||
}
|
||||
],
|
||||
[
|
||||
{
|
||||
"params": [
|
||||
"snr"
|
||||
],
|
||||
"type": "field"
|
||||
},
|
||||
{
|
||||
"params": [
|
||||
"*10"
|
||||
],
|
||||
"type": "math"
|
||||
},
|
||||
{
|
||||
"params": [
|
||||
"SNR"
|
||||
],
|
||||
"type": "alias"
|
||||
}
|
||||
]
|
||||
],
|
||||
"tags": []
|
||||
}
|
||||
],
|
||||
"thresholds": [],
|
||||
"timeFrom": null,
|
||||
"timeRegions": [],
|
||||
"timeShift": null,
|
||||
"title": "Ping Latency, Drop Rate, Percent Obstructed & SNR",
|
||||
"tooltip": {
|
||||
"shared": true,
|
||||
"sort": 0,
|
||||
"value_type": "individual"
|
||||
},
|
||||
"type": "graph",
|
||||
"xaxis": {
|
||||
"buckets": null,
|
||||
"mode": "time",
|
||||
"name": null,
|
||||
"show": true,
|
||||
"values": []
|
||||
},
|
||||
"yaxes": [
|
||||
{
|
||||
"format": "short",
|
||||
"label": null,
|
||||
"logBase": 1,
|
||||
"max": null,
|
||||
"min": null,
|
||||
"show": true
|
||||
},
|
||||
{
|
||||
"format": "short",
|
||||
"label": null,
|
||||
"logBase": 1,
|
||||
"max": null,
|
||||
"min": null,
|
||||
"show": true
|
||||
}
|
||||
],
|
||||
"yaxis": {
|
||||
"align": false,
|
||||
"alignLevel": null
|
||||
}
|
||||
},
|
||||
{
|
||||
"cacheTimeout": null,
|
||||
"datasource": "$DS_INFLUXDB",
|
||||
"description": "",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {
|
||||
"align": null,
|
||||
"filterable": false
|
||||
},
|
||||
"mappings": [],
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
"value": 80
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
"overrides": [
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
"options": "Obstructed"
|
||||
},
|
||||
"properties": [
|
||||
{
|
||||
"id": "custom.width",
|
||||
"value": 105
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
"options": "Wrong Location"
|
||||
},
|
||||
"properties": [
|
||||
{
|
||||
"id": "custom.width",
|
||||
"value": 114
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
"options": "Thermal Throttle"
|
||||
},
|
||||
"properties": [
|
||||
{
|
||||
"id": "custom.width",
|
||||
"value": 121
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
"options": "Thermal Shutdown"
|
||||
},
|
||||
"properties": [
|
||||
{
|
||||
"id": "custom.width",
|
||||
"value": 136
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
"options": "Motors Stuck"
|
||||
},
|
||||
"properties": [
|
||||
{
|
||||
"id": "custom.width",
|
||||
"value": 116
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
"options": "Time"
|
||||
},
|
||||
"properties": [
|
||||
{
|
||||
"id": "custom.width",
|
||||
"value": 143
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
"options": "State"
|
||||
},
|
||||
"properties": [
|
||||
{
|
||||
"id": "custom.width",
|
||||
"value": 118
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
"options": "Bad Location"
|
||||
},
|
||||
"properties": [
|
||||
{
|
||||
"id": "custom.width",
|
||||
"value": 122
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
"options": "Temp Throttle"
|
||||
},
|
||||
"properties": [
|
||||
{
|
||||
"id": "custom.width",
|
||||
"value": 118
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
"options": "Temp Shutdown"
|
||||
},
|
||||
"properties": [
|
||||
{
|
||||
"id": "custom.width",
|
||||
"value": 134
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"matcher": {
|
||||
"id": "byName",
|
||||
"options": "Software Version"
|
||||
},
|
||||
"properties": [
|
||||
{
|
||||
"id": "custom.width",
|
||||
"value": 369
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
"gridPos": {
|
||||
"h": 7,
|
||||
"w": 24,
|
||||
"x": 0,
|
||||
"y": 11
|
||||
},
|
||||
"id": 6,
|
||||
"interval": null,
|
||||
"links": [],
|
||||
"options": {
|
||||
"showHeader": true,
|
||||
"sortBy": [
|
||||
{
|
||||
"desc": true,
|
||||
"displayName": "Time (last)"
|
||||
}
|
||||
]
|
||||
},
|
||||
"pluginVersion": "7.3.6",
|
||||
"targets": [
|
||||
{
|
||||
"groupBy": [],
|
||||
"hide": false,
|
||||
"measurement": "/^$TBL_STATS$/",
|
||||
"orderByTime": "ASC",
|
||||
"policy": "default",
|
||||
"query": "SELECT \"currently_obstructed\" AS \"Obstructed\", \"alert_unexpected_location\" AS \"Wrong Location\", \"alert_thermal_throttle\" AS \"Thermal Throttle\", \"alert_thermal_shutdown\" AS \"Thermal Shutdown\", \"alert_motors_stuck\" AS \"Motors Stuck\", \"state\" AS \"State\" FROM \"spacex.starlink.user_terminal.status\" WHERE $timeFilter",
|
||||
"queryType": "randomWalk",
|
||||
"rawQuery": false,
|
||||
"refId": "A",
|
||||
"resultFormat": "table",
|
||||
"select": [
|
||||
[
|
||||
{
|
||||
"params": [
|
||||
"state"
|
||||
],
|
||||
"type": "field"
|
||||
},
|
||||
{
|
||||
"params": [
|
||||
"State"
|
||||
],
|
||||
"type": "alias"
|
||||
}
|
||||
],
|
||||
[
|
||||
{
|
||||
"params": [
|
||||
"currently_obstructed"
|
||||
],
|
||||
"type": "field"
|
||||
},
|
||||
{
|
||||
"params": [
|
||||
"Obstructed"
|
||||
],
|
||||
"type": "alias"
|
||||
}
|
||||
            ],
            [
              {"params": ["alert_unexpected_location"], "type": "field"},
              {"params": ["Bad Location"], "type": "alias"}
            ],
            [
              {"params": ["alert_thermal_throttle"], "type": "field"},
              {"params": ["Temp Throttled"], "type": "alias"}
            ],
            [
              {"params": ["alert_thermal_shutdown"], "type": "field"},
              {"params": ["Temp Shutdown"], "type": "alias"}
            ],
            [
              {"params": ["alert_motors_stuck"], "type": "field"},
              {"params": ["Motors Stuck"], "type": "alias"}
            ],
            [
              {"params": ["software_version"], "type": "field"},
              {"params": ["Software Version"], "type": "alias"}
            ],
            [
              {"params": ["hardware_version"], "type": "field"},
              {"params": ["Hardware Version"], "type": "alias"}
            ]
          ],
          "tags": []
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Alerts & Versions",
      "transformations": [
        {
          "id": "groupBy",
          "options": {
            "fields": {
              "Bad Location": {"aggregations": [], "operation": "groupby"},
              "Hardware Version": {"aggregations": [], "operation": "groupby"},
              "Motors Stuck": {"aggregations": [], "operation": "groupby"},
              "Obstructed": {"aggregations": [], "operation": "groupby"},
              "Software Version": {"aggregations": [], "operation": "groupby"},
              "State": {"aggregations": [], "operation": "groupby"},
              "Temp Shutdown": {"aggregations": [], "operation": "groupby"},
              "Temp Throttle": {"aggregations": [], "operation": "groupby"},
              "Temp Throttled": {"aggregations": [], "operation": "groupby"},
              "Thermal Shutdown": {"aggregations": [], "operation": "groupby"},
              "Thermal Throttle": {"aggregations": [], "operation": "groupby"},
              "Time": {"aggregations": ["last"], "operation": "aggregate"},
              "Wrong Location": {"aggregations": [], "operation": "groupby"}
            }
          }
        }
      ],
      "type": "table"
    }
  ],
  "refresh": false,
  "schemaVersion": 26,
  "style": "dark",
  "tags": [],
  "templating": {
    "list": [
      {
        "current": {"value": "${VAR_DS_INFLUXDB}", "text": "${VAR_DS_INFLUXDB}", "selected": false},
        "error": null,
        "hide": 2,
        "label": "InfluxDB DataSource",
        "name": "DS_INFLUXDB",
        "options": [{"value": "${VAR_DS_INFLUXDB}", "text": "${VAR_DS_INFLUXDB}", "selected": false}],
        "query": "${VAR_DS_INFLUXDB}",
        "skipUrlSync": false,
        "type": "constant"
      },
      {
        "current": {"value": "${VAR_TBL_STATS}", "text": "${VAR_TBL_STATS}", "selected": false},
        "error": null,
        "hide": 2,
        "label": "Table name for Statistics",
        "name": "TBL_STATS",
        "options": [{"value": "${VAR_TBL_STATS}", "text": "${VAR_TBL_STATS}", "selected": false}],
        "query": "${VAR_TBL_STATS}",
        "skipUrlSync": false,
        "type": "constant"
      }
    ]
  },
  "time": {"from": "now-24h", "to": "now"},
  "timepicker": {
    "refresh_intervals": ["5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d"]
  },
  "timezone": "",
  "title": "Starlink Statistics",
  "uid": "ymkHwLaMz",
  "version": 36
}
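The dashboard export above declares `__inputs` placeholders (`DS_INFLUXDB`, `VAR_TBL_STATS`) that Grafana substitutes at import time, so the cleanest way to load it programmatically is Grafana's import endpoint rather than the plain dashboard-create API. A minimal sketch, assuming Grafana at `localhost:3000`, a placeholder API token, and a hypothetical `starlink_stats.json` file holding the export:

```python
# Sketch: import the exported dashboard via Grafana's HTTP import API,
# supplying values for the __inputs declared in the JSON.
import json
import requests

with open("starlink_stats.json") as f:  # hypothetical filename for the export
    dashboard = json.load(f)

payload = {
    "dashboard": dashboard,
    "overwrite": True,
    "inputs": [
        # name/pluginId match the __inputs block; value is the local datasource name
        {"name": "DS_INFLUXDB", "type": "datasource", "pluginId": "influxdb",
         "value": "InfluxDB-starlinkstats"},
    ],
}
resp = requests.post("http://localhost:3000/api/dashboards/import",
                     headers={"Authorization": "Bearer <API_TOKEN>"},
                     json=payload, timeout=10)
resp.raise_for_status()
```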
@@ -0,0 +1,675 @@
{
  "__inputs": [
    {
      "name": "DS_INFLUXDB",
      "label": "InfluxDB",
      "description": "",
      "type": "datasource",
      "pluginId": "influxdb",
      "pluginName": "InfluxDB"
    },
    {
      "name": "VAR_TBL_STATS",
      "label": "influx",
      "description": "",
      "type": "datasource",
      "pluginId": "influxdb",
      "pluginName": "InfluxDB"
    },
    {
      "name": "VAR_DS_INFLUXDB",
      "type": "constant",
      "label": "InfluxDB DataSource",
      "value": "InfluxDB-starlinkstats",
      "description": ""
    },
    {
      "name": "VAR_TBL_STATS",
      "type": "constant",
      "label": "Table name for Statistics",
      "value": "spacex.starlink.user_terminal.status",
      "description": ""
    }
  ],
  "__requires": [
    {"type": "grafana", "id": "grafana", "name": "Grafana", "version": "8.2.5"},
    {"type": "datasource", "id": "influxdb", "name": "InfluxDB", "version": "1.0.0"},
    {"type": "panel", "id": "table", "name": "Table", "version": ""},
    {"type": "panel", "id": "timeseries", "name": "Time series", "version": ""}
  ],
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": "-- Grafana --",
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "target": {"limit": 100, "matchAny": false, "tags": [], "type": "dashboard"},
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "fiscalYearStartMonth": 0,
  "gnetId": null,
  "graphTooltip": 0,
  "id": null,
  "iteration": 1637920561166,
  "links": [],
  "liveNow": false,
  "panels": [
    {
      "datasource": "${DS_INFLUXDB}",
      "fieldConfig": {
        "defaults": {
          "color": {"mode": "palette-classic"},
          "custom": {
            "axisLabel": "",
            "axisPlacement": "auto",
            "barAlignment": 0,
            "drawStyle": "line",
            "fillOpacity": 10,
            "gradientMode": "none",
            "hideFrom": {"legend": false, "tooltip": false, "viz": false},
            "lineInterpolation": "linear",
            "lineWidth": 1,
            "pointSize": 5,
            "scaleDistribution": {"type": "linear"},
            "showPoints": "never",
            "spanNulls": true,
            "stacking": {"group": "A", "mode": "normal"},
            "thresholdsStyle": {"mode": "off"}
          },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [{"color": "green", "value": null}, {"color": "red", "value": 80}]
          },
          "unit": "binbps"
        },
        "overrides": [
          {
            "matcher": {"id": "byRegexp", "options": "/(uplink)/m"},
            "properties": [{"id": "displayName", "value": "Uplink"}]
          },
          {
            "matcher": {"id": "byName", "options": "downlink_throughput_bps"},
            "properties": [{"id": "displayName", "value": "Downlink"}]
          },
          {
            "matcher": {"id": "byName", "options": "uplink_throughput_bps"},
            "properties": [{"id": "displayName", "value": "Uplink"}]
          }
        ]
      },
      "gridPos": {"h": 11, "w": 12, "x": 0, "y": 0},
      "id": 4,
      "options": {
        "legend": {"calcs": ["mean", "max", "lastNotNull"], "displayMode": "table", "placement": "bottom"},
        "tooltip": {"mode": "multi"}
      },
      "pluginVersion": "8.2.5",
      "targets": [
        {
          "hide": false,
          "query": "from(bucket: \"starlink\")\n |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n |> filter(fn: (r) => r[\"_field\"] == \"downlink_throughput_bps\" or r[\"_field\"] == \"uplink_throughput_bps\")\n |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)\n |> yield(name: \"last\")",
          "refId": "A"
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Actual Throughput",
      "type": "timeseries"
    },
    {
      "datasource": "${DS_INFLUXDB}",
      "description": "",
      "fieldConfig": {
        "defaults": {
          "color": {"mode": "palette-classic"},
          "custom": {
            "axisLabel": "",
            "axisPlacement": "auto",
            "barAlignment": 0,
            "drawStyle": "line",
            "fillOpacity": 10,
            "gradientMode": "none",
            "hideFrom": {"legend": false, "tooltip": false, "viz": false},
            "lineInterpolation": "linear",
            "lineWidth": 1,
            "pointSize": 5,
            "scaleDistribution": {"type": "linear"},
            "showPoints": "auto",
            "spanNulls": true,
            "stacking": {"group": "A", "mode": "none"},
            "thresholdsStyle": {"mode": "off"}
          },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [{"color": "green", "value": null}, {"color": "red", "value": 80}]
          },
          "unit": "short"
        },
        "overrides": [
          {
            "matcher": {"id": "byName", "options": "fraction_obstructed"},
            "properties": [
              {"id": "displayName", "value": "Fraction Obstructed"},
              {"id": "unit", "value": "%"}
            ]
          },
          {
            "matcher": {"id": "byName", "options": "pop_ping_drop_rate"},
            "properties": [
              {"id": "displayName", "value": "Pop Ping Drop Rate"},
              {"id": "unit", "value": "%"}
            ]
          },
          {
            "matcher": {"id": "byName", "options": "pop_ping_latency_ms"},
            "properties": [
              {"id": "displayName", "value": "Pop Ping Latency"},
              {"id": "unit", "value": "ms"}
            ]
          }
        ]
      },
      "gridPos": {"h": 11, "w": 12, "x": 12, "y": 0},
      "id": 2,
      "options": {
        "legend": {"calcs": ["mean", "lastNotNull", "max", "min"], "displayMode": "table", "placement": "bottom"},
        "tooltip": {"mode": "multi"}
      },
      "pluginVersion": "8.2.5",
      "targets": [
        {
          "query": "from(bucket: \"starlink\")\n |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n |> filter(fn: (r) => r[\"_field\"] == \"pop_ping_latency_ms\" or r[\"_field\"] == \"pop_ping_drop_rate\" or r[\"_field\"] == \"fraction_obstructed\" or r[\"_field\"] == \"snr\")\n |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)\n |> yield(name: \"last\")",
          "refId": "A"
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Ping Latency, Drop Rate, Percent Obstructed & SNR",
      "type": "timeseries"
    },
    {
      "cacheTimeout": null,
      "datasource": "${DS_INFLUXDB}",
      "description": "",
      "fieldConfig": {
        "defaults": {
          "color": {"mode": "thresholds"},
          "custom": {"align": null, "displayMode": "auto", "filterable": false},
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [{"color": "green", "value": null}, {"color": "red", "value": 80}]
          }
        },
        "overrides": [
          {
            "matcher": {"id": "byName", "options": "alerts"},
            "properties": [
              {"id": "displayName", "value": "Alerts"},
              {"id": "custom.width", "value": 100},
              {"id": "custom.align", "value": "left"}
            ]
          },
          {
            "matcher": {"id": "byName", "options": "currently_obstructed"},
            "properties": [
              {"id": "displayName", "value": "Currently Obstructed"},
              {"id": "custom.width", "value": 200}
            ]
          },
          {
            "matcher": {"id": "byName", "options": "hardware_version"},
            "properties": [
              {"id": "displayName", "value": "Hardware Revision"},
              {"id": "custom.width", "value": 200}
            ]
          },
          {
            "matcher": {"id": "byName", "options": "software_version"},
            "properties": [
              {"id": "displayName", "value": "Software Revision"},
              {"id": "custom.width", "value": 400}
            ]
          },
          {
            "matcher": {"id": "byName", "options": "state"},
            "properties": [
              {"id": "displayName", "value": "State"},
              {"id": "custom.width", "value": 100}
            ]
          },
          {
            "matcher": {"id": "byName", "options": "alert_motors_stuck"},
            "properties": [
              {"id": "displayName", "value": "Motors Stuck"},
              {"id": "custom.width", "value": 100}
            ]
          },
          {
            "matcher": {"id": "byName", "options": "alert_unexpected_location"},
            "properties": [
              {"id": "displayName", "value": "Unexpected Location"},
              {"id": "custom.width", "value": 150}
            ]
          },
          {
            "matcher": {"id": "byName", "options": "alert_thermal_shutdown"},
            "properties": [
              {"id": "displayName", "value": "Thermal Shutdown"},
              {"id": "custom.width", "value": 140}
            ]
          },
          {
            "matcher": {"id": "byName", "options": "alert_thermal_throttle"},
            "properties": [
              {"id": "displayName", "value": "Thermal Throttle"},
              {"id": "custom.width", "value": 130}
            ]
          },
          {
            "matcher": {"id": "byName", "options": "uptime"},
            "properties": [
              {"id": "displayName", "value": "Uptime"},
              {"id": "custom.align", "value": "left"},
              {"id": "unit", "value": "s"}
            ]
          },
          {
            "matcher": {"id": "byName", "options": "Time"},
            "properties": [{"id": "custom.width", "value": 150}]
          }
        ]
      },
      "gridPos": {"h": 7, "w": 24, "x": 0, "y": 11},
      "id": 6,
      "interval": null,
      "links": [],
      "options": {
        "frameIndex": 0,
        "showHeader": true,
        "sortBy": [{"desc": true, "displayName": "Time (last)"}]
      },
      "pluginVersion": "8.2.5",
      "targets": [
        {
          "query": "from(bucket: \"starlink\")\n |> range(start: v.timeRangeStart, stop: v.timeRangeStop)\n |> filter(fn: (r) => r[\"_field\"] == \"hardware_version\" or r[\"_field\"] == \"state\" or r[\"_field\"] == \"software_version\" or r[\"_field\"] == \"alerts\" or r[\"_field\"] == \"currently_obstructed\" or r[\"_field\"] == \"alert_unexpected_location\" or r[\"_field\"] == \"alert_thermal_throttle\" or r[\"_field\"] == \"alert_thermal_shutdown\" or r[\"_field\"] == \"alert_motors_stuck\" or r[\"_field\"] == \"uptime\" )\n |> yield(name: \"last\")",
          "refId": "A"
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Alerts & Versions",
      "transformations": [
        {"id": "seriesToColumns", "options": {"byField": "Time"}}
      ],
      "type": "table"
    }
  ],
  "refresh": false,
  "schemaVersion": 32,
  "style": "dark",
  "tags": [],
  "templating": {
    "list": [
      {
        "description": null,
        "error": null,
        "hide": 2,
        "label": "InfluxDB DataSource",
        "name": "DS_INFLUXDB",
        "query": "${VAR_DS_INFLUXDB}",
        "skipUrlSync": false,
        "type": "constant",
        "current": {"value": "${VAR_DS_INFLUXDB}", "text": "${VAR_DS_INFLUXDB}", "selected": false},
        "options": [{"value": "${VAR_DS_INFLUXDB}", "text": "${VAR_DS_INFLUXDB}", "selected": false}]
      },
      {
        "description": null,
        "error": null,
        "hide": 2,
        "label": "Table name for Statistics",
        "name": "TBL_STATS",
        "query": "${VAR_TBL_STATS}",
        "skipUrlSync": false,
        "type": "constant",
        "current": {"value": "${VAR_TBL_STATS}", "text": "${VAR_TBL_STATS}", "selected": false},
        "options": [{"value": "${VAR_TBL_STATS}", "text": "${VAR_TBL_STATS}", "selected": false}]
      }
    ]
  },
  "time": {"from": "now-30m", "to": "now"},
  "timepicker": {
    "refresh_intervals": ["5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d"]
  },
  "timezone": "",
  "title": "Starlink Statistics",
  "uid": "ymkHwLaMz",
  "version": 12
}
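Both dashboards drive their panels with Flux queries against a `starlink` bucket (see the `targets` above). To sanity-check the same data outside Grafana, here is a rough sketch using the `influxdb-client` Python package; the URL, token, and org are placeholder assumptions, and Grafana's injected `v.timeRangeStart` / `v.windowPeriod` variables are replaced with literals:

```python
# Sketch: run the "Actual Throughput" panel query by hand with influxdb-client.
# URL/token/org are placeholders; the bucket name comes from the panel queries.
from influxdb_client import InfluxDBClient

flux = """
from(bucket: "starlink")
  |> range(start: -1h)
  |> filter(fn: (r) => r["_field"] == "downlink_throughput_bps" or r["_field"] == "uplink_throughput_bps")
  |> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
"""

with InfluxDBClient(url="http://localhost:8086", token="<TOKEN>", org="<ORG>") as client:
    for table in client.query_api().query(flux):
        for record in table.records:
            print(record.get_time(), record.get_field(), record.get_value())
```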
@@ -0,0 +1,308 @@
{
  "layout": {},
  "schedule": {
    "enabled": false,
    "cronSchedule": "0 0 * * *",
    "tz": "UTC",
    "keepLastN": 2
  },
  "name": "Starlink Statistics",
  "description": "This Dashboard is meant to be a clone of the Starlink App's Statistics Page",
  "elements": [
    {
      "config": {
        "markdown": "# Starlink Statistics\n--- \nThis Dashboard is meant to be a clone of the Starlink App's Statistics Page. Increase the rate at which the Python script calls the API for more accurate results. (Default API call interval: 60 seconds)\n",
        "axis": {}
      },
      "id": "1p7z19fum",
      "layout": {"x": 0, "y": 0, "w": 12, "h": 2},
      "variant": "markdown",
      "type": "markdown.default"
    },
    {
      "config": {
        "markdown": "### What is Latency?\n- Starlink and the Starlink router both send test pings to the internet many times per minute. Latency measures how long, in milliseconds, a request takes to go to the internet and back.\n\n- High latency may impact your experience with online gaming, video calls, and web browsing. It may be caused by extreme weather or periods of high network usage.\n\n",
        "axis": {}
      },
      "id": "84gt5a832",
      "layout": {"x": 0, "y": 2, "w": 6, "h": 2},
      "variant": "markdown",
      "type": "markdown.default"
    },
    {
      "config": {
        "markdown": "### What is Power Draw?\n- Power Draw measures the average amount of power that Starlink uses. Starlink will use more power while heating to melt snow.\n\n",
        "axis": {}
      },
      "id": "pyoifapcf",
      "layout": {"x": 6, "y": 2, "w": 6, "h": 2},
      "variant": "markdown",
      "type": "markdown.default"
    },
    {
      "config": {
        "onClickAction": {"type": "None"},
        "style": true,
        "applyThreshold": false,
        "colorThresholds": {
          "thresholds": [
            {"color": "#45850B", "threshold": 30},
            {"color": "#EFDB23", "threshold": 70},
            {"color": "#B20000", "threshold": 100}
          ]
        },
        "axis": {
          "xAxis": "avg_mean_full_ping_latency",
          "yAxis": ["avg_mean_full_ping_latency"]
        },
        "decimals": 2,
        "suffix": " ms"
      },
      "search": {
        "type": "inline",
        "query": "dataset=\"starlink\" sourcetype in (\"starlink:ping_latency\") | extract parser=json_parser | summarize avg_mean_full_ping_latency=avg(mean_full_ping_latency) ",
        "earliest": "-15m",
        "latest": "now"
      },
      "id": "kfntldnby",
      "layout": {"x": 0, "y": 4, "w": 6, "h": 3},
      "type": "counter.single",
      "title": "Average Mean Full Ping Latency - Last 15 Min"
    },
    {
      "config": {
        "onClickAction": {"type": "None"},
        "style": true,
        "applyThreshold": false,
        "colorThresholds": {
          "thresholds": [
            {"color": "#45850B", "threshold": 30},
            {"color": "#EFDB23", "threshold": 70},
            {"color": "#B20000", "threshold": 100}
          ]
        },
        "axis": {
          "xAxis": "avg_mean_power",
          "yAxis": ["avg_mean_power"]
        },
        "decimals": 2,
        "suffix": " Watts"
      },
      "search": {
        "type": "inline",
        "query": "dataset=\"starlink\" sourcetype=\"starlink:power\" | extract parser=json_parser | summarize avg_mean_power=avg(mean_power)",
        "earliest": "-15m",
        "latest": "now"
      },
      "id": "7o73dimso",
      "layout": {"x": 6, "y": 4, "w": 6, "h": 3},
      "type": "counter.single",
      "title": "Power Draw Average - Last 15 Min"
    },
    {
      "config": {
        "colorPalette": 0,
        "colorPaletteReversed": false,
        "customData": {
          "trellis": false,
          "connectNulls": "Leave gaps",
          "stack": false,
          "seriesCount": 1
        },
        "xAxis": {"labelOrientation": 0, "position": "Bottom"},
        "yAxis": {
          "position": "Left",
          "scale": "Linear",
          "splitLine": true,
          "interval": 2,
          "min": 20,
          "max": 35
        },
        "axis": {
          "yAxis": ["values_ping_latency"],
          "yAxisExcluded": ["_time"]
        },
        "legend": {"position": "Right", "truncate": true},
        "onClickAction": {"type": "None"},
        "seriesInfo": {
          "values_ping_latency": {"type": "column"},
          "_time": {}
        }
      },
      "search": {
        "type": "inline",
        "query": "dataset=\"starlink\" sourcetype in (\"starlink:ping_latency\") | extract parser=json_parser | timestats values(mean_full_ping_latency) ",
        "earliest": "-15m",
        "latest": "now"
      },
      "id": "n5lu6hhw0",
      "layout": {"x": 0, "y": 7, "w": 6, "h": 5},
      "type": "chart.column",
      "hidePanel": false,
      "title": "Ping Latency - Last 15 Min"
    },
    {
      "config": {
        "colorPalette": 1,
        "colorPaletteReversed": false,
        "customData": {
          "trellis": false,
          "connectNulls": "Leave gaps",
          "stack": false,
          "seriesCount": 1
        },
        "xAxis": {"labelOrientation": 0, "position": "Bottom"},
        "yAxis": {
          "position": "Left",
          "scale": "Linear",
          "splitLine": true,
          "min": 25,
          "max": 70,
          "interval": 5
        },
        "axis": {
          "yAxis": ["values_latest_power"],
          "yAxisExcluded": ["_time"]
        },
        "legend": {"position": "Top", "truncate": true},
        "onClickAction": {"type": "None"},
        "seriesInfo": {
          "_time": {"color": "#29bd00"},
          "values_latest_power": {"color": "#369900", "type": "area"}
        }
      },
      "search": {
        "type": "inline",
        "query": "dataset=\"starlink\" sourcetype=\"starlink:power\" | extract parser=json_parser | timestats values(latest_power)",
        "earliest": "-15m",
        "latest": "now"
      },
      "id": "20ekij4vo",
      "layout": {"x": 6, "y": 7, "w": 6, "h": 5},
      "type": "chart.column",
      "title": "Power Draw - Last 15 Min"
    },
    {
      "config": {
        "markdown": "## What is ping success?\n- Starlink and the Starlink router both send test pings to the internet many times per minute. It is normal for some pings to be dropped while your connection to the internet remains unaffected.",
        "axis": {}
      },
      "id": "2o01xt5al",
      "layout": {"x": 0, "y": 12, "w": 6, "h": 2},
      "variant": "markdown",
      "type": "markdown.default"
    },
    {
      "config": {
        "markdown": "## What is throughput?\n- 'Download' and 'Upload' measure the amount of data that your Starlink is downloading from or uploading to the internet. Download a large file or run a speed test to watch it jump!",
        "axis": {}
      },
      "id": "hwr5nirfk",
      "layout": {"x": 6, "y": 12, "w": 5, "h": 2},
      "variant": "markdown",
      "type": "markdown.default"
    }
  ]
}
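The two counter panels above show 15-minute averages of `mean_full_ping_latency` and `mean_power`. The same figures can be computed straight from the dish with the `starlink_grpc` helpers used by the scripts below; a rough sketch, assuming history samples arrive at one per second (so 900 samples ≈ 15 minutes) and that the `ping_latency` stats group exposes a `mean_full_ping_latency` key, as the panel queries suggest:

```python
# Sketch: approximate the "Average Mean Full Ping Latency - Last 15 Min" panel
# directly from the dish. Group order follows dish_common.get_history_stats below.
import starlink_grpc

context = starlink_grpc.ChannelContext(target="192.168.100.1:9200")
try:
    history = starlink_grpc.get_history(context=context)
    groups = starlink_grpc.history_stats(900, history=history)  # last ~15 minutes
    latency = groups[3]  # general, ping, runlen, latency, loaded, usage, power
    print("avg mean_full_ping_latency:", latency["mean_full_ping_latency"], "ms")
finally:
    context.close()
```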
@@ -0,0 +1,142 @@
#!/usr/bin/env python3
"""Check whether there is a software update pending on a Starlink user terminal.

Optionally, reboot the dish to initiate install if there is an update pending.
"""

import argparse
from datetime import datetime
import logging
import sys
import time

import grpc

import loop_util
import starlink_grpc

# This is the enum value spacex_api.device.dish_pb2.SoftwareUpdateState.REBOOT_REQUIRED
REBOOT_REQUIRED = 6
# This is the enum value spacex_api.device.dish_pb2.SoftwareUpdateState.DISABLED
UPDATE_DISABLED = 7


def loop_body(opts, context):
    now = time.time()

    try:
        status = starlink_grpc.get_status(context)
    except (AttributeError, ValueError, grpc.RpcError) as e:
        logging.error("Failed getting dish status: %s", str(starlink_grpc.GrpcError(e)))
        return 1

    # There are at least 3 and maybe 4 redundant flags that indicate whether or
    # not a software update is pending. In order to be robust against future
    # changes in the protocol and/or implementation of it, this script checks
    # them all, while allowing for the possibility that some of them have been
    # obsoleted and thus no longer present in the reflected protocol classes.

    try:
        alert_flag = status.alerts.install_pending
    except (AttributeError, ValueError):
        alert_flag = None

    try:
        state_flag = status.software_update_state == REBOOT_REQUIRED
        state_dflag = status.software_update_state == UPDATE_DISABLED
    except (AttributeError, ValueError):
        state_flag = None
        state_dflag = None

    try:
        stats_flag = status.software_update_stats.software_update_state == REBOOT_REQUIRED
        stats_dflag = status.software_update_stats.software_update_state == UPDATE_DISABLED
    except (AttributeError, ValueError):
        stats_flag = None
        stats_dflag = None

    try:
        ready_flag = status.swupdate_reboot_ready
    except (AttributeError, ValueError):
        ready_flag = None

    try:
        sw_version = status.device_info.software_version
    except (AttributeError, ValueError):
        sw_version = "UNKNOWN"

    if opts.verbose >= 2:
        print("Pending flags:", alert_flag, state_flag, stats_flag, ready_flag)
        print("Disable flags:", state_dflag, stats_dflag)

    if state_dflag or stats_dflag:
        logging.warning("Software updates appear to be disabled")

    # The swupdate_reboot_ready field does not appear to be in use, so may
    # mean something other than what it sounds like. Only use it if none of
    # the others are available.
    if alert_flag is None and state_flag is None and stats_flag is None:
        install_pending = bool(ready_flag)
    else:
        install_pending = alert_flag or state_flag or stats_flag

    if opts.verbose:
        dtnow = datetime.fromtimestamp(now, tz=getattr(opts, "timezone", None))
        print(dtnow.replace(microsecond=0, tzinfo=None).isoformat(), "- ", end="")

    if install_pending:
        print("Install pending, current version:", sw_version)
        if opts.install:
            print("Rebooting dish to initiate install")
            try:
                starlink_grpc.reboot(context)
            except starlink_grpc.GrpcError as e:
                logging.error("Failed reboot request: %s", str(e))
                return 1
    elif opts.verbose:
        print("No install pending, current version:", sw_version)

    return 0


def parse_args():
    parser = argparse.ArgumentParser(description="Check for Starlink user terminal software update")
    parser.add_argument(
        "-i",
        "--install",
        action="store_true",
        help="Initiate dish reboot to perform install if there is an update pending")
    parser.add_argument("-g",
                        "--target",
                        help="host:port of dish to query, default is the standard IP address "
                        "and port (192.168.100.1:9200)")
    parser.add_argument("-v",
                        "--verbose",
                        action="count",
                        default=0,
                        help="Increase verbosity, may be used multiple times")
    loop_util.add_args(parser)
    opts = parser.parse_args()

    loop_util.check_args(opts, parser)

    return opts


def main():
    opts = parse_args()

    logging.basicConfig(format="%(levelname)s: %(message)s")

    context = starlink_grpc.ChannelContext(target=opts.target)

    try:
        rc = loop_util.run_loop(opts, loop_body, opts, context)
    finally:
        context.close()

    sys.exit(rc)


if __name__ == "__main__":
    main()
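For embedding the same check elsewhere (for example, in fleet monitoring), the decision logic above collapses to a few lines. A sketch reusing only the calls that appear in this script, with the redundant-flag fallbacks trimmed:

```python
# Sketch: one-shot variant of the update check above. REBOOT_REQUIRED mirrors
# the SoftwareUpdateState enum value (6) used by the script.
import starlink_grpc

REBOOT_REQUIRED = 6


def update_pending(target=None):
    context = starlink_grpc.ChannelContext(target=target)
    try:
        status = starlink_grpc.get_status(context)
    finally:
        context.close()
    alert = getattr(getattr(status, "alerts", None), "install_pending", None)
    state = getattr(status, "software_update_state", None)
    return bool(alert) or state == REBOOT_REQUIRED


if __name__ == "__main__":
    print("update pending:", update_pending())
```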
@@ -0,0 +1,445 @@
"""Shared code among the dish_grpc_* commands

Note:

    This module is not intended to be generically useful or to export a stable
    interface. Rather, it should be considered an implementation detail of the
    other scripts, and will change as needed.

    For a module that exports an interface intended for general use, see
    starlink_grpc.
"""

import argparse
from datetime import datetime
from datetime import timezone
import logging
import re
import time
from typing import List

import grpc

import starlink_grpc

BRACKETS_RE = re.compile(r"([^[]*)(\[((\d+),|)(\d*)\]|)$")
LOOP_TIME_DEFAULT = 0
STATUS_MODES: List[str] = ["status", "obstruction_detail", "alert_detail", "location"]
HISTORY_STATS_MODES: List[str] = [
    "ping_drop", "ping_run_length", "ping_latency", "ping_loaded_latency", "usage", "power"
]
UNGROUPED_MODES: List[str] = []


def create_arg_parser(output_description, bulk_history=True):
    """Create an argparse parser and add the common command line options."""
    parser = argparse.ArgumentParser(
        description="Collect status and/or history data from a Starlink user terminal and " +
        output_description,
        epilog="Additional arguments can be read from a file by including @FILENAME as an "
        "option, where FILENAME is a path to a file that contains arguments, one per line.",
        fromfile_prefix_chars="@",
        add_help=False)

    # need to remember this for later
    parser.bulk_history = bulk_history

    group = parser.add_argument_group(title="General options")
    group.add_argument("-g",
                       "--target",
                       help="host:port of dish to query, default is the standard IP address "
                       "and port (192.168.100.1:9200)")
    group.add_argument("-h", "--help", action="help", help="Be helpful")
    group.add_argument("-N",
                       "--numeric",
                       action="store_true",
                       help="Record boolean values as 1 and 0 instead of True and False")
    group.add_argument("-t",
                       "--loop-interval",
                       type=float,
                       default=float(LOOP_TIME_DEFAULT),
                       help="Loop interval in seconds or 0 for no loop, default: " +
                       str(LOOP_TIME_DEFAULT))
    group.add_argument("-v", "--verbose", action="store_true", help="Be verbose")

    group = parser.add_argument_group(title="History mode options")
    group.add_argument("-a",
                       "--all-samples",
                       action="store_const",
                       const=-1,
                       dest="samples",
                       help="Parse all valid samples")
    group.add_argument("-o",
                       "--poll-loops",
                       type=int,
                       help="Poll history for N loops and aggregate data before computing history "
                       "stats; this allows for a smaller loop interval with less loss of data "
                       "when the dish reboots",
                       metavar="N")
    if bulk_history:
        sample_help = ("Number of data samples to parse; normally applies to first loop "
                       "iteration only, default: all in bulk mode, loop interval if loop "
                       "interval set, else all available samples")
        no_counter_help = ("Don't track sample counter across loop iterations in non-bulk "
                           "modes; keep using samples option value instead")
    else:
        sample_help = ("Number of data samples to parse; normally applies to first loop "
                       "iteration only, default: loop interval, if set, else all available " +
                       "samples")
        no_counter_help = ("Don't track sample counter across loop iterations; keep using "
                           "samples option value instead")
    group.add_argument("-s", "--samples", type=int, help=sample_help)
    group.add_argument("-j", "--no-counter", action="store_true", help=no_counter_help)

    return parser


def run_arg_parser(parser, need_id=False, no_stdout_errors=False, modes=None):
    """Run parse_args on a parser previously created with create_arg_parser

    Args:
        need_id (bool): A flag to set in options to indicate whether or not to
            set dish_id on the global state object; see get_data for more
            detail.
        no_stdout_errors (bool): A flag set in options to protect stdout from
            error messages, in case that's where the data output is going,
            since it may be being redirected to a file.
        modes (list[str]): Optionally provide the subset of data group modes
            to allow.

    Returns:
        An argparse Namespace object with the parsed options set as attributes.
    """
    if modes is None:
        modes = STATUS_MODES + HISTORY_STATS_MODES + UNGROUPED_MODES
        if parser.bulk_history:
            modes.append("bulk_history")
    parser.add_argument("mode",
                        nargs="+",
                        choices=modes,
                        help="The data group to record, one or more of: " + ", ".join(modes),
                        metavar="mode")

    opts = parser.parse_args()

    if opts.loop_interval <= 0.0 or opts.poll_loops is None:
        opts.poll_loops = 1
    elif opts.poll_loops < 2:
        parser.error("Poll loops arg must be 2 or greater to be meaningful")

    # for convenience, set flags for whether any mode in a group is selected
    status_set = set(STATUS_MODES)
    opts.status_mode = bool(status_set.intersection(opts.mode))
    status_set.remove("location")
    # special group for any status mode other than location
    opts.pure_status_mode = bool(status_set.intersection(opts.mode))
    opts.history_stats_mode = bool(set(HISTORY_STATS_MODES).intersection(opts.mode))
    opts.bulk_mode = "bulk_history" in opts.mode

    if opts.samples is None:
        opts.samples = int(opts.loop_interval) if opts.loop_interval >= 1.0 else -1
        opts.bulk_samples = -1
    else:
        # for scripts that query starting history counter, skip it if samples
        # was explicitly set
        opts.skip_query = True
        opts.bulk_samples = opts.samples

    opts.no_stdout_errors = no_stdout_errors
    opts.need_id = need_id

    return opts


def conn_error(opts, msg, *args):
    """Indicate an error in an appropriate way."""
    # Connection errors that happen in an interval loop are not critical
    # failures, but are interesting enough to print in non-verbose mode.
    if opts.loop_interval > 0.0 and not opts.no_stdout_errors:
        print(msg % args)
    else:
        logging.error(msg, *args)


class GlobalState:
    """A class for keeping state across loop iterations."""
    def __init__(self, target=None):
        # counter, timestamp for bulk_history:
        self.counter = None
        self.timestamp = None
        # counter, timestamp for history stats:
        self.counter_stats = None
        self.timestamp_stats = None
        self.dish_id = None
        self.context = starlink_grpc.ChannelContext(target=target)
        self.poll_count = 0
        self.accum_history = None
        self.first_poll = True
        self.warn_once_location = True

    def shutdown(self):
        self.context.close()


def get_data(opts, gstate, add_item, add_sequence, add_bulk=None, flush_history=False):
    """Fetch data from the dish, pull it apart and call back with the pieces.

    This function uses call backs to return the useful data. If need_id is set
    in opts, then it is guaranteed that dish_id will have been set in gstate
    prior to any of the call backs being invoked.

    Args:
        opts (object): The options object returned from run_arg_parser.
        gstate (GlobalState): An object for keeping track of state across
            multiple calls.
        add_item (function): Call back for non-sequence data, with prototype:

            add_item(name, value, category)
        add_sequence (function): Call back for sequence data, with prototype:

            add_sequence(name, value, category, start_index_label)
        add_bulk (function): Optional. Call back for bulk history data, with
            prototype:

            add_bulk(bulk_data, count, start_timestamp, start_counter)
        flush_history (bool): Optional. If true, run in a special mode that
            emits (only) history stats for already polled data, if any,
            regardless of --poll-loops state. Intended for script shutdown
            operation, in order to flush stats for polled history data which
            would otherwise be lost on script restart.

    Returns:
        Tuple with 3 values. The first value is 1 if there were any failures
        getting data from the dish, otherwise 0. The second value is an int
        timestamp for status data (data with category "status"), or None if
        no status data was reported. The third value is an int timestamp for
        history stats data (non-bulk data with category other than "status"),
        or None if no history stats data was reported.
    """
    if flush_history and opts.poll_loops < 2:
        return 0, None, None

    rc = 0
    status_ts = None
    hist_ts = None

    if not flush_history:
        rc, status_ts = get_status_data(opts, gstate, add_item, add_sequence)

    if opts.history_stats_mode and (not rc or opts.poll_loops > 1):
        hist_rc, hist_ts = get_history_stats(opts, gstate, add_item, add_sequence, flush_history)
        if not rc:
            rc = hist_rc

    if not flush_history and opts.bulk_mode and add_bulk and not rc:
        rc = get_bulk_data(opts, gstate, add_bulk)

    return rc, status_ts, hist_ts


def add_data_normal(data, category, add_item, add_sequence):
    for key, val in data.items():
        name, start, seq = BRACKETS_RE.match(key).group(1, 4, 5)
        if seq is None:
            add_item(name, val, category)
        else:
            add_sequence(name, val, category, int(start) if start else 0)


def add_data_numeric(data, category, add_item, add_sequence):
    for key, val in data.items():
        name, start, seq = BRACKETS_RE.match(key).group(1, 4, 5)
        if seq is None:
            add_item(name, int(val) if isinstance(val, int) else val, category)
        else:
            add_sequence(name,
                         [int(subval) if isinstance(subval, int) else subval for subval in val],
                         category,
                         int(start) if start else 0)


def get_status_data(opts, gstate, add_item, add_sequence):
    if opts.status_mode:
        timestamp = int(time.time())
        add_data = add_data_numeric if opts.numeric else add_data_normal
        if opts.pure_status_mode or opts.need_id and gstate.dish_id is None:
            try:
                groups = starlink_grpc.status_data(context=gstate.context)
                status_data, obstruct_detail, alert_detail = groups[0:3]
            except starlink_grpc.GrpcError as e:
                if "status" in opts.mode:
                    if opts.need_id and gstate.dish_id is None:
                        conn_error(opts, "Dish unreachable and ID unknown, so not recording state")
                        return 1, None
                    if opts.verbose:
                        print("Dish unreachable")
                    add_item("state", "DISH_UNREACHABLE", "status")
                    return 0, timestamp
                conn_error(opts, "Failure getting status: %s", str(e))
                return 1, None
            if opts.need_id:
                gstate.dish_id = status_data["id"]
                del status_data["id"]
            if "status" in opts.mode:
                add_data(status_data, "status", add_item, add_sequence)
            if "obstruction_detail" in opts.mode:
                add_data(obstruct_detail, "status", add_item, add_sequence)
            if "alert_detail" in opts.mode:
                add_data(alert_detail, "status", add_item, add_sequence)
        if "location" in opts.mode:
            try:
                location = starlink_grpc.location_data(context=gstate.context)
            except starlink_grpc.GrpcError as e:
                conn_error(opts, "Failure getting location: %s", str(e))
                return 1, None
            if location["latitude"] is None and gstate.warn_once_location:
                logging.warning("Location data not enabled. See README for more details.")
                gstate.warn_once_location = False
            add_data(location, "status", add_item, add_sequence)
        return 0, timestamp
    elif opts.need_id and gstate.dish_id is None:
        try:
            gstate.dish_id = starlink_grpc.get_id(context=gstate.context)
        except starlink_grpc.GrpcError as e:
            conn_error(opts, "Failure getting dish ID: %s", str(e))
            return 1, None
        if opts.verbose:
            print("Using dish ID: " + gstate.dish_id)

    return 0, None


def get_history_stats(opts, gstate, add_item, add_sequence, flush_history):
    """Fetch history stats. See `get_data` for details."""
    if flush_history or (opts.need_id and gstate.dish_id is None):
        history = None
    else:
        try:
            timestamp = int(time.time())
            history = starlink_grpc.get_history(context=gstate.context)
            gstate.timestamp_stats = timestamp
        except (AttributeError, ValueError, grpc.RpcError) as e:
            conn_error(opts, "Failure getting history: %s", str(starlink_grpc.GrpcError(e)))
            history = None

    parse_samples = opts.samples if gstate.counter_stats is None else -1
    start = gstate.counter_stats if gstate.counter_stats else None

    # Accumulate polled history data into gstate.accum_history, even if there
    # was a dish reboot.
    if gstate.accum_history:
        if history is not None:
            gstate.accum_history = starlink_grpc.concatenate_history(gstate.accum_history,
                                                                     history,
                                                                     samples1=parse_samples,
                                                                     start1=start,
                                                                     verbose=opts.verbose)
            # Counter tracking gets too complicated to handle across reboots
            # once the data has been accumulated, so just have concatenate
            # handle it on the first polled loop and use a value of 0 to
            # remember it was done (as opposed to None, which is used for a
            # different purpose).
            if not opts.no_counter:
                gstate.counter_stats = 0
    else:
        gstate.accum_history = history

    # When resuming from prior count with --poll-loops set, advance the loop
    # count by however many loops worth of data was caught up on. This helps
    # avoid abnormally large sample counts in the first set of output data.
    if gstate.first_poll and gstate.accum_history:
        if opts.poll_loops > 1 and gstate.counter_stats:
            new_samples = gstate.accum_history.current - gstate.counter_stats
            if new_samples < 0:
                new_samples = gstate.accum_history.current
            if new_samples > len(gstate.accum_history.pop_ping_drop_rate):
                new_samples = len(gstate.accum_history.pop_ping_drop_rate)
            gstate.poll_count = max(gstate.poll_count, int((new_samples-1) / opts.loop_interval))
        gstate.first_poll = False

    if gstate.poll_count < opts.poll_loops - 1 and not flush_history:
        gstate.poll_count += 1
        return 0, None

    gstate.poll_count = 0

    if gstate.accum_history is None:
        return (0, None) if flush_history else (1, None)

    groups = starlink_grpc.history_stats(parse_samples,
                                         start=start,
                                         verbose=opts.verbose,
                                         history=gstate.accum_history)
    general, ping, runlen, latency, loaded, usage, power = groups[0:7]
    add_data = add_data_numeric if opts.numeric else add_data_normal
    add_data(general, "ping_stats", add_item, add_sequence)
    if "ping_drop" in opts.mode:
        add_data(ping, "ping_stats", add_item, add_sequence)
    if "ping_run_length" in opts.mode:
        add_data(runlen, "ping_stats", add_item, add_sequence)
    if "ping_latency" in opts.mode:
        add_data(latency, "ping_stats", add_item, add_sequence)
    if "ping_loaded_latency" in opts.mode:
        add_data(loaded, "ping_stats", add_item, add_sequence)
    if "usage" in opts.mode:
        add_data(usage, "usage", add_item, add_sequence)
    if "power" in opts.mode:
        add_data(power, "power", add_item, add_sequence)
    if not opts.no_counter:
        gstate.counter_stats = general["end_counter"]

    timestamp = gstate.timestamp_stats
    gstate.timestamp_stats = None
    gstate.accum_history = None

    return 0, timestamp


def get_bulk_data(opts, gstate, add_bulk):
    """Fetch bulk data. See `get_data` for details."""
    before = time.time()

    start = gstate.counter
    parse_samples = opts.bulk_samples if start is None else -1
    try:
        general, bulk = starlink_grpc.history_bulk_data(parse_samples,
                                                        start=start,
                                                        verbose=opts.verbose,
                                                        context=gstate.context)
    except starlink_grpc.GrpcError as e:
        conn_error(opts, "Failure getting history: %s", str(e))
        return 1

    after = time.time()
    parsed_samples = general["samples"]
    new_counter = general["end_counter"]
    timestamp = gstate.timestamp
    # check this first, so it doesn't report as lost time sync
    if gstate.counter is not None and new_counter != gstate.counter + parsed_samples:
        timestamp = None
    # Allow up to 2 seconds of time drift before forcibly re-syncing, since
    # +/- 1 second can happen just due to scheduler timing.
    if timestamp is not None and not before - 2.0 <= timestamp + parsed_samples <= after + 2.0:
        if opts.verbose:
            print("Lost sample time sync at: " +
                  str(datetime.fromtimestamp(timestamp + parsed_samples, tz=timezone.utc)))
        timestamp = None
    if timestamp is None:
        timestamp = int(before)
        if opts.verbose:
            print("Establishing new time base: {0} -> {1}".format(
                new_counter, datetime.fromtimestamp(timestamp, tz=timezone.utc)))
        timestamp -= parsed_samples

    if opts.numeric:
        add_bulk(
            {
                k: [int(subv) if isinstance(subv, int) else subv for subv in v]
                for k, v in bulk.items()
            }, parsed_samples, timestamp, new_counter - parsed_samples)
    else:
        add_bulk(bulk, parsed_samples, timestamp, new_counter - parsed_samples)

    gstate.counter = new_counter
    gstate.timestamp = timestamp + parsed_samples
    return 0
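One non-obvious piece of `dish_common` above is `BRACKETS_RE`: field names produced by `starlink_grpc` may carry a bracket suffix that marks sequence data and an optional start index, and groups 1, 4, and 5 split those apart for `add_data_normal` / `add_data_numeric`. A quick demonstration (the sample keys are illustrative):

```python
# Sketch: how BRACKETS_RE splits plain vs. sequence field names.
import re

BRACKETS_RE = re.compile(r"([^[]*)(\[((\d+),|)(\d*)\]|)$")

for key in ("snr", "pop_ping_drop_rate[5]", "ping_latency[12,]"):
    name, start, seq = BRACKETS_RE.match(key).group(1, 4, 5)
    print(key, "->", (name, start, seq))

# snr                   -> ('snr', None, None)                 seq is None: plain item
# pop_ping_drop_rate[5] -> ('pop_ping_drop_rate', None, '5')   sequence, start defaults to 0
# ping_latency[12,]     -> ('ping_latency', '12', '')          sequence, start index 12
```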
135  backup-from-device/gnss-guard/tm-gnss-guard/starlink-grpc-tools/dish_control.py  (Executable file)
@@ -0,0 +1,135 @@
#!/usr/bin/env python3
"""Manipulate operating state of a Starlink user terminal."""

import argparse
import logging
import sys

import grpc
from yagrc import reflector as yagrc_reflector

import loop_util


def parse_args():
    parser = argparse.ArgumentParser(description="Starlink user terminal state control")
    parser.add_argument("-e",
                        "--target",
                        default="192.168.100.1:9200",
                        help="host:port of dish to query, default is the standard IP address "
                        "and port (192.168.100.1:9200)")
    subs = parser.add_subparsers(dest="command", required=True)
    subs.add_parser("reboot", help="Reboot the user terminal")
    subs.add_parser("stow", help="Set user terminal to stow position")
    subs.add_parser("unstow", help="Restore user terminal from stow position")
    sleep_parser = subs.add_parser(
        "set_sleep",
        help="Show, set, or disable power save configuration",
        description="Run without arguments to show current configuration")
    sleep_parser.add_argument("start",
                              nargs="?",
                              type=int,
                              help="Start time in minutes past midnight UTC")
    sleep_parser.add_argument("duration",
                              nargs="?",
                              type=int,
                              help="Duration in minutes, or 0 to disable")
    gps_parser = subs.add_parser(
        "set_gps",
        help="Enable, disable, or show usage of GPS for position data",
        description="Run without arguments to show current configuration")
    gps_parser.add_argument("--enable",
                            action=argparse.BooleanOptionalAction,
                            help="Enable/disable use of GPS for position data")
    loop_util.add_args(parser)

    opts = parser.parse_args()

    if opts.command == "set_sleep" and opts.start is not None:
        if opts.duration is None:
            sleep_parser.error("Must specify duration if start time is specified")
        if opts.start < 0 or opts.start >= 1440:
            sleep_parser.error("Invalid start time, must be >= 0 and < 1440")
        if opts.duration < 0 or opts.duration > 1440:
            sleep_parser.error("Invalid duration, must be >= 0 and <= 1440")
    loop_util.check_args(opts, parser)

    return opts


def loop_body(opts):
    reflector = yagrc_reflector.GrpcReflectionClient()
    try:
        with grpc.insecure_channel(opts.target) as channel:
            reflector.load_protocols(channel, symbols=["SpaceX.API.Device.Device"])
            stub = reflector.service_stub_class("SpaceX.API.Device.Device")(channel)
            request_class = reflector.message_class("SpaceX.API.Device.Request")
            if opts.command == "reboot":
                request = request_class(reboot={})
            elif opts.command == "stow":
                request = request_class(dish_stow={})
            elif opts.command == "unstow":
                request = request_class(dish_stow={"unstow": True})
            elif opts.command == "set_sleep":
                if opts.start is None and opts.duration is None:
                    request = request_class(dish_get_config={})
                else:
                    if opts.duration:
                        request = request_class(
                            dish_power_save={
                                "power_save_start_minutes": opts.start,
                                "power_save_duration_minutes": opts.duration,
                                "enable_power_save": True
                            })
                    else:
                        # duration of 0 not allowed, even when disabled
                        request = request_class(dish_power_save={
                            "power_save_duration_minutes": 1,
                            "enable_power_save": False
                        })
            elif opts.command == "set_gps":
                if opts.enable is None:
                    request = request_class(get_status={})
                else:
                    request = request_class(dish_inhibit_gps={"inhibit_gps": not opts.enable})

            response = stub.Handle(request, timeout=10)

            if opts.command == "set_sleep" and opts.start is None and opts.duration is None:
                config = response.dish_get_config.dish_config
                if config.power_save_mode:
                    print("Sleep start:", config.power_save_start_minutes,
                          "minutes past midnight UTC")
                    print("Sleep duration:", config.power_save_duration_minutes, "minutes")
                else:
                    print("Sleep disabled")
            elif opts.command == "set_gps" and opts.enable is None:
                status = response.dish_get_status
                if status.gps_stats.inhibit_gps:
                    print("GPS disabled")
                else:
                    print("GPS enabled")
    except (AttributeError, ValueError, grpc.RpcError) as e:
        if isinstance(e, grpc.Call):
            msg = e.details()
        elif isinstance(e, (AttributeError, ValueError)):
            msg = "Protocol error"
        else:
            msg = "Unknown communication or service error"
        logging.error(msg)
        return 1

    return 0


def main():
    opts = parse_args()

    logging.basicConfig(format="%(levelname)s: %(message)s")

    rc = loop_util.run_loop(opts, loop_body, opts)
    sys.exit(rc)


if __name__ == "__main__":
    main()
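The reflection pattern in `dish_control.py` above works just as well for read-only requests. A minimal sketch using the same yagrc calls to fetch status without any generated protobuf code (the `software_version` field path matches the update-check script earlier in this diff):

```python
# Sketch: same yagrc reflection pattern as dish_control.py, issuing a
# read-only get_status request instead of a control command.
import grpc
from yagrc import reflector as yagrc_reflector

reflector = yagrc_reflector.GrpcReflectionClient()
with grpc.insecure_channel("192.168.100.1:9200") as channel:
    reflector.load_protocols(channel, symbols=["SpaceX.API.Device.Device"])
    stub = reflector.service_stub_class("SpaceX.API.Device.Device")(channel)
    request_class = reflector.message_class("SpaceX.API.Device.Request")
    response = stub.Handle(request_class(get_status={}), timeout=10)
    print("software version:", response.dish_get_status.device_info.software_version)
```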
@@ -0,0 +1,339 @@
#!/usr/bin/env python3
"""Write Starlink user terminal data to an InfluxDB 1.x database.

This script pulls the current status info and/or metrics computed from the
history data and writes them to the specified InfluxDB database either once
or in a periodic loop.

Data will be written into the requested database with the following
measurement / series names:

: spacex.starlink.user_terminal.status : Current status data
: spacex.starlink.user_terminal.history : Bulk history data
: spacex.starlink.user_terminal.ping_stats : Ping history statistics
: spacex.starlink.user_terminal.usage : Usage history statistics
: spacex.starlink.user_terminal.power : Power history statistics

NOTE: The Starlink user terminal does not include time values with its
history or status data, so this script uses current system time to compute
the timestamps it sends to InfluxDB. It is recommended to run this script on
a host that has its system clock synced via NTP. Otherwise, the timestamps
may get out of sync with real time.
"""

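# Example invocation (a sketch; assuming this file is saved as
# dish_grpc_influx.py -- option and mode names come from the argument parser
# below and from dish_common.create_arg_parser earlier in this diff):
#
#   python3 dish_grpc_influx.py -n influxdb.example.com -D starlinkstats \
#       -t 30 status ping_drop
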
from datetime import datetime
|
||||
from datetime import timezone
|
||||
import logging
|
||||
import os
|
||||
import signal
|
||||
import sys
|
||||
import time
|
||||
import warnings
|
||||
|
||||
from influxdb import InfluxDBClient
|
||||
|
||||
import dish_common
|
||||
|
||||
HOST_DEFAULT = "localhost"
|
||||
DATABASE_DEFAULT = "starlinkstats"
|
||||
BULK_MEASUREMENT = "spacex.starlink.user_terminal.history"
|
||||
FLUSH_LIMIT = 6
|
||||
MAX_BATCH = 5000
|
||||
MAX_QUEUE_LENGTH = 864000
|
||||
|
||||
|
||||
class Terminated(Exception):
|
||||
pass
|
||||
|
||||
|
||||
def handle_sigterm(signum, frame):
|
||||
# Turn SIGTERM into an exception so main loop can clean up
|
||||
raise Terminated
|
||||
|
||||
|
||||
def parse_args():
|
||||
parser = dish_common.create_arg_parser(
|
||||
output_description="write it to an InfluxDB 1.x database")
|
||||
|
||||
group = parser.add_argument_group(title="InfluxDB 1.x database options")
|
||||
group.add_argument("-n",
|
||||
"--hostname",
|
||||
default=HOST_DEFAULT,
|
||||
dest="host",
|
||||
help="Hostname of InfluxDB server, default: " + HOST_DEFAULT)
|
||||
group.add_argument("-p", "--port", type=int, help="Port number to use on InfluxDB server")
|
||||
group.add_argument("-P", "--password", help="Set password for username/password authentication")
|
||||
group.add_argument("-U", "--username", help="Set username for authentication")
|
||||
group.add_argument("-D",
|
||||
"--database",
|
||||
default=DATABASE_DEFAULT,
|
||||
help="Database name to use, default: " + DATABASE_DEFAULT)
|
||||
group.add_argument("-R", "--retention-policy", help="Retention policy name to use")
|
||||
group.add_argument("-k",
|
||||
"--skip-query",
|
||||
action="store_true",
|
||||
help="Skip querying for prior sample write point in bulk mode")
|
||||
group.add_argument("-C",
|
||||
"--ca-cert",
|
||||
dest="verify_ssl",
|
||||
help="Enable SSL/TLS using specified CA cert to verify server",
|
||||
metavar="FILENAME")
|
||||
group.add_argument("-I",
|
||||
"--insecure",
|
||||
action="store_false",
|
||||
dest="verify_ssl",
|
||||
help="Enable SSL/TLS but disable certificate verification (INSECURE!)")
|
||||
group.add_argument("-S",
|
||||
"--secure",
|
||||
action="store_true",
|
||||
dest="verify_ssl",
|
||||
help="Enable SSL/TLS using default CA cert")
|
||||
|
||||
env_map = (
|
||||
("INFLUXDB_HOST", "host"),
|
||||
("INFLUXDB_PORT", "port"),
|
||||
("INFLUXDB_USER", "username"),
|
||||
("INFLUXDB_PWD", "password"),
|
||||
("INFLUXDB_DB", "database"),
|
||||
("INFLUXDB_RP", "retention-policy"),
|
||||
("INFLUXDB_SSL", "verify_ssl"),
|
||||
)
|
||||
env_defaults = {}
|
||||
for var, opt in env_map:
|
||||
# check both set and not empty string
|
||||
val = os.environ.get(var)
|
||||
if val:
|
||||
if var == "INFLUXDB_SSL" and val == "secure":
|
||||
env_defaults[opt] = True
|
||||
elif var == "INFLUXDB_SSL" and val == "insecure":
|
||||
env_defaults[opt] = False
|
||||
else:
|
||||
env_defaults[opt] = val
|
||||
parser.set_defaults(**env_defaults)
|
||||
|
||||
opts = dish_common.run_arg_parser(parser, need_id=True)
|
||||
|
||||
if opts.username is None and opts.password is not None:
|
||||
parser.error("Password authentication requires username to be set")
|
||||
|
||||
opts.icargs = {"timeout": 5}
|
||||
for key in ["port", "host", "password", "username", "database", "verify_ssl"]:
|
||||
val = getattr(opts, key)
|
||||
if val is not None:
|
||||
opts.icargs[key] = val
|
||||
|
||||
if opts.verify_ssl is not None:
|
||||
opts.icargs["ssl"] = True
|
||||
|
||||
return opts
|
||||
|
||||
|
||||
def flush_points(opts, gstate):
    try:
        while len(gstate.points) > MAX_BATCH:
            gstate.influx_client.write_points(gstate.points[:MAX_BATCH],
                                              time_precision="s",
                                              retention_policy=opts.retention_policy)
            if opts.verbose:
                print("Data points written: " + str(MAX_BATCH))
            del gstate.points[:MAX_BATCH]
        if gstate.points:
            gstate.influx_client.write_points(gstate.points,
                                              time_precision="s",
                                              retention_policy=opts.retention_policy)
            if opts.verbose:
                print("Data points written: " + str(len(gstate.points)))
            gstate.points.clear()
    except Exception as e:
        dish_common.conn_error(opts, "Failed writing to InfluxDB database: %s", str(e))
        # If failures persist, don't just use infinite memory. Max queue
        # is currently 10 days of bulk data, so something is very wrong
        # if it's ever exceeded.
        if len(gstate.points) > MAX_QUEUE_LENGTH:
            logging.error("Max write queue exceeded, discarding data.")
            del gstate.points[:-MAX_QUEUE_LENGTH]
        return 1

    return 0

def query_counter(gstate, start, end):
    try:
        # fetch the latest point where counter field was recorded
        result = gstate.influx_client.query("SELECT counter FROM \"{0}\" "
                                            "WHERE time>={1}s AND time<{2}s AND id=$id "
                                            "ORDER BY time DESC LIMIT 1;".format(
                                                BULK_MEASUREMENT, start, end),
                                            bind_params={"id": gstate.dish_id},
                                            epoch="s")
        points = list(result.get_points())
        if points:
            counter = points[0].get("counter", None)
            timestamp = points[0].get("time", 0)
            if counter and timestamp:
                return int(counter), int(timestamp)
    except TypeError as e:
        # bind_params was added in influxdb-python v5.2.3. That would be easy
        # enough to work around, but older versions had other problems with
        # query(), so just skip this functionality.
        logging.error(
            "Failed running query, probably due to influxdb-python version too old. "
            "Skipping resumption from prior counter value. Reported error was: %s", str(e))

    return None, 0

def sync_timebase(opts, gstate):
    try:
        db_counter, db_timestamp = query_counter(gstate, gstate.start_timestamp, gstate.timestamp)
    except Exception as e:
        # could be temporary outage, so try again next time
        dish_common.conn_error(opts, "Failed querying InfluxDB for prior count: %s", str(e))
        return
    gstate.timebase_synced = True

    if db_counter and gstate.start_counter <= db_counter:
        del gstate.deferred_points[:db_counter - gstate.start_counter]
        if gstate.deferred_points:
            delta_timestamp = db_timestamp - (gstate.deferred_points[0]["time"] - 1)
            # to prevent +/- 1 second timestamp drift when the script restarts,
            # if time base is within 2 seconds of that of the last sample in
            # the database, correct back to that time base
            if delta_timestamp == 0:
                if opts.verbose:
                    print("Exactly synced with database time base")
            elif -2 <= delta_timestamp <= 2:
                if opts.verbose:
                    print("Replacing with existing time base: {0} -> {1}".format(
                        db_counter, datetime.fromtimestamp(db_timestamp, tz=timezone.utc)))
                for point in gstate.deferred_points:
                    db_timestamp += 1
                    if point["time"] + delta_timestamp == db_timestamp:
                        point["time"] = db_timestamp
                    else:
                        # lost time sync when recording data, leave the rest
                        break
                else:
                    gstate.timestamp = db_timestamp
            else:
                if opts.verbose:
                    print("Database time base out of sync by {0} seconds".format(delta_timestamp))

    gstate.points.extend(gstate.deferred_points)
    gstate.deferred_points.clear()

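# Worked example added for illustration (numbers hypothetical): if the last
# sample already in the database has counter 1000 and timestamp T, the
# deferred point for counter 1001 should ideally carry timestamp T+1. If this
# run's local clock stamped it T-1 instead, then
# delta_timestamp = T - ((T-1) - 1) = 2, which is within the +/-2 second
# tolerance above, so the deferred points are re-stamped T+1, T+2, ... to
# continue the time base already in the database.
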
def loop_body(opts, gstate, shutdown=False):
    fields = {"status": {}, "ping_stats": {}, "usage": {}, "power": {}}

    def cb_add_item(key, val, category):
        fields[category][key] = val

    def cb_add_sequence(key, val, category, start):
        for i, subval in enumerate(val, start=start):
            fields[category]["{0}_{1}".format(key, i)] = subval

    def cb_add_bulk(bulk, count, timestamp, counter):
        if gstate.start_timestamp is None:
            gstate.start_timestamp = timestamp
            gstate.start_counter = counter
        points = gstate.points if gstate.timebase_synced else gstate.deferred_points
        for i in range(count):
            timestamp += 1
            points.append({
                "measurement": BULK_MEASUREMENT,
                "tags": {
                    "id": gstate.dish_id
                },
                "time": timestamp,
                "fields": {key: val[i] for key, val in bulk.items() if val[i] is not None},
            })
        if points:
            # save off counter value for script restart
            points[-1]["fields"]["counter"] = counter + count

    rc, status_ts, hist_ts = dish_common.get_data(opts,
                                                  gstate,
                                                  cb_add_item,
                                                  cb_add_sequence,
                                                  add_bulk=cb_add_bulk,
                                                  flush_history=shutdown)
    if rc:
        return rc

    for category, cat_fields in fields.items():
        if cat_fields:
            timestamp = status_ts if category == "status" else hist_ts
            gstate.points.append({
                "measurement": "spacex.starlink.user_terminal." + category,
                "tags": {
                    "id": gstate.dish_id
                },
                "time": timestamp,
                "fields": cat_fields,
            })

    # This is here and not before the points being processed because if the
    # query previously failed, there will be points that were processed in
    # a prior loop. This avoids having to handle that as a special case.
    if opts.bulk_mode and not gstate.timebase_synced:
        sync_timebase(opts, gstate)

    if opts.verbose:
        print("Data points queued: " + str(len(gstate.points)))

    if len(gstate.points) >= FLUSH_LIMIT:
        return flush_points(opts, gstate)

    return 0


def main():
    opts = parse_args()

    logging.basicConfig(format="%(levelname)s: %(message)s")

    gstate = dish_common.GlobalState(target=opts.target)
    gstate.points = []
    gstate.deferred_points = []
    gstate.timebase_synced = opts.skip_query
    gstate.start_timestamp = None
    gstate.start_counter = None

    if "verify_ssl" in opts.icargs and not opts.icargs["verify_ssl"]:
        # user has explicitly said be insecure, so don't warn about it
        warnings.filterwarnings("ignore", message="Unverified HTTPS request")

    signal.signal(signal.SIGTERM, handle_sigterm)
    try:
        # attempt to hack around breakage between influxdb-python client and 2.0 server:
        gstate.influx_client = InfluxDBClient(**opts.icargs, headers={"Accept": "application/json"})
    except TypeError:
        # ...unless influxdb-python package version is too old
        gstate.influx_client = InfluxDBClient(**opts.icargs)

    rc = 0
    try:
        next_loop = time.monotonic()
        while True:
            rc = loop_body(opts, gstate)
            if opts.loop_interval > 0.0:
                now = time.monotonic()
                next_loop = max(next_loop + opts.loop_interval, now)
                time.sleep(next_loop - now)
            else:
                break
    except (KeyboardInterrupt, Terminated):
        pass
    finally:
        loop_body(opts, gstate, shutdown=True)
        if gstate.points:
            rc = flush_points(opts, gstate)
        gstate.influx_client.close()
        gstate.shutdown()

    sys.exit(rc)


if __name__ == "__main__":
    main()
@@ -0,0 +1,331 @@
#!/usr/bin/env python3
"""Write Starlink user terminal data to an InfluxDB 2.x database.

This script pulls the current status info and/or metrics computed from the
history data and writes them to the specified InfluxDB 2.x database either once
or in a periodic loop.

Data will be written into the requested database with the following
measurement / series names:

: spacex.starlink.user_terminal.status : Current status data
: spacex.starlink.user_terminal.history : Bulk history data
: spacex.starlink.user_terminal.ping_stats : Ping history statistics
: spacex.starlink.user_terminal.usage : Usage history statistics
: spacex.starlink.user_terminal.power : Power history statistics

NOTE: The Starlink user terminal does not include time values with its
history or status data, so this script uses current system time to compute
the timestamps it sends to InfluxDB. It is recommended to run this script on
a host that has its system clock synced via NTP. Otherwise, the timestamps
may get out of sync with real time.
"""

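# Illustration (not from the original source): data written by this script can
# be read back with a Flux query along these lines, using the default bucket
# name defined below in BUCKET_DEFAULT; the 1-hour range is an arbitrary
# example value:
#
#   from(bucket: "starlinkstats")
#       |> range(start: -1h)
#       |> filter(fn: (r) => r._measurement == "spacex.starlink.user_terminal.status")
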
from datetime import datetime
from datetime import timezone
import logging
import os
import signal
import sys
import time
import warnings

from influxdb_client import InfluxDBClient, WriteOptions, WritePrecision

import dish_common

URL_DEFAULT = "http://localhost:8086"
BUCKET_DEFAULT = "starlinkstats"
BULK_MEASUREMENT = "spacex.starlink.user_terminal.history"
FLUSH_LIMIT = 6
MAX_BATCH = 5000
MAX_QUEUE_LENGTH = 864000


class Terminated(Exception):
    pass


def handle_sigterm(signum, frame):
    # Turn SIGTERM into an exception so main loop can clean up
    raise Terminated


def parse_args():
    parser = dish_common.create_arg_parser(
        output_description="write it to an InfluxDB 2.x database")

    group = parser.add_argument_group(title="InfluxDB 2.x database options")
    group.add_argument("-u",
                       "--url",
                       default=URL_DEFAULT,
                       dest="url",
                       help="URL of the InfluxDB 2.x server, default: " + URL_DEFAULT)
    group.add_argument("-T", "--token", help="Token to access the bucket")
    group.add_argument("-B",
                       "--bucket",
                       default=BUCKET_DEFAULT,
                       help="Bucket name to use, default: " + BUCKET_DEFAULT)
    group.add_argument("-O", "--org", help="Organisation name")
    group.add_argument("-k",
                       "--skip-query",
                       action="store_true",
                       help="Skip querying for prior sample write point in bulk mode")
    group.add_argument("-C",
                       "--ca-cert",
                       dest="ssl_ca_cert",
                       help="Use specified CA cert to verify HTTPS server",
                       metavar="FILENAME")
    group.add_argument("-I",
                       "--insecure",
                       action="store_false",
                       dest="verify_ssl",
                       help="Disable certificate verification of HTTPS server (INSECURE!)")

    env_map = (
        ("INFLUXDB_URL", "url"),
        ("INFLUXDB_TOKEN", "token"),
        ("INFLUXDB_BUCKET", "bucket"),
        ("INFLUXDB_ORG", "org"),
        ("INFLUXDB_SSL", "verify_ssl"),
    )
    env_defaults = {}
    for var, opt in env_map:
        # check both set and not empty string
        val = os.environ.get(var)
        if val:
            if var == "INFLUXDB_SSL":
                if val == "insecure":
                    env_defaults[opt] = False
                elif val == "secure":
                    env_defaults[opt] = True
                else:
                    env_defaults["ssl_ca_cert"] = val
            else:
                env_defaults[opt] = val
    parser.set_defaults(**env_defaults)

    opts = dish_common.run_arg_parser(parser, need_id=True)

    opts.icargs = {}
    for key in ["url", "token", "bucket", "org", "verify_ssl", "ssl_ca_cert"]:
        val = getattr(opts, key)
        if val is not None:
            opts.icargs[key] = val

    if (not opts.verify_ssl
            or opts.ssl_ca_cert is not None) and not opts.url.lower().startswith("https:"):
        parser.error("SSL options only apply to HTTPS URLs")

    return opts

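# Illustrative invocation (not from the original source): connecting to an
# InfluxDB 2.x server with token authentication, using the options defined in
# parse_args() above. The URL, token, and org values are hypothetical, and the
# script file name and "status" mode are assumptions (modes come from
# dish_common, which is not part of this diff):
#
#   python3 dish_grpc_influx2.py -u https://influx.example.com:8086 \
#       -T my-api-token -B starlinkstats -O my-org status
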
def flush_points(opts, gstate):
    try:
        write_api = gstate.influx_client.write_api(
            write_options=WriteOptions(batch_size=len(gstate.points),
                                       flush_interval=10_000,
                                       jitter_interval=2_000,
                                       retry_interval=5_000,
                                       max_retries=5,
                                       max_retry_delay=30_000,
                                       exponential_base=2))
        while len(gstate.points) > MAX_BATCH:
            write_api.write(record=gstate.points[:MAX_BATCH],
                            write_precision=WritePrecision.S,
                            bucket=opts.bucket)
            if opts.verbose:
                print("Data points written: " + str(MAX_BATCH))
            del gstate.points[:MAX_BATCH]

        if gstate.points:
            write_api.write(record=gstate.points,
                            write_precision=WritePrecision.S,
                            bucket=opts.bucket)
            if opts.verbose:
                print("Data points written: " + str(len(gstate.points)))
            gstate.points.clear()
        write_api.flush()
        write_api.close()
    except Exception as e:
        dish_common.conn_error(opts, "Failed writing to InfluxDB database: %s", str(e))
        # If failures persist, don't just use infinite memory. Max queue
        # is currently 10 days of bulk data, so something is very wrong
        # if it's ever exceeded.
        if len(gstate.points) > MAX_QUEUE_LENGTH:
            logging.error("Max write queue exceeded, discarding data.")
            del gstate.points[:-MAX_QUEUE_LENGTH]
        return 1

    return 0

def query_counter(opts, gstate, start, end):
    query_api = gstate.influx_client.query_api()
    result = query_api.query('''
        from(bucket: "{0}")
        |> range(start: {1}, stop: {2})
        |> filter(fn: (r) => r["_measurement"] == "{3}")
        |> filter(fn: (r) => r["_field"] == "counter")
        |> last()
        |> yield(name: "last")
        '''.format(opts.bucket, str(start), str(end), BULK_MEASUREMENT))
    if result:
        counter = result[0].records[0]["_value"]
        timestamp = result[0].records[0]["_time"].timestamp()
        if counter and timestamp:
            return int(counter), int(timestamp)

    return None, 0


def sync_timebase(opts, gstate):
    try:
        db_counter, db_timestamp = query_counter(opts, gstate, gstate.start_timestamp,
                                                 gstate.timestamp)
    except Exception as e:
        # could be temporary outage, so try again next time
        dish_common.conn_error(opts, "Failed querying InfluxDB for prior count: %s", str(e))
        return
    gstate.timebase_synced = True

    if db_counter and gstate.start_counter <= db_counter:
        del gstate.deferred_points[:db_counter - gstate.start_counter]
        if gstate.deferred_points:
            delta_timestamp = db_timestamp - (gstate.deferred_points[0]["time"] - 1)
            # to prevent +/- 1 second timestamp drift when the script restarts,
            # if time base is within 2 seconds of that of the last sample in
            # the database, correct back to that time base
            if delta_timestamp == 0:
                if opts.verbose:
                    print("Exactly synced with database time base")
            elif -2 <= delta_timestamp <= 2:
                if opts.verbose:
                    print("Replacing with existing time base: {0} -> {1}".format(
                        db_counter, datetime.fromtimestamp(db_timestamp, tz=timezone.utc)))
                for point in gstate.deferred_points:
                    db_timestamp += 1
                    if point["time"] + delta_timestamp == db_timestamp:
                        point["time"] = db_timestamp
                    else:
                        # lost time sync when recording data, leave the rest
                        break
                else:
                    gstate.timestamp = db_timestamp
            else:
                if opts.verbose:
                    print("Database time base out of sync by {0} seconds".format(delta_timestamp))

    gstate.points.extend(gstate.deferred_points)
    gstate.deferred_points.clear()


def loop_body(opts, gstate, shutdown=False):
    fields = {"status": {}, "ping_stats": {}, "usage": {}, "power": {}}

    def cb_add_item(key, val, category):
        fields[category][key] = val

    def cb_add_sequence(key, val, category, start):
        for i, subval in enumerate(val, start=start):
            fields[category]["{0}_{1}".format(key, i)] = subval

    def cb_add_bulk(bulk, count, timestamp, counter):
        if gstate.start_timestamp is None:
            gstate.start_timestamp = timestamp
            gstate.start_counter = counter
        points = gstate.points if gstate.timebase_synced else gstate.deferred_points
        for i in range(count):
            timestamp += 1
            points.append({
                "measurement": BULK_MEASUREMENT,
                "tags": {
                    "id": gstate.dish_id
                },
                "time": timestamp,
                "fields": {key: val[i] for key, val in bulk.items() if val[i] is not None},
            })
        if points:
            # save off counter value for script restart
            points[-1]["fields"]["counter"] = counter + count

    rc, status_ts, hist_ts = dish_common.get_data(opts,
                                                  gstate,
                                                  cb_add_item,
                                                  cb_add_sequence,
                                                  add_bulk=cb_add_bulk,
                                                  flush_history=shutdown)
    if rc:
        return rc

    for category, cat_fields in fields.items():
        if cat_fields:
            timestamp = status_ts if category == "status" else hist_ts
            gstate.points.append({
                "measurement": "spacex.starlink.user_terminal." + category,
                "tags": {
                    "id": gstate.dish_id
                },
                "time": timestamp,
                "fields": cat_fields,
            })

    # This is here and not before the points being processed because if the
    # query previously failed, there will be points that were processed in
    # a prior loop. This avoids having to handle that as a special case.
    if opts.bulk_mode and not gstate.timebase_synced:
        sync_timebase(opts, gstate)

    if opts.verbose:
        print("Data points queued: " + str(len(gstate.points)))

    if len(gstate.points) >= FLUSH_LIMIT:
        return flush_points(opts, gstate)

    return 0


def main():
    opts = parse_args()

    logging.basicConfig(format="%(levelname)s: %(message)s")

    gstate = dish_common.GlobalState(target=opts.target)
    gstate.points = []
    gstate.deferred_points = []
    gstate.timebase_synced = opts.skip_query
    gstate.start_timestamp = None
    gstate.start_counter = None

    if "verify_ssl" in opts.icargs and not opts.icargs["verify_ssl"]:
        # user has explicitly said be insecure, so don't warn about it
        warnings.filterwarnings("ignore", message="Unverified HTTPS request")

    signal.signal(signal.SIGTERM, handle_sigterm)
    gstate.influx_client = InfluxDBClient(**opts.icargs)

    rc = 0
    try:
        next_loop = time.monotonic()
        while True:
            rc = loop_body(opts, gstate)
            if opts.loop_interval > 0.0:
                now = time.monotonic()
                next_loop = max(next_loop + opts.loop_interval, now)
                time.sleep(next_loop - now)
            else:
                break
    except (KeyboardInterrupt, Terminated):
        pass
    finally:
        loop_body(opts, gstate, shutdown=True)
        if gstate.points:
            rc = flush_points(opts, gstate)
        gstate.influx_client.close()
        gstate.shutdown()

    sys.exit(rc)


if __name__ == "__main__":
    main()
@@ -0,0 +1,212 @@
#!/usr/bin/env python3
"""Publish Starlink user terminal data to an MQTT broker.

This script pulls the current status info and/or metrics computed from the
history data and publishes them to the specified MQTT broker either once or
in a periodic loop.

Data will be published to the following topic names:

: starlink/dish_status/*id_value*/*field_name* : Current status data
: starlink/dish_ping_stats/*id_value*/*field_name* : Ping history statistics
: starlink/dish_usage/*id_value*/*field_name* : Usage history statistics
: starlink/dish_power/*id_value*/*field_name* : Power history statistics

Where *id_value* is the *id* value from the dish status information.

If the --json command line option is used, JSON-formatted data will instead
be published to the topic name:

: starlink/*id_value*
"""

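# Illustration (not from the original source): published data can be inspected
# with any MQTT client, for example with the Mosquitto CLI tools (broker
# hostname hypothetical):
#
#   mosquitto_sub -h mqtt.example.com -t 'starlink/#' -v
#
# which subscribes to every topic under the "starlink/" prefix described in
# the docstring above and prints topic and payload for each message.
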
import json
import logging
import math
import os
import signal
import sys
import time

try:
    import ssl
    ssl_ok = True
except ImportError:
    ssl_ok = False

import paho.mqtt.publish

import dish_common

HOST_DEFAULT = "localhost"


class Terminated(Exception):
    pass


def handle_sigterm(signum, frame):
    # Turn SIGTERM into an exception so main loop can clean up
    raise Terminated


def parse_args():
    parser = dish_common.create_arg_parser(output_description="publish it to a MQTT broker",
                                           bulk_history=False)

    group = parser.add_argument_group(title="MQTT broker options")
    group.add_argument("-n",
                       "--hostname",
                       default=HOST_DEFAULT,
                       help="Hostname of MQTT broker, default: " + HOST_DEFAULT)
    group.add_argument("-p", "--port", type=int, help="Port number to use on MQTT broker")
    group.add_argument("-P", "--password", help="Set password for username/password authentication")
    group.add_argument("-U", "--username", help="Set username for authentication")
    group.add_argument("-J", "--json", action="store_true", help="Publish data as JSON")
    if ssl_ok:

        def wrap_ca_arg(arg):
            return {"ca_certs": arg}

        group.add_argument("-C",
                           "--ca-cert",
                           type=wrap_ca_arg,
                           dest="tls",
                           help="Enable SSL/TLS using specified CA cert to verify broker",
                           metavar="FILENAME")
        group.add_argument("-I",
                           "--insecure",
                           action="store_const",
                           const={"cert_reqs": ssl.CERT_NONE},
                           dest="tls",
                           help="Enable SSL/TLS but disable certificate verification (INSECURE!)")
        group.add_argument("-S",
                           "--secure",
                           action="store_const",
                           const={},
                           dest="tls",
                           help="Enable SSL/TLS using default CA cert")
    else:
        parser.epilog += "\nSSL support options not available due to missing ssl module"

    env_map = (
        ("MQTT_HOST", "hostname"),
        ("MQTT_PORT", "port"),
        ("MQTT_USERNAME", "username"),
        ("MQTT_PASSWORD", "password"),
        ("MQTT_SSL", "tls"),
    )
    env_defaults = {}
    for var, opt in env_map:
        # check both set and not empty string
        val = os.environ.get(var)
        if val:
            if var == "MQTT_SSL":
                if ssl_ok and val != "false":
                    if val == "insecure":
                        env_defaults[opt] = {"cert_reqs": ssl.CERT_NONE}
                    elif val == "secure":
                        env_defaults[opt] = {}
                    else:
                        env_defaults[opt] = {"ca_certs": val}
            else:
                env_defaults[opt] = val
    parser.set_defaults(**env_defaults)

    opts = dish_common.run_arg_parser(parser, need_id=True)

    if opts.username is None and opts.password is not None:
        parser.error("Password authentication requires username to be set")

    opts.mqargs = {}
    for key in ["hostname", "port", "tls"]:
        val = getattr(opts, key)
        if val is not None:
            opts.mqargs[key] = val

    if opts.username is not None:
        opts.mqargs["auth"] = {"username": opts.username}
        if opts.password is not None:
            opts.mqargs["auth"]["password"] = opts.password

    return opts

def loop_body(opts, gstate):
    msgs = []

    if opts.json:

        data = {}

        def cb_add_item(key, val, category):
            if not "dish_{0}".format(category) in data:
                data["dish_{0}".format(category)] = {}

            # Skip NaN values that occur on startup because they can upset Javascript JSON parsers
            if not (isinstance(val, float) and math.isnan(val)):
                data["dish_{0}".format(category)].update({key: val})

        def cb_add_sequence(key, val, category, _):
            if not "dish_{0}".format(category) in data:
                data["dish_{0}".format(category)] = {}

            data["dish_{0}".format(category)].update({key: list(val)})

    else:

        def cb_add_item(key, val, category):
            msgs.append(("starlink/dish_{0}/{1}/{2}".format(category, gstate.dish_id,
                                                            key), val, 0, False))

        def cb_add_sequence(key, val, category, _):
            msgs.append(("starlink/dish_{0}/{1}/{2}".format(category, gstate.dish_id, key),
                         ",".join("" if x is None else str(x) for x in val), 0, False))

    rc = dish_common.get_data(opts, gstate, cb_add_item, cb_add_sequence)[0]

    if opts.json:
        msgs.append(("starlink/{0}".format(gstate.dish_id), json.dumps(data), 0, False))

    if msgs:
        try:
            paho.mqtt.publish.multiple(msgs, client_id=gstate.dish_id, **opts.mqargs)
            if opts.verbose:
                print("Successfully published to MQTT broker")
        except Exception as e:
            dish_common.conn_error(opts, "Failed publishing to MQTT broker: %s", str(e))
            rc = 1

    return rc


def main():
    opts = parse_args()

    logging.basicConfig(format="%(levelname)s: %(message)s")

    gstate = dish_common.GlobalState(target=opts.target)

    signal.signal(signal.SIGTERM, handle_sigterm)

    rc = 0
    try:
        next_loop = time.monotonic()
        while True:
            rc = loop_body(opts, gstate)
            if opts.loop_interval > 0.0:
                now = time.monotonic()
                next_loop = max(next_loop + opts.loop_interval, now)
                time.sleep(next_loop - now)
            else:
                break
    except (KeyboardInterrupt, Terminated):
        pass
    finally:
        gstate.shutdown()

    sys.exit(rc)


if __name__ == "__main__":
    main()
@@ -0,0 +1,298 @@
#!/usr/bin/env python3
"""Prometheus exporter for Starlink user terminal data info.

This script pulls the current status info and/or metrics computed from the
history data and makes it available via HTTP in the format Prometheus expects.
"""

from http import HTTPStatus
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import logging
import signal
import sys
import threading

import dish_common


class Terminated(Exception):
    pass


def handle_sigterm(signum, frame):
    # Turn SIGTERM into an exception so main loop can clean up
    raise Terminated


class MetricInfo:
    unit = ""
    kind = "gauge"
    help = ""

    def __init__(self, unit=None, kind=None, help=None) -> None:
        if unit:
            self.unit = f"_{unit}"
        if kind:
            self.kind = kind
        if help:
            self.help = help


METRICS_INFO = {
    "status_uptime": MetricInfo(unit="seconds", kind="counter"),
    "status_longitude": MetricInfo(),
    "status_latitude": MetricInfo(),
    "status_altitude": MetricInfo(),
    "status_gps_enabled": MetricInfo(),
    "status_gps_ready": MetricInfo(),
    "status_gps_sats": MetricInfo(),
    "status_seconds_to_first_nonempty_slot": MetricInfo(),
    "status_pop_ping_drop_rate": MetricInfo(),
    "status_downlink_throughput_bps": MetricInfo(),
    "status_uplink_throughput_bps": MetricInfo(),
    "status_pop_ping_latency_ms": MetricInfo(),
    "status_alerts": MetricInfo(),
    "status_fraction_obstructed": MetricInfo(),
    "status_currently_obstructed": MetricInfo(),
    "status_seconds_obstructed": MetricInfo(),
    "status_obstruction_duration": MetricInfo(),
    "status_obstruction_interval": MetricInfo(),
    "status_direction_azimuth": MetricInfo(),
    "status_direction_elevation": MetricInfo(),
    "status_is_snr_above_noise_floor": MetricInfo(),
    "status_alert_motors_stuck": MetricInfo(),
    "status_alert_thermal_throttle": MetricInfo(),
    "status_alert_thermal_shutdown": MetricInfo(),
    "status_alert_mast_not_near_vertical": MetricInfo(),
    "status_alert_unexpected_location": MetricInfo(),
    "status_alert_slow_ethernet_speeds": MetricInfo(),
    "status_alert_roaming": MetricInfo(),
    "status_alert_install_pending": MetricInfo(),
    "status_alert_is_heating": MetricInfo(),
    "status_alert_power_supply_thermal_throttle": MetricInfo(),
    "status_alert_slow_ethernet_speeds_100": MetricInfo(),
    "status_alert_is_power_save_idle": MetricInfo(),
    "status_alert_moving_while_not_mobile": MetricInfo(),
    "status_alert_moving_too_fast_for_policy": MetricInfo(),
    "status_alert_dbf_telem_stale": MetricInfo(),
    "status_alert_low_motor_current": MetricInfo(),
    "status_alert_obstruction_map_reset": MetricInfo(),
    "status_alert_lower_signal_than_predicted": MetricInfo(),
    "ping_stats_samples": MetricInfo(kind="counter"),
    "ping_stats_end_counter": MetricInfo(kind="counter"),
    "usage_download_usage": MetricInfo(unit="bytes", kind="counter"),
    "usage_upload_usage": MetricInfo(unit="bytes", kind="counter"),
    "power_latest_power": MetricInfo(),
    "power_mean_power": MetricInfo(),
    "power_min_power": MetricInfo(),
    "power_max_power": MetricInfo(),
    "power_total_energy": MetricInfo(),
}

STATE_VALUES = [
    "UNKNOWN",
    "CONNECTED",
    "BOOTING",
    "SEARCHING",
    "STOWED",
    "THERMAL_SHUTDOWN",
    "NO_SATS",
    "OBSTRUCTED",
    "NO_DOWNLINK",
    "NO_PINGS",
    "DISH_UNREACHABLE",
]


class Metric:
    name = ""
    timestamp = ""
    kind = None
    help = None
    values = None

    def __init__(self, name, timestamp, kind="gauge", help="", values=None):
        self.name = name
        self.timestamp = timestamp
        self.kind = kind
        self.help = help
        if values:
            self.values = values
        else:
            self.values = []

    def __str__(self):
        if not self.values:
            return ""

        lines = []
        lines.append(f"# HELP {self.name} {self.help}")
        lines.append(f"# TYPE {self.name} {self.kind}")
        for value in self.values:
            lines.append(f"{self.name}{value} {self.timestamp*1000}")
        lines.append("")
        return str.join("\n", lines)


class MetricValue:
    value = 0
    labels = None

    def __init__(self, value, labels=None) -> None:
        self.value = value
        self.labels = labels

    def __str__(self):
        label_str = ""
        if self.labels:
            label_str = ("{" + str.join(",", [f'{v[0]}="{v[1]}"'
                                              for v in self.labels.items()]) + "}")
        return f"{label_str} {self.value}"

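# Illustration (not from the original source): given the __str__ methods
# above, each metric is rendered in the Prometheus exposition format as
# "name{labels} value timestamp_ms". For example, with hypothetical values
# (the HELP text is empty unless set):
#
#   # HELP starlink_status_state
#   # TYPE starlink_status_state gauge
#   starlink_status_state{state="CONNECTED"} 1 1700000000000
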
def parse_args():
    parser = dish_common.create_arg_parser(output_description="Prometheus exporter",
                                           bulk_history=False)

    group = parser.add_argument_group(title="HTTP server options")
    group.add_argument("--address", default="0.0.0.0", help="IP address to listen on")
    group.add_argument("--port", default=8080, type=int, help="Port to listen on")

    return dish_common.run_arg_parser(
        parser, modes=["status", "alert_detail", "usage", "location", "power"])

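# Illustration (not from the original source): with the defaults above, the
# exporter listens on all interfaces on port 8080 and can be scraped manually
# with, e.g.:
#
#   curl http://localhost:8080/
#
# or configured as a target in a Prometheus scrape job.
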
def prometheus_export(opts, gstate):
    raw_data = {}

    def data_add_item(name, value, category):
        raw_data[category + "_" + name] = value

    def data_add_sequence(name, value, category, start):
        raise NotImplementedError("Did not expect sequence data")

    with gstate.lock:
        rc, status_ts, hist_ts = dish_common.get_data(opts, gstate, data_add_item,
                                                      data_add_sequence)

    metrics = []

    # snr is not supported by starlink any more but still returned by the grpc
    # service for backwards compatibility
    if "status_snr" in raw_data:
        del raw_data["status_snr"]

    metrics.append(
        Metric(
            name="starlink_status_state",
            timestamp=status_ts,
            values=[
                MetricValue(
                    value=int(raw_data["status_state"] == state_value),
                    labels={"state": state_value},
                ) for state_value in STATE_VALUES
            ],
        ))
    del raw_data["status_state"]

    info_metrics = ["status_id", "status_hardware_version", "status_software_version"]
    metrics_not_found = []
    metrics_not_found.extend([x for x in info_metrics if x not in raw_data])

    if len(metrics_not_found) < len(info_metrics):
        metrics.append(
            Metric(
                name="starlink_info",
                timestamp=status_ts,
                values=[
                    MetricValue(
                        value=1,
                        labels={
                            x.replace("status_", ""): raw_data.pop(x) for x in info_metrics
                            if x in raw_data
                        },
                    )
                ],
            ))

    for name, metric_info in METRICS_INFO.items():
        if name in raw_data:
            metrics.append(
                Metric(
                    name=f"starlink_{name}{metric_info.unit}",
                    timestamp=status_ts,
                    kind=metric_info.kind,
                    values=[MetricValue(value=float(raw_data.pop(name) or 0))],
                ))
        else:
            metrics_not_found.append(name)

    metrics.append(
        Metric(
            name="starlink_exporter_unprocessed_metrics",
            timestamp=status_ts,
            values=[MetricValue(value=1, labels={"metric": name}) for name in raw_data],
        ))

    metrics.append(
        Metric(
            name="starlink_exporter_missing_metrics",
            timestamp=status_ts,
            values=[MetricValue(
                value=1,
                labels={"metric": name},
            ) for name in metrics_not_found],
        ))

    return str.join("\n", [str(metric) for metric in metrics])


class MetricsRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        path = self.path.partition("?")[0]
        if path.lower() == "/favicon.ico":
            self.send_error(HTTPStatus.NOT_FOUND)
            return

        opts = self.server.opts
        gstate = self.server.gstate

        content = prometheus_export(opts, gstate)
        self.send_response(HTTPStatus.OK)
        self.send_header("Content-type", "text/plain")
        self.send_header("Content-Length", len(content))
        self.end_headers()
        self.wfile.write(content.encode())


def main():
    opts = parse_args()

    logging.basicConfig(format="%(levelname)s: %(message)s", stream=sys.stderr)

    gstate = dish_common.GlobalState(target=opts.target)
    gstate.lock = threading.Lock()

    httpd = ThreadingHTTPServer((opts.address, opts.port), MetricsRequestHandler)
    httpd.daemon_threads = False
    httpd.opts = opts
    httpd.gstate = gstate

    signal.signal(signal.SIGTERM, handle_sigterm)

    print("HTTP listening on port", opts.port)
    try:
        httpd.serve_forever()
    except (KeyboardInterrupt, Terminated):
        pass
    finally:
        httpd.server_close()
        httpd.gstate.shutdown()

    sys.exit()


if __name__ == "__main__":
    main()
@@ -0,0 +1,326 @@
#!/usr/bin/env python3
"""Write Starlink user terminal data to a sqlite database.

This script pulls the current status info and/or metrics computed from the
history data and writes them to the specified sqlite database either once or
in a periodic loop.

Requested data will be written into the following tables:

: status : Current status data
: history : Bulk history data
: ping_stats : Ping history statistics
: usage : Bandwidth usage history statistics
: power : Power consumption history statistics

Array data is currently written to the database as text strings of comma-
separated values, which may not be the best method for some use cases. If you
find yourself wishing they were handled better, please open a feature request
at https://github.com/sparky8512/starlink-grpc-tools/issues explaining the use
case and how you would rather see it. This only affects a few fields, since
most of the useful data is not in arrays.

Note that using this script to record the alert_detail group mode will tend to
trip schema-related errors when new alert types are added to the dish
software. The error message will include something like "table status has no
column named alert_foo", where "foo" is the newly added alert type. To work
around this rare occurrence, you can pass the -f option to force a schema
update. Alternatively, instead of using the alert_detail mode, you can use the
alerts bitmask in the status group.

NOTE: The Starlink user terminal does not include time values with its
history or status data, so this script uses current system time to compute
the timestamps it writes into the database. It is recommended to run this
script on a host that has its system clock synced via NTP. Otherwise, the
timestamps may get out of sync with real time.
"""

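# Illustration (not from the original source): recorded data can be inspected
# with the sqlite3 command line tool. The database file name here is
# hypothetical; the "time" and "id" columns exist in every table this script
# creates, while the remaining columns are reflected from the dish at schema
# creation time:
#
#   sqlite3 starlink.db 'SELECT "time", "id" FROM status ORDER BY "time" DESC LIMIT 1'
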
from datetime import datetime
from datetime import timezone
from itertools import repeat
import logging
import signal
import sqlite3
import sys
import time

import dish_common
import starlink_grpc

SCHEMA_VERSION = 5


class Terminated(Exception):
    pass


def handle_sigterm(signum, frame):
    # Turn SIGTERM into an exception so main loop can clean up
    raise Terminated


def parse_args():
    parser = dish_common.create_arg_parser(output_description="write it to a sqlite database")

    parser.add_argument("database", help="Database file to use")

    group = parser.add_argument_group(title="sqlite database options")
    group.add_argument("-f",
                       "--force",
                       action="store_true",
                       help="Force schema conversion, even if it results in downgrade; may "
                       "result in discarded data")
    group.add_argument("-k",
                       "--skip-query",
                       action="store_true",
                       help="Skip querying for prior sample write point in history modes")

    opts = dish_common.run_arg_parser(parser, need_id=True)

    opts.skip_query |= opts.no_counter

    return opts

def query_counter(opts, gstate, column, table):
    now = time.time()
    cur = gstate.sql_conn.cursor()
    cur.execute(
        'SELECT "time", "{0}" FROM "{1}" WHERE "time"<? AND "id"=? '
        'ORDER BY "time" DESC LIMIT 1'.format(column, table), (now, gstate.dish_id))
    row = cur.fetchone()
    cur.close()

    if row and row[0] and row[1]:
        if opts.verbose:
            print("Existing time base: {0} -> {1}".format(
                row[1], datetime.fromtimestamp(row[0], tz=timezone.utc)))
        return row
    else:
        return 0, None

def loop_body(opts, gstate, shutdown=False):
    tables = {"status": {}, "ping_stats": {}, "usage": {}, "power": {}}
    hist_cols = ["time", "id"]
    hist_rows = []

    def cb_add_item(key, val, category):
        tables[category][key] = val

    def cb_add_sequence(key, val, category, start):
        tables[category][key] = ",".join(str(subv) if subv is not None else "" for subv in val)

    def cb_add_bulk(bulk, count, timestamp, counter):
        if len(hist_cols) == 2:
            hist_cols.extend(bulk.keys())
            hist_cols.append("counter")
        for i in range(count):
            timestamp += 1
            counter += 1
            row = [timestamp, gstate.dish_id]
            row.extend(val[i] for val in bulk.values())
            row.append(counter)
            hist_rows.append(row)

    rc = 0
    status_ts = None
    hist_ts = None

    if not shutdown:
        rc, status_ts = dish_common.get_status_data(opts, gstate, cb_add_item, cb_add_sequence)

    if opts.history_stats_mode and (not rc or opts.poll_loops > 1):
        if gstate.counter_stats is None and not opts.skip_query and opts.samples < 0:
            _, gstate.counter_stats = query_counter(opts, gstate, "end_counter", "ping_stats")
        hist_rc, hist_ts = dish_common.get_history_stats(opts, gstate, cb_add_item, cb_add_sequence,
                                                         shutdown)
        if not rc:
            rc = hist_rc

    if not shutdown and opts.bulk_mode and not rc:
        if gstate.counter is None and not opts.skip_query and opts.bulk_samples < 0:
            gstate.timestamp, gstate.counter = query_counter(opts, gstate, "counter", "history")
        rc = dish_common.get_bulk_data(opts, gstate, cb_add_bulk)

    rows_written = 0

    try:
        cur = gstate.sql_conn.cursor()
        for category, fields in tables.items():
            if fields:
                timestamp = status_ts if category == "status" else hist_ts
                sql = 'INSERT OR REPLACE INTO "{0}" ("time","id",{1}) VALUES ({2})'.format(
                    category, ",".join('"' + x + '"' for x in fields),
                    ",".join(repeat("?", len(fields) + 2)))
                values = [timestamp, gstate.dish_id]
                values.extend(fields.values())
                cur.execute(sql, values)
                rows_written += 1

        if hist_rows:
            sql = 'INSERT OR REPLACE INTO "history" ({0}) VALUES({1})'.format(
                ",".join('"' + x + '"' for x in hist_cols), ",".join(repeat("?", len(hist_cols))))
            cur.executemany(sql, hist_rows)
            rows_written += len(hist_rows)

        cur.close()
        gstate.sql_conn.commit()
    except sqlite3.OperationalError as e:
        # these are not necessarily fatal, but also not much we can do about them
        logging.error("Unexpected error from database, discarding data: %s", e)
        rc = 1
    else:
        if opts.verbose:
            print("Rows written to db:", rows_written)

    return rc

def ensure_schema(opts, conn, context):
    cur = conn.cursor()
    cur.execute("PRAGMA user_version")
    version = cur.fetchone()
    if version and version[0] == SCHEMA_VERSION and not opts.force:
        cur.close()
        return 0

    try:
        if not version or not version[0]:
            if opts.verbose:
                print("Initializing new database")
            create_tables(conn, context, "")
        elif version[0] > SCHEMA_VERSION and not opts.force:
            logging.error("Cowardly refusing to downgrade from schema version %s", version[0])
            return 1
        else:
            print("Converting from schema version:", version[0])
            convert_tables(conn, context)
        cur.execute("PRAGMA user_version={0}".format(SCHEMA_VERSION))
        conn.commit()
        return 0
    except starlink_grpc.GrpcError as e:
        dish_common.conn_error(opts, "Failure reflecting status fields: %s", str(e))
        return 1
    finally:
        cur.close()


def create_tables(conn, context, suffix):
    tables = {}
    name_groups = (starlink_grpc.status_field_names(context=context) +
                   (starlink_grpc.location_field_names(),))
    type_groups = (starlink_grpc.status_field_types(context=context) +
                   (starlink_grpc.location_field_types(),))
    tables["status"] = zip(name_groups, type_groups)

    name_groups = starlink_grpc.history_stats_field_names()
    type_groups = starlink_grpc.history_stats_field_types()
    tables["ping_stats"] = zip(name_groups[0:5], type_groups[0:5])
    tables["usage"] = ((name_groups[5], type_groups[5]),)
    tables["power"] = ((name_groups[6], type_groups[6]),)

    name_groups = starlink_grpc.history_bulk_field_names()
    type_groups = starlink_grpc.history_bulk_field_types()
    tables["history"] = ((name_groups[1], type_groups[1]), (["counter"], [int]))

    def sql_type(type_class):
        if issubclass(type_class, float):
            return "REAL"
        if issubclass(type_class, bool):
            # advisory only, stores as int:
            return "BOOLEAN"
        if issubclass(type_class, int):
            return "INTEGER"
        if issubclass(type_class, str):
            return "TEXT"
        raise TypeError

    column_info = {}
    cur = conn.cursor()
    for table, group_pairs in tables.items():
        column_names = ["time", "id"]
        columns = ['"time" INTEGER NOT NULL', '"id" TEXT NOT NULL']
        for name_group, type_group in group_pairs:
            for name_item, type_item in zip(name_group, type_group):
                name_item = dish_common.BRACKETS_RE.match(name_item).group(1)
                if name_item != "id":
                    columns.append('"{0}" {1}'.format(name_item, sql_type(type_item)))
                    column_names.append(name_item)
        cur.execute('DROP TABLE IF EXISTS "{0}{1}"'.format(table, suffix))
        sql = 'CREATE TABLE "{0}{1}" ({2}, PRIMARY KEY("time","id"))'.format(
            table, suffix, ", ".join(columns))
        cur.execute(sql)
        column_info[table] = column_names
    cur.close()

    return column_info


def convert_tables(conn, context):
    new_column_info = create_tables(conn, context, "_new")
    conn.row_factory = sqlite3.Row
    old_cur = conn.cursor()
    new_cur = conn.cursor()
    for table, new_columns in new_column_info.items():
        try:
            old_cur.execute('SELECT * FROM "{0}"'.format(table))
            table_ok = True
        except sqlite3.OperationalError:
            table_ok = False
        if table_ok:
            old_columns = set(x[0] for x in old_cur.description)
            new_columns = tuple(x for x in new_columns if x in old_columns)
            sql = 'INSERT OR REPLACE INTO "{0}_new" ({1}) VALUES ({2})'.format(
                table, ",".join('"' + x + '"' for x in new_columns),
                ",".join(repeat("?", len(new_columns))))
            new_cur.executemany(sql, (tuple(row[col] for col in new_columns) for row in old_cur))
            new_cur.execute('DROP TABLE "{0}"'.format(table))
        new_cur.execute('ALTER TABLE "{0}_new" RENAME TO "{0}"'.format(table))
    old_cur.close()
    new_cur.close()
    conn.row_factory = None


def main():
    opts = parse_args()

    logging.basicConfig(format="%(levelname)s: %(message)s")

    gstate = dish_common.GlobalState(target=opts.target)
    gstate.points = []
    gstate.deferred_points = []

    signal.signal(signal.SIGTERM, handle_sigterm)
    gstate.sql_conn = sqlite3.connect(opts.database)

    rc = 0
    try:
        rc = ensure_schema(opts, gstate.sql_conn, gstate.context)
        if rc:
            sys.exit(rc)
        next_loop = time.monotonic()
        while True:
            rc = loop_body(opts, gstate)
            if opts.loop_interval > 0.0:
                now = time.monotonic()
                next_loop = max(next_loop + opts.loop_interval, now)
                time.sleep(next_loop - now)
            else:
                break
    except sqlite3.Error as e:
        logging.error("Database error: %s", e)
        rc = 1
    except (KeyboardInterrupt, Terminated):
        pass
    finally:
        loop_body(opts, gstate, shutdown=True)
        gstate.sql_conn.close()
        gstate.shutdown()

    sys.exit(rc)


if __name__ == "__main__":
    main()
@@ -0,0 +1,304 @@
#!/usr/bin/env python3
"""Output Starlink user terminal data info in text format.

This script pulls the current status info and/or metrics computed from the
history data and prints them to a file or stdout either once or in a periodic
loop. By default, it will print the results in CSV format.

Note that using this script to record the alert_detail group mode as CSV
data is not recommended, because the number of alerts and their relative
order in the output can change with the dish software. Instead of using
the alert_detail mode, you can use the alerts bitmask in the status group.
"""

import datetime
import logging
import os
import signal
import sys
import time

import dish_common
import starlink_grpc

COUNTER_FIELD = "end_counter"
VERBOSE_FIELD_MAP = {
    # status fields (the remainder are either self-explanatory or I don't
    # know with confidence what they mean)
    "alerts": "Alerts bit field",

    # ping_drop fields
    "samples": "Parsed samples",
    "end_counter": "Sample counter",
    "total_ping_drop": "Total ping drop",
    "count_full_ping_drop": "Count of drop == 1",
    "count_obstructed": "Obstructed",
    "total_obstructed_ping_drop": "Obstructed ping drop",
    "count_full_obstructed_ping_drop": "Obstructed drop == 1",
    "count_unscheduled": "Unscheduled",
    "total_unscheduled_ping_drop": "Unscheduled ping drop",
    "count_full_unscheduled_ping_drop": "Unscheduled drop == 1",

    # ping_run_length fields
    "init_run_fragment": "Initial drop run fragment",
    "final_run_fragment": "Final drop run fragment",
    "run_seconds": "Per-second drop runs",
    "run_minutes": "Per-minute drop runs",

    # ping_latency fields
    "mean_all_ping_latency": "Mean RTT, drop < 1",
    "deciles_all_ping_latency": "RTT deciles, drop < 1",
    "mean_full_ping_latency": "Mean RTT, drop == 0",
    "deciles_full_ping_latency": "RTT deciles, drop == 0",
    "stdev_full_ping_latency": "RTT standard deviation, drop == 0",

    # ping_loaded_latency is still experimental, so leave those unexplained

    # usage fields
    "download_usage": "Bytes downloaded",
    "upload_usage": "Bytes uploaded",
}


class Terminated(Exception):
    pass


def handle_sigterm(signum, frame):
    # Turn SIGTERM into an exception so main loop can clean up
    raise Terminated


def parse_args():
    parser = dish_common.create_arg_parser(
        output_description="print it in text format; by default, will print in CSV format")

    group = parser.add_argument_group(title="CSV output options")
    group.add_argument("-H",
                       "--print-header",
                       action="store_true",
                       help="Print CSV header instead of parsing data")
    group.add_argument("-O",
                       "--out-file",
                       default="-",
                       help="Output file path; if set, can also be used to resume from prior "
                       "history sample counter, default: write to standard output")
    group.add_argument("-k",
                       "--skip-query",
                       action="store_true",
                       help="Skip querying for prior sample write point in history modes")

    opts = dish_common.run_arg_parser(parser)

    if (opts.history_stats_mode or opts.status_mode) and opts.bulk_mode and not opts.verbose:
        parser.error("bulk_history cannot be combined with other modes for CSV output")

    # Technically possible, but a pain to implement, so just disallow it. User
    # probably doesn't realize how weird it would be, anyway, given that stats
    # data reports at a different rate from status data in this case.
    if opts.history_stats_mode and opts.status_mode and not opts.verbose and opts.poll_loops > 1:
        parser.error("usage of --poll-loops with history stats modes cannot be mixed with status "
                     "modes for CSV output")

    opts.skip_query |= opts.no_counter | opts.verbose
    if opts.out_file == "-":
        opts.no_stdout_errors = True

    return opts

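# Illustrative invocations (not from the original source). The script file
# name is an assumption, and mode names come from dish_common, which is not
# part of this diff:
#
#   python3 dish_grpc_text.py -H ping_drop      # print the CSV header only
#   python3 dish_grpc_text.py -O stats.csv ping_drop
#
# With -O, get_prior_counter() below lets a restarted script resume from the
# sample counter found in the existing output file.
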
def open_out_file(opts, mode):
|
||||
if opts.out_file == "-":
|
||||
# open new file, so it can be closed later without affecting sys.stdout
|
||||
return os.fdopen(sys.stdout.fileno(), "w", buffering=1, closefd=False)
|
||||
return open(opts.out_file, mode, buffering=1)
|
||||
|
||||
|
||||
def print_header(opts, print_file):
|
||||
header = ["datetimestamp_utc"]
|
||||
|
||||
def header_add(names):
|
||||
for name in names:
|
||||
name, start, end = dish_common.BRACKETS_RE.match(name).group(1, 4, 5)
|
||||
if start:
|
||||
header.extend(name + "_" + str(x) for x in range(int(start), int(end)))
|
||||
elif end:
|
||||
header.extend(name + "_" + str(x) for x in range(int(end)))
|
||||
else:
|
||||
header.append(name)
|
||||
|
||||
if opts.status_mode:
|
||||
if opts.pure_status_mode:
|
||||
context = starlink_grpc.ChannelContext(target=opts.target)
|
||||
try:
|
||||
name_groups = starlink_grpc.status_field_names(context=context)
|
||||
except starlink_grpc.GrpcError as e:
|
||||
dish_common.conn_error(opts, "Failure reflecting status field names: %s", str(e))
|
||||
return 1
|
||||
if "status" in opts.mode:
|
||||
header_add(name_groups[0])
|
||||
if "obstruction_detail" in opts.mode:
|
||||
header_add(name_groups[1])
|
||||
if "alert_detail" in opts.mode:
|
||||
header_add(name_groups[2])
|
||||
if "location" in opts.mode:
|
||||
header_add(starlink_grpc.location_field_names())
|
||||
|
||||
if opts.bulk_mode:
|
||||
general, bulk = starlink_grpc.history_bulk_field_names()
|
||||
header_add(bulk)
|
||||
|
||||
if opts.history_stats_mode:
|
||||
groups = starlink_grpc.history_stats_field_names()
|
||||
general, ping, runlen, latency, loaded, usage, power = groups[0:7]
|
||||
header_add(general)
|
||||
if "ping_drop" in opts.mode:
|
||||
header_add(ping)
|
||||
if "ping_run_length" in opts.mode:
|
||||
header_add(runlen)
|
||||
if "ping_latency" in opts.mode:
|
||||
header_add(latency)
|
||||
if "ping_loaded_latency" in opts.mode:
|
||||
header_add(loaded)
|
||||
if "usage" in opts.mode:
|
||||
header_add(usage)
|
||||
if "power" in opts.mode:
|
||||
header_add(power)
|
||||
|
||||
print(",".join(header), file=print_file)
|
||||
return 0
|
||||
|
||||
|
||||
def get_prior_counter(opts, gstate):
|
||||
# This implementation is terrible in that it makes a bunch of assumptions.
|
||||
# Those assumptions should be true for files generated by this script, but
|
||||
# it would be better not to make them. However, it also only works if the
|
||||
# CSV file has a header that correctly matches the last line of the file,
|
||||
# and there's really no way to verify that, so it's garbage in, garbage
|
||||
# out, anyway. It also reads the entire file line-by-line, which is not
|
||||
# great.
|
||||
try:
|
||||
with open_out_file(opts, "r") as csv_file:
|
||||
header = csv_file.readline().split(",")
|
||||
column = header.index(COUNTER_FIELD)
|
||||
last_line = None
|
||||
for last_line in csv_file:
|
||||
pass
|
||||
if last_line is not None:
|
||||
gstate.counter_stats = int(last_line.split(",")[column])
|
||||
except (IndexError, OSError, ValueError):
|
||||
pass
|
||||
|
||||
|
||||
def loop_body(opts, gstate, print_file, shutdown=False):
    csv_data = []

    def xform(val):
        return "" if val is None else str(val)

    def cb_data_add_item(name, val, category):
        if opts.verbose:
            csv_data.append("{0:22} {1}".format(
                VERBOSE_FIELD_MAP.get(name, name) + ":", xform(val)))
        else:
            # special case for get_status failure: this will be the lone item added
            if name == "state" and val == "DISH_UNREACHABLE":
                csv_data.extend(["", "", "", val])
            else:
                csv_data.append(xform(val))

    def cb_data_add_sequence(name, val, category, start):
        if opts.verbose:
            csv_data.append("{0:22} {1}".format(
                VERBOSE_FIELD_MAP.get(name, name) + ":",
                ", ".join(xform(subval) for subval in val)))
        else:
            csv_data.extend(xform(subval) for subval in val)

    def cb_add_bulk(bulk, count, timestamp, counter):
        if opts.verbose:
            print("Time range (UTC): {0} -> {1}".format(
                datetime.datetime.fromtimestamp(timestamp, datetime.timezone.utc).replace(tzinfo=None).isoformat(),
                datetime.datetime.fromtimestamp(timestamp + count, datetime.timezone.utc).replace(tzinfo=None).isoformat()),
                  file=print_file)
            for key, val in bulk.items():
                print("{0:22} {1}".format(key + ":", ", ".join(xform(subval) for subval in val)),
                      file=print_file)
            if opts.loop_interval > 0.0:
                print(file=print_file)
        else:
            for i in range(count):
                timestamp += 1
                fields = [datetime.datetime.fromtimestamp(timestamp, datetime.timezone.utc).replace(tzinfo=None).isoformat()]
                fields.extend([xform(val[i]) for val in bulk.values()])
                print(",".join(fields), file=print_file)

    rc, status_ts, hist_ts = dish_common.get_data(opts,
                                                  gstate,
                                                  cb_data_add_item,
                                                  cb_data_add_sequence,
                                                  add_bulk=cb_add_bulk,
                                                  flush_history=shutdown)

    if opts.verbose:
        if csv_data:
            print("\n".join(csv_data), file=print_file)
            if opts.loop_interval > 0.0:
                print(file=print_file)
    else:
        if csv_data:
            timestamp = status_ts if status_ts is not None else hist_ts
            csv_data.insert(0, datetime.datetime.fromtimestamp(timestamp, datetime.timezone.utc).replace(tzinfo=None).isoformat())
            print(",".join(csv_data), file=print_file)

    return rc


def main():
    opts = parse_args()

    logging.basicConfig(format="%(levelname)s: %(message)s")

    if opts.print_header:
        try:
            with open_out_file(opts, "a") as print_file:
                rc = print_header(opts, print_file)
        except OSError as e:
            logging.error("Failed opening output file: %s", str(e))
            rc = 1
        sys.exit(rc)

    gstate = dish_common.GlobalState(target=opts.target)
    if opts.out_file != "-" and not opts.skip_query and opts.history_stats_mode:
        get_prior_counter(opts, gstate)

    try:
        print_file = open_out_file(opts, "a")
    except OSError as e:
        logging.error("Failed opening output file: %s", str(e))
        sys.exit(1)
    signal.signal(signal.SIGTERM, handle_sigterm)

    rc = 0
    try:
        next_loop = time.monotonic()
        while True:
            rc = loop_body(opts, gstate, print_file)
            if opts.loop_interval > 0.0:
                now = time.monotonic()
                next_loop = max(next_loop + opts.loop_interval, now)
                time.sleep(next_loop - now)
            else:
                break
    except (KeyboardInterrupt, Terminated):
        pass
    finally:
        loop_body(opts, gstate, print_file, shutdown=True)
        print_file.close()
        gstate.shutdown()

    sys.exit(rc)


if __name__ == "__main__":
    main()
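The loop in `main()` paces itself with `time.monotonic()` rather than wall-clock time, so iterations stay on a fixed cadence even if the system clock is adjusted, and `max(next_loop + interval, now)` jumps ahead instead of sleeping for a negative duration when an iteration overruns its slot. A minimal sketch of the same pacing pattern, with a stand-in `work()` function:

```python
import time

def work():
    pass  # stand-in for loop_body()

interval = 1.0
next_loop = time.monotonic()
for _ in range(3):
    work()
    now = time.monotonic()
    # If work() overran the slot, skip to "now" rather than sleeping < 0
    next_loop = max(next_loop + interval, now)
    time.sleep(next_loop - now)
```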
@@ -0,0 +1,284 @@
#!/usr/bin/env python3
r"""Output Starlink user terminal data info in text format.

Expects input as from the following command:

grpcurl -plaintext -d {\"get_history\":{}} 192.168.100.1:9200 SpaceX.API.Device.Device/Handle

This script examines the most recent samples from the history data and
prints several different metrics computed from them to stdout. By default,
it will print the results in CSV format.
"""

import argparse
import datetime
import logging
import re
import sys
import time

import starlink_json

BRACKETS_RE = re.compile(r"([^[]*)(\[((\d+),|)(\d*)\]|)$")
SAMPLES_DEFAULT = 3600
HISTORY_STATS_MODES = [
    "ping_drop", "ping_run_length", "ping_latency", "ping_loaded_latency", "usage"
]
VERBOSE_FIELD_MAP = {
    # ping_drop fields
    "samples": "Parsed samples",
    "end_counter": "Sample counter",
    "total_ping_drop": "Total ping drop",
    "count_full_ping_drop": "Count of drop == 1",
    "count_obstructed": "Obstructed",
    "total_obstructed_ping_drop": "Obstructed ping drop",
    "count_full_obstructed_ping_drop": "Obstructed drop == 1",
    "count_unscheduled": "Unscheduled",
    "total_unscheduled_ping_drop": "Unscheduled ping drop",
    "count_full_unscheduled_ping_drop": "Unscheduled drop == 1",

    # ping_run_length fields
    "init_run_fragment": "Initial drop run fragment",
    "final_run_fragment": "Final drop run fragment",
    "run_seconds": "Per-second drop runs",
    "run_minutes": "Per-minute drop runs",

    # ping_latency fields
    "mean_all_ping_latency": "Mean RTT, drop < 1",
    "deciles_all_ping_latency": "RTT deciles, drop < 1",
    "mean_full_ping_latency": "Mean RTT, drop == 0",
    "deciles_full_ping_latency": "RTT deciles, drop == 0",
    "stdev_full_ping_latency": "RTT standard deviation, drop == 0",

    # ping_loaded_latency is still experimental, so leave those unexplained

    # usage fields
    "download_usage": "Bytes downloaded",
    "upload_usage": "Bytes uploaded",
}


def parse_args():
    parser = argparse.ArgumentParser(
        description="Collect status and/or history data from a Starlink user terminal and "
        "print it to standard output in text format; by default, will print in CSV format",
        add_help=False)

    group = parser.add_argument_group(title="General options")
    group.add_argument("-f", "--filename", default="-", help="The file to parse, default: stdin")
    group.add_argument("-h", "--help", action="help", help="Be helpful")
    group.add_argument("-t",
                       "--timestamp",
                       help="UTC time history data was pulled, as YYYY-MM-DD_HH:MM:SS or as "
                       "seconds since Unix epoch, default: current time")
    group.add_argument("-v", "--verbose", action="store_true", help="Be verbose")

    group = parser.add_argument_group(title="History mode options")
    group.add_argument("-a",
                       "--all-samples",
                       action="store_const",
                       const=-1,
                       dest="samples",
                       help="Parse all valid samples")
    group.add_argument("-s",
                       "--samples",
                       type=int,
                       help="Number of data samples to parse, default: all in bulk mode, "
                       "else " + str(SAMPLES_DEFAULT))

    group = parser.add_argument_group(title="CSV output options")
    group.add_argument("-H",
                       "--print-header",
                       action="store_true",
                       help="Print CSV header instead of parsing data")

    all_modes = HISTORY_STATS_MODES + ["bulk_history"]
    parser.add_argument("mode",
                        nargs="+",
                        choices=all_modes,
                        help="The data group to record, one or more of: " + ", ".join(all_modes),
                        metavar="mode")

    opts = parser.parse_args()

    # for convenience, set flags for whether any mode in a group is selected
    opts.history_stats_mode = bool(set(HISTORY_STATS_MODES).intersection(opts.mode))
    opts.bulk_mode = "bulk_history" in opts.mode

    if opts.history_stats_mode and opts.bulk_mode:
        parser.error("bulk_history cannot be combined with other modes for CSV output")

    if opts.samples is None:
        opts.samples = -1 if opts.bulk_mode else SAMPLES_DEFAULT

    if opts.timestamp is None:
        opts.history_time = None
    else:
        try:
            opts.history_time = int(opts.timestamp)
        except ValueError:
            try:
                opts.history_time = int(
                    datetime.datetime.strptime(opts.timestamp, "%Y-%m-%d_%H:%M:%S").timestamp())
            except ValueError:
                parser.error("Could not parse timestamp")
        if opts.verbose:
            print("Using timestamp", datetime.datetime.fromtimestamp(opts.history_time, tz=datetime.timezone.utc))

    return opts


def print_header(opts):
    header = ["datetimestamp_utc"]

    def header_add(names):
        for name in names:
            name, start, end = BRACKETS_RE.match(name).group(1, 4, 5)
            if start:
                header.extend(name + "_" + str(x) for x in range(int(start), int(end)))
            elif end:
                header.extend(name + "_" + str(x) for x in range(int(end)))
            else:
                header.append(name)

    if opts.bulk_mode:
        general, bulk = starlink_json.history_bulk_field_names()
        header_add(general)
        header_add(bulk)

    if opts.history_stats_mode:
        groups = starlink_json.history_stats_field_names()
        general, ping, runlen, latency, loaded, usage = groups[0:6]
        header_add(general)
        if "ping_drop" in opts.mode:
            header_add(ping)
        if "ping_run_length" in opts.mode:
            header_add(runlen)
        # latency must be added before loaded latency, to match the order in
        # which get_data() emits the corresponding data columns
        if "ping_latency" in opts.mode:
            header_add(latency)
        if "ping_loaded_latency" in opts.mode:
            header_add(loaded)
        if "usage" in opts.mode:
            header_add(usage)

    print(",".join(header))
    return 0


def get_data(opts, add_item, add_sequence, add_bulk):
    def add_data(data):
        for key, val in data.items():
            name, seq = BRACKETS_RE.match(key).group(1, 5)
            if seq is None:
                add_item(name, val)
            else:
                add_sequence(name, val)

    if opts.history_stats_mode:
        try:
            groups = starlink_json.history_stats(opts.filename, opts.samples, verbose=opts.verbose)
        except starlink_json.JsonError as e:
            logging.error("Failure getting history stats: %s", str(e))
            return 1
        general, ping, runlen, latency, loaded, usage = groups[0:6]
        add_data(general)
        if "ping_drop" in opts.mode:
            add_data(ping)
        if "ping_run_length" in opts.mode:
            add_data(runlen)
        if "ping_latency" in opts.mode:
            add_data(latency)
        if "ping_loaded_latency" in opts.mode:
            add_data(loaded)
        if "usage" in opts.mode:
            add_data(usage)

    if opts.bulk_mode and add_bulk:
        timestamp = int(time.time()) if opts.history_time is None else opts.history_time
        try:
            general, bulk = starlink_json.history_bulk_data(opts.filename,
                                                            opts.samples,
                                                            verbose=opts.verbose)
        except starlink_json.JsonError as e:
            logging.error("Failure getting bulk history: %s", str(e))
            return 1
        parsed_samples = general["samples"]
        new_counter = general["end_counter"]
        if opts.verbose:
            print("Establishing time base: {0} -> {1}".format(
                new_counter, datetime.datetime.fromtimestamp(timestamp, tz=datetime.timezone.utc)))
        timestamp -= parsed_samples

        add_bulk(bulk, parsed_samples, timestamp, new_counter - parsed_samples)

    return 0


def loop_body(opts):
    if opts.verbose:
        csv_data = []
    else:
        history_time = int(time.time()) if opts.history_time is None else opts.history_time
        csv_data = [datetime.datetime.fromtimestamp(history_time, datetime.timezone.utc).replace(tzinfo=None).isoformat()]

    def cb_data_add_item(name, val):
        if opts.verbose:
            csv_data.append("{0:22} {1}".format(VERBOSE_FIELD_MAP.get(name, name) + ":", val))
        else:
            # special case for get_status failure: this will be the lone item added
            if name == "state" and val == "DISH_UNREACHABLE":
                csv_data.extend(["", "", "", val])
            else:
                csv_data.append(str(val))

    def cb_data_add_sequence(name, val):
        if opts.verbose:
            csv_data.append("{0:22} {1}".format(
                VERBOSE_FIELD_MAP.get(name, name) + ":", ", ".join(str(subval) for subval in val)))
        else:
            csv_data.extend(str(subval) for subval in val)

    def cb_add_bulk(bulk, count, timestamp, counter):
        if opts.verbose:
            print("Time range (UTC): {0} -> {1}".format(
                datetime.datetime.fromtimestamp(timestamp, datetime.timezone.utc).replace(tzinfo=None).isoformat(),
                datetime.datetime.fromtimestamp(timestamp + count, datetime.timezone.utc).replace(tzinfo=None).isoformat()))
            for key, val in bulk.items():
                print("{0:22} {1}".format(key + ":", ", ".join(str(subval) for subval in val)))
        else:
            for i in range(count):
                timestamp += 1
                fields = [datetime.datetime.fromtimestamp(timestamp, datetime.timezone.utc).replace(tzinfo=None).isoformat()]
                fields.extend(["" if val[i] is None else str(val[i]) for val in bulk.values()])
                print(",".join(fields))

    rc = get_data(opts, cb_data_add_item, cb_data_add_sequence, cb_add_bulk)

    if opts.verbose:
        if csv_data:
            print("\n".join(csv_data))
    else:
        # skip if only timestamp
        if len(csv_data) > 1:
            print(",".join(csv_data))

    return rc


def main():
    opts = parse_args()

    logging.basicConfig(format="%(levelname)s: %(message)s")

    if opts.print_header:
        rc = print_header(opts)
        sys.exit(rc)

    # for consistency with dish_grpc_text, pretend there was a loop
    rc = loop_body(opts)

    sys.exit(rc)


if __name__ == "__main__":
    main()
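`header_add()` relies on `BRACKETS_RE` to expand sequence field names like `name[4,20]` into one numbered CSV column per element. A small illustration of that expansion, using a made-up field name in the bracket syntax the regex above parses:

```python
import re

BRACKETS_RE = re.compile(r"([^[]*)(\[((\d+),|)(\d*)\]|)$")

name, start, end = BRACKETS_RE.match("run_seconds[1,61]").group(1, 4, 5)
# name == "run_seconds", start == "1", end == "61"
columns = [name + "_" + str(x) for x in range(int(start), int(end))]
# -> run_seconds_1 ... run_seconds_60, matching what header_add() emits
```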
@@ -0,0 +1,227 @@
#!/usr/bin/env python3
"""Write a PNG image representing Starlink obstruction map data.

This script queries obstruction map data from the Starlink user terminal
(dish) reachable on the local network and writes a PNG image based on that
data.

Each pixel in the image represents the signal quality in a particular
direction, as observed by the dish. If the dish has not communicated with
satellites located in that direction, the pixel will be the "no data" color;
otherwise, it will be a color in the range from the "obstructed" color (no
signal at all) to the "unobstructed" color (sufficient signal quality for full
signal).

The coordinates of the pixels are the altitude and azimuth angles from the
horizontal coordinate system representation of the sky, converted to Cartesian
(rectangular) coordinates. The conversion is done in a way that maps all valid
directions into a circle that touches the edges of the image. Pixels outside
that circle will show up as "no data".

Azimuth is represented as angle from a line drawn from the center of the image
to the center of the top edge of the image, where center-top is 0 degrees
(North), the center of the right edge is 90 degrees (East), etc.

Altitude (elevation) is represented as distance from the center of the image,
where the center of the image represents vertical up from the point of view of
an observer located at the dish (zenith, which is usually not the physical
direction the dish is pointing) and the further away from the center a pixel
is, the closer to the horizon it is, down to a minimum altitude angle at the
edge of the circle.
"""

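A rough sketch of the mapping the docstring describes, converting an (azimuth, altitude) direction into image coordinates; the function and its minimum-altitude parameter are illustrative only, not code from this repository:

```python
import math

def direction_to_pixel(az_deg, alt_deg, size, min_alt_deg=0.0):
    # Zenith (altitude 90) maps to the image center; the radius grows as the
    # altitude angle falls toward min_alt_deg at the edge of the circle.
    radius = (90.0 - alt_deg) / (90.0 - min_alt_deg) * (size / 2.0)
    # Azimuth 0 (North) points at the center of the top edge, 90 (East) at
    # the center of the right edge, per the docstring above.
    x = size / 2.0 + radius * math.sin(math.radians(az_deg))
    y = size / 2.0 - radius * math.cos(math.radians(az_deg))
    return round(x), round(y)
```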
import argparse
from datetime import datetime
import logging
import os
import png
import sys
import time

import starlink_grpc

DEFAULT_OBSTRUCTED_COLOR = "FFFF0000"
DEFAULT_UNOBSTRUCTED_COLOR = "FFFFFFFF"
DEFAULT_NO_DATA_COLOR = "00000000"
DEFAULT_OBSTRUCTED_GREYSCALE = "FF00"
DEFAULT_UNOBSTRUCTED_GREYSCALE = "FFFF"
DEFAULT_NO_DATA_GREYSCALE = "0000"
LOOP_TIME_DEFAULT = 0


def loop_body(opts, context):
    try:
        snr_data = starlink_grpc.obstruction_map(context)
    except starlink_grpc.GrpcError as e:
        logging.error("Failed getting obstruction map data: %s", str(e))
        return 1

    def pixel_bytes(row):
        for point in row:
            if point > 1.0:
                # shouldn't happen, but just in case...
                point = 1.0

            if point >= 0.0:
                if opts.greyscale:
                    yield round(point * opts.unobstructed_color_g +
                                (1.0 - point) * opts.obstructed_color_g)
                else:
                    yield round(point * opts.unobstructed_color_r +
                                (1.0 - point) * opts.obstructed_color_r)
                    yield round(point * opts.unobstructed_color_g +
                                (1.0 - point) * opts.obstructed_color_g)
                    yield round(point * opts.unobstructed_color_b +
                                (1.0 - point) * opts.obstructed_color_b)
                if not opts.no_alpha:
                    yield round(point * opts.unobstructed_color_a +
                                (1.0 - point) * opts.obstructed_color_a)
            else:
                if opts.greyscale:
                    yield opts.no_data_color_g
                else:
                    yield opts.no_data_color_r
                    yield opts.no_data_color_g
                    yield opts.no_data_color_b
                if not opts.no_alpha:
                    yield opts.no_data_color_a

    if opts.filename == "-":
        # Open new stdout file to get binary mode
        out_file = os.fdopen(sys.stdout.fileno(), "wb", closefd=False)
    else:
        now = int(time.time())
        filename = opts.filename.replace("%u", str(now))
        filename = filename.replace("%d",
                                    datetime.utcfromtimestamp(now).strftime("%Y_%m_%d_%H_%M_%S"))
        filename = filename.replace("%s", str(opts.sequence))
        out_file = open(filename, "wb")
    if not snr_data or not snr_data[0]:
        logging.error("Invalid SNR map data: Zero-length")
        return 1
    writer = png.Writer(len(snr_data[0]),
                        len(snr_data),
                        alpha=(not opts.no_alpha),
                        greyscale=opts.greyscale)
    writer.write(out_file, (bytes(pixel_bytes(row)) for row in snr_data))
    out_file.close()

    opts.sequence += 1
    return 0


def parse_args():
    parser = argparse.ArgumentParser(
        description="Collect directional obstruction map data from a Starlink user terminal and "
        "emit it as a PNG image")
    parser.add_argument(
        "filename",
        nargs="?",
        help="The image file to write, or - to write to stdout; may be a template with the "
        "following to be filled in per loop iteration: %%s for sequence number, %%d for UTC date "
        "and time, %%u for seconds since Unix epoch.")
    parser.add_argument(
        "-o",
        "--obstructed-color",
        help="Color of obstructed areas, in RGB, ARGB, L, or AL hex notation, default: " +
        DEFAULT_OBSTRUCTED_COLOR + " or " + DEFAULT_OBSTRUCTED_GREYSCALE)
    parser.add_argument(
        "-u",
        "--unobstructed-color",
        help="Color of unobstructed areas, in RGB, ARGB, L, or AL hex notation, default: " +
        DEFAULT_UNOBSTRUCTED_COLOR + " or " + DEFAULT_UNOBSTRUCTED_GREYSCALE)
    parser.add_argument(
        "-n",
        "--no-data-color",
        help="Color of areas with no data, in RGB, ARGB, L, or AL hex notation, default: " +
        DEFAULT_NO_DATA_COLOR + " or " + DEFAULT_NO_DATA_GREYSCALE)
    parser.add_argument(
        "-g",
        "--greyscale",
        action="store_true",
        help="Emit a greyscale image instead of the default full color image; greyscale images "
        "use L or AL hex notation for the color options")
    parser.add_argument(
        "-z",
        "--no-alpha",
        action="store_true",
        help="Emit an image without alpha (transparency) channel instead of the default that "
        "includes alpha channel")
    parser.add_argument("-e",
                        "--target",
                        help="host:port of dish to query, default is the standard IP address "
                        "and port (192.168.100.1:9200)")
    parser.add_argument("-t",
                        "--loop-interval",
                        type=float,
                        default=float(LOOP_TIME_DEFAULT),
                        help="Loop interval in seconds or 0 for no loop, default: " +
                        str(LOOP_TIME_DEFAULT))
    parser.add_argument("-s",
                        "--sequence",
                        type=int,
                        default=1,
                        help="Starting sequence number for templatized filenames, default: 1")
    parser.add_argument("-r",
                        "--reset",
                        action="store_true",
                        help="Reset obstruction map data before starting")
    opts = parser.parse_args()

    if opts.filename is None and not opts.reset:
        parser.error("Must specify a filename unless resetting")

    if opts.obstructed_color is None:
        opts.obstructed_color = DEFAULT_OBSTRUCTED_GREYSCALE if opts.greyscale else DEFAULT_OBSTRUCTED_COLOR
    if opts.unobstructed_color is None:
        opts.unobstructed_color = DEFAULT_UNOBSTRUCTED_GREYSCALE if opts.greyscale else DEFAULT_UNOBSTRUCTED_COLOR
    if opts.no_data_color is None:
        opts.no_data_color = DEFAULT_NO_DATA_GREYSCALE if opts.greyscale else DEFAULT_NO_DATA_COLOR

    for option in ("obstructed_color", "unobstructed_color", "no_data_color"):
        try:
            color = int(getattr(opts, option), 16)
            if opts.greyscale:
                setattr(opts, option + "_a", (color >> 8) & 255)
                setattr(opts, option + "_g", color & 255)
            else:
                setattr(opts, option + "_a", (color >> 24) & 255)
                setattr(opts, option + "_r", (color >> 16) & 255)
                setattr(opts, option + "_g", (color >> 8) & 255)
                setattr(opts, option + "_b", color & 255)
        except ValueError:
            logging.error("Invalid hex number for %s", option)
            sys.exit(1)

    return opts


def main():
    opts = parse_args()

    logging.basicConfig(format="%(levelname)s: %(message)s")

    context = starlink_grpc.ChannelContext(target=opts.target)

    # initialize rc so a --reset-only run (no filename) still exits cleanly
    rc = 0
    try:
        if opts.reset:
            starlink_grpc.reset_obstruction_map(context)

        if opts.filename is not None:
            next_loop = time.monotonic()
            while True:
                rc = loop_body(opts, context)
                if opts.loop_interval > 0.0:
                    now = time.monotonic()
                    next_loop = max(next_loop + opts.loop_interval, now)
                    time.sleep(next_loop - now)
                else:
                    break
    finally:
        context.close()

    sys.exit(rc)


if __name__ == "__main__":
    main()
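`pixel_bytes()` linearly interpolates each channel between the obstructed and unobstructed colors using the per-pixel signal value in [0, 1]. A minimal illustration of that blend; with the default colors (ARGB FFFF0000 obstructed, FFFFFFFF unobstructed) the red channel stays at 255 while green and blue climb from 0 to 255 as signal quality improves:

```python
def blend(point, obstructed, unobstructed):
    # Same per-channel interpolation as pixel_bytes() above
    return round(point * unobstructed + (1.0 - point) * obstructed)

print(blend(0.0, 0x00, 0xFF))  # 0: fully obstructed
print(blend(0.5, 0x00, 0xFF))  # 128: halfway
print(blend(1.0, 0x00, 0xFF))  # 255: unobstructed
```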
@@ -0,0 +1,29 @@
#!/usr/bin/env python3
"""Simple example of get_status request using grpc call directly."""

import sys

import grpc

try:
    from spacex_api.device import device_pb2
    from spacex_api.device import device_pb2_grpc
except ModuleNotFoundError:
    print("This script requires the generated gRPC protocol modules. See README file for details.",
          file=sys.stderr)
    sys.exit(1)

# Note that if you remove the 'with' clause here, you need to separately
# call channel.close() when you're done with the gRPC connection.
with grpc.insecure_channel("192.168.100.1:9200") as channel:
    stub = device_pb2_grpc.DeviceStub(channel)
    response = stub.Handle(device_pb2.Request(get_status={}), timeout=10)

# Dump everything
print(response)

# Just the software version
print("Software version:", response.dish_get_status.device_info.software_version)

# Check if connected
print("Not connected" if response.dish_get_status.HasField("outage") else "Connected")
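As the comment in the example notes, dropping the `with` block means closing the channel yourself. An equivalent sketch using try/finally, assuming the same imports as the example above:

```python
channel = grpc.insecure_channel("192.168.100.1:9200")
try:
    stub = device_pb2_grpc.DeviceStub(channel)
    response = stub.Handle(device_pb2.Request(get_status={}), timeout=10)
finally:
    # Close explicitly since there is no 'with' block to do it for us
    channel.close()
```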
@@ -0,0 +1,5 @@
#!/bin/sh

printenv >> /etc/environment
ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone
exec /usr/local/bin/python3 "$@"
@@ -0,0 +1,142 @@
#!/usr/bin/env python3
"""Poll and record service information from a gRPC reflection server

This script will query a gRPC reflection server for descriptor information of
all services supported by the server, excluding the reflection service itself,
and write a serialized FileDescriptorSet protobuf containing all returned
descriptors to a file, either once or in a periodic loop. This file can then
be used by any tool that accepts such data, including protoc, the protocol
buffer compiler.

Output files are named with the CRC32 value and byte length of the serialized
FileDescriptorSet data. If those match the name of a file written previously,
the data is assumed not to have changed and no new file is written. For this
reason, it is recommended to use an output directory specific to the server,
to avoid mixing with files written with data from other servers.

Although the default target option is the local IP and port number used by the
gRPC service on a Starlink user terminal, this script is otherwise not
specific to Starlink and should work for any gRPC server that does not require
SSL and that has the reflection service enabled.
"""

import argparse
import binascii
import logging
import os
import sys
import time

import grpc
from yagrc import dump
from yagrc import reflector

TARGET_DEFAULT = "192.168.100.1:9200"
LOOP_TIME_DEFAULT = 0
RETRY_DELAY_DEFAULT = 0


def parse_args():
    parser = argparse.ArgumentParser(
        description="Poll a gRPC reflection server and record a serialized "
        "FileDescriptorSet (protoset) of the reflected information")

    parser.add_argument("outdir",
                        nargs="?",
                        metavar="OUTDIR",
                        help="Directory in which to write protoset files")
    parser.add_argument("-g",
                        "--target",
                        default=TARGET_DEFAULT,
                        help="host:port of device to query, default: " + TARGET_DEFAULT)
    parser.add_argument("-n",
                        "--print-only",
                        action="store_true",
                        help="Print the protoset filename instead of writing the data")
    parser.add_argument("-r",
                        "--retry-delay",
                        type=float,
                        default=float(RETRY_DELAY_DEFAULT),
                        help="Time in seconds to wait before retrying after network "
                        "error or 0 for no retry, default: " + str(RETRY_DELAY_DEFAULT))
    parser.add_argument("-t",
                        "--loop-interval",
                        type=float,
                        default=float(LOOP_TIME_DEFAULT),
                        help="Loop interval in seconds or 0 for no loop, default: " +
                        str(LOOP_TIME_DEFAULT))
    parser.add_argument("-v", "--verbose", action="store_true", help="Be verbose")

    opts = parser.parse_args()

    if opts.outdir is None and not opts.print_only:
        parser.error("Output dir is required unless --print-only option set")

    return opts


def loop_body(opts):
    while True:
        try:
            with grpc.insecure_channel(opts.target) as channel:
                protoset = dump.dump_protocols(channel)
            break
        except reflector.ServiceError as e:
            logging.error("Problem with reflection service: %s", str(e))
            # Only retry on network-related errors, not service errors
            return
        except grpc.RpcError as e:
            # grpc.RpcError error message is not very useful, but grpc.Call has
            # something slightly better
            if isinstance(e, grpc.Call):
                msg = e.details()
            else:
                msg = "Unknown communication or service error"
            print("Problem communicating with reflection service:", msg)
            if opts.retry_delay > 0.0:
                time.sleep(opts.retry_delay)
            else:
                return

    filename = "{0:08x}_{1}.protoset".format(binascii.crc32(protoset), len(protoset))
    if opts.print_only:
        print("Protoset:", filename)
    else:
        try:
            with open(filename, mode="xb") as outfile:
                outfile.write(protoset)
            print("New protoset found:", filename)
        except FileExistsError:
            if opts.verbose:
                print("Existing protoset:", filename)


def goto_dir(outdir):
    try:
        outdir_abs = os.path.abspath(outdir)
        os.makedirs(outdir_abs, exist_ok=True)
        os.chdir(outdir)
    except OSError as e:
        logging.error("Output directory error: %s", str(e))
        sys.exit(1)


def main():
    opts = parse_args()
    logging.basicConfig(format="%(levelname)s: %(message)s")
    if not opts.print_only:
        goto_dir(opts.outdir)

    next_loop = time.monotonic()
    while True:
        loop_body(opts)
        if opts.loop_interval > 0.0:
            now = time.monotonic()
            next_loop = max(next_loop + opts.loop_interval, now)
            time.sleep(next_loop - now)
        else:
            break


if __name__ == "__main__":
    main()
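The `.protoset` files this script writes are serialized `FileDescriptorSet` messages, so they can be loaded back with the stock protobuf runtime. A sketch of reading one; the filename here is a made-up example of the `crc32_length` naming scheme described above:

```python
from google.protobuf import descriptor_pb2

with open("0a1b2c3d_12345.protoset", "rb") as f:
    fds = descriptor_pb2.FileDescriptorSet()
    fds.ParseFromString(f.read())

# List the .proto files captured from the reflection server
for file_proto in fds.file:
    print(file_proto.name)
```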
@@ -0,0 +1,101 @@
"""Shared logic for main loop control.

This module provides support for running a function from a loop at fixed
intervals using monotonic time or on a cron-like schedule using wall clock
time.

The cron scheduler uses the same schedule format string that cron uses for
crontab entries, and will do its best to remain on schedule despite clock
adjustments.
"""

try:
    from croniter import croniter
    import dateutil.tz
    croniter_ok = True
except ImportError:
    croniter_ok = False
from datetime import datetime
import signal
import time

# Max time to sleep when using non-monotonic time. This helps protect against
# oversleeping as the result of large clock adjustments.
MAX_SLEEP = 3600.0


class Terminated(Exception):
    pass


def handle_sigterm(signum, frame):
    # Turn SIGTERM into an exception so main loop can clean up
    raise Terminated


def add_args(parser):
    group = parser.add_argument_group(title="Loop options")
    group.add_argument("-t", "--loop-interval", type=float, help="Run loop at interval, in seconds")
    group.add_argument("-c",
                       "--loop-cron",
                       help="Run loop on schedule defined by cron format expression")
    group.add_argument("-m",
                       "--cron-timezone",
                       help='Timezone name (IANA name or "UTC") to use for --loop-cron '
                       'schedule; default is system local time')


def check_args(opts, parser):
    if opts.loop_interval is not None and opts.loop_cron is not None:
        parser.error("At most one of --loop-interval and --loop-cron may be used")

    if opts.cron_timezone and not opts.loop_cron:
        parser.error("cron timezone specified, but not using cron scheduling")

    if opts.loop_cron is not None:
        if not croniter_ok:
            parser.error("croniter is not installed, --loop-cron requires it")
        if not croniter.is_valid(opts.loop_cron):
            parser.error("Invalid cron format")
        opts.timezone = dateutil.tz.gettz(opts.cron_timezone)
        if opts.timezone is None:
            if opts.cron_timezone is None:
                parser.error("Failed to get local timezone, may need to use --cron-timezone")
            else:
                parser.error("Invalid timezone name")

    if opts.loop_interval is None:
        opts.loop_interval = 0.0


def run_loop(opts, loop_body, *loop_args):
    signal.signal(signal.SIGTERM, handle_sigterm)

    rc = 0
    try:
        if opts.loop_interval <= 0.0 and not opts.loop_cron:
            rc = loop_body(*loop_args)
        elif opts.loop_cron:
            criter = croniter(opts.loop_cron, datetime.now(tz=opts.timezone))
            now = time.time()
            next_loop = criter.get_next(start_time=now)
            while True:
                while now < next_loop:
                    # This is to protect against clock getting set backwards
                    # by a large amount. Normally, it should do nothing:
                    next_loop = criter.get_next(start_time=now)
                    time.sleep(min(next_loop - now, MAX_SLEEP))
                    now = time.time()
                next_loop = criter.get_next(start_time=now)
                rc = loop_body(*loop_args)
                now = time.time()
        else:
            next_loop = time.monotonic()
            while True:
                rc = loop_body(*loop_args)
                now = time.monotonic()
                next_loop = max(next_loop + opts.loop_interval, now)
                time.sleep(next_loop - now)
    except (KeyboardInterrupt, Terminated):
        pass

    return rc
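A minimal sketch of wiring these helpers into a script; the module name `loop_util` is assumed here, since this diff does not show the file's name:

```python
import argparse

import loop_util  # assumed name for the module above

parser = argparse.ArgumentParser()
loop_util.add_args(parser)
opts = parser.parse_args([])  # no loop options: the body runs once
loop_util.check_args(opts, parser)

def body():
    print("tick")
    return 0

rc = loop_util.run_loop(opts, body)  # returns the last body() result
```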
@@ -0,0 +1,10 @@
[build-system]
requires = [
    "setuptools>=42",
    "setuptools_scm[toml]>=3.4",
    "wheel"
]
build-backend = "setuptools.build_meta"

[tool.setuptools_scm]
root = ".."
@@ -0,0 +1,27 @@
[metadata]
name = starlink-grpc-core
url = https://github.com/sparky8512/starlink-grpc-tools
author_email = sparky8512-py@yahoo.com
license_files = ../LICENSE
classifiers =
    Development Status :: 4 - Beta
    Intended Audience :: Developers
    License :: OSI Approved :: The Unlicense (Unlicense)
    Operating System :: OS Independent
    Programming Language :: Python :: 3
    Topic :: Software Development :: Libraries :: Python Modules
description = Core functions for Starlink gRPC communication
long_description = file: README.md
long_description_content_type = text/markdown

[options]
install_requires =
    grpcio>=1.12.0
    protobuf>=3.6.0
    yagrc>=1.1.1
    typing-extensions>=4.3.0
package_dir =
    =..
py_modules =
    starlink_grpc
python_requires = >=3.7
@@ -0,0 +1,3 @@
import setuptools

setuptools.setup()