NCAA Digital Transformation - Offline Operations & Sync Module — Software Requirements Specification (SRS)
Table of Contents
- 1 Document Information
- 2 Project Overview
- 3 User Requirements
- 4 Technical Requirements
- 5 External Dependencies
- 6 Release Planning
- 7 Risks & Assumptions
- 8 Market-Specific Considerations
- 9 Sign-Off
- 10 Detailed Feature Requirements
- 10.1 Ft Sync Offline Mode
- 10.2 Ft Sync Offline Indicator
- 10.3 Ft Sync Offline Queue
- 10.4 Ft Sync Offline Performance
- 10.5 Ft Sync Gate To Gate
- 10.6 Ft Sync Priority
- 10.7 Ft Sync Batch
- 10.8 Ft Sync Conflict Detect
- 10.9 Ft Sync Conflict Resolve
- 10.10 Ft Sync Manual Trigger
- 10.11 Ft Sync Integrity
- 10.12 Ft Sync Backup Hourly
- 10.13 Ft Sync Backup Daily
- 10.14 Ft Sync Backup Weekly
- 10.15 Ft Sync Restore Hourly
- 10.16 Ft Sync Restore Point
- 10.17 Ft Sync Ups Integration
- 10.18 Ft Sync Ups Shutdown
- 10.19 Ft Sync Ups Alert
- 10.20 Ft Sync Power Recovery
- 10.21 Ft Sync Storage Nvme
- 10.22 Ft Sync Storage Nas
- 10.23 Ft Sync Storage Monitoring
- 10.24 Ft Sync Storage Cleanup
- 10.25 Ft Sync Net Detect
- 10.26 Ft Sync Net Slow
- 10.27 Ft Sync Net Retry
- 10.28 Ft Sync Net Cellular
- 10.29 Ft Sync Monitor Status
- 10.30 Ft Sync Monitor Alerts
- 10.31 Ft Sync Monitor Logs
- 10.32 Ft Sync Monitor Metrics
- 11 Additional Context
1 Document Information
| Field | Value |
|---|---|
| Project Name | NCAA Digital Transformation - Offline Operations & Sync Module |
| Version | 1.0 |
| Date | 2025-11-06 |
| Project Manager | TBD |
| Tech Lead | TBD |
| QA Lead | TBD |
| Platforms | Ubuntu Server 22.04, PostgreSQL 15+, NAS (Linux) |
| Document Status | Draft |
| Module Code | OFFLINE_SYNC |
| Parent Project | NCAA Digital Transformation - Ngorongoro Gateway System |
2 Project Overview
2.1 What Are We Building
2.1.1 System Function
An offline-first data management and synchronization system that enables all 9 gates to operate independently without network connectivity while maintaining data consistency and integrity. The system comprises local PostgreSQL storage on an Intel NUC at each gate, hourly backups to NAS RAID 1, gate-to-gate synchronization with a 15-minute maximum delay, and UPS power backup providing 2-4 hours of runtime.
2.1.2 Users
- Gate Staff: Operate system during network outages
- System Administrators: Monitor sync status and resolve conflicts
- Technical Support: Troubleshoot sync issues
- Management: Monitor system health and uptime
2.1.3 Problem Solved
- Safari portal system frequently down due to network failures
- 30+ minute wait times for data synchronization at Old HQ
- Manual workarounds required when the system is unavailable
- Data-loss risk from paper records
- No backup systems at remote locations (Lemala 2 has no computer)
- Slow internet at Ndutu and Olduvai causing service delays
- System misbehavior at Main Gate showing false overstay errors
2.1.4 Key Success Metrics
- 99% system uptime regardless of network status
- Gate-to-gate sync within 15 minutes maximum
- Zero data loss
- Hourly backups completed successfully
- 2-4 hours of operation on UPS during power outages
- Automatic resolution of at least 95% of sync conflicts
2.2 Scope
2.2.1 In Scope
- Local PostgreSQL database on Intel NUC at each gate
- Fast NVMe SSD storage for live operations
- Hourly automated backup to NAS RAID 1
- Daily NAS snapshots for point-in-time recovery
- Weekly encrypted USB backup to safe
- Gate-to-gate data synchronization over HTTP API
- Conflict detection and resolution
- Network failure detection and automatic offline mode
- Sync queue management for batch processing
- UPS integration for power backup (2-4 hours)
- Data integrity checks and validation
- Sync monitoring and alerting
- Manual sync trigger for urgent updates
2.2.2 Out Of Scope
- Real-time streaming replication (eventual consistency acceptable)
- Distributed database clustering
- Automatic failover to cloud systems
- Satellite internet connectivity
- Multi-master write conflicts (single master per gate for gate-specific data)
3 User Requirements
3.1 Offline Operation
| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-OFFLINE-MODE | Operate system fully offline when network unavailable | Continue processing visitors at gates with no/poor connectivity (Ndutu, Lemala, slow Olduvai) | Must | All core functions available offline: registration, permit verification, vehicle logging, capacity tracking. Network detection automatic. |
| FT-SYNC-OFFLINE-INDICATOR | See clear visual indicator of online/offline status | Know when system is operating offline and when sync will occur | Must | Status bar showing: Online (syncing), Offline (X hours since last sync), Syncing now. Color-coded: green/yellow/red. |
| FT-SYNC-OFFLINE-QUEUE | View pending changes waiting to sync | Monitor what data needs to be synchronized when network returns | Should | Queue shows: pending records count, oldest pending change time, estimated sync time. |
| FT-SYNC-OFFLINE-PERFORMANCE | Experience same performance whether online or offline | Maintain service quality regardless of network status | Must | Local NVMe SSD ensures fast query response. No performance degradation offline. |
3.2 Data Synchronization
| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-GATE-TO-GATE | Sync data between gates within 15 minutes maximum | Maintain near real-time capacity tracking and prevent duplicate entries | Must | Small payloads (10-50KB). HTTP API between NUCs. Works over 2G. Automatic retry on failure. |
| FT-SYNC-PRIORITY | Prioritize critical data for sync (permits, capacity) over lower priority (historical reports) | Ensure most important data syncs first on limited bandwidth | Must | Priority levels: Critical (permits, capacity - 5min target), High (vehicle logs - 15min), Normal (reports - 1hr), Low (archives - 24hr). |
| FT-SYNC-BATCH | Batch multiple changes for efficient sync | Minimize network overhead on slow connections (2G at remote gates) | Must | Batch size: 100 records or 50KB, whichever comes first. Compression enabled. |
| FT-SYNC-CONFLICT-DETECT | Automatically detect sync conflicts (same record modified at multiple gates) | Prevent data corruption and inconsistencies | Must | Timestamp-based conflict detection. Last-write-wins for most data. Manual resolution for critical conflicts. |
| FT-SYNC-CONFLICT-RESOLVE | Automatically resolve common sync conflicts | Minimize manual intervention (target 95% auto-resolution) | Must | Rules: Vehicle logs append-only, capacity updates merge, permit updates from originating gate wins. Conflict log maintained. |
| FT-SYNC-MANUAL-TRIGGER | Manually trigger sync for urgent updates | Immediately sync critical changes (emergency permit extensions) | Must | Manual sync button. Confirmation dialog showing sync scope. Progress indicator. |
| FT-SYNC-INTEGRITY | Validate data integrity during sync | Ensure no data corruption during transmission | Must | Checksums for each batch. Transaction rollback on failure. Retry mechanism. |
3.3 Backup & Recovery
| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-BACKUP-HOURLY | Automatically back up PostgreSQL to NAS every hour | Protect against NUC SSD failure with hourly granularity | Must | pg_dump + rsync to NAS. Backup time <10 minutes. Verification after backup. Retention: 7 days hourly. |
| FT-SYNC-BACKUP-DAILY | Create daily snapshots on NAS for point-in-time recovery | Recover from data corruption or accidental deletion | Must | Daily NAS snapshots at midnight. Retention: 30 days. Space-efficient incremental snapshots. |
| FT-SYNC-BACKUP-WEEKLY | Create weekly encrypted USB backups for disaster recovery | Maintain offline backup in safe for catastrophic failures | Must | Weekly USB backup. AES-256 encryption. Physical storage in safe. Retention: 12 weeks. |
| FT-SYNC-RESTORE-HOURLY | Restore from hourly NAS backup within 30 minutes | Quickly recover from NUC failure | Must | Documented restore procedure. Tested monthly. Restore script automated. |
| FT-SYNC-RESTORE-POINT | Restore to specific point in time from daily snapshots | Recover from data corruption or user errors | Must | Point-in-time recovery UI. Preview restore before commit. Backup current state before restore. |
3.4 Power Management
| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-UPS-INTEGRATION | Integrate with UPS for 2-4 hour power backup | Continue operations during power outages | Must | UPS 1000VA powers NUC + NAS + Switch. Runtime: 2-4 hours. Low battery alerts. |
| FT-SYNC-UPS-SHUTDOWN | Gracefully shut down systems when UPS battery critically low | Prevent data corruption from sudden power loss | Must | 10% battery triggers graceful shutdown. Save all pending changes. Close database connections. Shutdown sequence: PWA → PostgreSQL → OS. |
| FT-SYNC-UPS-ALERT | Receive alerts when on UPS power or battery low | Take action before system shutdown | Must | Visual alert on PWA. SMS to technical staff. Battery percentage displayed. |
| FT-SYNC-POWER-RECOVERY | Automatically restart and resume operations when power restored | Minimize downtime without manual intervention | Must | Auto-boot on power restore. Database integrity check. Resume pending syncs. Staff notification. |
3.5 Local Storage
| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-STORAGE-NVME | Store live operational data on fast NVMe SSD | Ensure quick query response for gate operations | Must | 512GB NVMe SSD on NUC. PostgreSQL optimized for SSD. Query response <100ms. |
| FT-SYNC-STORAGE-NAS | Archive historical data on NAS RAID 1 | Protect against disk failure with redundancy | Must | 2x 2TB drives in RAID 1. Survives single disk failure. Hot-swap capability. |
| FT-SYNC-STORAGE-MONITORING | Monitor storage usage and receive alerts when space low | Prevent system failure due to full disk | Must | Alert at 80% full. Critical alert at 90%. Automatic archive to NAS when NUC SSD >80%. |
| FT-SYNC-STORAGE-CLEANUP | Automatically archive old data from NUC SSD to NAS | Manage limited SSD space efficiently | Must | Archive records >30 days old to NAS. Keep current month on NUC SSD for fast access. |
3.6 Network Resilience
| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-NET-DETECT | Automatically detect network availability | Switch between online and offline modes seamlessly | Must | Ping test to Old HQ every 30 seconds. Exponential backoff on failures. Switch to offline mode after 3 failed pings. |
| FT-SYNC-NET-SLOW | Detect slow network and adjust sync strategy | Optimize for 2G connections at remote gates | Must | Bandwidth detection. Reduce batch size on slow connection. Increase priority threshold. |
| FT-SYNC-NET-RETRY | Automatically retry failed sync with exponential backoff | Handle intermittent network issues without manual intervention | Must | Retry: immediately, 30sec, 1min, 5min, 15min, then hourly. Max 10 retries before manual intervention needed. |
| FT-SYNC-NET-CELLULAR | Support cellular data (2G/3G) for sync | Sync even with minimal connectivity | Must | Works over 2G. Small payloads optimized for slow connections. Compression enabled. |
3.7 Monitoring & Alerting
| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-MONITOR-STATUS | View sync status dashboard for all 9 gates | Monitor system health from Old HQ | Must | Dashboard shows: last sync time, pending records, online/offline status, errors. Per-gate view. |
| FT-SYNC-MONITOR-ALERTS | Receive alerts for sync failures or delays | Proactively address issues before they impact operations | Must | Alert conditions: Sync delay >1 hour, backup failure, storage >80%, UPS on battery, conflict requiring manual resolution. |
| FT-SYNC-MONITOR-LOGS | Access detailed sync logs for troubleshooting | Diagnose and resolve sync issues | Must | Logs: sync attempts, success/failure, duration, records synced, conflicts, errors. Searchable. 30-day retention. |
| FT-SYNC-MONITOR-METRICS | View historical sync performance metrics | Identify trends and optimize sync strategy | Should | Metrics: avg sync time, success rate, network uptime, backup success rate. Charts and graphs. |
4 Technical Requirements
4.1 Performance Standards
| Requirement | Target | How To Test |
|---|---|---|
| Gate-to-gate sync delay | ≤ 15 minutes maximum | Create record at one gate, verify receipt at others within 15 min |
| Hourly backup duration | < 10 minutes | Monitor pg_dump + rsync time for typical database size |
| Database query response | < 100ms for 95% of queries | Load testing with typical query patterns |
| System uptime | 99% regardless of network status | Offline operation tests, uptime monitoring over 30 days |
| UPS runtime | 2-4 hours for NUC + NAS + Switch | Full load test with UPS disconnected from mains |
| Conflict auto-resolution rate | ≥ 95% | Synthetic conflict scenarios, measure manual intervention rate |
4.2 Platform Requirements
| Platform | Minimum Version | Target Version | Notes |
|---|---|---|---|
| Database | PostgreSQL 15 | PostgreSQL 16+ | Logical replication features, better performance |
| Operating System | Ubuntu Server 22.04 LTS | Ubuntu Server 24.04 LTS | Long-term support, security updates |
| Storage | 512GB NVMe SSD, 2x 2TB HDD RAID 1 | 1TB NVMe SSD, 2x 4TB HDD RAID 1 | Future-proofing for data growth |
4.3 Security & Privacy
| Requirement | Must Have | Implementation |
|---|---|---|
| Data encryption at rest | True | AES-256 encryption for NAS backups and USB backups |
| Data encryption in transit | True | TLS 1.2+ for gate-to-gate sync over HTTP API |
| Backup integrity verification | True | Checksums verified after every backup, test restore monthly |
| Access control for backups | True | Role-based access to backup systems, audit trail for restore operations |
5 External Dependencies
5.1 Third Party Services
| Service | What It Does | Criticality | Backup Plan |
|---|---|---|---|
| SMS Gateway (optional) | Send alerts for critical sync failures | Nice to have | Email alerts only |
5.2 Device Requirements
| Feature | Required | Optional | Notes |
|---|---|---|---|
| UPS 1000VA | True | False | Powers NUC + NAS + Switch for 2-4 hours. USB or network management interface. |
| Network connectivity (intermittent OK) | True | False | Works with 2G minimum. Offline operation when unavailable. |
| NAS with RAID 1 | True | False | 4-bay Synology/QNAP, 2x 2TB minimum, supports snapshots |
6 Release Planning
6.1 Development Phases
| Phase | Features Included | Timeline | Success Criteria |
|---|---|---|---|
| Phase 1 (Single Gate Testing) | Local PostgreSQL setup, offline operation, NAS backup, UPS integration, basic monitoring | 6 weeks | Single gate operational offline for 7 days, hourly backups successful, restore tested |
| Phase 2 (Multi-Gate Sync - 3 Gates) | Gate-to-gate sync, conflict detection & resolution, priority sync, sync monitoring dashboard | 8 weeks | 3 gates syncing within 15 minutes, 95% auto-conflict resolution, zero data loss |
| Phase 3 (Full Deployment - 9 Gates) | All 9 gates operational, full monitoring & alerting, performance optimization, network resilience features | 8 weeks | All gates syncing reliably, 99% uptime, comprehensive monitoring in place |
6.2 Release Checklist
- PostgreSQL configured on all NUC units
- NAS backup systems operational at all gates
- UPS installed and tested (2-4hr runtime verified)
- Gate-to-gate sync tested and performing within 15min SLA
- Conflict resolution rules implemented and tested
- Offline operation tested at all gates for 24+ hours
- Backup and restore procedures documented and tested
- Monitoring dashboard operational at Old HQ
- Alert system configured and tested
- Weekly USB backups established
- Staff trained on sync monitoring and manual triggers
7 Risks & Assumptions
7.1 Risks
| Risk | Probability | Impact | Mitigation |
|---|---|---|---|
| Network outages longer than 4 hours causing significant sync delays | Medium | Medium | Offline-first design, large sync queues, manual sync triggers when network restored, acceptable delay up to 24 hours for non-critical data |
| Sync conflicts requiring manual resolution overwhelming staff | Low | Medium | 95% auto-resolution target, conflict resolution training, clear escalation procedures |
| Storage capacity exhausted due to unexpected data growth | Low | High | Storage monitoring with 80% alerts, automatic archival, spare drives available, documented expansion procedure |
| UPS battery degradation reducing runtime below 2 hours | Medium | Medium | Annual UPS battery replacement, runtime testing quarterly, spare batteries stocked |
| Database corruption requiring restore | Low | High | Hourly backups with verification, daily snapshots, weekly offline backups, tested restore procedures |
7.2 Assumptions
- Network connectivity available intermittently (even if slow/unreliable)
- Gate staff can operate system with clear offline/online indicators
- 15-minute sync delay acceptable for capacity management
- Power outages typically shorter than 4 hours (UPS capacity)
- NUC hardware reliable for 24/7 operation in remote conditions
- RAID 1 on NAS provides sufficient redundancy for backup data
- Sync conflicts infrequent due to gate-specific data partitioning
8 Market-Specific Considerations
8.1 Primary Market
- Ngorongoro Conservation Area, Tanzania - 9 remote gates
8.2 Target Demographics
- Gate staff operating in offline conditions
- System administrators monitoring from Old HQ
8.3 Local Considerations
- Very limited network connectivity at remote locations (Ndutu no cellular, Lemala 1&2 low connectivity)
- Slow internet even when available (2G speeds common)
- Power reliability issues requiring UPS backup
- Remote locations making physical repairs challenging (spare NUC units at Old HQ)
- Staff may have limited technical skills for troubleshooting
- Harsh environment conditions (dust, heat) affecting hardware
9 Sign-Off
9.1 Approval
| Role | Name | Signature | Date |
|---|---|---|---|
9.2 Document History
| Version | Date | Changes Made | Changed By |
|---|---|---|---|
| 1.0 | 2025-11-06 | Initial draft based on gate nodes architecture and field report observations | Development Team |
10 Detailed Feature Requirements
10.1 Ft Sync Offline Mode
10.1.1 Priority
Must Have
10.1.2 User Story
As a gate staff member, I want to operate the system fully offline when the network is unavailable so that I can continue processing visitors at gates with no or poor connectivity
10.1.3 Preconditions
Local PostgreSQL database operational; network detection configured; core functions available offline
10.1.4 Postconditions
All core operations functional offline; network status automatically detected; seamless mode switching
10.1.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-OFFLINE-TC-001 | Operate registration offline for 24 hours | High |
| SYNC-OFFLINE-TC-002 | Operate permit verification offline with local database | High |
| SYNC-OFFLINE-TC-003 | Operate vehicle logging offline | High |
| SYNC-OFFLINE-TC-004 | Track capacity offline at remote gates (Ndutu, Lemala) | High |
| SYNC-OFFLINE-TC-005 | Verify automatic network detection and mode switching | High |
10.2 Ft Sync Offline Indicator
10.2.1 Priority
Must Have
10.2.2 User Story
As a gate staff member, I want to see a clear visual indicator of online/offline status so that I know when the system is operating offline and when sync will occur
10.2.3 Preconditions
Status bar implemented in PWA; network monitoring active; status updates in real-time
10.2.4 Postconditions
Status clearly visible; color-coded indicators working; time since last sync displayed
10.2.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-OFFLINE-TC-006 | Display green indicator when online and syncing | High |
| SYNC-OFFLINE-TC-007 | Display yellow/red indicator when offline with hours since last sync | High |
| SYNC-OFFLINE-TC-008 | Display 'Syncing now' indicator during active sync | High |
| SYNC-OFFLINE-TC-009 | Update status indicator within 30 seconds of network changes | Medium |
10.3 Ft Sync Offline Queue
10.3.1 Priority
Should Have
10.3.2 User Story
As a gate staff member, I want to view pending changes waiting to sync so that I can monitor what data needs to be synchronized when network returns
10.3.3 Preconditions
Sync queue implemented; pending records tracked; queue display interface available
10.3.4 Postconditions
Queue visible to staff; pending count accurate; estimated sync time displayed
10.3.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-OFFLINE-TC-010 | Display pending records count in queue | Medium |
| SYNC-OFFLINE-TC-011 | Display oldest pending change timestamp | Medium |
| SYNC-OFFLINE-TC-012 | Display estimated sync time based on queue size | Medium |
10.4 Ft Sync Offline Performance
10.4.1 Priority
Must Have
10.4.2 User Story
As a gate staff member, I want the same performance whether online or offline so that service quality is maintained regardless of network status
10.4.3 Preconditions
Local NVMe SSD storage; optimized database queries; performance benchmarks defined
10.4.4 Postconditions
Query response time <100ms offline; no performance degradation; user experience consistent
10.4.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-OFFLINE-TC-013 | Verify query response time <100ms offline | High |
| SYNC-OFFLINE-TC-014 | Load test with 300 visitors/day offline | High |
| SYNC-OFFLINE-TC-015 | Compare online vs offline performance (should be equivalent) | High |
10.5 Ft Sync Gate To Gate
10.5.1 Priority
Must Have
10.5.2 User Story
As a system administrator, I want to sync data between gates within 15 minutes maximum so that near real-time capacity tracking is maintained
10.5.3 Preconditions
HTTP API configured between NUCs; network connectivity available; sync scheduler running
10.5.4 Postconditions
Data synced within 15 minutes; capacity updates propagated; duplicate entries prevented
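As an illustrative sketch (not the committed implementation), the push side of the gate-to-gate HTTP sync could look like the Python example below. The peer URL and the `/api/sync/push` endpoint are placeholder names; the real API contract will be defined during Phase 2 design.
```python
import requests  # assumed HTTP client available on the NUC


def push_batch(peer_url: str, batch: list[dict], timeout: int = 60) -> bool:
    """Push one sync batch to a peer gate's NUC over the HTTP API.

    The endpoint path and payload shape are illustrative placeholders.
    """
    try:
        resp = requests.post(
            f"{peer_url}/api/sync/push",
            json={"records": batch},
            timeout=timeout,  # generous timeout to tolerate 2G latency
        )
        return resp.status_code == 200
    except requests.RequestException:
        return False  # caller schedules a retry (see FT-SYNC-NET-RETRY)
```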
10.5.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-GATE-TC-001 | Sync data between gates within 15 minutes | High |
| SYNC-GATE-TC-002 | Verify small payload size (10-50KB per sync) | High |
| SYNC-GATE-TC-003 | Test sync over 2G connection | High |
| SYNC-GATE-TC-004 | Verify automatic retry on sync failure | High |
| SYNC-GATE-TC-005 | Test sync across all 9 gates simultaneously | High |
10.6 Ft Sync Priority
10.6.1 Priority
Must Have
10.6.2 User Story
As a system administrator, I want to prioritize critical data for sync so that the most important data syncs first on limited bandwidth
10.6.3 Preconditions
Priority levels defined; sync queue prioritized; bandwidth limitations considered
10.6.4 Postconditions
Critical data synced within 5 minutes; high priority within 15 minutes; normal and low priority as bandwidth allows
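A minimal sketch of how the sync queue could order work, assuming the four priority levels from FT-SYNC-PRIORITY; class and field names are illustrative, not part of the design baseline.
```python
import heapq
from enum import IntEnum


class SyncPriority(IntEnum):
    # Lower value syncs first; target delays per FT-SYNC-PRIORITY.
    CRITICAL = 0  # permits, capacity - 5 min target
    HIGH = 1      # vehicle logs - 15 min
    NORMAL = 2    # reports - 1 hr
    LOW = 3       # archives - 24 hr


class SyncQueue:
    """Priority queue that stays FIFO within each priority level."""

    def __init__(self) -> None:
        self._heap: list[tuple[SyncPriority, int, dict]] = []
        self._seq = 0  # tie-breaker preserving insertion order

    def put(self, priority: SyncPriority, record: dict) -> None:
        heapq.heappush(self._heap, (priority, self._seq, record))
        self._seq += 1

    def get(self) -> dict:
        return heapq.heappop(self._heap)[2]
```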
10.6.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-PRIORITY-TC-001 | Sync critical data (permits, capacity) within 5 minutes | High |
| SYNC-PRIORITY-TC-002 | Sync high priority data (vehicle logs) within 15 minutes | High |
| SYNC-PRIORITY-TC-003 | Sync normal priority data (reports) within 1 hour | Medium |
| SYNC-PRIORITY-TC-004 | Sync low priority data (archives) within 24 hours | Medium |
10.7 Ft Sync Batch
10.7.1 Priority
Must Have
10.7.2 User Story
As a system administrator, I want to batch multiple changes for efficient sync so that network overhead is minimized on slow connections
10.7.3 Preconditions
Batch size configured (100 records or 50KB); compression enabled; batching logic implemented
10.7.4 Postconditions
Multiple records batched efficiently; compression reduces bandwidth; sync performance optimized
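The batching rule (100 records or 50KB, whichever comes first, then compression) could be implemented as sketched below. Whether the 50KB cap applies before or after compression is an open design detail; this example assumes before.
```python
import json
import zlib

MAX_RECORDS = 100
MAX_BYTES = 50 * 1024  # 50KB cap, applied here to the uncompressed payload


def make_batches(records: list[dict]):
    """Yield zlib-compressed batches capped at 100 records or 50KB,
    whichever limit is reached first."""
    batch: list[dict] = []
    size = 0
    for rec in records:
        rec_bytes = len(json.dumps(rec).encode())
        if batch and (len(batch) >= MAX_RECORDS or size + rec_bytes > MAX_BYTES):
            yield zlib.compress(json.dumps(batch).encode())
            batch, size = [], 0
        batch.append(rec)
        size += rec_bytes
    if batch:
        yield zlib.compress(json.dumps(batch).encode())
```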
10.7.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-BATCH-TC-001 | Batch up to 100 records per sync | High |
| SYNC-BATCH-TC-002 | Limit batch size to 50KB maximum | High |
| SYNC-BATCH-TC-003 | Verify compression enabled and reducing payload size | High |
| SYNC-BATCH-TC-004 | Test batching on 2G connection performance | High |
10.8 Ft Sync Conflict Detect
10.8.1 Priority
Must Have
10.8.2 User Story
As a system administrator, I want to automatically detect sync conflicts so that data corruption and inconsistencies are prevented
10.8.3 Preconditions
Timestamp-based conflict detection implemented; conflict log maintained; detection rules configured
10.8.4 Postconditions
Conflicts detected automatically; conflicts logged; manual resolution triggered when needed
10.8.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-CONFLICT-TC-001 | Detect conflict when same record modified at multiple gates | High |
| SYNC-CONFLICT-TC-002 | Log all detected conflicts with timestamps | High |
| SYNC-CONFLICT-TC-003 | Trigger alert for critical conflicts requiring manual resolution | High |
| SYNC-CONFLICT-TC-004 | Prevent data corruption during conflict scenarios | High |
10.9 Ft Sync Conflict Resolve
10.9.1 Priority
Must Have
10.9.2 User Story
As a system administrator, I want to automatically resolve common sync conflicts so that manual intervention is minimized (target 95% auto-resolution)
10.9.3 Preconditions
Resolution rules defined; last-write-wins for most data; append-only for vehicle logs; conflict resolution logic implemented
10.9.4 Postconditions
95% of conflicts resolved automatically; resolution rules applied correctly; conflict log maintained
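The resolution rules translate into a small decision function, sketched below. Field names (table, per_gate_counts, issuing_gate, updated_by_gate, updated_at) are illustrative, not the final schema; vehicle logs are append-only and never reach this function.
```python
def resolve_conflict(local: dict, remote: dict) -> dict:
    """Apply the auto-resolution rules from FT-SYNC-CONFLICT-RESOLVE.

    Record field names are placeholders pending the data model.
    """
    table = local["table"]
    if table == "capacity":
        # Merge: each gate owns its own counter; the total is recomputed.
        counts = {**local["per_gate_counts"], **remote["per_gate_counts"]}
        return {**local, "per_gate_counts": counts, "total": sum(counts.values())}
    if table == "permits":
        # Updates made at the permit's issuing gate win over other gates.
        if remote["updated_by_gate"] == remote["issuing_gate"]:
            return remote
        if local["updated_by_gate"] == local["issuing_gate"]:
            return local
    # Default rule: last-write-wins on the record's update timestamp.
    return local if local["updated_at"] >= remote["updated_at"] else remote
```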
10.9.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-CONFLICT-TC-005 | Apply last-write-wins for general data conflicts | High |
| SYNC-CONFLICT-TC-006 | Use append-only strategy for vehicle log conflicts | High |
| SYNC-CONFLICT-TC-007 | Merge capacity updates from multiple gates | High |
| SYNC-CONFLICT-TC-008 | Permit updates from originating gate wins | High |
| SYNC-CONFLICT-TC-009 | Achieve ≥95% auto-resolution rate in testing | High |
10.10 Ft Sync Manual Trigger
10.10.1 Priority
Must Have
10.10.2 User Story
As a gate staff member, I want to manually trigger sync for urgent updates so that critical changes are immediately synchronized
10.10.3 Preconditions
Manual sync button available in UI; confirmation dialog implemented; progress indicator available
10.10.4 Postconditions
Manual sync triggered; urgent data synced immediately; progress visible to user
10.10.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-MANUAL-TC-001 | Display manual sync button in UI | High |
| SYNC-MANUAL-TC-002 | Show confirmation dialog with sync scope before manual sync | High |
| SYNC-MANUAL-TC-003 | Display progress indicator during manual sync | High |
| SYNC-MANUAL-TC-004 | Complete emergency permit extension sync within 2 minutes | High |
10.11 Ft Sync Integrity
10.11.1 Priority
Must Have
10.11.2 User Story
As a system administrator, I want to validate data integrity during sync so that no data corruption occurs during transmission
10.11.3 Preconditions
Checksums implemented for batches; transaction rollback on failure; retry mechanism configured
10.11.4 Postconditions
Data integrity verified; corrupted transmissions rejected; retries successful
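The SRS requires a checksum per batch without fixing the algorithm; the sketch below assumes SHA-256 over the compressed payload, verified on the receiving side before the batch is applied.
```python
import hashlib
import json
import zlib


def checksum(payload: bytes) -> str:
    """SHA-256 digest of the compressed batch, sent alongside the payload.

    SHA-256 is an assumption; the SRS only mandates a per-batch checksum.
    """
    return hashlib.sha256(payload).hexdigest()


def verify_and_unpack(payload: bytes, expected: str) -> list | None:
    """Receiver side: reject the batch if the digest does not match.

    On None, the caller rolls back its transaction and requests a resend.
    """
    if hashlib.sha256(payload).hexdigest() != expected:
        return None
    return json.loads(zlib.decompress(payload))
```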
10.11.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-INTEGRITY-TC-001 | Calculate and verify checksums for each batch | High |
| SYNC-INTEGRITY-TC-002 | Rollback transaction on checksum failure | High |
| SYNC-INTEGRITY-TC-003 | Retry failed sync with exponential backoff | High |
| SYNC-INTEGRITY-TC-004 | Log all integrity check failures | High |
10.12 Ft Sync Backup Hourly
10.12.1 Priority
Must Have
10.12.2 User Story
As a system administrator, I want to automatically back up PostgreSQL to the NAS every hour so that data is protected against NUC SSD failure with hourly granularity
10.12.3 Preconditions
NAS configured and accessible; pg_dump installed; rsync configured; backup script scheduled
10.12.4 Postconditions
Hourly backups complete successfully; backup verification successful; 7 days retention maintained
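A minimal sketch of the hourly job, assuming pg_dump's custom format and rsync to the NAS; the paths, NAS target, and database name are placeholders to be fixed in the ops runbook.
```python
import subprocess
from datetime import datetime, timezone

NAS_TARGET = "backup@nas:/volume1/backups/"  # illustrative NAS path


def hourly_backup(db_name: str = "gate_db") -> None:
    """Dump the gate database locally, then copy it to the NAS.

    Per FT-SYNC-BACKUP-HOURLY; retention rotation is handled separately.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dump_file = f"/var/backups/{db_name}-{stamp}.dump"
    # Custom format (-Fc) is compressed and restorable with pg_restore.
    subprocess.run(["pg_dump", "-Fc", "-f", dump_file, db_name], check=True)
    subprocess.run(["rsync", "-a", "--partial", dump_file, NAS_TARGET], check=True)
```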
10.12.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-BACKUP-TC-001 | Execute hourly pg_dump backup to NAS | High |
| SYNC-BACKUP-TC-002 | Verify backup completes in <10 minutes | High |
| SYNC-BACKUP-TC-003 | Verify backup integrity after completion | High |
| SYNC-BACKUP-TC-004 | Maintain 7 days of hourly backups with rotation | High |
10.13 Ft Sync Backup Daily
10.13.1 Priority
Must Have
10.13.2 User Story
As a system administrator, I want to create daily snapshots on the NAS for point-in-time recovery so that I can recover from data corruption or accidental deletion
10.13.3 Preconditions
NAS snapshot feature enabled; daily snapshot scheduled at midnight; retention policy configured
10.13.4 Postconditions
Daily snapshots created successfully; 30-day retention maintained; space-efficient incremental snapshots
10.13.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-BACKUP-TC-005 | Create daily NAS snapshot at midnight | High |
| SYNC-BACKUP-TC-006 | Maintain 30 days of daily snapshots | High |
| SYNC-BACKUP-TC-007 | Verify snapshots are space-efficient incremental | Medium |
| SYNC-BACKUP-TC-008 | Test snapshot restoration process | High |
10.14 Ft Sync Backup Weekly
10.14.1 Priority
Must Have
10.14.2 User Story
As a system administrator, I want to create weekly encrypted USB backups for disaster recovery so that an offline backup is kept in the safe for catastrophic failures
10.14.3 Preconditions
USB drives available; AES-256 encryption configured; weekly backup scheduled; safe storage available
10.14.4 Postconditions
Weekly USB backups created and encrypted; physically stored in safe; 12 weeks retention
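One way to satisfy the AES-256 requirement is GnuPG symmetric encryption, sketched below; the USB mount point and passphrase file are placeholders, and passphrase handling would follow the ops runbook.
```python
import subprocess


def encrypt_to_usb(dump_file: str, usb_mount: str = "/media/usb") -> None:
    """Encrypt the latest dump with AES-256 and write it to the USB drive.

    A sketch assuming GnuPG is installed; paths are illustrative.
    """
    out = f"{usb_mount}/{dump_file.rsplit('/', 1)[-1]}.gpg"
    subprocess.run(
        ["gpg", "--batch", "--symmetric", "--cipher-algo", "AES256",
         "--passphrase-file", "/root/.backup-pass",  # placeholder location
         "--output", out, dump_file],
        check=True,
    )
```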
10.14.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-BACKUP-TC-009 | Create weekly USB backup with AES-256 encryption | High |
| SYNC-BACKUP-TC-010 | Verify encrypted backup can be restored | High |
| SYNC-BACKUP-TC-011 | Store USB backup in physical safe | High |
| SYNC-BACKUP-TC-012 | Maintain 12 weeks of weekly USB backups | Medium |
10.15 Ft Sync Restore Hourly
10.15.1 Priority
Must Have
10.15.2 User Story
As a system administrator, I want to restore from the hourly NAS backup within 30 minutes so that I can recover quickly from NUC failure
10.15.3 Preconditions
Restore procedure documented; restore script automated; tested monthly; spare NUC available
10.15.4 Postconditions
Restore completes within 30 minutes; data integrity verified; system operational
10.15.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-RESTORE-TC-001 | Restore PostgreSQL from latest hourly backup | High |
| SYNC-RESTORE-TC-002 | Complete restore process within 30 minutes | High |
| SYNC-RESTORE-TC-003 | Verify database integrity after restore | High |
| SYNC-RESTORE-TC-004 | Test monthly restore procedures proactively | High |
10.16 Ft Sync Restore Point
10.16.1 Priority
Must Have
10.16.2 User Story
As a system administrator, I want to restore to a specific point in time from daily snapshots so that I can recover from data corruption or user errors
10.16.3 Preconditions
Point-in-time recovery UI implemented; snapshots available; restore preview functionality; backup current state before restore
10.16.4 Postconditions
Point-in-time restoration successful; data restored to specific snapshot; current state backed up before restore
10.16.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-RESTORE-TC-005 | Select specific daily snapshot for restoration | High |
| SYNC-RESTORE-TC-006 | Preview restore contents before committing | Medium |
| SYNC-RESTORE-TC-007 | Backup current state before point-in-time restore | High |
| SYNC-RESTORE-TC-008 | Complete point-in-time restore successfully | High |
10.17 Ft Sync Ups Integration
10.17.1 Priority
Must Have
10.17.2 User Story
As a gate staff member, I want UPS integration for 2-4 hour power backup so that operations continue during power outages
10.17.3 Preconditions
UPS 1000VA installed; NUC + NAS + Switch connected to UPS; UPS management interface configured
10.17.4 Postconditions
UPS powers critical systems for 2-4 hours; battery level monitoring active; low battery alerts configured
10.17.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-UPS-TC-001 | Power NUC + NAS + Switch on UPS for 2-4 hours | High |
| SYNC-UPS-TC-002 | Monitor UPS battery level in real-time | High |
| SYNC-UPS-TC-003 | Configure low battery alerts at 20% and 10% | High |
| SYNC-UPS-TC-004 | Test UPS failover during simulated power outage | High |
10.18 Ft Sync Ups Shutdown
10.18.1 Priority
Must Have
10.18.2 User Story
As a system administrator, I want a graceful shutdown when the UPS battery is critically low so that data corruption from sudden power loss is prevented
10.18.3 Preconditions
UPS monitoring configured; shutdown script implemented; 10% battery threshold set; shutdown sequence defined
10.18.4 Postconditions
Systems shutdown gracefully at 10% battery; all pending changes saved; database connections closed cleanly
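Assuming the UPS is managed through NUT (Network UPS Tools), the shutdown sequence could be driven as sketched below. In practice NUT's upsmon would normally own this logic; the UPS device name and the PWA service unit are placeholders.
```python
import subprocess
import time

UPS_NAME = "gateups@localhost"  # illustrative NUT device name
SHUTDOWN_THRESHOLD = 10         # percent, per FT-SYNC-UPS-SHUTDOWN


def battery_charge() -> int:
    """Read battery.charge via NUT's upsc tool (assumes NUT is installed)."""
    out = subprocess.run(
        ["upsc", UPS_NAME, "battery.charge"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())


def monitor_loop() -> None:
    while True:
        if battery_charge() <= SHUTDOWN_THRESHOLD:
            # Shutdown sequence: PWA -> PostgreSQL -> OS.
            subprocess.run(["systemctl", "stop", "gate-pwa"], check=False)  # placeholder unit
            subprocess.run(["systemctl", "stop", "postgresql"], check=True)
            subprocess.run(["shutdown", "-h", "now"], check=True)
            return
        time.sleep(60)
```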
10.18.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-UPS-TC-005 | Trigger graceful shutdown at 10% battery | High |
| SYNC-UPS-TC-006 | Save all pending changes before shutdown | High |
| SYNC-UPS-TC-007 | Close database connections cleanly | High |
| SYNC-UPS-TC-008 | Execute shutdown sequence: PWA → PostgreSQL → OS | High |
10.19 Ft Sync Ups Alert
10.19.1 Priority
Must Have
10.19.2 User Story
As a gate staff member, I want to receive alerts when on UPS power or battery low so that I can take action before system shutdown
10.19.3 Preconditions
Alert system configured; visual alerts in PWA; SMS alerts to technical staff; battery percentage monitoring
10.19.4 Postconditions
Alerts received promptly; staff aware of power status; action taken before critical shutdown
10.19.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-UPS-TC-009 | Display visual alert in PWA when on UPS power | High |
| SYNC-UPS-TC-010 | Send SMS to technical staff when battery <20% | High |
| SYNC-UPS-TC-011 | Display battery percentage in PWA status bar | High |
| SYNC-UPS-TC-012 | Escalate alert urgency as battery level decreases | Medium |
10.20 Ft Sync Power Recovery
10.20.1 Priority
Must Have
10.20.2 User Story
As a system administrator, I want the system to restart automatically and resume operations when power is restored so that downtime is minimized without manual intervention
10.20.3 Preconditions
Auto-boot on power restore configured; database integrity check enabled; resume pending syncs; notification system operational
10.20.4 Postconditions
System boots automatically on power restore; integrity verified; operations resumed; staff notified
10.20.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-RECOVERY-TC-001 | Auto-boot system when power restored | High |
| SYNC-RECOVERY-TC-002 | Execute database integrity check on startup | High |
| SYNC-RECOVERY-TC-003 | Resume pending syncs automatically | High |
| SYNC-RECOVERY-TC-004 | Notify staff of successful power recovery and system status | Medium |
10.21 Ft Sync Storage Nvme
10.21.1 Priority
Must Have
10.21.2 User Story
As a system administrator, I want live operational data stored on a fast NVMe SSD so that gate operations get quick query responses
10.21.3 Preconditions
512GB NVMe SSD installed; PostgreSQL optimized for SSD; query performance tuned
10.21.4 Postconditions
Query response <100ms; SSD performance optimal; database operations fast
10.21.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-STORAGE-TC-001 | Verify PostgreSQL data stored on NVMe SSD | High |
| SYNC-STORAGE-TC-002 | Optimize PostgreSQL configuration for SSD performance | High |
| SYNC-STORAGE-TC-003 | Verify query response time <100ms for 95% of queries | High |
| SYNC-STORAGE-TC-004 | Monitor SSD performance and health metrics | Medium |
10.22 Ft Sync Storage Nas
10.22.1 Priority
Must Have
10.22.2 User Story
As a system administrator, I want historical data archived on NAS RAID 1 so that it is protected against disk failure by redundancy
10.22.3 Preconditions
NAS with 2x 2TB drives in RAID 1; hot-swap capability; archival process configured
10.22.4 Postconditions
Historical data safely archived; RAID 1 redundancy active; single drive failure survivable
10.22.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-STORAGE-TC-005 | Configure NAS with 2x 2TB drives in RAID 1 | High |
| SYNC-STORAGE-TC-006 | Archive historical data (>30 days) from NUC to NAS | High |
| SYNC-STORAGE-TC-007 | Test single drive failure and recovery | High |
| SYNC-STORAGE-TC-008 | Verify hot-swap drive replacement capability | Medium |
10.23 Ft Sync Storage Monitoring
10.23.1 Priority
Must Have
10.23.2 User Story
As a system administrator, I want to monitor storage usage and receive alerts when space is low so that system failure due to full disk is prevented
10.23.3 Preconditions
Storage monitoring configured; alert thresholds set (80%, 90%); automatic archival configured
10.23.4 Postconditions
Storage usage monitored continuously; alerts sent at thresholds; automatic archival prevents disk full
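The 80%/90% thresholds translate directly into a check like the following sketch; the monitored path is a placeholder for wherever the PostgreSQL data directory lives.
```python
import shutil

WARN, CRITICAL = 0.80, 0.90  # thresholds per FT-SYNC-STORAGE-MONITORING


def check_disk(path: str = "/var/lib/postgresql") -> str:
    """Return 'ok', 'warn', or 'critical' for the filesystem holding path."""
    usage = shutil.disk_usage(path)
    used = usage.used / usage.total
    if used >= CRITICAL:
        return "critical"  # caller raises a critical alert and forces archival
    if used >= WARN:
        return "warn"      # caller raises the 80% alert and schedules archival
    return "ok"
```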
10.23.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-STORAGE-TC-009 | Monitor NUC SSD and NAS storage usage | High |
| SYNC-STORAGE-TC-010 | Send alert when storage reaches 80% | High |
| SYNC-STORAGE-TC-011 | Send critical alert when storage reaches 90% | High |
| SYNC-STORAGE-TC-012 | Trigger automatic archival to NAS when NUC SSD >80% | High |
10.24 Ft Sync Storage Cleanup
10.24.1 Priority
Must Have
10.24.2 User Story
As a system administrator, I want to automatically archive old data from NUC SSD to NAS so that limited SSD space is managed efficiently
10.24.3 Preconditions
Archival rules defined (>30 days); automatic archival process scheduled; NAS storage available
10.24.4 Postconditions
Old data archived automatically; current month data on NUC for fast access; SSD space managed efficiently
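A sketch of the archival step as a single PostgreSQL transaction; the table and schema names are illustrative, and whether the archive schema sits on NAS-backed storage or is exported to files on the NAS is an open design choice.
```python
import psycopg2  # assumed driver; any PostgreSQL client would do

ARCHIVE_SQL = """
    WITH moved AS (
        DELETE FROM vehicle_logs
        WHERE logged_at < now() - interval '30 days'
        RETURNING *
    )
    INSERT INTO archive.vehicle_logs SELECT * FROM moved;
"""


def archive_old_records(conn) -> None:
    """Move rows older than 30 days into the archive schema atomically.

    Table/schema names are placeholders pending the data model.
    """
    with conn, conn.cursor() as cur:  # commits on success, rolls back on error
        cur.execute(ARCHIVE_SQL)
```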
10.24.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-CLEANUP-TC-001 | Identify records >30 days old for archival | High |
| SYNC-CLEANUP-TC-002 | Archive old records from NUC SSD to NAS | High |
| SYNC-CLEANUP-TC-003 | Keep current month data on NUC SSD for fast access | High |
| SYNC-CLEANUP-TC-004 | Verify archived data accessible from NAS when needed | High |
10.25 Ft Sync Net Detect
10.25.1 Priority
Must Have
10.25.2 User Story
As a system administrator, I want automatic network availability detection so that the system switches seamlessly between online and offline modes
10.25.3 Preconditions
Network monitoring configured; ping test to Old HQ every 30 seconds; offline mode trigger at 3 failed pings
10.25.4 Postconditions
Network status detected accurately; mode switching seamless; exponential backoff on failures
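A minimal sketch of the detection loop (30-second probe, offline after 3 consecutive failures); the Old HQ address and the callback names are placeholders.
```python
import subprocess
import time

OLD_HQ = "10.0.0.1"  # placeholder address for the Old HQ server
CHECK_INTERVAL = 30  # seconds, per FT-SYNC-NET-DETECT
FAIL_LIMIT = 3


def is_reachable(host: str) -> bool:
    """Single ICMP probe; ping -c 1 -W 5 is standard on Ubuntu."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "5", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0


def detect_loop(on_offline, on_online) -> None:
    failures, online = 0, True
    while True:
        if is_reachable(OLD_HQ):
            failures = 0
            if not online:
                online = True
                on_online()
        else:
            failures += 1
            if online and failures >= FAIL_LIMIT:
                online = False
                on_offline()
        time.sleep(CHECK_INTERVAL)
```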
10.25.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-NET-TC-001 | Ping Old HQ every 30 seconds to detect network | High |
| SYNC-NET-TC-002 | Switch to offline mode after 3 consecutive failed pings | High |
| SYNC-NET-TC-003 | Apply exponential backoff on repeated failures | High |
| SYNC-NET-TC-004 | Switch back to online mode when network restored | High |
10.26 Ft Sync Net Slow
10.26.1 Priority
Must Have
10.26.2 User Story
As a system administrator, I want to detect a slow network and adjust the sync strategy so that sync is optimized for 2G connections at remote gates
10.26.3 Preconditions
Bandwidth detection implemented; sync strategy adjustable; priority threshold configurable
10.26.4 Postconditions
Slow network detected; batch size reduced; priority threshold increased; sync optimized for available bandwidth
10.26.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-NET-TC-005 | Detect 2G connection bandwidth | High |
| SYNC-NET-TC-006 | Reduce batch size on slow connection (<50KB) | High |
| SYNC-NET-TC-007 | Increase priority threshold (sync only critical/high priority) | High |
| SYNC-NET-TC-008 | Verify sync performance acceptable on 2G | High |
10.27 Ft Sync Net Retry
10.27.1 Priority
Must Have
10.27.2 User Story
As a system administrator, I want automatic retry of failed sync with exponential backoff so that intermittent network issues are handled without manual intervention
10.27.3 Preconditions
Retry logic implemented; exponential backoff configured; retry schedule defined; max retries set to 10
10.27.4 Postconditions
Failed syncs retried automatically; exponential backoff prevents network overload; manual intervention only after 10 retries
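The retry schedule (immediate, 30s, 1m, 5m, 15m, then hourly, capped at 10 attempts) reduces to a small lookup, sketched below.
```python
# Delay (seconds) before retry attempt N; attempt 0 fires immediately.
BACKOFF = [0, 30, 60, 300, 900]  # then hourly
MAX_RETRIES = 10


def retry_delay(attempt: int) -> int | None:
    """Per FT-SYNC-NET-RETRY: escalating delays, hourly after the fifth
    attempt, and None once the 10-retry budget is exhausted (at which
    point an alert requests manual intervention)."""
    if attempt >= MAX_RETRIES:
        return None
    return BACKOFF[attempt] if attempt < len(BACKOFF) else 3600
```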
10.27.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-NET-TC-009 | Retry immediately on first sync failure | High |
| SYNC-NET-TC-010 | Retry with exponential backoff (30s, 1m, 5m, 15m, then hourly) | High |
| SYNC-NET-TC-011 | Limit retries to max 10 attempts | High |
| SYNC-NET-TC-012 | Alert for manual intervention after 10 failed retries | High |
10.28 Ft Sync Net Cellular
10.28.1 Priority
Must Have
10.28.2 User Story
As a system administrator, I want to support cellular data (2G/3G) for sync so that sync works even with minimal connectivity
10.28.3 Preconditions
Small payloads optimized for slow connections; compression enabled; 2G compatibility verified
10.28.4 Postconditions
Sync works over 2G; payloads optimized; compression reduces bandwidth requirements
10.28.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-NET-TC-013 | Test sync over 2G cellular connection | High |
| SYNC-NET-TC-014 | Verify small payload size optimized for 2G | High |
| SYNC-NET-TC-015 | Verify compression enabled and effective | High |
10.29 Ft Sync Monitor Status
10.29.1 Priority
Must Have
10.29.2 User Story
As an operations manager at Old HQ, I want to view a sync status dashboard for all 9 gates so that system health can be monitored centrally
10.29.3 Preconditions
Dashboard implemented; data collection from all gates; per-gate view available
10.29.4 Postconditions
Dashboard displays comprehensive status; all 9 gates visible; real-time updates
10.29.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-MONITOR-TC-001 | Display last sync time for all 9 gates | High |
| SYNC-MONITOR-TC-002 | Display pending records count per gate | High |
| SYNC-MONITOR-TC-003 | Display online/offline status per gate | High |
| SYNC-MONITOR-TC-004 | Display sync errors per gate | High |
| SYNC-MONITOR-TC-005 | Provide per-gate detailed view | Medium |
10.30 Ft Sync Monitor Alerts
10.30.1 Priority
Must Have
10.30.2 User Story
As an operations manager, I want to receive alerts for sync failures or delays so that issues can be proactively addressed before they impact operations
10.30.3 Preconditions
Alert conditions defined; notification system configured; alert recipients configured
10.30.4 Postconditions
Alerts sent for critical conditions; staff notified promptly; issues addressed proactively
10.30.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-MONITOR-TC-006 | Alert when sync delay exceeds 1 hour | High |
| SYNC-MONITOR-TC-007 | Alert on backup failure | High |
| SYNC-MONITOR-TC-008 | Alert when storage exceeds 80% | High |
| SYNC-MONITOR-TC-009 | Alert when UPS on battery power | High |
| SYNC-MONITOR-TC-010 | Alert for conflicts requiring manual resolution | High |
10.31 Ft Sync Monitor Logs
10.31.1 Priority
Must Have
10.31.2 User Story
As a system administrator, I want to access detailed sync logs for troubleshooting so that sync issues can be diagnosed and resolved
10.31.3 Preconditions
Comprehensive logging implemented; logs searchable; 30-day retention; log viewer available
10.31.4 Postconditions
All sync attempts logged; logs accessible and searchable; troubleshooting effective
10.31.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-MONITOR-TC-011 | Log all sync attempts with timestamps | High |
| SYNC-MONITOR-TC-012 | Log sync success/failure with details | High |
| SYNC-MONITOR-TC-013 | Log sync duration and records synced | High |
| SYNC-MONITOR-TC-014 | Log conflicts and errors with details | High |
| SYNC-MONITOR-TC-015 | Provide searchable log viewer interface | Medium |
| SYNC-MONITOR-TC-016 | Maintain 30-day log retention | Medium |
10.32 Ft Sync Monitor Metrics
10.32.1 Priority
Should Have
10.32.2 User Story
As an operations manager, I want to view historical sync performance metrics so that trends can be identified and sync strategy optimized
10.32.3 Preconditions
Metrics collection implemented; historical data stored; charts and graphs available
10.32.4 Postconditions
Historical metrics visible; trends identifiable; optimization opportunities clear
10.32.5 Test Cases
| Id | Description | Weight |
|---|---|---|
| SYNC-MONITOR-TC-017 | Display average sync time over 30 days | Medium |
| SYNC-MONITOR-TC-018 | Display sync success rate metrics | Medium |
| SYNC-MONITOR-TC-019 | Display network uptime per gate | Medium |
| SYNC-MONITOR-TC-020 | Display backup success rate metrics | Medium |
| SYNC-MONITOR-TC-021 | Present metrics in charts and graphs | Medium |
11 Additional Context
11.1 Success Metrics
11.1.1 System Uptime
99% regardless of network status (currently ~70%)
11.1.2 Sync Delay
≤ 15 minutes gate-to-gate (currently 30+ minutes to Old HQ)
11.1.3 Backup Success Rate
100% hourly backups completed
11.1.4 Data Loss Incidents
Zero data loss
11.1.5 Conflict Auto Resolution
≥ 95% conflicts resolved automatically
11.1.6 Restore Time
< 30 minutes from hourly backup