
NCAA Digital Transformation - Offline Operations & Sync Module — Software Requirements Specification (SRS)

Table of Contents

1 Document Information

| Field | Value |
|---|---|
| Project Name | NCAA Digital Transformation - Offline Operations & Sync Module |
| Version | 1.0 |
| Date | 2025-11-06 |
| Project Manager | TBD |
| Tech Lead | TBD |
| QA Lead | TBD |
| Platforms | Ubuntu Server 22.04, PostgreSQL 15+, NAS Linux |
| Document Status | Draft |
| Module Code | OFFLINE_SYNC |
| Parent Project | NCAA Digital Transformation - Ngorongoro Gateway System |

2 Project Overview

2.1 What Are We Building

2.1.1 System Function

Offline-first data management and synchronization system enabling 9 gates to operate independently without network connectivity while maintaining data consistency and integrity. System includes local PostgreSQL storage on Intel NUC, hourly backups to NAS RAID 1, gate-to-gate synchronization with 15-minute maximum delay, and UPS power backup for 2-4 hour runtime.

2.1.2 Users

  • Gate Staff: Operate system during network outages
  • System Administrators: Monitor sync status and resolve conflicts
  • Technical Support: Troubleshoot sync issues
  • Management: Monitor system health and uptime

2.1.3 Problem Solved

  • Safari portal system frequently down due to network failures
  • 30+ minute wait times for data synchronization at Old HQ
  • Manual workarounds when the system is unavailable
  • Data loss risk from paper records
  • No backup systems at remote locations (Lemala 2 has no computer)
  • Slow internet at Ndutu and Olduvai causing service delays
  • System misbehavior at Main Gate showing false overstay errors

2.1.4 Key Success Metric

  • 99% system uptime regardless of network status
  • Gate-to-gate sync within 15 minutes maximum
  • Zero data loss
  • Hourly backups completed successfully
  • 2-4 hour operation on UPS during power outages
  • Automatic conflict resolution for 95% of sync conflicts

2.2 Scope

2.2.1 In Scope

  • Local PostgreSQL database on Intel NUC at each gate
  • Fast NVMe SSD storage for live operations
  • Hourly automated backup to NAS RAID 1
  • Daily NAS snapshots for point-in-time recovery
  • Weekly encrypted USB backup to safe
  • Gate-to-gate data synchronization over HTTP API
  • Conflict detection and resolution
  • Network failure detection and automatic offline mode
  • Sync queue management for batch processing
  • UPS integration for power backup (2-4 hours)
  • Data integrity checks and validation
  • Sync monitoring and alerting
  • Manual sync trigger for urgent updates

2.2.2 Out Of Scope

  • Real-time streaming replication (eventual consistency acceptable)
  • Distributed database clustering
  • Automatic failover to cloud systems
  • Satellite internet connectivity
  • Multi-master write conflicts (single master per gate for gate-specific data)

3 User Requirements

3.1 Offline Operation

| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-OFFLINE-MODE | Operate system fully offline when network unavailable | Continue processing visitors at gates with no/poor connectivity (Ndutu, Lemala, slow Olduvai) | Must | All core functions available offline: registration, permit verification, vehicle logging, capacity tracking. Network detection automatic. |
| FT-SYNC-OFFLINE-INDICATOR | See clear visual indicator of online/offline status | Know when system is operating offline and when sync will occur | Must | Status bar showing: Online (syncing), Offline (X hours since last sync), Syncing now. Color-coded: green/yellow/red. |
| FT-SYNC-OFFLINE-QUEUE | View pending changes waiting to sync | Monitor what data needs to be synchronized when network returns | Should | Queue shows: pending records count, oldest pending change time, estimated sync time. |
| FT-SYNC-OFFLINE-PERFORMANCE | Experience same performance whether online or offline | Maintain service quality regardless of network status | Must | Local NVMe SSD ensures fast query response. No performance degradation offline. |

3.2 Data Synchronization

| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-GATE-TO-GATE | Sync data between gates within 15 minutes maximum | Maintain near real-time capacity tracking and prevent duplicate entries | Must | Small payloads (10-50KB). HTTP API between NUCs. Works over 2G. Automatic retry on failure. |
| FT-SYNC-PRIORITY | Prioritize critical data for sync (permits, capacity) over lower priority (historical reports) | Ensure most important data syncs first on limited bandwidth | Must | Priority levels: Critical (permits, capacity, 5 min target), High (vehicle logs, 15 min), Normal (reports, 1 hr), Low (archives, 24 hr). |
| FT-SYNC-BATCH | Batch multiple changes for efficient sync | Minimize network overhead on slow connections (2G at remote gates) | Must | Batch size: 100 records or 50KB, whichever comes first. Compression enabled. |
| FT-SYNC-CONFLICT-DETECT | Automatically detect sync conflicts (same record modified at multiple gates) | Prevent data corruption and inconsistencies | Must | Timestamp-based conflict detection. Last-write-wins for most data. Manual resolution for critical conflicts. |
| FT-SYNC-CONFLICT-RESOLVE | Automatically resolve common sync conflicts | Minimize manual intervention (target 95% auto-resolution) | Must | Rules: vehicle logs append-only, capacity updates merge, permit updates from the originating gate win. Conflict log maintained. |
| FT-SYNC-MANUAL-TRIGGER | Manually trigger sync for urgent updates | Immediately sync critical changes (emergency permit extensions) | Must | Manual sync button. Confirmation dialog showing sync scope. Progress indicator. |
| FT-SYNC-INTEGRITY | Validate data integrity during sync | Ensure no data corruption during transmission | Must | Checksums for each batch. Transaction rollback on failure. Retry mechanism. |

3.3 Backup & Recovery

| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-BACKUP-HOURLY | Automatically backup PostgreSQL to NAS every hour | Protect against NUC SSD failure with hourly granularity | Must | pg_dump + rsync to NAS. Backup time <10 minutes. Verification after backup. Retention: 7 days hourly. |
| FT-SYNC-BACKUP-DAILY | Create daily snapshots on NAS for point-in-time recovery | Recover from data corruption or accidental deletion | Must | Daily NAS snapshots at midnight. Retention: 30 days. Space-efficient incremental snapshots. |
| FT-SYNC-BACKUP-WEEKLY | Create weekly encrypted USB backups for disaster recovery | Maintain offline backup in safe for catastrophic failures | Must | Weekly USB backup. AES-256 encryption. Physical storage in safe. Retention: 12 weeks. |
| FT-SYNC-RESTORE-HOURLY | Restore from hourly NAS backup within 30 minutes | Quickly recover from NUC failure | Must | Documented restore procedure. Tested monthly. Restore script automated. |
| FT-SYNC-RESTORE-POINT | Restore to specific point in time from daily snapshots | Recover from data corruption or user errors | Must | Point-in-time recovery UI. Preview restore before commit. Backup current state before restore. |

3.4 Power Management

| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-UPS-INTEGRATION | Integrate with UPS for 2-4 hour power backup | Continue operations during power outages | Must | UPS 1000VA powers NUC + NAS + switch. Runtime: 2-4 hours. Low battery alerts. |
| FT-SYNC-UPS-SHUTDOWN | Gracefully shut down systems when UPS battery critically low | Prevent data corruption from sudden power loss | Must | 10% battery triggers graceful shutdown. Save all pending changes. Close database connections. Shutdown sequence: PWA -> PostgreSQL -> OS. |
| FT-SYNC-UPS-ALERT | Receive alerts when on UPS power or battery low | Take action before system shutdown | Must | Visual alert in PWA. SMS to technical staff. Battery percentage displayed. |
| FT-SYNC-POWER-RECOVERY | Automatically restart and resume operations when power restored | Minimize downtime without manual intervention | Must | Auto-boot on power restore. Database integrity check. Resume pending syncs. Staff notification. |

3.5 Local Storage

| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-STORAGE-NVME | Store live operational data on fast NVMe SSD | Ensure quick query response for gate operations | Must | 512GB NVMe SSD on NUC. PostgreSQL optimized for SSD. Query response <100ms. |
| FT-SYNC-STORAGE-NAS | Archive historical data on NAS RAID 1 | Protect against disk failure with redundancy | Must | 2x 2TB drives in RAID 1. Survives single disk failure. Hot-swap capability. |
| FT-SYNC-STORAGE-MONITORING | Monitor storage usage and receive alerts when space low | Prevent system failure due to full disk | Must | Alert at 80% full. Critical alert at 90%. Automatic archive to NAS when NUC SSD >80%. |
| FT-SYNC-STORAGE-CLEANUP | Automatically archive old data from NUC SSD to NAS | Manage limited SSD space efficiently | Must | Archive records >30 days old to NAS. Keep current month on NUC SSD for fast access. |

3.6 Network Resilience

| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-NET-DETECT | Automatically detect network availability | Switch between online and offline modes seamlessly | Must | Ping test to Old HQ every 30 seconds. Exponential backoff on failures. Switch to offline mode after 3 failed pings. |
| FT-SYNC-NET-SLOW | Detect slow network and adjust sync strategy | Optimize for 2G connections at remote gates | Must | Bandwidth detection. Reduce batch size on slow connection. Increase priority threshold. |
| FT-SYNC-NET-RETRY | Automatically retry failed sync with exponential backoff | Handle intermittent network issues without manual intervention | Must | Retry: immediately, 30 sec, 1 min, 5 min, 15 min, then hourly. Max 10 retries before manual intervention needed. |
| FT-SYNC-NET-CELLULAR | Support cellular data (2G/3G) for sync | Sync even with minimal connectivity | Must | Works over 2G. Small payloads optimized for slow connections. Compression enabled. |

3.7 Monitoring & Alerting

| Feature Code | I Want To | So That I Can | Priority | Notes |
|---|---|---|---|---|
| FT-SYNC-MONITOR-STATUS | View sync status dashboard for all 9 gates | Monitor system health from Old HQ | Must | Dashboard shows: last sync time, pending records, online/offline status, errors. Per-gate view. |
| FT-SYNC-MONITOR-ALERTS | Receive alerts for sync failures or delays | Proactively address issues before they impact operations | Must | Alert conditions: sync delay >1 hour, backup failure, storage >80%, UPS on battery, conflict requiring manual resolution. |
| FT-SYNC-MONITOR-LOGS | Access detailed sync logs for troubleshooting | Diagnose and resolve sync issues | Must | Logs: sync attempts, success/failure, duration, records synced, conflicts, errors. Searchable. 30-day retention. |
| FT-SYNC-MONITOR-METRICS | View historical sync performance metrics | Identify trends and optimize sync strategy | Should | Metrics: avg sync time, success rate, network uptime, backup success rate. Charts and graphs. |

4 Technical Requirements

4.1 Performance Standards

| Requirement | Target | How To Test |
|---|---|---|
| Gate-to-gate sync delay | ≤ 15 minutes maximum | Create record at one gate, verify receipt at others within 15 min |
| Hourly backup duration | < 10 minutes | Monitor pg_dump + rsync time for typical database size |
| Database query response | < 100ms for 95% of queries | Load testing with typical query patterns |
| System uptime | 99% regardless of network status | Offline operation tests, uptime monitoring over 30 days |
| UPS runtime | 2-4 hours for NUC + NAS + switch | Full load test with UPS disconnected from mains |
| Conflict auto-resolution rate | ≥ 95% | Synthetic conflict scenarios, measure manual intervention rate |

4.2 Platform Requirements

| Platform | Minimum Version | Target Version | Notes |
|---|---|---|---|
| Database | PostgreSQL 15 | PostgreSQL 16+ | Logical replication features, better performance |
| Operating System | Ubuntu Server 22.04 LTS | Ubuntu Server 24.04 LTS | Long-term support, security updates |
| Storage | 512GB NVMe SSD, 2x 2TB HDD RAID 1 | 1TB NVMe SSD, 2x 4TB HDD RAID 1 | Future-proofing for data growth |

4.3 Security & Privacy

| Requirement | Must Have | Implementation |
|---|---|---|
| Data encryption at rest | Yes | AES-256 encryption for NAS backups and USB backups |
| Data encryption in transit | Yes | TLS 1.2+ for gate-to-gate sync over HTTP API |
| Backup integrity verification | Yes | Checksums verified after every backup, test restore monthly |
| Access control for backups | Yes | Role-based access to backup systems, audit trail for restore operations |

5 External Dependencies

5.1 Third Party Services

| Service | What It Does | Criticality | Backup Plan |
|---|---|---|---|
| SMS Gateway (optional) | Send alerts for critical sync failures | Nice to have | Email alerts only |

5.2 Device Requirements

| Feature | Required | Optional | Notes |
|---|---|---|---|
| UPS 1000VA | Yes | No | Powers NUC + NAS + switch for 2-4 hours. USB or network management interface. |
| Network connectivity (intermittent OK) | Yes | No | Works with 2G minimum. Offline operation when unavailable. |
| NAS with RAID 1 | Yes | No | 4-bay Synology/QNAP, 2x 2TB minimum, supports snapshots |

6 Release Planning

6.1 Development Phases

| Phase | Features Included | Timeline | Success Criteria |
|---|---|---|---|
| Phase 1 (Single Gate Testing) | Local PostgreSQL setup, offline operation, NAS backup, UPS integration, basic monitoring | 6 weeks | Single gate operational offline for 7 days, hourly backups successful, restore tested |
| Phase 2 (Multi-Gate Sync, 3 Gates) | Gate-to-gate sync, conflict detection & resolution, priority sync, sync monitoring dashboard | 8 weeks | 3 gates syncing within 15 minutes, 95% auto-conflict resolution, zero data loss |
| Phase 3 (Full Deployment, 9 Gates) | All 9 gates operational, full monitoring & alerting, performance optimization, network resilience features | 8 weeks | All gates syncing reliably, 99% uptime, comprehensive monitoring in place |

6.2 Release Checklist

  • PostgreSQL configured on all NUC units
  • NAS backup systems operational at all gates
  • UPS installed and tested (2-4hr runtime verified)
  • Gate-to-gate sync tested and performing within 15min SLA
  • Conflict resolution rules implemented and tested
  • Offline operation tested at all gates for 24+ hours
  • Backup and restore procedures documented and tested
  • Monitoring dashboard operational at Old HQ
  • Alert system configured and tested
  • Weekly USB backups established
  • Staff trained on sync monitoring and manual triggers

7 Risks & Assumptions

7.1 Risks

| Risk | Probability | Impact | Mitigation |
|---|---|---|---|
| Network outages longer than 4 hours causing significant sync delays | Medium | Medium | Offline-first design, large sync queues, manual sync triggers when network restored, acceptable delay up to 24 hours for non-critical data |
| Sync conflicts requiring manual resolution overwhelming staff | Low | Medium | 95% auto-resolution target, conflict resolution training, clear escalation procedures |
| Storage capacity exhausted due to unexpected data growth | Low | High | Storage monitoring with 80% alerts, automatic archival, spare drives available, documented expansion procedure |
| UPS battery degradation reducing runtime below 2 hours | Medium | Medium | Annual UPS battery replacement, quarterly runtime testing, spare batteries stocked |
| Database corruption requiring restore | Low | High | Hourly backups with verification, daily snapshots, weekly offline backups, tested restore procedures |

7.2 Assumptions

  • Network connectivity available intermittently (even if slow/unreliable)
  • Gate staff can operate system with clear offline/online indicators
  • 15-minute sync delay acceptable for capacity management
  • Power outages typically shorter than 4 hours (UPS capacity)
  • NUC hardware reliable for 24/7 operation in remote conditions
  • RAID 1 on NAS provides sufficient redundancy for backup data
  • Sync conflicts infrequent due to gate-specific data partitioning

8 Market-Specific Considerations

8.1 Primary Market

  • Ngorongoro Conservation Area, Tanzania - 9 remote gates

8.2 Target Demographics

  • Gate staff operating in offline conditions
  • System administrators monitoring from Old HQ

8.3 Local Considerations

  • Very limited network connectivity at remote locations (Ndutu no cellular, Lemala 1&2 low connectivity)
  • Slow internet even when available (2G speeds common)
  • Power reliability issues requiring UPS backup
  • Remote locations making physical repairs challenging (spare NUC units at Old HQ)
  • Staff may have limited technical skills for troubleshooting
  • Harsh environment conditions (dust, heat) affecting hardware

9 Sign Off

9.1 Approval

| Role | Name | Signature | Date |
|---|---|---|---|

9.2 Document History

| Version | Date | Changes Made | Changed By |
|---|---|---|---|
| 1.0 | 2025-11-06 | Initial draft based on gate nodes architecture and field report observations | Development Team |

10 Detailed Feature Requirements

10.1 FT-SYNC-OFFLINE-MODE

10.1.1 Priority

Must Have

10.1.2 User Story

As a gate staff member, I want to operate the system fully offline when network is unavailable so that I can continue processing visitors at gates with no/poor connectivity

10.1.3 Preconditions

Local PostgreSQL database operational; network detection configured; core functions available offline

10.1.4 Postconditions

All core operations functional offline; network status automatically detected; seamless mode switching

10.1.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-OFFLINE-TC-001 | Operate registration offline for 24 hours | High |
| SYNC-OFFLINE-TC-002 | Operate permit verification offline with local database | High |
| SYNC-OFFLINE-TC-003 | Operate vehicle logging offline | High |
| SYNC-OFFLINE-TC-004 | Track capacity offline at remote gates (Ndutu, Lemala) | High |
| SYNC-OFFLINE-TC-005 | Verify automatic network detection and mode switching | High |

10.2 FT-SYNC-OFFLINE-INDICATOR

10.2.1 Priority

Must Have

10.2.2 User Story

As a gate staff member, I want to see clear visual indicator of online/offline status so that I know when system is operating offline and when sync will occur

10.2.3 Preconditions

Status bar implemented in PWA; network monitoring active; status updates in real-time

10.2.4 Postconditions

Status clearly visible; color-coded indicators working; time since last sync displayed

10.2.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-OFFLINE-TC-006 | Display green indicator when online and syncing | High |
| SYNC-OFFLINE-TC-007 | Display yellow/red indicator when offline with hours since last sync | High |
| SYNC-OFFLINE-TC-008 | Display 'Syncing now' indicator during active sync | High |
| SYNC-OFFLINE-TC-009 | Update status indicator within 30 seconds of network changes | Medium |

10.3 FT-SYNC-OFFLINE-QUEUE

10.3.1 Priority

Should Have

10.3.2 User Story

As a gate staff member, I want to view pending changes waiting to sync so that I can monitor what data needs to be synchronized when network returns

10.3.3 Preconditions

Sync queue implemented; pending records tracked; queue display interface available

10.3.4 Postconditions

Queue visible to staff; pending count accurate; estimated sync time displayed

10.3.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-OFFLINE-TC-010 | Display pending records count in queue | Medium |
| SYNC-OFFLINE-TC-011 | Display oldest pending change timestamp | Medium |
| SYNC-OFFLINE-TC-012 | Display estimated sync time based on queue size | Medium |

10.4 FT-SYNC-OFFLINE-PERFORMANCE

10.4.1 Priority

Must Have

10.4.2 User Story

As a gate staff member, I want same performance whether online or offline so that service quality is maintained regardless of network status

10.4.3 Preconditions

Local NVMe SSD storage; optimized database queries; performance benchmarks defined

10.4.4 Postconditions

Query response time <100ms offline; no performance degradation; user experience consistent

10.4.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-OFFLINE-TC-013 | Verify query response time <100ms offline | High |
| SYNC-OFFLINE-TC-014 | Load test with 300 visitors/day offline | High |
| SYNC-OFFLINE-TC-015 | Compare online vs offline performance (should be equivalent) | High |

10.5 FT-SYNC-GATE-TO-GATE

10.5.1 Priority

Must Have

10.5.2 User Story

As a system administrator, I want to sync data between gates within 15 minutes maximum so that near real-time capacity tracking is maintained

10.5.3 Preconditions

HTTP API configured between NUCs; network connectivity available; sync scheduler running

10.5.4 Postconditions

Data synced within 15 minutes; capacity updates propagated; duplicate entries prevented

10.5.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-GATE-TC-001 | Sync data between gates within 15 minutes | High |
| SYNC-GATE-TC-002 | Verify small payload size (10-50KB per sync) | High |
| SYNC-GATE-TC-003 | Test sync over 2G connection | High |
| SYNC-GATE-TC-004 | Verify automatic retry on sync failure | High |
| SYNC-GATE-TC-005 | Test sync across all 9 gates simultaneously | High |
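The following is a minimal sketch of what a gate-to-gate sync request could look like, assuming a compressed JSON batch POSTed between NUCs over TLS. The endpoint path, field names, and use of the `requests` library are illustrative assumptions, not the final API.

```python
# Illustrative only: endpoint path, payload fields, and HTTP client are assumptions.
import gzip
import json
import requests  # assumed HTTP client; any client with TLS support would do

def push_batch(target_gate_url: str, source_gate: str, records: list[dict]) -> bool:
    """POST one compressed batch of changed records to a peer gate NUC."""
    payload = {
        "source_gate": source_gate,   # originating gate, e.g. "NDUTU"
        "records": records,           # rows changed since the last successful sync
    }
    body = gzip.compress(json.dumps(payload).encode("utf-8"))
    resp = requests.post(
        f"{target_gate_url}/sync/batch",   # hypothetical endpoint name
        data=body,
        headers={"Content-Encoding": "gzip", "Content-Type": "application/json"},
        timeout=60,                        # generous timeout for 2G links
    )
    return resp.status_code == 200
```

Keeping the payload small and gzip-compressed is what makes the 10-50KB target realistic over 2G.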

10.6 FT-SYNC-PRIORITY

10.6.1 Priority

Must Have

10.6.2 User Story

As a system administrator, I want to prioritize critical data for sync so that most important data syncs first on limited bandwidth

10.6.3 Preconditions

Priority levels defined; sync queue prioritized; bandwidth limitations considered

10.6.4 Postconditions

Critical data synced within 5 minutes; high priority within 15 minutes; normal and low priority as bandwidth allows

10.6.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-PRIORITY-TC-001 | Sync critical data (permits, capacity) within 5 minutes | High |
| SYNC-PRIORITY-TC-002 | Sync high priority data (vehicle logs) within 15 minutes | High |
| SYNC-PRIORITY-TC-003 | Sync normal priority data (reports) within 1 hour | Medium |
| SYNC-PRIORITY-TC-004 | Sync low priority data (archives) within 24 hours | Medium |
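A minimal sketch of how the prioritized queue could be ordered, assuming an in-memory heap keyed on the four levels named in the SRS; the data structure and function names are illustrative.

```python
# Sketch only: the heap-based queue and record shape are assumptions.
import heapq
import itertools
import time

PRIORITY = {"critical": 0, "high": 1, "normal": 2, "low": 3}  # lower syncs sooner
_counter = itertools.count()  # tie-breaker keeps FIFO order within one level

sync_queue: list[tuple[int, int, float, dict]] = []

def enqueue(record: dict, level: str = "normal") -> None:
    """Queue a changed record; critical items (permits, capacity) jump ahead."""
    heapq.heappush(sync_queue, (PRIORITY[level], next(_counter), time.time(), record))

def next_batch(max_records: int = 100) -> list[dict]:
    """Drain up to one batch, highest priority first, for the next sync attempt."""
    batch = []
    while sync_queue and len(batch) < max_records:
        _, _, _, record = heapq.heappop(sync_queue)
        batch.append(record)
    return batch
```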

10.7 FT-SYNC-BATCH

10.7.1 Priority

Must Have

10.7.2 User Story

As a system administrator, I want to batch multiple changes for efficient sync so that network overhead is minimized on slow connections

10.7.3 Preconditions

Batch size configured (100 records or 50KB); compression enabled; batching logic implemented

10.7.4 Postconditions

Multiple records batched efficiently; compression reduces bandwidth; sync performance optimized

10.7.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-BATCH-TC-001 | Batch up to 100 records per sync | High |
| SYNC-BATCH-TC-002 | Limit batch size to 50KB maximum | High |
| SYNC-BATCH-TC-003 | Verify compression enabled and reducing payload size | High |
| SYNC-BATCH-TC-004 | Test batching performance on 2G connection | High |
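A sketch of the "100 records or 50KB, whichever comes first" cut-off, assuming records are serialized as JSON before compression; the function name and size accounting are illustrative.

```python
# Sketch only: JSON serialization and the size accounting are assumptions.
import json

MAX_RECORDS = 100
MAX_BYTES = 50 * 1024  # 50KB uncompressed cap; gzip is applied afterwards

def build_batch(pending: list[dict]) -> list[dict]:
    """Take records off the pending list until either limit would be exceeded."""
    batch, size = [], 2  # 2 bytes for the enclosing JSON brackets
    for record in pending:
        encoded = len(json.dumps(record).encode("utf-8")) + 1  # +1 for the comma
        if len(batch) >= MAX_RECORDS or size + encoded > MAX_BYTES:
            break
        batch.append(record)
        size += encoded
    return batch
```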

10.8 FT-SYNC-CONFLICT-DETECT

10.8.1 Priority

Must Have

10.8.2 User Story

As a system administrator, I want to automatically detect sync conflicts so that data corruption and inconsistencies are prevented

10.8.3 Preconditions

Timestamp-based conflict detection implemented; conflict log maintained; detection rules configured

10.8.4 Postconditions

Conflicts detected automatically; conflicts logged; manual resolution triggered when needed

10.8.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-CONFLICT-TC-001 | Detect conflict when same record modified at multiple gates | High |
| SYNC-CONFLICT-TC-002 | Log all detected conflicts with timestamps | High |
| SYNC-CONFLICT-TC-003 | Trigger alert for critical conflicts requiring manual resolution | High |
| SYNC-CONFLICT-TC-004 | Prevent data corruption during conflict scenarios | High |

10.9 FT-SYNC-CONFLICT-RESOLVE

10.9.1 Priority

Must Have

10.9.2 User Story

As a system administrator, I want to automatically resolve common sync conflicts so that manual intervention is minimized (target 95% auto-resolution)

10.9.3 Preconditions

Resolution rules defined; last-write-wins for most data; append-only for vehicle logs; conflict resolution logic implemented

10.9.4 Postconditions

95% of conflicts resolved automatically; resolution rules applied correctly; conflict log maintained

10.9.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-CONFLICT-TC-005 | Apply last-write-wins for general data conflicts | High |
| SYNC-CONFLICT-TC-006 | Use append-only strategy for vehicle log conflicts | High |
| SYNC-CONFLICT-TC-007 | Merge capacity updates from multiple gates | High |
| SYNC-CONFLICT-TC-008 | Verify permit updates from the originating gate win | High |
| SYNC-CONFLICT-TC-009 | Achieve ≥95% auto-resolution rate in testing | High |
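A sketch of the resolution rules named above (append-only vehicle logs, merged capacity counts, originating gate authoritative for permits, last-write-wins as the default). The record shape and field names are illustrative assumptions, not the production schema.

```python
# Sketch only: record_type, origin_gate, updated_by_gate, and modified_at are
# assumed field names used for illustration.
def resolve(local: dict, remote: dict) -> dict:
    """Return the record that should survive a detected conflict."""
    kind = local["record_type"]

    if kind == "vehicle_log":
        # Append-only: both entries are kept, so neither side is overwritten.
        return {**local, "appended": [local, remote]}

    if kind == "capacity":
        # Merge: counts reported by different gates are combined, not replaced.
        return {**local, "count": local["count"] + remote["count"]}

    if kind == "permit":
        # The gate that originally issued the permit is authoritative.
        return local if local["updated_by_gate"] == local["origin_gate"] else remote

    # Default: last-write-wins on the modification timestamp.
    return local if local["modified_at"] >= remote["modified_at"] else remote
```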

10.10 FT-SYNC-MANUAL-TRIGGER

10.10.1 Priority

Must Have

10.10.2 User Story

As a gate staff member, I want to manually trigger sync for urgent updates so that critical changes are immediately synchronized

10.10.3 Preconditions

Manual sync button available in UI; confirmation dialog implemented; progress indicator available

10.10.4 Postconditions

Manual sync triggered; urgent data synced immediately; progress visible to user

10.10.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-MANUAL-TC-001 | Display manual sync button in UI | High |
| SYNC-MANUAL-TC-002 | Show confirmation dialog with sync scope before manual sync | High |
| SYNC-MANUAL-TC-003 | Display progress indicator during manual sync | High |
| SYNC-MANUAL-TC-004 | Complete emergency permit extension sync within 2 minutes | High |

10.11 FT-SYNC-INTEGRITY

10.11.1 Priority

Must Have

10.11.2 User Story

As a system administrator, I want to validate data integrity during sync so that no data corruption occurs during transmission

10.11.3 Preconditions

Checksums implemented for batches; transaction rollback on failure; retry mechanism configured

10.11.4 Postconditions

Data integrity verified; corrupted transmissions rejected; retries successful

10.11.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-INTEGRITY-TC-001 | Calculate and verify checksums for each batch | High |
| SYNC-INTEGRITY-TC-002 | Rollback transaction on checksum failure | High |
| SYNC-INTEGRITY-TC-003 | Retry failed sync with exponential backoff | High |
| SYNC-INTEGRITY-TC-004 | Log all integrity check failures | High |
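A sketch of per-batch checksum validation with transactional apply. SHA-256 is an assumption (the SRS only mandates "checksums"), as are the `sync_inbox` table and the use of a psycopg2-style connection passed in by the caller.

```python
# Sketch only: digest algorithm, table name, and driver usage are assumptions.
import hashlib
import json

def checksum(records: list[dict]) -> str:
    """Deterministic digest over the batch contents."""
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def apply_batch(conn, records: list[dict], claimed_checksum: str) -> bool:
    """Apply a received batch inside one transaction; reject on mismatch."""
    if checksum(records) != claimed_checksum:
        return False  # corrupted in transit; sender will retry
    try:
        with conn:  # psycopg2-style: commits on success, rolls back on error
            with conn.cursor() as cur:
                for r in records:
                    cur.execute("INSERT INTO sync_inbox (payload) VALUES (%s)",
                                (json.dumps(r),))
        return True
    except Exception:
        return False  # transaction rolled back; nothing from the batch is kept
```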

10.12 FT-SYNC-BACKUP-HOURLY

10.12.1 Priority

Must Have

10.12.2 User Story

As a system administrator, I want to automatically back up PostgreSQL to NAS every hour so that the system is protected against NUC SSD failure with hourly granularity

10.12.3 Preconditions

NAS configured and accessible; pg_dump installed; rsync configured; backup script scheduled

10.12.4 Postconditions

Hourly backups complete successfully; backup verification successful; 7 days retention maintained

10.12.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-BACKUP-TC-001 | Execute hourly pg_dump backup to NAS | High |
| SYNC-BACKUP-TC-002 | Verify backup completes in <10 minutes | High |
| SYNC-BACKUP-TC-003 | Verify backup integrity after completion | High |
| SYNC-BACKUP-TC-004 | Maintain 7 days of hourly backups with rotation | High |
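A sketch of the hourly backup job (pg_dump followed by rsync to the NAS, with a cheap verification pass). The database name, paths, and NAS mount point are placeholders, not the deployed values.

```python
# Sketch only: DB_NAME, LOCAL_DIR, and NAS_TARGET are placeholder values.
import datetime
import subprocess

DB_NAME = "gateway"                       # hypothetical database name
LOCAL_DIR = "/var/backups/postgres"       # staging area on the NUC SSD
NAS_TARGET = "/mnt/nas/backups/hourly/"   # NAS share mounted on the NUC

def hourly_backup() -> None:
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H00")
    dump_file = f"{LOCAL_DIR}/{DB_NAME}_{stamp}.dump"

    # Custom-format dump is compressed and restorable with pg_restore.
    subprocess.run(["pg_dump", "-Fc", "-f", dump_file, DB_NAME], check=True)

    # rsync only transfers what changed and can resume over slow links.
    subprocess.run(["rsync", "-a", "--partial", dump_file, NAS_TARGET], check=True)

    # Basic integrity check: a truncated dump fails pg_restore's TOC listing.
    subprocess.run(["pg_restore", "--list", dump_file],
                   check=True, stdout=subprocess.DEVNULL)

# Could be scheduled from cron, e.g.:  0 * * * *  python3 /opt/sync/hourly_backup.py
```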

10.13 FT-SYNC-BACKUP-DAILY

10.13.1 Priority

Must Have

10.13.2 User Story

As a system administrator, I want to create daily snapshots on NAS for point-in-time recovery so that recovery from data corruption or accidental deletion is possible

10.13.3 Preconditions

NAS snapshot feature enabled; daily snapshot scheduled at midnight; retention policy configured

10.13.4 Postconditions

Daily snapshots created successfully; 30-day retention maintained; space-efficient incremental snapshots

10.13.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-BACKUP-TC-005 | Create daily NAS snapshot at midnight | High |
| SYNC-BACKUP-TC-006 | Maintain 30 days of daily snapshots | High |
| SYNC-BACKUP-TC-007 | Verify snapshots are space-efficient (incremental) | Medium |
| SYNC-BACKUP-TC-008 | Test snapshot restoration process | High |

10.14 FT-SYNC-BACKUP-WEEKLY

10.14.1 Priority

Must Have

10.14.2 User Story

As a system administrator, I want to create weekly encrypted USB backups for disaster recovery so that offline backup is maintained in safe for catastrophic failures

10.14.3 Preconditions

USB drives available; AES-256 encryption configured; weekly backup scheduled; safe storage available

10.14.4 Postconditions

Weekly USB backups created and encrypted; physically stored in safe; 12 weeks retention

10.14.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-BACKUP-TC-009 | Create weekly USB backup with AES-256 encryption | High |
| SYNC-BACKUP-TC-010 | Verify encrypted backup can be restored | High |
| SYNC-BACKUP-TC-011 | Store USB backup in physical safe | High |
| SYNC-BACKUP-TC-012 | Maintain 12 weeks of weekly USB backups | Medium |

10.15 FT-SYNC-RESTORE-HOURLY

10.15.1 Priority

Must Have

10.15.2 User Story

As a system administrator, I want to restore from hourly NAS backup within 30 minutes so that quick recovery from NUC failure is possible

10.15.3 Preconditions

Restore procedure documented; restore script automated; tested monthly; spare NUC available

10.15.4 Postconditions

Restore completes within 30 minutes; data integrity verified; system operational

10.15.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-RESTORE-TC-001 | Restore PostgreSQL from latest hourly backup | High |
| SYNC-RESTORE-TC-002 | Complete restore process within 30 minutes | High |
| SYNC-RESTORE-TC-003 | Verify database integrity after restore | High |
| SYNC-RESTORE-TC-004 | Test monthly restore procedures proactively | High |

10.16 FT-SYNC-RESTORE-POINT

10.16.1 Priority

Must Have

10.16.2 User Story

As a system administrator, I want to restore to specific point in time from daily snapshots so that recovery from data corruption or user errors is possible

10.16.3 Preconditions

Point-in-time recovery UI implemented; snapshots available; restore preview functionality; backup current state before restore

10.16.4 Postconditions

Point-in-time restoration successful; data restored to specific snapshot; current state backed up before restore

10.16.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-RESTORE-TC-005 | Select specific daily snapshot for restoration | High |
| SYNC-RESTORE-TC-006 | Preview restore contents before committing | Medium |
| SYNC-RESTORE-TC-007 | Backup current state before point-in-time restore | High |
| SYNC-RESTORE-TC-008 | Complete point-in-time restore successfully | High |

10.17 FT-SYNC-UPS-INTEGRATION

10.17.1 Priority

Must Have

10.17.2 User Story

As a gate staff member, I want UPS integration for 2-4 hour power backup so that operations continue during power outages

10.17.3 Preconditions

UPS 1000VA installed; NUC + NAS + Switch connected to UPS; UPS management interface configured

10.17.4 Postconditions

UPS powers critical systems for 2-4 hours; battery level monitoring active; low battery alerts configured

10.17.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-UPS-TC-001 | Power NUC + NAS + switch on UPS for 2-4 hours | High |
| SYNC-UPS-TC-002 | Monitor UPS battery level in real-time | High |
| SYNC-UPS-TC-003 | Configure low battery alerts at 20% and 10% | High |
| SYNC-UPS-TC-004 | Test UPS failover during simulated power outage | High |

10.18 FT-SYNC-UPS-SHUTDOWN

10.18.1 Priority

Must Have

10.18.2 User Story

As a system administrator, I want graceful shutdown when UPS battery is critically low so that data corruption from sudden power loss is prevented

10.18.3 Preconditions

UPS monitoring configured; shutdown script implemented; 10% battery threshold set; shutdown sequence defined

10.18.4 Postconditions

Systems shutdown gracefully at 10% battery; all pending changes saved; database connections closed cleanly

10.18.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-UPS-TC-005 | Trigger graceful shutdown at 10% battery | High |
| SYNC-UPS-TC-006 | Save all pending changes before shutdown | High |
| SYNC-UPS-TC-007 | Close database connections cleanly | High |
| SYNC-UPS-TC-008 | Execute shutdown sequence: PWA → PostgreSQL → OS | High |
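A sketch of the PWA → PostgreSQL → OS shutdown sequence triggered at the 10% threshold. The service names are placeholders, and reading the charge via Network UPS Tools (`upsc`) is an assumption; any UPS management interface that exposes battery level would work.

```python
# Sketch only: the UPS name, service names, and use of NUT's upsc are assumptions.
import subprocess

SHUTDOWN_THRESHOLD = 10  # percent battery remaining

def battery_percent() -> int:
    """Read charge from the UPS monitor (here NUT's upsc, if deployed)."""
    out = subprocess.run(["upsc", "gateups@localhost", "battery.charge"],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

def graceful_shutdown() -> None:
    # 1. Stop the PWA backend so no new writes arrive.
    subprocess.run(["systemctl", "stop", "gateway-pwa.service"], check=False)
    # 2. Stop PostgreSQL cleanly so WAL and data files stay consistent.
    subprocess.run(["systemctl", "stop", "postgresql.service"], check=False)
    # 3. Power off the operating system.
    subprocess.run(["shutdown", "-h", "now"], check=False)

if battery_percent() <= SHUTDOWN_THRESHOLD:
    graceful_shutdown()
```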

10.19 FT-SYNC-UPS-ALERT

10.19.1 Priority

Must Have

10.19.2 User Story

As a gate staff member, I want to receive alerts when on UPS power or battery low so that I can take action before system shutdown

10.19.3 Preconditions

Alert system configured; visual alerts in PWA; SMS alerts to technical staff; battery percentage monitoring

10.19.4 Postconditions

Alerts received promptly; staff aware of power status; action taken before critical shutdown

10.19.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-UPS-TC-009 | Display visual alert in PWA when on UPS power | High |
| SYNC-UPS-TC-010 | Send SMS to technical staff when battery <20% | High |
| SYNC-UPS-TC-011 | Display battery percentage in PWA status bar | High |
| SYNC-UPS-TC-012 | Escalate alert urgency as battery level decreases | Medium |

10.20 FT-SYNC-POWER-RECOVERY

10.20.1 Priority

Must Have

10.20.2 User Story

As a system administrator, I want automatic restart and resume operations when power is restored so that downtime is minimized without manual intervention

10.20.3 Preconditions

Auto-boot on power restore configured; database integrity check enabled; resume pending syncs; notification system operational

10.20.4 Postconditions

System boots automatically on power restore; integrity verified; operations resumed; staff notified

10.20.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-RECOVERY-TC-001 | Auto-boot system when power restored | High |
| SYNC-RECOVERY-TC-002 | Execute database integrity check on startup | High |
| SYNC-RECOVERY-TC-003 | Resume pending syncs automatically | High |
| SYNC-RECOVERY-TC-004 | Notify staff of successful power recovery and system status | Medium |

10.21 FT-SYNC-STORAGE-NVME

10.21.1 Priority

Must Have

10.21.2 User Story

As a system administrator, I want live operational data stored on fast NVMe SSD so that quick query response for gate operations is ensured

10.21.3 Preconditions

512GB NVMe SSD installed; PostgreSQL optimized for SSD; query performance tuned

10.21.4 Postconditions

Query response <100ms; SSD performance optimal; database operations fast

10.21.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-STORAGE-TC-001 | Verify PostgreSQL data stored on NVMe SSD | High |
| SYNC-STORAGE-TC-002 | Optimize PostgreSQL configuration for SSD performance | High |
| SYNC-STORAGE-TC-003 | Verify query response time <100ms for 95% of queries | High |
| SYNC-STORAGE-TC-004 | Monitor SSD performance and health metrics | Medium |

10.22 FT-SYNC-STORAGE-NAS

10.22.1 Priority

Must Have

10.22.2 User Story

As a system administrator, I want historical data archived on NAS RAID 1 so that it is protected against disk failure through redundancy

10.22.3 Preconditions

NAS with 2x 2TB drives in RAID 1; hot-swap capability; archival process configured

10.22.4 Postconditions

Historical data safely archived; RAID 1 redundancy active; single drive failure survivable

10.22.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-STORAGE-TC-005 | Configure NAS with 2x 2TB drives in RAID 1 | High |
| SYNC-STORAGE-TC-006 | Archive historical data (>30 days) from NUC to NAS | High |
| SYNC-STORAGE-TC-007 | Test single drive failure and recovery | High |
| SYNC-STORAGE-TC-008 | Verify hot-swap drive replacement capability | Medium |

10.23 FT-SYNC-STORAGE-MONITORING

10.23.1 Priority

Must Have

10.23.2 User Story

As a system administrator, I want to monitor storage usage and receive alerts when space is low so that system failure due to full disk is prevented

10.23.3 Preconditions

Storage monitoring configured; alert thresholds set (80%, 90%); automatic archival configured

10.23.4 Postconditions

Storage usage monitored continuously; alerts sent at thresholds; automatic archival prevents disk full

10.23.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-STORAGE-TC-009 | Monitor NUC SSD and NAS storage usage | High |
| SYNC-STORAGE-TC-010 | Send alert when storage reaches 80% | High |
| SYNC-STORAGE-TC-011 | Send critical alert when storage reaches 90% | High |
| SYNC-STORAGE-TC-012 | Trigger automatic archival to NAS when NUC SSD >80% | High |
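A sketch of the 80%/90% storage checks. The mount points and the `alert` hook are placeholders for whatever paths and alerting mechanism the monitoring dashboard actually uses.

```python
# Sketch only: mount points and the alert callback are placeholders.
import shutil

WARN, CRITICAL = 80, 90  # percent-full thresholds from the SRS

def usage_percent(path: str) -> float:
    total, used, _free = shutil.disk_usage(path)
    return used / total * 100

def check_storage(alert) -> None:
    for name, path in [("NUC SSD", "/"), ("NAS", "/mnt/nas")]:
        pct = usage_percent(path)
        if pct >= CRITICAL:
            alert(f"CRITICAL: {name} at {pct:.0f}% full")
        elif pct >= WARN:
            alert(f"WARNING: {name} at {pct:.0f}% full")
```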

10.24 FT-SYNC-STORAGE-CLEANUP

10.24.1 Priority

Must Have

10.24.2 User Story

As a system administrator, I want to automatically archive old data from NUC SSD to NAS so that limited SSD space is managed efficiently

10.24.3 Preconditions

Archival rules defined (>30 days); automatic archival process scheduled; NAS storage available

10.24.4 Postconditions

Old data archived automatically; current month data on NUC for fast access; SSD space managed efficiently

10.24.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-CLEANUP-TC-001 | Identify records >30 days old for archival | High |
| SYNC-CLEANUP-TC-002 | Archive old records from NUC SSD to NAS | High |
| SYNC-CLEANUP-TC-003 | Keep current month data on NUC SSD for fast access | High |
| SYNC-CLEANUP-TC-004 | Verify archived data accessible from NAS when needed | High |
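A sketch of the ">30 days old" archival step. The schema, table, and column names are illustrative, and it assumes the archive schema lives on storage backed by the NAS; the move happens atomically in one transaction so rows are never lost mid-transfer.

```python
# Sketch only: schema/table/column names and psycopg2 usage are assumptions.
import psycopg2  # assumed driver; any PostgreSQL client works

ARCHIVE_SQL = """
    WITH moved AS (
        DELETE FROM public.vehicle_logs
        WHERE logged_at < now() - interval '30 days'
        RETURNING *
    )
    INSERT INTO archive.vehicle_logs SELECT * FROM moved;
"""

def archive_old_records(dsn: str) -> None:
    """Move month-old rows off the hot NUC SSD tables in a single transaction."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(ARCHIVE_SQL)  # DELETE ... RETURNING feeds the INSERT atomically
```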

10.25 FT-SYNC-NET-DETECT

10.25.1 Priority

Must Have

10.25.2 User Story

As a system administrator, I want automatic network availability detection so that seamless switching between online and offline modes occurs

10.25.3 Preconditions

Network monitoring configured; ping test to Old HQ every 30 seconds; offline mode trigger at 3 failed pings

10.25.4 Postconditions

Network status detected accurately; mode switching seamless; exponential backoff on failures

10.25.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-NET-TC-001 | Ping Old HQ every 30 seconds to detect network | High |
| SYNC-NET-TC-002 | Switch to offline mode after 3 consecutive failed pings | High |
| SYNC-NET-TC-003 | Apply exponential backoff on repeated failures | High |
| SYNC-NET-TC-004 | Switch back to online mode when network restored | High |
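A sketch of the 30-second ping check against Old HQ with the three-failure rule. The HQ address is a placeholder and the `set_mode` callback stands in for whatever mode-switching hook the PWA backend exposes.

```python
# Sketch only: OLD_HQ address and the set_mode callback are placeholders.
import subprocess
import time

OLD_HQ = "10.0.0.1"            # placeholder address for the Old HQ server
FAILURES_BEFORE_OFFLINE = 3

def hq_reachable() -> bool:
    """Single ICMP probe: -c 1 sends one packet, -W 5 waits at most 5 s."""
    result = subprocess.run(["ping", "-c", "1", "-W", "5", OLD_HQ],
                            capture_output=True)
    return result.returncode == 0

def monitor(set_mode) -> None:
    failures = 0
    while True:
        if hq_reachable():
            failures = 0
            set_mode("online")
        else:
            failures += 1
            if failures >= FAILURES_BEFORE_OFFLINE:
                set_mode("offline")
        time.sleep(30)  # probe every 30 seconds per FT-SYNC-NET-DETECT
```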

10.26 FT-SYNC-NET-SLOW

10.26.1 Priority

Must Have

10.26.2 User Story

As a system administrator, I want to detect a slow network and adjust the sync strategy so that sync is optimized for 2G connections at remote gates

10.26.3 Preconditions

Bandwidth detection implemented; sync strategy adjustable; priority threshold configurable

10.26.4 Postconditions

Slow network detected; batch size reduced; priority threshold increased; sync optimized for available bandwidth

10.26.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-NET-TC-005 | Detect 2G connection bandwidth | High |
| SYNC-NET-TC-006 | Reduce batch size on slow connection (<50KB) | High |
| SYNC-NET-TC-007 | Increase priority threshold (sync only critical/high priority) | High |
| SYNC-NET-TC-008 | Verify sync performance acceptable on 2G | High |

10.27 FT-SYNC-NET-RETRY

10.27.1 Priority

Must Have

10.27.2 User Story

As a system administrator, I want automatic retry of failed sync with exponential backoff so that intermittent network issues are handled without manual intervention

10.27.3 Preconditions

Retry logic implemented; exponential backoff configured; retry schedule defined; max retries set to 10

10.27.4 Postconditions

Failed syncs retried automatically; exponential backoff prevents network overload; manual intervention only after 10 retries

10.27.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-NET-TC-009 | Retry immediately on first sync failure | High |
| SYNC-NET-TC-010 | Retry with exponential backoff (30s, 1m, 5m, 15m, then hourly) | High |
| SYNC-NET-TC-011 | Limit retries to max 10 attempts | High |
| SYNC-NET-TC-012 | Alert for manual intervention after 10 failed retries | High |
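A sketch of the retry schedule from FT-SYNC-NET-RETRY (immediate, 30 s, 1 min, 5 min, 15 min, then hourly, capped at 10 attempts). The `do_sync` and `alert` callables are placeholders for the real sync and alerting hooks.

```python
# Sketch only: do_sync and alert are placeholder callables.
import time

RETRY_DELAYS = [0, 30, 60, 300, 900, 3600, 3600, 3600, 3600, 3600]  # seconds, 10 attempts

def sync_with_retry(do_sync, alert) -> bool:
    """Run do_sync() until it succeeds or the 10-attempt budget is exhausted."""
    for delay in RETRY_DELAYS:
        time.sleep(delay)
        if do_sync():
            return True
        # fall through to the next, longer delay
    alert("Sync failed after 10 attempts; manual intervention required")
    return False
```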

10.28 FT-SYNC-NET-CELLULAR

10.28.1 Priority

Must Have

10.28.2 User Story

As a system administrator, I want to support cellular data (2G/3G) for sync so that sync works even with minimal connectivity

10.28.3 Preconditions

Small payloads optimized for slow connections; compression enabled; 2G compatibility verified

10.28.4 Postconditions

Sync works over 2G; payloads optimized; compression reduces bandwidth requirements

10.28.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-NET-TC-013 | Test sync over 2G cellular connection | High |
| SYNC-NET-TC-014 | Verify small payload size optimized for 2G | High |
| SYNC-NET-TC-015 | Verify compression enabled and effective | High |

10.29 FT-SYNC-MONITOR-STATUS

10.29.1 Priority

Must Have

10.29.2 User Story

As an operations manager at Old HQ, I want to view sync status dashboard for all 9 gates so that system health can be monitored

10.29.3 Preconditions

Dashboard implemented; data collection from all gates; per-gate view available

10.29.4 Postconditions

Dashboard displays comprehensive status; all 9 gates visible; real-time updates

10.29.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-MONITOR-TC-001 | Display last sync time for all 9 gates | High |
| SYNC-MONITOR-TC-002 | Display pending records count per gate | High |
| SYNC-MONITOR-TC-003 | Display online/offline status per gate | High |
| SYNC-MONITOR-TC-004 | Display sync errors per gate | High |
| SYNC-MONITOR-TC-005 | Provide per-gate detailed view | Medium |

10.30 FT-SYNC-MONITOR-ALERTS

10.30.1 Priority

Must Have

10.30.2 User Story

As an operations manager, I want to receive alerts for sync failures or delays so that issues can be proactively addressed before they impact operations

10.30.3 Preconditions

Alert conditions defined; notification system configured; alert recipients configured

10.30.4 Postconditions

Alerts sent for critical conditions; staff notified promptly; issues addressed proactively

10.30.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-MONITOR-TC-006 | Alert when sync delay exceeds 1 hour | High |
| SYNC-MONITOR-TC-007 | Alert on backup failure | High |
| SYNC-MONITOR-TC-008 | Alert when storage exceeds 80% | High |
| SYNC-MONITOR-TC-009 | Alert when UPS on battery power | High |
| SYNC-MONITOR-TC-010 | Alert for conflicts requiring manual resolution | High |

10.31 FT-SYNC-MONITOR-LOGS

10.31.1 Priority

Must Have

10.31.2 User Story

As a system administrator, I want to access detailed sync logs for troubleshooting so that sync issues can be diagnosed and resolved

10.31.3 Preconditions

Comprehensive logging implemented; logs searchable; 30-day retention; log viewer available

10.31.4 Postconditions

All sync attempts logged; logs accessible and searchable; troubleshooting effective

10.31.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-MONITOR-TC-011 | Log all sync attempts with timestamps | High |
| SYNC-MONITOR-TC-012 | Log sync success/failure with details | High |
| SYNC-MONITOR-TC-013 | Log sync duration and records synced | High |
| SYNC-MONITOR-TC-014 | Log conflicts and errors with details | High |
| SYNC-MONITOR-TC-015 | Provide searchable log viewer interface | Medium |
| SYNC-MONITOR-TC-016 | Maintain 30-day log retention | Medium |

10.32 FT-SYNC-MONITOR-METRICS

10.32.1 Priority

Should Have

10.32.2 User Story

As an operations manager, I want to view historical sync performance metrics so that trends can be identified and sync strategy optimized

10.32.3 Preconditions

Metrics collection implemented; historical data stored; charts and graphs available

10.32.4 Postconditions

Historical metrics visible; trends identifiable; optimization opportunities clear

10.32.5 Test Cases

| ID | Description | Weight |
|---|---|---|
| SYNC-MONITOR-TC-017 | Display average sync time over 30 days | Medium |
| SYNC-MONITOR-TC-018 | Display sync success rate metrics | Medium |
| SYNC-MONITOR-TC-019 | Display network uptime per gate | Medium |
| SYNC-MONITOR-TC-020 | Display backup success rate metrics | Medium |
| SYNC-MONITOR-TC-021 | Present metrics in charts and graphs | Medium |

11 Additional Context

11.1 Success Metrics

11.1.1 System Uptime

99% regardless of network status (currently ~70%)

11.1.2 Sync Delay

≤ 15 minutes gate-to-gate (currently 30+ minutes to Old HQ)

11.1.3 Backup Success Rate

100% hourly backups completed

11.1.4 Data Loss Incidents

Zero data loss

11.1.5 Conflict Auto Resolution

≥ 95% conflicts resolved automatically

11.1.6 Restore Time

< 30 minutes from hourly backup